Unraveling Automated Decision-Making: Schufa’s Impact and Implications

On 7 December 2023, the Court of Justice of the European Union (CJEU) delivered its judgment in the Schufa case, involving Schufa AG, Germany’s leading credit rating agency, which holds data on nearly 70 million individuals.

Schufa provides credit scores that are relied upon by financial service providers, retailers, telecom companies, and utility firms. In the case at hand, a German resident had their loan application rejected by a bank on the basis of a credit score assigned by Schufa.

The individual contested this decision, seeking information about Schufa’s automated decision-making processes under Article 15(1)(h) GDPR, which grants a right of access to such information.

Schufa argued that it was not responsible for the decision itself, asserting its role was limited to producing an automated score, leaving the actual decision to the third-party bank.

However, the court disagreed with Schufa’s stance. It held that the creation of the credit score itself constitutes a relevant automated decision under Article 22 GDPR, challenging the belief that only the ultimate decision-maker, i.e. the bank, engages in automated decision-making. In its judgment, the court relied on the score’s “determining role” in the credit decision, adopting a broad interpretation of the term ‘decision.’

Companies employing algorithms to produce risk scores or similar outputs, such as identity verification and fraud detection, may be concerned about the potential impact of this judgment. Many such businesses assume that their customers bear the regulatory risk associated with decisions based on those outputs. However, careful analysis is necessary to distinguish their business models from the one at issue in the Schufa case.

For example, companies should assess the extent to which customers rely on the provided output when making decisions. Where the output is only one of many factors considered, and especially where it carries moderate weight, Article 22 GDPR may not be triggered at all; where it is, the exceptions under Article 22(2) GDPR (explicit consent or contractual necessity) should be explored.

Companies must further evaluate whether the ultimate decision produces a legal or similarly significant effect. Where the decision’s impact is limited, the threshold of Article 22 GDPR may not be met in the first place.
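The assessment steps above can be sketched, very loosely, as a checklist. This is an illustrative simplification only, not legal advice: the function name, parameters, and the reduction of the analysis to booleans are all hypothetical, and a real Article 22 GDPR assessment requires case-by-case legal review.

```python
# Illustrative sketch of the Article 22 GDPR assessment discussed above.
# All names and the boolean simplification are hypothetical.

def article_22_likely_engaged(
    output_is_determining_factor: bool,
    legal_or_similarly_significant_effect: bool,
    explicit_consent_obtained: bool = False,
    necessary_for_contract: bool = False,
) -> bool:
    """Return True if the simplified analysis suggests Article 22 GDPR applies
    and no exception under Article 22(2) GDPR is available."""
    if not output_is_determining_factor:
        # Output is only one of many factors: arguably no 'solely automated' decision.
        return False
    if not legal_or_similarly_significant_effect:
        # Limited impact: the threshold of Article 22(1) is not met.
        return False
    # Article 22(2) exceptions: explicit consent or contractual necessity.
    return not (explicit_consent_obtained or necessary_for_contract)

# A Schufa-like scenario: the score plays a determining role in a loan refusal.
print(article_22_likely_engaged(True, True))  # True
```

In a Schufa-like scenario (determining score, significant effect, no exception) the sketch flags Article 22 as engaged; if the score were just one moderate factor, it would not.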

The Schufa judgment coincides with the conclusion of the trilogue process on the EU AI Act, making it especially relevant for businesses developing AI-enabled solutions in high-risk areas such as credit decisions. The ruling is poised to influence practices in the evolving landscape of automated decision-making throughout 2024, as this remains largely uncharted territory for national and EU legislators.

 

Unlocking GDPR’s Synergy with AI: Insights from CNIL’s Guidance

The intersection of artificial intelligence (AI) and the General Data Protection Regulation (GDPR) has long been a subject of debate and concern. On one hand, AI presents remarkable advancements and transformative potential in various industries. On the other hand, GDPR places stringent demands on how personal data is collected, processed, and protected.

The question that arose early on is whether AI innovation and GDPR compliance can coexist harmoniously. In response to these complexities, the French data protection authority, CNIL, took a significant step by releasing official guidance addressing the intricate relationship between AI development and GDPR compliance. The guidance responds to concerns raised by AI stakeholders during a call for contributions launched on 28 July 2023.

CNIL’s primary aim is to reassure the industry by releasing a set of guidelines that emphasize the compatibility of AI system development with privacy considerations. In their own words, “[t]he development of AI systems is compatible with the challenges of privacy protection. Moreover, considering this imperative will lead to the emergence of devices, tools, and applications that are ethical and aligned with European values. It is under these conditions that citizens will place their trust in these technologies”.

The guidance comprises seven “how-to” sheets providing valuable insights into applying core GDPR principles during the development phase of AI systems. Here are some key takeaways:

– Purpose Limitation: AI systems using personal data must be developed and used for specific, legitimate purposes. This means careful consideration of the AI system’s purpose before collecting or using personal data and avoiding overly generic descriptions. In cases where the purpose cannot be precisely determined at the development stage, a clear description of the type of system and its main possible functionalities is required.

– Data Minimization: Only essential personal data for the AI system’s purpose should be collected and used. Avoid unnecessary data collection, and implement measures to purge unneeded personal data, even for large databases.

– Data Retention: Extended data retention for training databases is allowed when justified by the legitimate purpose of AI systems. This provides flexibility to data controllers.

– Data Reuse: Reuse of databases, including publicly available data, is permissible for AI training, provided the data was collected lawfully and the purpose of reuse aligns with the initial purpose of data collection.
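As a loose illustration of the data-minimisation principle above, a training pipeline might whitelist only the fields the stated purpose requires and drop everything else before the data ever reaches the training set. The field names below are hypothetical examples, not prescribed by CNIL or the GDPR.

```python
# Illustrative sketch of data minimisation: retain only the fields that are
# essential for the AI system's stated purpose. Field names are hypothetical.

ESSENTIAL_FIELDS = {"age_band", "transaction_amount", "merchant_category"}

def minimise(record: dict) -> dict:
    """Keep only the whitelisted fields needed for the training purpose."""
    return {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}

raw = {
    "age_band": "30-39",
    "transaction_amount": 42.0,
    "merchant_category": "groceries",
    "full_name": "Jane Doe",        # not needed for the purpose: dropped
    "email": "jane@example.com",    # not needed for the purpose: dropped
}

print(minimise(raw))
```

The design choice here mirrors CNIL’s framing: the allowed fields are an explicit, purpose-driven whitelist, so any new data source is excluded by default until its necessity is justified.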

Additionally, CNIL’s guidance covers various other topics, including purpose definition, data protection impact assessments (DPIAs), controllership determination, the choice of legal basis, and privacy by design.

This guidance serves as a valuable resource for businesses and organizations involved in AI systems, not only in France but in any jurisdiction subject to the GDPR. It emphasizes that AI development and privacy can coexist with robust governance and oversight.

Given that CNIL has announced two more guidance sets, AI stakeholders should stay vigilant for forthcoming directives to address evolving challenges in the AI landscape, particularly regarding personal data minimization and retention.

Additionally, as the dynamic landscape of AI and GDPR compliance is navigated, insights from other national data protection authorities are eagerly awaited. The ongoing dialogue revolves around striking the right equilibrium between innovation and data protection—a balancing act that holds the potential to benefit both progress and individual liberties.

An Irish Blessing

May the road rise to meet you,
May the wind be always at your back.
May the sun shine warm upon your face,
The rains fall soft upon your fields.
And until we meet again,
May God hold you in the palm of his hand.

May God be with you and bless you:
May you see your children’s children.
May you be poor in misfortune,
rich in blessings.
May you know nothing but happiness
From this day forward.

May the road rise up to meet you.
May the wind be always at your back.
May the warm rays of sun fall upon your home,
And may the land of a friend always be near.

May green be the grass you walk on,
May blue be the skies above you,
May pure be the joys that surround you,
May true be the hearts that love you.

European Parliament Advances Artificial Intelligence Act

In a significant development last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. With a strong majority of 499 votes in favor, 28 against, and 93 abstentions, the Parliament has set the stage for discussions with EU member states to finalize the regulatory framework governing AI.

The proposed regulations aim to ensure that AI technologies developed and used within Europe align with EU rights and values, encompassing vital aspects such as human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The forthcoming rules adopt a risk-based approach and impose obligations on both AI providers and deployers based on the potential risks associated with the AI systems. More specifically, the legislation identifies specific AI practices that will be prohibited due to their unacceptable risks. These include social scoring, which involves categorizing individuals based on their social behavior or personal characteristics.

Moreover, MEPs expanded the list to incorporate bans on intrusive and discriminatory applications of AI, such as real-time remote biometric identification in public spaces and emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Recognizing the need for enhanced precautions, the Parliament also emphasized the classification of high-risk AI applications. This category will now encompass AI systems that pose significant harm to people’s health, safety, fundamental rights, or the environment. Additionally, AI systems employed for voter influence, election outcomes, and recommender systems used by social media platforms with over 45 million users will be subject to the high-risk classification.

Furthermore, to ensure responsible use and accountability, providers of foundation models, a rapidly evolving area within AI, will be required to assess and mitigate potential risks related to health, safety, fundamental rights, the environment, democracy, and the rule of law. Before releasing their models in the EU market, these providers must register their models in the EU database. Generative AI systems based on such models, including ChatGPT, will need to comply with transparency requirements, disclose AI-generated content, and implement safeguards against generating illegal content. Additionally, detailed summaries of copyrighted data used for training purposes will need to be made publicly available.

Recognizing the importance of fostering AI innovation while safeguarding citizens’ rights, MEPs have also introduced exemptions for research activities and AI components provided under open-source licenses. Moreover, the legislation encourages the establishment of regulatory sandboxes, which are real-life environments created by public authorities to test AI technologies before their deployment.

The new regulations aim to empower citizens by granting them the right to file complaints regarding AI systems. Furthermore, individuals will have the right to receive explanations about decisions made by high-risk AI systems that significantly impact their fundamental rights. The role of the EU AI Office will also undergo reforms, equipping it with the responsibility to monitor the implementation of the AI rulebook.

In conclusion, the proposed regulations set clear boundaries for prohibited AI practices and establish obligations for high-risk AI applications. Moreover, they strike a balance by supporting innovation through exemptions and regulatory sandboxes while prioritizing citizen rights and accountability. As discussions continue with EU member states, the Parliament’s focus on protecting rights and enhancing AI’s regulatory framework paves the way for a future where AI technologies align with EU values and leave a positive footprint on society.

Trademarks, Public Policy and Principles of Morality

MUNIA is a registered European Union Trademark (EUTM: 016305369) owned by Bodega ViñaGuareña, a Spanish winemaker producing high-quality wines near Salamanca. As of today, their trademark enjoys protection across all 27 member states of the European Union, although their wines have not reached Greek retail stores.

But can a sign with an objectionable meaning be registered as a European Union Trademark (EUTM)?

Not always. Article 7(1)(f) of Regulation 2017/1001 (EU) (EUTMR) provides for the refusal of trademark applications and the invalidation of registrations already effected, where trademarks are “contrary to public policy or to accepted principles of morality”.

The same provision as in the EUTMR is reflected in Article 4(1)(f) of the Trade Mark Directive, which has been transposed verbatim in Greek law, by means of Article 4 of Law 4679/2020.

The wording of the above refusal ground is very broad and could create legal tensions, as the EU trademark system is unitary in character, whereas both moral principles and the requirements of public policy may vary from country to country and evolve over time.

As a result, an objection against an EU trademark application in any member state can defeat the entire application, as under Article 7(2) EUTMR an application can be rejected even if the grounds for refusal exist only in part of the European Union.

For the sake of uniformity, the EUIPO Boards of Appeal published a Case-law Research Report in October 2021 that establishes general principles for the assessment of such applications.

Some of the most notable examples of signs assessed in recent years, as summarised in the above report, are the following:

In SULA (vulgar for ‘penis’ in Romanian), the Board rejected the application, confirming that the goods applied for (milk and derivatives) did not avoid, and in certain cases even enhanced, the link with that sexual connotation.

Similarly, in KONA, the name of a subcompact crossover SUV produced by the South Korean manufacturer Hyundai, the Board considered the sign to be an offensive vulgar expression for the Portuguese public: spelt differently, the word ‘cona’ is vulgar for ‘vagina’ in Portuguese, notwithstanding that the goods applied for were ‘automobiles’.

By contrast, in REVA The electriCity Car, EUIPO had found back in 2006 that, in the context of electric cars and in combination with the English words ‘The ElectriCity Car’, the Finnish public would not consider the expression ‘reva’ (vulgar for ‘vagina’ in Finnish) to be intentionally abusive, but rather an unfortunate choice of brand of foreign origin. EUIPO held in that case that “from time to time, the general public encounters words on imported goods and services which, if used conversationally in its own language, might be found shocking. Nevertheless, they are understood for what they are, namely as neutral foreign words carrying an unfortunate meaning in the native tongue”.

The Board also allowed the registration of the trademark CUR (Romanian slang for ‘butt’), stating that the mark would not be found offensive in relation to IT-related specialised services, but rather perceived as “a slightly embarrassing or even humorous example of how English-speaking undertakings can occasionally commit a linguistic ‘faux pas’ when selling their branded products globally”. Moreover, the fact that the word did not address anybody in particular was also considered decisive in the assessment.

That was not the case in PAKI, however, where the General Court confirmed the Board’s assessment that, given the racist and degrading meaning of the word for people originating from Pakistan and residing in the United Kingdom, the sign had to be refused registration irrespective of the goods and services applied for.

But would the above rejections constitute a violation of the respective applicant’s freedom of expression, enshrined in Article 10 ECHR and Article 11 of the Charter of Fundamental Rights of the European Union, or even a violation of their freedom to conduct a business pursuant to Article 16 of the Charter of Fundamental Rights of the European Union?

Under settled case-law, the refusal of a trademark application does not limit the applicant’s freedom of expression. In fact, the General Court has pointed out that it is not necessary to register a sign for it to be used for commercial purposes, and that the goal of Article 7(1)(f) EUTMR is not to filter out signs whose use in commerce must at all costs be prevented.

In that sense, when EUIPO declared the trademark “BOY LONDON” invalid on the grounds that it evoked Nazi symbolism and was, therefore, contrary to accepted principles of morality, it reiterated that the application of Article 7(1)(f) EUTMR does not constrain anyone’s freedom of expression, because the applicant is not prevented from using the sign but is simply refused its registration.

Quote of the Day

Actual happiness always looks pretty squalid in comparison with the overcompensations for misery. And, of course, stability isn’t nearly so spectacular as instability. And being contented has none of the glamour of a good fight against misfortune, none of the picturesqueness of a struggle with temptation, or a fatal overthrow by passion or doubt. Happiness is never grand.

Aldous Huxley, Brave New World

European Union Reins in Big Tech

On Tuesday, 5 July 2022, the European Parliament held the final vote on the new Digital Services Act (DSA) and Digital Markets Act (DMA), two bills that aim to address the societal and economic effects of the tech industry by setting clear standards for how tech companies operate and provide services in the EU, in line with the EU’s fundamental rights and values.

What is illegal offline, should be illegal online

The Digital Services Act (DSA) sets clear obligations for digital service providers, such as social media or marketplaces, to tackle the spread of illegal content, online disinformation and other societal risks. These requirements are proportionate to the size and risks platforms pose to society.

The new obligations include:

    • New measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights, including the freedom of expression and data protection;
    • Strengthened traceability and checks on traders in online marketplaces to ensure products and services are safe, including efforts to perform random checks on whether illegal content resurfaces;
    • Increased transparency and accountability of platforms, for example by providing clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions;
    • Bans on misleading practices and certain types of targeted advertising, such as those targeting children and ads based on sensitive data. The so-called “dark patterns” and misleading practices aimed at manipulating users’ choices will also be prohibited.

Very large online platforms and search engines (with 45 million or more monthly users), which present the highest risk, will have to comply with stricter obligations, enforced by the Commission. These include preventing systemic risks (such as the dissemination of illegal content and adverse effects on fundamental rights, electoral processes, gender-based violence and mental health) and being subject to independent audits. These platforms will also have to provide users with the choice not to receive recommendations based on profiling. They will also have to facilitate access to their data and algorithms for authorities and vetted researchers.

A list of “do’s” and “don’ts” for Gatekeepers

The Digital Markets Act (DMA) sets obligations for large online platforms acting as “gatekeepers” (platforms whose dominant online position makes them hard for consumers to avoid) on the digital market, to ensure a fairer business environment and more services for consumers.

To prevent unfair business practices, those designated as gatekeepers will have to:

    • allow third parties to interoperate with their own services, meaning that smaller platforms will be able to request that dominant messaging platforms enable their users to exchange messages, send voice messages or files across messaging apps. This will give users greater choice and avoid the so-called “lock-in” effect, where they are restricted to one app or platform;
    • allow business users to access the data they generate in the gatekeeper’s platform, to promote their own offers and conclude contracts with their customers outside the gatekeeper’s platforms.

Gatekeepers can no longer:

    • Rank their own services or products more favourably (self-preferencing) than other third parties on their platforms;
    • Prevent users from easily un-installing any pre-loaded software or apps, or using third-party applications and app stores;
    • Process users’ personal data for targeted advertising, unless consent is explicitly granted.

Sanctions

To ensure that the new rules on the DMA are properly implemented and in line with the dynamic digital sector, the Commission can carry out market investigations. If a gatekeeper does not comply with the rules, the Commission can impose fines of up to 10% of its total worldwide turnover in the preceding financial year, or up to 20% in case of repeated non-compliance.
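The fine ceilings above can be made concrete with a worked example. The turnover figure below is hypothetical, and the function is only a sketch of the stated caps (10% of total worldwide turnover in the preceding financial year, 20% for repeated non-compliance), not of how the Commission actually sets fines within those caps.

```python
# Worked example of the DMA fine ceilings described above.
# The turnover figure is hypothetical.

def dma_fine_cap(worldwide_turnover_eur: float, repeated: bool = False) -> float:
    """Maximum fine under the DMA for a given annual worldwide turnover:
    10% of turnover, or 20% in case of repeated non-compliance."""
    rate = 0.20 if repeated else 0.10
    return worldwide_turnover_eur * rate

turnover = 50_000_000_000  # hypothetical: EUR 50 billion annual worldwide turnover
print(dma_fine_cap(turnover))                 # 5000000000.0  -> up to EUR 5 bn
print(dma_fine_cap(turnover, repeated=True))  # 10000000000.0 -> up to EUR 10 bn
```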

Next Steps

Once formally adopted by the Council in July (DMA) and September (DSA), both acts will be published in the EU Official Journal and enter into force twenty days after publication.

The DSA will be directly applicable across the EU and will apply fifteen months after its entry into force or from 1 January 2024, whichever comes later. As regards the obligations for very large online platforms and very large online search engines, the DSA will apply earlier – four months after they have been designated as such by the Commission.

The DMA will start to apply six months following its entry into force. The gatekeepers will have a maximum of six months after they have been designated to comply with the new obligations.

Source: European Parliament

I Stand with Ukraine

In a 1787 letter to William Stephens Smith, the son-in-law of John Adams, Thomas Jefferson used the phrase “tree of liberty”.

“I do not know whether it is to yourself or Mr. Adams I am to give my thanks for the copy of the new constitution. I beg leave through you to place them where due. It will be yet three weeks before I shall receive them from America. There are very good articles in it: and very bad. I do not know which preponderate. What we have lately read in the history of Holland, in the chapter on the Stadtholder, would have sufficed to set me against a Chief magistrate eligible for a long duration, if I had ever been disposed towards one: and what we have always read of the elections of Polish kings should have forever excluded the idea of one continuable for life. Wonderful is the effect of impudent and persevering lying. The British ministry have so long hired their gazetteers to repeat and model into every form lies about our being in anarchy, that the world has at length believed them, the English nation has believed them, the ministers themselves have come to believe them, and what is more wonderful, we have believed them ourselves. Yet where does this anarchy exist? Where did it ever exist, except in the single instance of Massachusets? And can history produce an instance of a rebellion so honourably conducted? I say nothing of it’s motives. They were founded in ignorance, not wickedness. God forbid we should ever be 20. years without such a rebellion. The people can not be all, and always, well informed. The part which is wrong will be discontented in proportion to the importance of the facts they misconceive. If they remain quiet under such misconceptions it is a lethargy, the forerunner of death to the public liberty. We have had 13. states independant 11. years. There has been one rebellion. That comes to one rebellion in a century and a half for each state. What country before ever existed a century and half without a rebellion? 
And what country can preserve it’s liberties if their rulers are not warned from time to time that their people preserve the spirit of resistance? Let them take arms. The remedy is to set them right as to facts, pardon and pacify them. What signify a few lives lost in a century or two? The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants. It is it’s natural manure. Our Convention has been too much impressed by the insurrection of Massachusets: and in the spur of the moment they are setting up a kite to keep the hen yard in order. I hope in god this article will be rectified before the new constitution is accepted.”

The tree of liberty must be refreshed from time to time with the blood of patriots and tyrants.