Unraveling Automated Decision-Making: Schufa’s Impact and Implications

On December 7, 2023, the Court of Justice of the European Union (CJEU) delivered its judgment in the Schufa case, involving Schufa AG, Germany’s leading credit rating agency, which holds data on nearly 70 million individuals.

Schufa provides credit scores that are relied upon by financial service providers, retailers, telecom companies, and utility firms. In the case at hand, a German resident had their loan application rejected by a bank on the basis of a credit score assigned by Schufa.

The individual contested this decision, seeking information about Schufa’s automated decision-making processes under Article 15(1)(h) GDPR, which grants a right of access to such information.

Schufa argued that it was not responsible for the decision itself, asserting its role was limited to producing an automated score, leaving the actual decision to the third-party bank.

However, the court disagreed with Schufa’s stance. It held that the creation of the credit score itself constitutes a relevant automated decision under Article 22 GDPR, challenging the assumption that only the ultimate decision-maker, i.e. the bank, engages in automated decision-making. In its judgment, the court pointed to the score’s “determining role” in the credit decision, adopting a broad interpretation of the term ‘decision’.

Companies employing algorithms to produce risk scores or similar outputs, such as identity verification and fraud detection, may be concerned about the potential impact of this judgment. Many such businesses assume that their customers bear the regulatory risks associated with decisions based on those outputs. Careful analysis is therefore needed to distinguish their business models from the one at issue in the Schufa case.

For example, companies should assess the extent to which customers rely on the provided output when making decisions. If the output is only one of many factors considered, and especially if it carries limited weight, it is less likely to play the “determining role” the court identified; where Article 22 GDPR does apply, the exceptions it provides (explicit consent or contractual necessity) should be explored.

Companies must further evaluate whether the ultimate decision produces a legal or similarly significant effect on the individual. Where the decision’s impact falls short of that threshold, Article 22 GDPR does not apply.

The Schufa judgment coincides with the conclusion of the trilogue process on the EU AI Act, making it especially relevant for businesses developing AI-enabled solutions in high-risk areas such as credit decisions. The ruling is poised to influence practices in the evolving landscape of automated decision-making in 2024, as this remains largely uncharted territory for national and EU legislators.


European Parliament Advances Artificial Intelligence Act

In a significant development last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. With a strong majority of 499 votes in favor, 28 against, and 93 abstentions, the Parliament has set the stage for discussions with EU member states to finalize the regulatory framework governing AI.

The proposed regulations aim to ensure that AI technologies developed and used within Europe align with EU rights and values, encompassing vital aspects such as human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The forthcoming rules adopt a risk-based approach and impose obligations on both AI providers and deployers based on the potential risks associated with the AI systems. More specifically, the legislation identifies specific AI practices that will be prohibited due to their unacceptable risks. These include social scoring, which involves categorizing individuals based on their social behavior or personal characteristics.

Moreover, MEPs expanded the list to incorporate bans on intrusive and discriminatory applications of AI, such as real-time remote biometric identification in public spaces and emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Recognizing the need for enhanced precautions, the Parliament also emphasized the classification of high-risk AI applications. This category will now encompass AI systems that pose significant risks of harm to people’s health, safety, fundamental rights, or the environment. Additionally, AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with over 45 million users, will fall within the high-risk classification.

Furthermore, to ensure responsible use and accountability, providers of foundation models, a rapidly evolving area within AI, will be required to assess and mitigate potential risks related to health, safety, fundamental rights, the environment, democracy, and the rule of law. Before releasing their models in the EU market, these providers must register their models in the EU database. Generative AI systems based on such models, including ChatGPT, will need to comply with transparency requirements, disclose AI-generated content, and implement safeguards against generating illegal content. Additionally, detailed summaries of copyrighted data used for training purposes will need to be made publicly available.

Recognizing the importance of fostering AI innovation while safeguarding citizens’ rights, MEPs have also introduced exemptions for research activities and AI components provided under open-source licenses. Moreover, the legislation encourages the establishment of regulatory sandboxes, which are real-life environments created by public authorities to test AI technologies before their deployment.

The new regulations aim to empower citizens by granting them the right to file complaints regarding AI systems. Furthermore, individuals will have the right to receive explanations about decisions made by high-risk AI systems that significantly impact their fundamental rights. The role of the EU AI Office will also undergo reforms, equipping it with the responsibility to monitor the implementation of the AI rulebook.

In conclusion, the proposed regulations set clear boundaries for prohibited AI practices and establish obligations for high-risk AI applications. Moreover, they strike a balance by supporting innovation through exemptions and regulatory sandboxes while prioritizing citizen rights and accountability. As discussions continue with EU member states, the Parliament’s focus on protecting rights and enhancing AI’s regulatory framework paves the way for a future where AI technologies align with EU values and leave a positive footprint on society.

Trademarks, Public Policy and Principles of Morality

MUNIA is a registered European Union Trademark (EUTM No 016305369) owned by Bodega ViñaGuareña, a Spanish winemaker producing high-quality wines near Salamanca. As of today, the trademark enjoys protection across all 27 member states of the European Union, although the winery’s products have not yet reached Greek retail stores.

But can a sign with an objectionable meaning be registered as a European Union Trademark (EUTM)?

Not always. Article 7(1)(f) of Regulation 2017/1001 (EU) (EUTMR) provides for the refusal of trademark applications and the invalidation of registrations already effected, where trademarks are “contrary to public policy or to accepted principles of morality”.

The same provision is mirrored in Article 4(1)(f) of the Trade Mark Directive, which has been transposed verbatim into Greek law by means of Article 4 of Law 4679/2020.

The wording of the above refusal ground is very broad and could create legal tensions, as the EU trademark system is unitary in character, whereas both moral principles and the requirements of public policy may vary from country to country and evolve over time.

As a result, an objection raised in any single member state can defeat an entire EU trademark application: under Article 7(2) EUTMR, an application can be rejected even if the grounds for refusal exist in only part of the European Union.

For the sake of uniformity, the EUIPO Boards of Appeal published a Case-law Research Report in October 2021 establishing general principles for the assessment of such applications.

Some of the most notable examples of signs assessed in recent years, as summarised in that report, are the following:

In SULA (vulgar for ‘penis’ in Romanian), the Board rejected the application, confirming that the goods applied for (milk and its derivatives) did not dispel, and in certain cases even enhanced, the sexual connotation.

Similarly, in KONA, the name of a subcompact crossover SUV produced by the South Korean manufacturer Hyundai, the Board noted that, spelt differently, the word ‘cona’ is vulgar for ‘vagina’ in Portuguese. It therefore considered the sign an offensive vulgar expression for the Portuguese public, notwithstanding that the goods applied for were ‘automobiles’.

By contrast, in REVA The electriCity Car, EUIPO found back in 2006 that, in the context of electric cars and in combination with the English words ‘The ElectriCity Car’, the Finnish public would not consider the expression ‘reva’ (vulgar for ‘vagina’ in Finnish) to be intentionally abusive, but rather an unfortunate choice of brand of foreign origin. EUIPO held in that case that “from time to time, the general public encounters words on imported goods and services which, if used conversationally in its own language, might be found shocking. Nevertheless, they are understood for what they are, namely as neutral foreign words carrying an unfortunate meaning in the native tongue.”

The Board also allowed the registration of the trademark CUR (Romanian slang for ‘butt’), stating that the mark would not be found offensive in relation to specialised IT services, but would rather be seen as “a slightly embarrassing or even humorous example of how English-speaking undertakings can occasionally commit a linguistic ‘faux pas’ when selling their branded products globally”. The fact that the word did not address anybody in particular was also decisive in the Board’s assessment.

That was not the case in PAKI, however, where the General Court confirmed the Board’s assessment that, given the racist and degrading meaning of the word for people originating from Pakistan and residing in the United Kingdom, the sign had to be refused registration irrespective of the goods and services applied for.

But would the above rejections violate the respective applicants’ freedom of expression, enshrined in Article 10 ECHR and Article 11 of the Charter of Fundamental Rights of the European Union, or even their freedom to conduct a business under Article 16 of the Charter?

Under settled case-law, the refusal of a trademark application does not limit the applicant’s freedom of expression. In fact, the General Court has pointed out that registration is not necessary for a sign to be used for commercial purposes, and that the goal of Article 7(1)(f) EUTMR is not to filter out signs whose use in commerce must at all costs be prevented.

In that sense, when EUIPO declared the trademark “BOY LONDON” invalid on the grounds that it evoked Nazi symbolism and was, therefore, contrary to the accepted principles of morality, it reiterated that the application of Article 7(1)(f) EUTMR is not a constraint on anybody’s freedom of expression, because the applicant is not prevented from using the sign but is merely refused its registration.