
Not Just Watching the Dog

In Decision 21/2025, the Hellenic Data Protection Authority (HDPA) revisits a recurring misconception: that the General Data Protection Regulation (GDPR) does not apply to private households. The case involved a couple who operated a restaurant and lived on the same premises, which they monitored using a set of security cameras. A neighbouring property owner filed a complaint after discovering that at least one of the cameras recorded not only the couple’s own premises but also his adjoining land and a portion of a public street.

The HDPA reviewed the footage and found that the cameras included a rotating surveillance device with fields of view extending beyond the private domain. Despite the couple’s claim that one camera merely recorded their stable, the evidence suggested otherwise.

The Authority ruled that this type of surveillance no longer falls within the GDPR’s limited household exemption. Whenever monitoring captures public space or third-party property, it triggers full compliance obligations: lawful basis under Article 6, transparency under Article 12, data minimisation, and above all, respect for the rights of data subjects under Articles 15 et seq. GDPR.

In the operative part of the Decision, each of the two individuals was fined a total of €3,000, comprising €2,000 for infringing the principles of lawfulness, purpose limitation, and accountability under Article 5 GDPR, and €1,000 for failing to comply with the data subject’s right of access under Article 15 GDPR.

But what about domestic staff? Although the facts of the case centred on a neighbour, the ruling serves as a strong reminder for private individuals who use surveillance tools to monitor babysitters, cleaners, gardeners, or other domestic workers in their household. Even in one’s own home, recording another person, particularly in the context of a working relationship, constitutes data processing.

This means that any surveillance carried out within a household must have a clearly documented legal basis, such as freely given consent or a legitimate interest that can be properly justified. The monitoring must be proportionate to its purpose, limited in scope, and objectively necessary. The person being monitored must be informed in a transparent way, and their rights, including access and objection, must be fully respected. Any recordings must be securely stored, with access strictly controlled.

Crucially, when private individuals monitor third parties with whom they are contractually related, they are considered data controllers under Article 4(7) GDPR. Simply being a private household does not exempt one from compliance.

If a nanny can be dismissed for breaching trust, then the same standard should apply to employers who secretly monitor them without a valid legal basis and without providing the information the law requires.

AI Act, Easily Explained

On July 12, 2024, the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (the EU AI Act), was published in the Official Journal of the European Union.

The AI Act is a legislative framework that seeks to establish clear guidelines and standards for the development, deployment, and use of AI systems across the European Union. The primary objectives of this regulation are to promote innovation, protect fundamental rights, and build trust in AI systems among users and stakeholders. By setting out stringent requirements and obligations, the AI Act aims to mitigate risks associated with AI technologies while fostering a conducive environment for technological advancement and ethical use.

Scope and Definitions

The AI Act applies to a broad range of stakeholders, including providers, users, and importers of AI systems within the EU, as well as entities outside the EU whose AI systems impact individuals within the Union. The regulation adopts an expansive definition of AI, encompassing a wide variety of technologies such as machine learning, neural networks, expert systems, and other algorithm-based solutions. This comprehensive approach ensures that the regulation remains relevant and effective in addressing the diverse applications of AI technologies.

Risk-Based Classification

One of the most significant aspects of the AI Act is its risk-based classification of AI systems. The regulation categorizes AI applications into four distinct risk levels: unacceptable risk, high risk, limited risk, and minimal risk.

AI systems that fall under the category of unacceptable risk are those that pose a threat to safety, livelihoods, or fundamental rights. Such applications are outright banned under the regulation. Examples of unacceptable risk AI include social scoring systems used by governments, which can lead to discriminatory practices and infringements on individual freedoms.

High-risk AI systems are those used in critical sectors such as healthcare, transportation, and finance. These systems are subject to stringent requirements to ensure their safety, reliability, and ethical use. Providers of high-risk AI systems must implement robust risk management frameworks, use high-quality datasets to ensure accuracy and fairness, and maintain comprehensive documentation to demonstrate compliance.

Limited risk AI systems are those that present lower risks but still require certain transparency obligations. For instance, chatbots must disclose their non-human nature to users. This ensures that users are aware they are interacting with an AI system and can make informed decisions based on this knowledge.

Minimal risk AI systems, such as spam filters, are largely exempt from the regulation. However, providers are still encouraged to adhere to best practices and ethical guidelines to maintain trust and transparency.
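
For readers who prefer to see the structure in code, the four tiers can be sketched as a simple mapping from risk level to the treatment outlined above. The snippet below is a purely illustrative sketch, not a legal classification tool: the example systems and the describe helper are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels described above, paired with the gist of their treatment."""
    UNACCEPTABLE = "prohibited outright (e.g. government social scoring)"
    HIGH = "allowed subject to risk management, data governance, documentation and human oversight"
    LIMITED = "allowed subject to transparency duties (e.g. a chatbot must disclose it is not human)"
    MINIMAL = "largely outside the regulation; voluntary best practices encouraged"

# Hypothetical, non-exhaustive examples drawn from the text above.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "AI triage tool in healthcare": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def describe(system: str) -> str:
    """Return an indicative tier for a known example; real classification needs legal analysis."""
    tier = EXAMPLE_SYSTEMS.get(system)
    if tier is None:
        return f"{system}: not in this toy table - assess against the Act's criteria"
    return f"{system}: {tier.name} - {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(describe(name))
```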

Requirements for High-Risk AI Systems

High-risk AI systems are subject to a rigorous set of requirements designed to ensure their safety, transparency, and ethical use. Providers must implement a comprehensive risk management system to continuously evaluate and mitigate potential risks associated with their AI systems. This involves conducting regular assessments, monitoring the system’s performance, and taking corrective actions when necessary.

Data governance is another critical requirement for high-risk AI systems. Providers must ensure that their AI systems are trained on high-quality datasets that are representative, accurate, and free from biases. This helps to prevent discriminatory outcomes and ensures that the AI system performs reliably across different contexts and populations.

Transparency and documentation are also crucial for high-risk AI systems. Providers must maintain detailed documentation that outlines the AI system’s design, capabilities, limitations, and intended uses. This information must be made available to regulatory authorities and, where applicable, to users. Clear and comprehensive documentation helps to build trust and enables stakeholders to understand the AI system’s functioning and potential impacts.

Human oversight is an essential component of the AI Act’s requirements for high-risk AI systems. Providers must establish mechanisms to ensure that human operators can effectively monitor and control the AI system. This includes the ability to intervene in the system’s operation when necessary to prevent harmful outcomes. Human oversight helps to ensure that AI systems are used responsibly and that their actions align with ethical and legal standards.

Transparency Obligations

Transparency is a cornerstone of the AI Act, and the regulation imposes specific obligations to ensure that users are informed when interacting with AI systems. Providers must clearly disclose when users are engaging with an AI system, especially if the system influences their decisions or perceptions. This transparency requirement is crucial in maintaining trust and enabling users to make informed choices.

Furthermore, AI systems that generate deepfakes or synthetic media must disclose their artificial nature. This helps to prevent misinformation and ensures that users are aware of the AI system’s capabilities and limitations. Transparency in AI systems fosters accountability and helps to prevent deceptive practices that could undermine public trust in AI technologies.

Compliance and Enforcement

The AI Act establishes a robust framework for compliance and enforcement. National supervisory authorities are tasked with monitoring compliance and taking enforcement action against non-compliant entities at member-state level, while the newly created European Artificial Intelligence Board, together with the Commission’s AI Office, coordinates implementation and oversees general-purpose AI models. This layered oversight is intended to ensure consistency and effectiveness in the regulation’s application across the EU.

Non-compliance with the AI Act can result in significant penalties. Depending on the infringement, fines can reach up to €35 million or 7% of global annual turnover, whichever is higher. This stringent enforcement mechanism underscores the importance of compliance and encourages providers to adhere to the highest standards of safety, transparency, and ethical use.

Considerations for Businesses

As the AI Act introduces comprehensive requirements for the use of AI systems, businesses must take proactive steps to ensure compliance. The first step is to assess their AI systems and determine their risk classification. High-risk AI systems, in particular, will require significant adjustments to meet the regulation’s stringent requirements.

Businesses must also focus on enhancing transparency and documentation. Keeping detailed records of the AI system’s design, capabilities, and limitations is essential for demonstrating compliance and building trust with users and regulatory authorities. Ensuring transparency in AI interactions, particularly in disclosing the non-human nature of AI systems, is crucial in maintaining user trust and preventing deceptive practices.

Moreover, businesses should prioritize the ethical use of AI. Beyond mere compliance, fostering ethical practices in AI development and deployment can provide a competitive advantage. By demonstrating a commitment to ethical AI, businesses can build a positive reputation and gain the trust of customers, partners, and stakeholders.

Conclusion

The AI Act represents a significant milestone in the regulation of AI technologies, balancing the need for innovation with the imperative to protect fundamental rights and ensure ethical use. By understanding and complying with this new regulation, businesses can not only avoid penalties but also gain a competitive edge by fostering trust and reliability in their AI systems.

Unraveling Automated Decision-Making: Schufa’s Impact and Implications

On December 7, 2023, the Court of Justice of the European Union (CJEU) delivered its judgment in the Schufa case, involving SCHUFA Holding AG, Germany’s leading credit rating agency, which holds data on nearly 70 million individuals.

Schufa provides credit scores that are relied upon by financial service providers, retailers, telecom companies, and utility firms. In the case at hand, a German resident had their loan application rejected by a bank based on a credit score assigned by Schufa.

The individual contested this decision, seeking information about Schufa’s automated decision-making processes under Article 15(1)(h) GDPR, which grants the right of access to such information.

Schufa argued that it was not responsible for the decision itself, asserting its role was limited to producing an automated score, leaving the actual decision to the third-party bank.

The court rejected Schufa’s argument and held that the creation of the credit score itself constitutes a relevant automated decision under Article 22 GDPR, challenging the assumption that only the ultimate decision-maker, i.e. the bank, engages in automated decision-making. In its judgment, the court relied on the score’s “determining role” in the credit decision, adopting a broad interpretation of the term ‘decision’.

Companies employing algorithms to produce risk scores or similar outputs, for instance in identity verification and fraud detection, may be concerned about the potential impact of this judgment. Many of these businesses assume that their customers bear the regulatory risk associated with decisions based on those outputs. Careful analysis is therefore needed to determine whether a given business model can be distinguished from the one at issue in the Schufa case.

For example, companies should assess the extent to which customers rely on the provided output when making decisions. If the output is only one of many factors considered, and especially if it carries moderate weight, Article 22 GDPR may not be triggered at all; where it is, the exceptions it provides (explicit consent or contractual necessity) should be explored.

Companies must further evaluate whether the ultimate decision produces a legal or similarly significant effect. Where the decision’s impact is limited, Article 22 GDPR may not apply in the first place.

The Schufa judgment coincides with the conclusion of the trilogue negotiations on the EU AI Act, making it especially relevant for businesses developing AI-enabled solutions in high-risk areas such as credit decisions. The ruling is poised to influence practices in the evolving landscape of automated decision-making throughout 2024, as this remains largely uncharted territory for national and EU legislators.

 

Unlocking GDPR’s Synergy with AI: Insights from CNIL’s Guidance

The intersection of artificial intelligence (AI) and the General Data Protection Regulation (GDPR) has long been a subject of debate and concern. On one hand, AI presents remarkable advancements and transformative potential in various industries. On the other hand, GDPR places stringent demands on how personal data is collected, processed, and protected.

The question that arose early on is whether AI innovation and GDPR compliance can coexist harmoniously. In response to these complexities, the French data protection authority, CNIL, took a significant step by releasing official guidance on the relationship between AI development and GDPR compliance. The guidance responds to concerns raised by AI stakeholders during a call for contributions launched on 28 July 2023.

CNIL’s primary aim is to reassure the industry by releasing a set of guidelines that emphasize the compatibility of AI system development with privacy considerations. In their own words, “[t]he development of AI systems is compatible with the challenges of privacy protection. Moreover, considering this imperative will lead to the emergence of devices, tools, and applications that are ethical and aligned with European values. It is under these conditions that citizens will place their trust in these technologies”.

The guidance comprises seven “how-to” sheets providing valuable insights into applying core GDPR principles during the development phase of AI systems. Here are some key takeaways:

– Purpose Limitation: AI systems using personal data must be developed and used for specific, legitimate purposes. This means careful consideration of the AI system’s purpose before collecting or using personal data and avoiding overly generic descriptions. In cases where the purpose cannot be precisely determined at the development stage, a clear description of the type of system and its main possible functionalities is required.

– Data Minimization: Only the personal data essential for the AI system’s purpose should be collected and used. Unnecessary data collection should be avoided, and measures should be in place to purge unneeded personal data, even in large databases (a minimal illustration follows this list).

– Data Retention: Extended data retention for training databases is allowed when justified by the legitimate purpose of AI systems. This provides flexibility to data controllers.

– Data Reuse: Reuse of databases, including publicly available data, is permissible for AI training, provided the data was collected lawfully and the purpose of reuse aligns with the initial purpose of data collection.
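
To make the data minimisation point concrete in engineering terms, the sketch below keeps only the fields a hypothetical training purpose actually needs and replaces the direct identifier with a salted pseudonym. It is a minimal illustration under assumed names (the record fields, the purpose, and the minimise helper are all invented); CNIL’s guidance does not prescribe any particular implementation.

```python
import hashlib

# Hypothetical raw record; the field names are invented for illustration only.
raw_record = {
    "customer_id": "C-1042",
    "full_name": "Jane Doe",
    "email": "jane@example.com",
    "purchase_amount": 39.90,
    "purchase_category": "books",
}

# Fields actually needed for the (hypothetical) purpose: training a demand-forecasting model.
FIELDS_NEEDED = {"purchase_amount", "purchase_category"}

def minimise(record: dict, salt: str) -> dict:
    """Keep only the fields required for the stated purpose and pseudonymise the identifier."""
    kept = {key: value for key, value in record.items() if key in FIELDS_NEEDED}
    # A salted hash stands in for the direct identifier; the salt must be stored separately and securely.
    digest = hashlib.sha256((salt + record["customer_id"]).encode("utf-8")).hexdigest()
    kept["pseudonym"] = digest[:16]
    return kept

if __name__ == "__main__":
    print(minimise(raw_record, salt="replace-with-a-secret-salt"))
```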

Additionally, CNIL’s guidance covers various other topics, including purpose definition, data protection impact assessments (DPIAs), controllership determination, choice of legal basis, and privacy by design.

This guidance serves as a valuable resource for businesses and organizations involved in AI systems, not only in France but in any jurisdiction where the GDPR applies. It emphasizes that AI development and privacy can coexist, provided there is robust governance and careful oversight.

Given that CNIL has announced two more guidance sets, AI stakeholders should stay vigilant for forthcoming directives to address evolving challenges in the AI landscape, particularly regarding personal data minimization and retention.

Additionally, as the dynamic landscape of AI and GDPR compliance is navigated, insights from other national data protection authorities are eagerly awaited. The ongoing dialogue revolves around striking the right equilibrium between innovation and data protection—a balancing act that holds the potential to benefit both progress and individual liberties.

An Irish Blessing

May the road rise to meet you,
May the wind be always at your back.
May the sun shine warm upon your face,
The rains fall soft upon your fields.
And until we meet again,
May God hold you in the palm of his hand.

May God be with you and bless you:
May you see your children’s children.
May you be poor in misfortune,
rich in blessings.
May you know nothing but happiness
From this day forward.

May the road rise up to meet you.
May the wind be always at your back.
May the warm rays of sun fall upon your home,
And may the land of a friend always be near.

May green be the grass you walk on,
May blue be the skies above you,
May pure be the joys that surround you,
May true be the hearts that love you.

European Parliament Advances Artificial Intelligence Act

In a significant development last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. With a strong majority of 499 votes in favor, 28 against, and 93 abstentions, the Parliament has set the stage for discussions with EU member states to finalize the regulatory framework governing AI.

The proposed regulations aim to ensure that AI technologies developed and used within Europe align with EU rights and values, encompassing vital aspects such as human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The forthcoming rules adopt a risk-based approach and impose obligations on both AI providers and deployers based on the potential risks associated with the AI systems. More specifically, the legislation identifies specific AI practices that will be prohibited due to their unacceptable risks. These include social scoring, which involves categorizing individuals based on their social behavior or personal characteristics.

Moreover, MEPs expanded the list to incorporate bans on intrusive and discriminatory applications of AI, such as real-time remote biometric identification in public spaces and emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Recognizing the need for enhanced precautions, the Parliament also broadened the classification of high-risk AI applications. This category will now encompass AI systems that pose a significant risk of harm to people’s health, safety, fundamental rights, or the environment. Additionally, AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with more than 45 million users, will be subject to the high-risk classification.

Furthermore, to ensure responsible use and accountability, providers of foundation models, a rapidly evolving area within AI, will be required to assess and mitigate potential risks related to health, safety, fundamental rights, the environment, democracy, and the rule of law. Before releasing their models in the EU market, these providers must register their models in the EU database. Generative AI systems based on such models, including ChatGPT, will need to comply with transparency requirements, disclose AI-generated content, and implement safeguards against generating illegal content. Additionally, detailed summaries of copyrighted data used for training purposes will need to be made publicly available.

Recognizing the importance of fostering AI innovation while safeguarding citizens’ rights, MEPs have also introduced exemptions for research activities and AI components provided under open-source licenses. Moreover, the legislation encourages the establishment of regulatory sandboxes, which are real-life environments created by public authorities to test AI technologies before their deployment.

The new regulations aim to empower citizens by granting them the right to file complaints regarding AI systems. Furthermore, individuals will have the right to receive explanations about decisions made by high-risk AI systems that significantly impact their fundamental rights. The role of the EU AI Office will also undergo reforms, equipping it with the responsibility to monitor the implementation of the AI rulebook.

In conclusion, the proposed regulations set clear boundaries for prohibited AI practices and establish obligations for high-risk AI applications. Moreover, they strike a balance by supporting innovation through exemptions and regulatory sandboxes while prioritizing citizen rights and accountability. As discussions continue with EU member states, the Parliament’s focus on protecting rights and strengthening AI’s regulatory framework paves the way for a future where AI technologies align with EU values and leave a positive footprint on society.

Trademarks, Public Policy and Principles of Morality

MUNIA is a registered European Union Trademark (EUTM: 016305369) owned by Bodega ViñaGuareña, a Spanish winemaker producing high-quality wines near Salamanca. As of today, the trademark enjoys protection across the 27 member states of the European Union, although its wines have not yet reached Greek retail stores.

But can a sign with an objectionable meaning be registered as a European Union Trademark (EUTM)?

Not always. Article 7(1)(f) of Regulation 2017/1001 (EU) (EUTMR) provides for the refusal of trademark applications and the invalidation of registrations already effected, where trademarks are “contrary to public policy or to accepted principles of morality”.

The same ground for refusal appears in Article 4(1)(f) of the Trade Mark Directive, which has been transposed verbatim into Greek law by means of Article 4 of Law 4679/2020.

The wording of the above refusal ground is very broad and could create legal tensions, as the EU trademark system is unitary in character, whereas both moral principles and the requirements of public policy may vary from country to country and evolve over time.

As a result, an objection against an EU trademark application in any member state can defeat the entire application, since under Article 7(2) EUTMR an application can be rejected even if the grounds for refusal exist only in part of the European Union.

For the sake of uniformity, the EUIPO Boards of Appeal published a Case-law Research Report in October 2021 that sets out general principles for the assessment of such applications.

Some of the most notable examples of signs assessed in recent years, as summarised in that report, are the following:

In SULA (vulgar for ‘penis’ in Romanian), the Board rejected the application, confirming that the goods applied for (milk and milk derivatives) did not dispel, but in certain cases even reinforced, the link with the sexual connotation.

Similarly, Kona is a subcompact crossover SUV produced by the South Korean manufacturer Hyundai. Spelt differently, the word ‘cona’ is vulgar for ‘vagina’ in Portuguese, and the Board therefore considered the sign to be an offensive vulgar expression for the Portuguese public, notwithstanding that the goods applied for were ‘automobiles’.

By contrast, in REVA The electriCity Car, EUIPO had found back in 2006 that, in the context of electric cars and in combination with the English words ‘The ElectriCity Car’, the Finnish public would not consider the expression ‘reva’ (vulgar for ‘vagina’ in Finnish) to be intentionally abusive, but rather an unfortunate choice of brand of foreign origin. EUIPO held in that case that “from time to time, the general public encounters words on imported goods and services which, if used conversationally in its own language, might be found shocking. Nevertheless, they are understood for what they are, namely as neutral foreign words carrying an unfortunate meaning in the native tongue”.

The Board also allowed the registration of the trademark CUR (Romanian slang for ‘butt’), stating that the mark would not be found offensive in relation to IT-related specialised services, but would rather be seen as “a slightly embarrassing or even humorous example of how English-speaking undertakings can occasionally commit a linguistic ‘faux pas’ when selling their branded products globally”. The fact that the word did not address anybody in particular was also considered decisive in the assessment.

That was not the case in PAKI, however, where the General Court confirmed the Board’s assessment that, given the racist and degrading meaning of the word for people originating from Pakistan and residing in the United Kingdom, the sign had to be refused registration irrespective of the goods and services applied for.

But would the above rejections constitute a violation of the respective applicant’s freedom of expression, enshrined in Article 10 ECHR and Article 11 of the Charter of Fundamental Rights of the European Union, or even a violation of their freedom to conduct a business pursuant to Article 16 of the Charter of Fundamental Rights of the European Union?

Under settled case-law, the refusal of a trademark application does not limit the applicant’s freedom of expression. In fact, the General Court has pointed out that it is not necessary to register a sign for it to be used for commercial purposes and that the goal of Article 7(1)(f) EUTMR is not to filter out signs whose use in commerce must at all costs be prevented.

In that sense, when EUIPO declared the trademark “BOY LONDON” invalid on the grounds that it evoked Nazi symbolism and was, therefore, contrary to the accepted principles of morality, it reiterated that the application of Article 7(1)(f) EUTMR does not constrain anyone’s freedom of expression, because the applicant is not prevented from using the sign but is simply refused its registration.