Not Just Watching the Dog

In Decision 21/2025, the Hellenic Data Protection Authority (HDPA) revisits a recurring misconception: that the General Data Protection Regulation (GDPR) does not apply to private households. The case involved a couple who operated a restaurant and lived on the same premises, which they monitored using a set of security cameras. A neighbouring property owner filed a complaint after discovering that at least one of the cameras recorded not only the couple’s own premises but also his adjoining land and a portion of a public street.

The HDPA reviewed the footage and found that the cameras included a rotating surveillance device with fields of view extending beyond the private domain. Despite the couple’s claim that one camera merely recorded their stable, the evidence suggested otherwise.

The Authority ruled that this type of surveillance no longer falls within the GDPR’s limited household exemption. Whenever monitoring captures public space or third-party property, it triggers full compliance obligations: lawful basis under Article 6, transparency under Article 12, data minimisation, and above all, respect for the rights of data subjects under Articles 15 et seq. GDPR.

In the operative part of the Decision, each of the two individuals was fined a total of €3,000, comprising €2,000 for infringing the principles of lawfulness, purpose limitation, and accountability under Article 5 GDPR, and €1,000 for failing to comply with the data subject's right of access under Article 15 GDPR.

But what about domestic staff? Although the facts of the case centred on a neighbour, the ruling serves as a strong reminder for private individuals who use surveillance tools to monitor babysitters, cleaners, gardeners, or other domestic workers in their household. Even in one's own home, recording another person, particularly in the context of a working relationship, constitutes data processing.

This means that any surveillance carried out within a household must have a clearly documented legal basis, such as freely given consent or a legitimate interest that can be properly justified. The monitoring must be proportionate to its purpose, limited in scope, and objectively necessary. The person being monitored must be informed in a transparent way, and their rights, including access and objection, must be fully respected. Any recordings must be securely stored, with access strictly controlled.

Crucially, when private individuals monitor third parties with whom they are contractually related, they are considered data controllers under Article 4 par. 7 GDPR. Simply being a private household does not exempt one from compliance.

If a nanny can be dismissed for breaching trust, then the same standard should apply to employers who secretly monitor them without a valid legal basis and without informing them, as the law requires.

AI Act, Easily Explained

On July 12, 2024, the European Union's Artificial Intelligence Act, Regulation (EU) 2024/1689 (EU AI Act), was published in the Official Journal of the European Union.

The AI Act is a legislative framework that seeks to establish clear guidelines and standards for the development, deployment, and use of AI systems across the European Union. The primary objectives of this regulation are to promote innovation, protect fundamental rights, and build trust in AI systems among users and stakeholders. By setting out stringent requirements and obligations, the AI Act aims to mitigate risks associated with AI technologies while fostering a conducive environment for technological advancement and ethical use.

Scope and Definitions

The AI Act applies to a broad range of stakeholders, including providers, deployers, and importers of AI systems within the EU, as well as entities outside the EU whose AI systems affect individuals within the Union. The regulation adopts a broad, technology-neutral definition of an 'AI system': a machine-based system that operates with varying levels of autonomy and infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions. This comprehensive approach ensures that the regulation remains relevant and effective as AI technologies and their applications evolve.

Risk-Based Classification

One of the most significant aspects of the AI Act is its risk-based classification of AI systems. The regulation categorizes AI applications into four distinct risk levels: unacceptable risk, high risk, limited risk, and minimal risk.

AI systems that fall under the category of unacceptable risk are those that pose a threat to safety, livelihoods, or fundamental rights. Such applications are outright banned under the regulation. Examples of unacceptable risk AI include social scoring systems used by governments, which can lead to discriminatory practices and infringements on individual freedoms.

High-risk AI systems are those used in critical sectors such as healthcare, transportation, and finance. These systems are subject to stringent requirements to ensure their safety, reliability, and ethical use. Providers of high-risk AI systems must implement robust risk management frameworks, use high-quality datasets to ensure accuracy and fairness, and maintain comprehensive documentation to demonstrate compliance.

Limited risk AI systems are those that present lower risks but still require certain transparency obligations. For instance, chatbots must disclose their non-human nature to users. This ensures that users are aware they are interacting with an AI system and can make informed decisions based on this knowledge.

Minimal risk AI systems, such as spam filters, are largely exempt from the regulation. However, providers are still encouraged to adhere to best practices and ethical guidelines to maintain trust and transparency.
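To make the four tiers easier to compare at a glance, here is a minimal Python sketch mapping a few example systems to their likely tier. The examples and one-line summaries are illustrative assumptions drawn from this article, not an official classification from the Act's annexes.

```python
# Illustrative only: tier assignments paraphrase the examples in this
# article and are not an official classification under the AI Act.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements: risk management, data quality, documentation"
    LIMITED = "transparency duties, e.g. chatbots must disclose their AI nature"
    MINIMAL = "largely exempt; voluntary best practices encouraged"

EXAMPLES = {
    "government social scoring system": RiskTier.UNACCEPTABLE,
    "AI triage tool in healthcare": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} ({tier.value})")
```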

Requirements for High-Risk AI Systems

High-risk AI systems are subject to a rigorous set of requirements designed to ensure their safety, transparency, and ethical use. Providers must implement a comprehensive risk management system to continuously evaluate and mitigate potential risks associated with their AI systems. This involves conducting regular assessments, monitoring the system’s performance, and taking corrective actions when necessary.

Data governance is another critical requirement for high-risk AI systems. Providers must ensure that their AI systems are trained on high-quality datasets that are representative, accurate, and free from biases. This helps to prevent discriminatory outcomes and ensures that the AI system performs reliably across different contexts and populations.

Transparency and documentation are also crucial for high-risk AI systems. Providers must maintain detailed documentation that outlines the AI system’s design, capabilities, limitations, and intended uses. This information must be made available to regulatory authorities and, where applicable, to users. Clear and comprehensive documentation helps to build trust and enables stakeholders to understand the AI system’s functioning and potential impacts.

Human oversight is an essential component of the AI Act’s requirements for high-risk AI systems. Providers must establish mechanisms to ensure that human operators can effectively monitor and control the AI system. This includes the ability to intervene in the system’s operation when necessary to prevent harmful outcomes. Human oversight helps to ensure that AI systems are used responsibly and that their actions align with ethical and legal standards.
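The four requirement areas above lend themselves to a simple internal checklist. The sketch below is a hypothetical illustration of such a self-assessment; the field names are shorthand for this article's summary, not terms defined in the AI Act.

```python
# Hypothetical self-assessment for a high-risk AI system, condensing
# the four requirement areas discussed above into boolean checks.
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    risk_management_system: bool   # continuous risk evaluation and mitigation
    data_governance: bool          # representative, accurate, bias-checked datasets
    technical_documentation: bool  # design, capabilities, limitations, intended use
    human_oversight: bool          # operators can monitor and intervene

    def gaps(self) -> list[str]:
        """Return the requirement areas not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = HighRiskChecklist(True, True, False, True)
print("Outstanding items:", status.gaps())  # -> ['technical_documentation']
```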

Transparency Obligations

Transparency is a cornerstone of the AI Act, and the regulation imposes specific obligations to ensure that users are informed when interacting with AI systems. Providers must clearly disclose when users are engaging with an AI system, especially if the system influences their decisions or perceptions. This transparency requirement is crucial in maintaining trust and enabling users to make informed choices.

Furthermore, AI systems that generate deepfakes or synthetic media must disclose their artificial nature. This helps to prevent misinformation and ensures that users are aware of the AI system’s capabilities and limitations. Transparency in AI systems fosters accountability and helps to prevent deceptive practices that could undermine public trust in AI technologies.

Compliance and Enforcement

The AI Act establishes a layered framework for compliance and enforcement. National market surveillance authorities supervise AI systems placed on their markets, the Commission's AI Office oversees general-purpose AI models, and the European Artificial Intelligence Board coordinates the national authorities to ensure the regulation is applied consistently and effectively across the EU.

Non-compliance with the AI Act can result in significant penalties. For the most serious infringements, such as engaging in prohibited AI practices, fines can reach €35 million or 7% of total worldwide annual turnover, whichever is higher, with lower ceilings for other breaches. This stringent enforcement mechanism underscores the importance of compliance and encourages providers to adhere to the highest standards of safety, transparency, and ethical use.
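As a worked example of how this ceiling operates, the sketch below computes the cap for the most serious infringements using the "higher of a flat amount or a turnover share" formula described above; the turnover figure is invented.

```python
# Worked example of the AI Act's headline penalty cap for the most
# serious infringements: the higher of EUR 35 million and 7% of total
# worldwide annual turnover. The turnover input is illustrative.
def max_fine(worldwide_turnover_eur: float,
             flat_cap_eur: float = 35_000_000,
             turnover_share: float = 0.07) -> float:
    """Upper bound of the fine: whichever cap is higher."""
    return max(flat_cap_eur, turnover_share * worldwide_turnover_eur)

print(f"EUR {max_fine(2_000_000_000):,.0f}")  # 7% of EUR 2bn = EUR 140,000,000
```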

Considerations for Businesses

As the AI Act introduces comprehensive requirements for the use of AI systems, businesses must take proactive steps to ensure compliance. The first step is to assess their AI systems and determine their risk classification. High-risk AI systems, in particular, will require significant adjustments to meet the regulation’s stringent requirements.

Businesses must also focus on enhancing transparency and documentation. Keeping detailed records of the AI system’s design, capabilities, and limitations is essential for demonstrating compliance and building trust with users and regulatory authorities. Ensuring transparency in AI interactions, particularly in disclosing the non-human nature of AI systems, is crucial in maintaining user trust and preventing deceptive practices.

Moreover, businesses should prioritize the ethical use of AI. Beyond mere compliance, fostering ethical practices in AI development and deployment can provide a competitive advantage. By demonstrating a commitment to ethical AI, businesses can build a positive reputation and gain the trust of customers, partners, and stakeholders.

Conclusion

The AI Act represents a significant milestone in the regulation of AI technologies, balancing the need for innovation with the imperative to protect fundamental rights and ensure ethical use. By understanding and complying with this new regulation, businesses can not only avoid penalties but also gain a competitive edge by fostering trust and reliability in their AI systems.

Unraveling Automated Decision-Making: Schufa’s Impact and Implications

On 7 December 2023, the Court of Justice of the European Union (CJEU) delivered its judgment in the Schufa case, involving SCHUFA Holding AG, Germany's leading credit rating agency, which holds data on nearly 70 million individuals.

Schufa provides credit scores that are relied upon by financial service providers, retailers, telecom companies, and utility firms. In the case at hand, a German resident had their loan application rejected by a bank based on a credit score assigned by Schufa.

The individual contested this decision, seeking information about Schufa's automated decision-making processes under Article 15(1)(h) GDPR, which grants the right of access to such information.

Schufa argued that it was not responsible for the decision itself, asserting its role was limited to producing an automated score, leaving the actual decision to the third-party bank.

However, the court disagreed with Schufa's stance. It held that the creation of the credit score itself constitutes a relevant automated decision under Article 22 GDPR, challenging the belief that only the ultimate decision-maker, i.e. the bank, engages in automated decision-making. In reaching that conclusion, the court pointed to the score's "determining role" in the credit decision, adopting a broad interpretation of the term 'decision'.

Companies employing algorithms for risk scores or similar outputs, such as identity verification and fraud detection, may be concerned about the potential impact of this judgment. Many businesses assume customers bear regulatory risks associated with decisions based on their outputs. However, careful consideration is necessary to distinguish business models from those in the Schufa case.

For example, companies should assess the extent to which customers rely on the provided output when making decisions. If the output is one of many factors considered, and especially if it holds moderate significance, exceptions to Article 22 GDPR (explicit consent or contractual necessity) should be explored.

Companies must further evaluate if the ultimate decision has a legal or comparatively significant effect. In cases where the decision’s impact is limited, exceptions under Article 22 GDPR may apply.
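The questions in the last two paragraphs can be condensed into a rough triage. The sketch below is a deliberate oversimplification for illustration: the inputs are assumptions, and a real Article 22 assessment is a legal judgement, not a boolean function.

```python
# Rough, non-authoritative triage of the Article 22 GDPR questions
# discussed above. Real assessments require legal analysis.
def article_22_triage(output_is_determining: bool,
                      legal_or_significant_effect: bool,
                      explicit_consent: bool,
                      contractual_necessity: bool) -> str:
    if not (output_is_determining and legal_or_significant_effect):
        return "Article 22 likely not engaged"
    if explicit_consent or contractual_necessity:
        return "Engaged, but an Article 22(2) exception may apply"
    return "Engaged: the prohibition on solely automated decisions applies"

# A Schufa-like scenario: determining score, loan refusal, no exception.
print(article_22_triage(True, True, False, False))
```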

The Schufa judgment coincided with the conclusion of the trilogue process around the EU AI Act, making it especially relevant for businesses developing AI-enabled solutions in high-risk areas, such as credit decisions. The ruling is poised to influence practices in the evolving landscape of automated decision-making throughout 2024, as this remains an uncharted area for national and EU legislators.


Unlocking GDPR’s Synergy with AI: Insights from CNIL’s Guidance

The intersection of artificial intelligence (AI) and the General Data Protection Regulation (GDPR) has long been a subject of debate and concern. On one hand, AI presents remarkable advancements and transformative potential in various industries. On the other hand, GDPR places stringent demands on how personal data is collected, processed, and protected.

The question that arose early on is whether AI innovation and GDPR compliance can coexist harmoniously. In response to these complexities, the French data protection authority, CNIL, took a significant step by releasing official guidance addressing the intricate relationship between AI development and GDPR compliance. This guidance responds to concerns raised by AI stakeholders during a call for contributions initiated on 28 July 2023.

CNIL’s primary aim is to reassure the industry by releasing a set of guidelines that emphasize the compatibility of AI system development with privacy considerations. In their own words, “[t]he development of AI systems is compatible with the challenges of privacy protection. Moreover, considering this imperative will lead to the emergence of devices, tools, and applications that are ethical and aligned with European values. It is under these conditions that citizens will place their trust in these technologies”.

The guidance comprises seven "how-to" sheets providing valuable insights into applying core GDPR principles during the development phase of AI systems. Here are some key takeaways:

– Purpose Limitation: AI systems using personal data must be developed and used for specific, legitimate purposes. This means careful consideration of the AI system’s purpose before collecting or using personal data and avoiding overly generic descriptions. In cases where the purpose cannot be precisely determined at the development stage, a clear description of the type of system and its main possible functionalities is required.

– Data Minimization: Only personal data essential to the AI system's purpose should be collected and used. Avoid unnecessary data collection, and implement measures to purge unneeded personal data, even from large databases (see the sketch after this list).

– Data Retention: Extended data retention for training databases is allowed when justified by the legitimate purpose of AI systems. This provides flexibility to data controllers.

– Data Reuse: Reuse of databases, including publicly available data, is permissible for AI training, provided the data was collected lawfully and the purpose of reuse aligns with the initial purpose of data collection.
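To make the data-minimisation sheet concrete, here is a minimal sketch of purpose-based field filtering for a training record. The purpose name and fields are invented for the example.

```python
# Illustrative data minimisation: keep only the attributes needed for
# the declared purpose and drop everything else. All names are made up.
PURPOSE_FIELDS = {
    "churn_prediction": {"tenure_months", "monthly_usage", "plan_type"},
}

def minimise(record: dict, purpose: str) -> dict:
    """Strip every attribute not required for the declared purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

raw = {"name": "Jane Doe", "email": "jane@example.com",
       "tenure_months": 18, "monthly_usage": 42.5, "plan_type": "pro"}
print(minimise(raw, "churn_prediction"))
# -> {'tenure_months': 18, 'monthly_usage': 42.5, 'plan_type': 'pro'}
```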

Additionally, CNIL's guidance covers various other topics, including purpose definition, data protection impact assessments (DPIA), controllership determination, choice of legal basis, and privacy by design.

This guidance serves as a valuable resource for businesses and organizations involved in AI systems, not only in France but in any jurisdiction subject to the GDPR. It emphasizes that AI development and privacy can coexist with robust governance and oversight.

Given that CNIL has announced two more guidance sets, AI stakeholders should stay vigilant for forthcoming directives to address evolving challenges in the AI landscape, particularly regarding personal data minimization and retention.

Additionally, as organisations navigate the dynamic landscape of AI and GDPR compliance, insights from other national data protection authorities are eagerly awaited. The ongoing dialogue revolves around striking the right equilibrium between innovation and data protection, a balancing act that holds the potential to benefit both progress and individual liberties.

European Parliament Advances Artificial Intelligence Act

In a significant development last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. With a strong majority of 499 votes in favor, 28 against, and 93 abstentions, the Parliament has set the stage for discussions with EU member states to finalize the regulatory framework governing AI.

The proposed regulations aim to ensure that AI technologies developed and used within Europe align with EU rights and values, encompassing vital aspects such as human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The forthcoming rules adopt a risk-based approach and impose obligations on both AI providers and deployers based on the potential risks associated with the AI systems. More specifically, the legislation identifies specific AI practices that will be prohibited due to their unacceptable risks. These include social scoring, which involves categorizing individuals based on their social behavior or personal characteristics.

Moreover, MEPs expanded the list to incorporate bans on intrusive and discriminatory applications of AI, such as real-time remote biometric identification in public spaces and emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Recognizing the need for enhanced precautions, the Parliament also emphasized the classification of high-risk AI applications. This category will now encompass AI systems that pose significant harm to people's health, safety, fundamental rights, or the environment. Additionally, AI systems used to influence voters and election outcomes, as well as recommender systems used by social media platforms with over 45 million users, will be subject to the high-risk classification.

Furthermore, to ensure responsible use and accountability, providers of foundation models, a rapidly evolving area within AI, will be required to assess and mitigate potential risks related to health, safety, fundamental rights, the environment, democracy, and the rule of law. Before releasing their models in the EU market, these providers must register their models in the EU database. Generative AI systems based on such models, including ChatGPT, will need to comply with transparency requirements, disclose AI-generated content, and implement safeguards against generating illegal content. Additionally, detailed summaries of copyrighted data used for training purposes will need to be made publicly available.

Recognizing the importance of fostering AI innovation while safeguarding citizens’ rights, MEPs have also introduced exemptions for research activities and AI components provided under open-source licenses. Moreover, the legislation encourages the establishment of regulatory sandboxes, which are real-life environments created by public authorities to test AI technologies before their deployment.

The new regulations aim to empower citizens by granting them the right to file complaints regarding AI systems. Furthermore, individuals will have the right to receive explanations about decisions made by high-risk AI systems that significantly impact their fundamental rights. The role of the EU AI Office will also undergo reforms, equipping it with the responsibility to monitor the implementation of the AI rulebook.

In conclusion, the proposed regulations set clear boundaries for prohibited AI practices and establish obligations for high-risk AI applications. Moreover, they strike a balance by supporting innovation through exemptions and regulatory sandboxes while prioritizing citizen rights and accountability. As discussions continue with EU member states, the Parliament's focus on protecting rights and enhancing AI's regulatory framework paves the way for a future where AI technologies align with EU values and leave a positive footprint on society.

Trademarks, Public Policy and Principles of Morality

MUNIA is a registered European Union Trademark (EUTM: 016305369) owned by Bodega ViñaGuareña, a Spanish winemaker producing high-quality wines near Salamanca. As of today, the trademark enjoys protection across all 27 member states of the European Union, although their wines have not yet reached Greek retail stores.

But can a sign with an objectionable meaning be registered as a European Union Trademark (EUTM)?

Not always. Article 7(1)(f) of Regulation 2017/1001 (EU) (EUTMR) provides for the refusal of trademark applications and the invalidation of registrations already effected, where trademarks are “contrary to public policy or to accepted principles of morality”.

The same provision is reflected in Article 4(1)(f) of the Trade Mark Directive, which has been transposed verbatim into Greek law by means of Article 4 of Law 4679/2020.

The wording of the above refusal ground is very broad and could create legal tensions, as the EU trademark system is unitary in character, whereas both moral principles and the requirements of public policy may vary from country to country and evolve over time.

As a result, an objection against an EU trademark application in any member state can defeat the entire application, since under Article 7(2) EUTMR an application can be rejected even if the grounds for refusal exist in only part of the European Union.

For the sake of uniformity, the EUIPO Boards of Appeal published a Case-law Research Report in October 2021 that establishes general principles for the assessment of applications.

Some of the most notable examples of signs assessed in recent years, as summarised in the above report, are the following:

In SULA (vulgar for 'penis' in Romanian), the Board rejected the application, confirming that the goods applied for (milk and derivatives) did not avoid, but in certain cases even enhanced, the link with such a sexual connotation.

Similarly, in KONA, the name of a subcompact crossover SUV produced by the South Korean manufacturer Hyundai, registration was refused: differently spelt, the word 'cona' is vulgar for 'vagina' in Portuguese, and in this respect the Board considered the sign to be an offensive vulgar expression for the Portuguese public, notwithstanding that the goods applied for were 'automobiles'.

By contrast, in REVA The electriCity Car, EUIPO found back in 2006 that, in the context of electric cars and in combination with the English words 'The ElectriCity Car', the Finnish public would not consider the expression 'reva' (vulgar for 'vagina' in Finnish) to be intentionally abusive, but rather an unfortunate choice of brand of foreign origin. EUIPO held in that case that "from time to time, the general public encounters words on imported goods and services which, if used conversationally in its own language, might be found shocking. Nevertheless, they are understood for what they are, namely as neutral foreign words carrying an unfortunate meaning in the native tongue".

The Board also allowed the registration of the trademark CUR (Romanian slang for 'butt'), stating that the mark would not be found offensive in relation to IT-related specialised services, but rather seen as "a slightly embarrassing or even humorous example of how English-speaking undertakings can occasionally commit a linguistic 'faux pas' when selling their branded products globally". Moreover, the fact that the word did not address anybody in particular was also considered decisive in the Board's assessment.

That was not the case in PAKI, however, where the General Court confirmed the Board's assessment that, given the racist and degrading meaning of the word for people originating from Pakistan and residing in the United Kingdom, the sign had to be refused registration irrespective of the goods and services applied for.

But would the above rejections constitute a violation of the respective applicant’s freedom of expression, enshrined in Article 10 ECHR and Article 11 of the Charter of Fundamental Rights of the European Union, or even a violation of their freedom to conduct a business pursuant to Article 16 of the Charter of Fundamental Rights of the European Union?

Under settled case-law, the refusal of a trademark application does not limit the applicant's freedom of expression. In fact, the General Court has pointed out that it is not necessary to register a sign for it to be used for commercial purposes, and that the goal of Article 7(1)(f) EUTMR is not to filter out signs whose use in commerce must at all costs be prevented.

In that sense, when EUIPO declared the trademark "BOY LONDON" invalid on the grounds that it evoked Nazi symbolism and was, therefore, contrary to the accepted principles of morality, it reiterated that the application of Article 7(1)(f) EUTMR does not constrain anybody's freedom of expression, because the applicant is not prevented from using the sign but is simply refused its registration.

European Union Reins in Big Tech

On Tuesday, 5 July 2022, the European Parliament held the final vote on the new Digital Services Act (DSA) and Digital Markets Act (DMA), two bills that aim to address the societal and economic effects of the tech industry by setting clear standards for how tech companies operate and provide services in the EU, in line with the EU's fundamental rights and values.

What is illegal offline, should be illegal online

The Digital Services Act (DSA) sets clear obligations for digital service providers, such as social media or marketplaces, to tackle the spread of illegal content, online disinformation and other societal risks. These requirements are proportionate to the size and risks platforms pose to society.

The new obligations include:

    • New measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights, including the freedom of expression and data protection;
    • Strengthened traceability and checks on traders in online marketplaces to ensure products and services are safe, including efforts to perform random checks on whether illegal content resurfaces;
    • Increased transparency and accountability of platforms, for example by providing clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions;
    • Bans on misleading practices and certain types of targeted advertising, such as those targeting children and ads based on sensitive data. The so-called “dark patterns” and misleading practices aimed at manipulating users’ choices will also be prohibited.

Very large online platforms and search engines (with 45 million or more monthly users), which present the highest risk, will have to comply with stricter obligations, enforced by the Commission. These include preventing systemic risks (such as the dissemination of illegal content, adverse effects on fundamental rights, on electoral processes and on gender-based violence or mental health) and being subject to independent audits. These platforms will also have to provide users with the choice to not receive recommendations based on profiling. They will also have to facilitate access to their data and algorithms to authorities and vetted researchers.

A list of “do’s” and “don’ts” for Gatekeepers

The Digital Markets Act (DMA) sets obligations for large online platforms acting as "gatekeepers" (platforms whose dominant online position makes them hard for consumers to avoid) on the digital market, to ensure a fairer business environment and more services for consumers.

To prevent unfair business practices, those designated as gatekeepers will have to:

    • allow third parties to inter-operate with their own services, meaning that smaller platforms will be able to request that dominant messaging platforms enable their users to exchange messages, send voice messages or files across messaging apps. This will give users greater choice and avoid the so-called “lock-in” effect where they are restricted to one app or platform;
    • allow business users to access the data they generate in the gatekeeper’s platform, to promote their own offers and conclude contracts with their customers outside the gatekeeper’s platforms.

Gatekeepers can no longer:

    • Rank their own services or products more favourably than those of third parties on their platforms (self-preferencing);
    • Prevent users from easily un-installing any pre-loaded software or apps, or using third-party applications and app stores;
    • Process users’ personal data for targeted advertising, unless consent is explicitly granted.

Sanctions

To ensure that the new DMA rules are properly implemented and keep pace with the dynamic digital sector, the Commission can carry out market investigations. If a gatekeeper does not comply with the rules, the Commission can impose fines of up to 10% of its total worldwide turnover in the preceding financial year, or up to 20% in case of repeated non-compliance.
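As a quick worked example of these ceilings, the sketch below computes the maximum fine for a hypothetical gatekeeper at the 10% and 20% rates; the turnover figure is invented.

```python
# Worked example of the DMA penalty ceilings described above: up to 10%
# of total worldwide turnover, or up to 20% for repeated non-compliance.
def dma_fine_cap(worldwide_turnover_eur: float, repeated: bool) -> float:
    rate = 0.20 if repeated else 0.10
    return rate * worldwide_turnover_eur

turnover = 50_000_000_000  # EUR 50bn, hypothetical gatekeeper
print(f"First infringement cap:  EUR {dma_fine_cap(turnover, False):,.0f}")
print(f"Repeat infringement cap: EUR {dma_fine_cap(turnover, True):,.0f}")
```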

Next Steps

Once formally adopted by the Council in July (DMA) and September (DSA), both acts will be published in the EU Official Journal and enter into force twenty days after publication.

The DSA will be directly applicable across the EU and will apply fifteen months after its entry into force or from 1 January 2024, whichever comes later. As regards the obligations for very large online platforms and very large online search engines, the DSA will apply earlier: four months after they have been designated as such by the Commission.

The DMA will start to apply six months following its entry into force. The gatekeepers will have a maximum of six months after they have been designated to comply with the new obligations.

Source: European Parliament

The EU Digital Markets Act

The EU has recently unveiled its much-anticipated landmark proposal for a Digital Markets Act (DMA). Twenty years after the introduction of the eCommerce Directive, the DMA envisages a new legal basis for competition and platform regulation, covering everything from content moderation to app stores, search, and self-preferencing.

The DMA introduces rules for platforms that act as “gatekeepers” in the digital sector. These are platforms that have a significant impact on the internal market, serve as an important gateway for business users to reach their customers, and which enjoy, or will foreseeably enjoy, an entrenched and durable position. This can grant them the power to act as private rule-makers and to function as bottlenecks between businesses and consumers.

With an eye mainly to US big-tech, the Digital Markets Act is set to prevent gatekeepers from imposing unfair conditions on businesses and consumers and ensure the openness of important digital services. Examples of these unfair conditions that gatekeepers sometimes impose on others include prohibiting businesses from accessing their own data when operating on these platforms, or situations where users are locked into a particular service and have limited options for migrating to alternative service providers.

Gatekeeper on the historic Banco Santander, Lisbon.

The enforcement system of the DMA is of particular importance, as the proposal does not seem to leave much space to national authorities. In fact, the European Commission is vested with extensive investigative powers (see Articles 19-21) and can impose fines and periodic penalty payments in case of non-compliance (Articles 26-27) of the same magnitude as in antitrust cases: up to 10% of annual turnover for fines and up to 5% of average daily turnover for periodic penalty payments.

In case of systematic non-compliance that has further strengthened or extended the gatekeeper's position, the Commission may impose behavioural or even structural remedies on the gatekeeper, including divestiture (Article 16). Structural remedies are a last-resort penalty and can be imposed only if there are no equally effective behavioural remedies. The European Commission may also issue interim measures (Article 22) and accept commitments offered by the gatekeeper (Article 23).

Together with the Digital Services Act, the DMA is oriented at providing better protection to consumers and to fundamental rights online, establishing a powerful transparency and accountability framework for online platforms and leading to fairer and more open digital markets.

Harmonised across the EU and directly applicable, the new rules will make it easier to provide digital innovations across borders, while ensuring the same level of protection to all citizens in the EU.

Further information can be retrieved from the Commission’s dedicated webpage.

New Rules to Improve Fairness within the Online Platform Economy

Over the past decade, online platforms (such as Shopify, Magento, or Etsy) have established their presence as important economic players, connecting economic actors and boosting efficiency while spurring innovation and new business models.

As of today, they play an important role in many industries, since they allow buyers and sellers of goods and services to trade and communicate with each other. At the same time, they create network effects, and raise new issues related to fairness, transparency, and market distortions.

This ecosystem is now regulated by means of Regulation 2019/1150 on online platform-to-business relationships (P2B Regulation).

The regulation, which has applied directly throughout the Union since 12 July 2020, introduced a set of transparency rules to be followed by online platforms in their relations with business users, in order to address unfair and potentially harmful contractual clauses and trading practices, as well as the lack of effective redress.

Its scope covers online intermediation services and online search engines provided, or offered to be provided, to business users and corporate website users, respectively, that have their place of establishment or residence in the Union and that, through those online intermediation services or online search engines, offer goods or services to consumers located in the Union, irrespective of the place of establishment or residence of the providers of those services and irrespective of the law otherwise applicable.

The key points covered by the regulation can be summarized as follows:

    • Terms and Conditions will have to be written in plain and intelligible language;
    • Business users will have to be informed of any modification of the Terms and Conditions;
    • Platforms will have to respect a reasonable notice period depending on the nature of the modification (the minimum is fixed at 15 days), unless a business user explicitly agrees to a shorter period;
    • Providers of online intermediation services will have to provide business users with the reasons for restricting or suspending individual products/ services;
    • In case of definitive termination of the online intermediation service offered, the platform will provide the business user concerned with a statement of reasons at least 30 days in advance;
    • The providers of these services have to formulate and publish general policies on what data generated through their services can be accessed, by whom and under what conditions;
    • Providers of online intermediation services as well as online search engines will be required to clearly inform businesses about the main parameters determining how goods and services are ranked;
    • Online search engines should be transparent about any preferential treatment they give to their own products and services offered through their search sites;
    • Providers of online intermediation services will be required to explain the use of contract clauses demanding the most favourable range or price of products and services offered by their professional users;
    • Online platforms will have to set up or have in place internal complaint-handling mechanisms (small enterprises with fewer than 50 staff members and an annual turnover of €10 million or less are exempt from this obligation; see the sketch after this list);
    • Business users will have access to out-of-court dispute settlement through easily accessible external mediators (the same small-enterprise exemption applies);
    • Representative organisations or associations will be able to defend businesses in courts against possible infringements of the proposed rules by online platforms or search engines.
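As a quick illustration of the small-enterprise carve-out mentioned in the list above, here is a minimal sketch; the thresholds paraphrase this summary and should be checked against the Regulation's exact definition before being relied upon.

```python
# Sketch of the small-enterprise exemption from the complaint-handling
# and mediation duties: fewer than 50 staff and annual turnover of
# EUR 10 million or less. Paraphrased thresholds, not legal advice.
def exempt_small_enterprise(staff_headcount: int,
                            annual_turnover_eur: float) -> bool:
    return staff_headcount < 50 and annual_turnover_eur <= 10_000_000

print(exempt_small_enterprise(30, 8_000_000))   # True: exempt
print(exempt_small_enterprise(120, 8_000_000))  # False: must comply
```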

Furthermore, an EU Observatory of the Online Platform Economy has been established to look into the current and emerging challenges and opportunities for the EU in the online economy. The observatory monitors online trends, the evolution of trading practices, and the development of national policies, in order to anticipate and solve issues arising in the online economy.

Hellenic Data Protection Authority’s Take on Law 4624/2019

Under the threat of hefty financial sanctions, Greece hastily enacted Law 4624/2019 (the "Greek GDPR Law") last summer, in order to align the domestic data protection framework with the GDPR. The Greek GDPR Law also provided specific rules on certain topics based on the GDPR's broad opening clauses, which permit EU member states such as Greece to enact national legislation.

Following a period of uncertainty, the Hellenic Data Protection Authority (“HDPA”) published Opinion 1/2020, whereby they reviewed certain key or contested aspects of the Greek GDPR Law and provided much needed clarity on their compatibility with the Regulation.

In fact, reiterating the Commission's guidance of 24 January 2018 on the direct application of the GDPR, the HDPA stressed that, when adapting their national legislation, Member States have to take into account the fact that any national measures which may create an obstacle to the direct applicability of the GDPR, and thereby jeopardise its simultaneous and uniform application throughout the EU, are contrary to Union law.

Repeating the text of regulations in national law, the HDPA opined, is also prohibited, unless such repetition is strictly necessary for the sake of coherence and in order to make national laws comprehensible to those to whom they apply. In fact, reproducing the text of the GDPR verbatim in national specification law should be exceptional and justified, and cannot be used to add conditions or interpretations to the text of the regulation. This was not the case, however, with the Greek GDPR Law, where several GDPR provisions were repeated verbatim and exceptions were introduced without any particular justification.

More particularly, the HDPA pointed out that the interpretation of the Regulation should be left to the European courts (meaning the national courts and ultimately the European Court of Justice) and not to the Member States' legislators. The national legislator, said the Authority, can therefore neither copy the GDPR text when this is not necessary in light of the criteria provided by the case law, nor interpret it or add conditions to the rules directly applicable under the GDPR. If it did so, commercial entities throughout the Union would again be faced with fragmentation and would not know which rules they have to obey.

In view of the above, the HDPA noted that it shall not apply Greek GDPR Law provisions which: (a) are deemed not in line with the GDPR, and/or (b) are not based on opening clauses that make it possible for Member States to lay down specific national arrangements.

As regards personal data of employees, in particular, the HDPA clarified that the national legislator is not allowed to introduce new grounds for lawful processing other than those already set out in Art. 6 GDPR. In fact, processing under the GDPR framework can be lawful only on the basis of one of six specified conditions set out in Article 6(1)(a) to (f). Identifying the appropriate legal basis is of essential importance and controllers must take into account the impact on data subjects’ rights when identifying the appropriate lawful basis so as to fully respect the principle of fairness.

In this context, the Authority stressed that Art. 6 par. 1 (b) GDPR, which the Greek legislator has chosen as the main legal ground for processing, may sometimes be unfit for the employment environment. In fact, activities such as the processing of biometric data, geolocation, monitoring of electronic media, whistleblowing policies etc. should instead be based on Art. 6 par. 1 (e) GDPR (processing necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller) or Art. 6 par. 1 (f) GDPR (processing necessary for the purposes of a legitimate interest). This way, employees are able to challenge individual processing activities and exercise their rights under the GDPR, without the terms of their employment contract being called into question.

The matters addressed in Opinion 1/2020 were not exhaustive, which is why the HDPA explicitly reserved judgment on the compatibility of all other Greek GDPR Law provisions that have not yet come under the spotlight.

In any event, it remains to be seen how the Greek GDPR Law provisions will be interpreted by Greek courts once challenged by stakeholders, that is, all those affected by the new rules: the business community and other organisations processing data, the public sector, and citizens. The dust has not settled yet, and the winds of data regulation keep blowing strongly.

Air (Hera orders Aeolus to release the winds) (Aeneid I) by Charles Dupuis (1718)