Unlocking GDPR’s Synergy with AI: Insights from CNIL’s Guidance

The intersection of artificial intelligence (AI) and the General Data Protection Regulation (GDPR) has long been a subject of debate and concern. On one hand, AI presents remarkable advancements and transformative potential in various industries. On the other hand, GDPR places stringent demands on how personal data is collected, processed, and protected.

The question that arose early on is whether AI innovation and GDPR compliance can coexist harmoniously. In response to these complexities, the French data protection authority, CNIL, took a significant step by releasing official guidance that addresses the intricate relationship between AI development and GDPR compliance. This guidance is a response to concerns raised by AI stakeholders during a call for contributions initiated on 28 July 2023.

CNIL’s primary aim is to reassure the industry by releasing a set of guidelines that emphasize the compatibility of AI system development with privacy considerations. In their own words, “[t]he development of AI systems is compatible with the challenges of privacy protection. Moreover, considering this imperative will lead to the emergence of devices, tools, and applications that are ethical and aligned with European values. It is under these conditions that citizens will place their trust in these technologies”.

The guidance comprises seven “how-to” sheets providing valuable insights into applying core GDPR principles during the development phase of AI systems. Here are some key takeaways:

– Purpose Limitation: AI systems using personal data must be developed and used for specific, legitimate purposes. This means careful consideration of the AI system’s purpose before collecting or using personal data and avoiding overly generic descriptions. In cases where the purpose cannot be precisely determined at the development stage, a clear description of the type of system and its main possible functionalities is required.

– Data Minimization: Only essential personal data for the AI system’s purpose should be collected and used. Avoid unnecessary data collection, and implement measures to purge unneeded personal data, even for large databases.

– Data Retention: Extended data retention for training databases is allowed when justified by the legitimate purpose of AI systems. This provides flexibility to data controllers.

– Data Reuse: Reuse of databases, including publicly available data, is permissible for AI training, provided the data was collected lawfully and the purpose of reuse aligns with the initial purpose of data collection.

Additionally, CNIL’s guidance covers various other topics, including defining the purpose, data protection impact assessments (DPIA), determining controllership, choosing a legal basis, and privacy by design.

This guidance serves as a valuable resource for businesses and organizations involved in AI systems, not only in France but also in any jurisdiction under the GDPR. It emphasizes that AI development and privacy can coexist with robust governance and content oversight.

Given that CNIL has announced two more guidance sets, AI stakeholders should stay vigilant for forthcoming directives to address evolving challenges in the AI landscape, particularly regarding personal data minimization and retention.

Additionally, as organizations navigate the dynamic landscape of AI and GDPR compliance, insights from other national data protection authorities are eagerly awaited. The ongoing dialogue revolves around striking the right equilibrium between innovation and data protection, a balancing act that holds the potential to benefit both progress and individual liberties.

European Parliament Advances Artificial Intelligence Act

In a significant development last week, the European Parliament adopted its negotiating position on the Artificial Intelligence (AI) Act. With a strong majority of 499 votes in favor, 28 against, and 93 abstentions, the Parliament has set the stage for discussions with EU member states to finalize the regulatory framework governing AI.

The proposed regulations aim to ensure that AI technologies developed and used within Europe align with EU rights and values, encompassing vital aspects such as human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The forthcoming rules adopt a risk-based approach and impose obligations on both AI providers and deployers based on the potential risks associated with the AI systems. More specifically, the legislation identifies specific AI practices that will be prohibited due to their unacceptable risks. These include social scoring, which involves categorizing individuals based on their social behavior or personal characteristics.

Moreover, MEPs expanded the list to incorporate bans on intrusive and discriminatory applications of AI, such as real-time remote biometric identification in public spaces and emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.

Recognizing the need for enhanced precautions, the Parliament also emphasized the classification of high-risk AI applications. This category will now encompass AI systems that pose significant harm to people’s health, safety, fundamental rights, or the environment. Additionally, AI systems used to influence voters and the outcome of elections, as well as recommender systems used by social media platforms with over 45 million users, will be subject to the high-risk classification.

Furthermore, to ensure responsible use and accountability, providers of foundation models, a rapidly evolving area within AI, will be required to assess and mitigate potential risks related to health, safety, fundamental rights, the environment, democracy, and the rule of law. Before releasing their models in the EU market, these providers must register their models in the EU database. Generative AI systems based on such models, including ChatGPT, will need to comply with transparency requirements, disclose AI-generated content, and implement safeguards against generating illegal content. Additionally, detailed summaries of copyrighted data used for training purposes will need to be made publicly available.

Recognizing the importance of fostering AI innovation while safeguarding citizens’ rights, MEPs have also introduced exemptions for research activities and AI components provided under open-source licenses. Moreover, the legislation encourages the establishment of regulatory sandboxes, which are real-life environments created by public authorities to test AI technologies before their deployment.

The new regulations aim to empower citizens by granting them the right to file complaints regarding AI systems. Furthermore, individuals will have the right to receive explanations about decisions made by high-risk AI systems that significantly impact their fundamental rights. The role of the EU AI Office will also undergo reforms, equipping it with the responsibility to monitor the implementation of the AI rulebook.

In conclusion, the proposed regulations set clear boundaries for prohibited AI practices and establish obligations for high-risk AI applications. Moreover, they strike a balance by supporting innovation through exemptions and regulatory sandboxes while prioritizing citizen rights and accountability. As discussions continue with EU member states, the Parliament’s focus on protecting rights and enhancing AI’s regulatory framework paves the way for a future where AI technologies align with EU values and leave a positive footprint on society.

European Union Reins in Big Tech

On Tuesday, 5 July 2022, the European Parliament held the final vote on the new Digital Services Act (DSA) and Digital Markets Act (DMA), two bills that aim to address the societal and economic effects of the tech industry by setting clear standards for how tech companies operate and provide services in the EU, in line with the EU’s fundamental rights and values.

What is illegal offline should be illegal online

The Digital Services Act (DSA) sets clear obligations for digital service providers, such as social media or marketplaces, to tackle the spread of illegal content, online disinformation and other societal risks. These requirements are proportionate to the size and risks platforms pose to society.

The new obligations include:

    • New measures to counter illegal content online and obligations for platforms to react quickly, while respecting fundamental rights, including the freedom of expression and data protection;
    • Strengthened traceability and checks on traders in online marketplaces to ensure products and services are safe, including efforts to perform random checks on whether illegal content resurfaces;
    • Increased transparency and accountability of platforms, for example by providing clear information on content moderation or the use of algorithms for recommending content (so-called recommender systems); users will be able to challenge content moderation decisions;
    • Bans on misleading practices and certain types of targeted advertising, such as those targeting children and ads based on sensitive data. The so-called “dark patterns” and misleading practices aimed at manipulating users’ choices will also be prohibited.

Very large online platforms and search engines (with 45 million or more monthly users), which present the highest risk, will have to comply with stricter obligations, enforced by the Commission. These include preventing systemic risks (such as the dissemination of illegal content, adverse effects on fundamental rights, on electoral processes and on gender-based violence or mental health) and being subject to independent audits. These platforms will also have to provide users with the choice not to receive recommendations based on profiling, and will have to give authorities and vetted researchers access to their data and algorithms.

A list of “do’s” and “don’ts” for Gatekeepers

The Digital Markets Act (DMA) sets obligations for large online platforms acting as “gatekeepers” (platforms whose dominant online position makes them hard for consumers to avoid) on the digital market, to ensure a fairer business environment and more services for consumers.

To prevent unfair business practices, those designated as gatekeepers will have to:

    • allow third parties to interoperate with their own services, meaning that smaller platforms will be able to request that dominant messaging platforms enable their users to exchange messages, send voice messages or files across messaging apps. This will give users greater choice and avoid the so-called “lock-in” effect where they are restricted to one app or platform;
    • allow business users to access the data they generate in the gatekeeper’s platform, to promote their own offers and conclude contracts with their customers outside the gatekeeper’s platforms.

Gatekeepers can no longer:

    • Rank their own services or products more favourably (self-preferencing) than other third parties on their platforms;
    • Prevent users from easily un-installing any pre-loaded software or apps, or using third-party applications and app stores;
    • Process users’ personal data for targeted advertising, unless consent is explicitly granted.

Sanctions

To ensure that the new rules on the DMA are properly implemented and in line with the dynamic digital sector, the Commission can carry out market investigations. If a gatekeeper does not comply with the rules, the Commission can impose fines of up to 10% of its total worldwide turnover in the preceding financial year, or up to 20% in case of repeated non-compliance.

Next Steps

Once formally adopted by the Council in July (DMA) and September (DSA), both acts will be published in the EU Official Journal and enter into force twenty days after publication.

The DSA will be directly applicable across the EU and will apply fifteen months after its entry into force or from 1 January 2024, whichever comes later. As regards the obligations for very large online platforms and very large online search engines, the DSA will apply earlier – four months after they have been designated as such by the Commission.

The DMA will start to apply six months following its entry into force. The gatekeepers will have a maximum of six months after they have been designated to comply with the new obligations.

Source: European Parliament

Cookies should come with consent

On October 1, 2019, the Court of Justice of the European Union (CJEU) ruled that storing cookies on an Internet user’s computer requires active consent. Consent cannot be implied or assumed and therefore a pre-ticked checkbox is insufficient (the press release can be found here).

The CJEU ruling stems from a 2013 case, in which the German Federation of Consumer Organizations (GFCO) took legal action against online lottery company Planet49. Planet49’s website required customers to consent to the storage of cookies in order to participate in a promotional lottery; as part of entering the lottery, participants were presented with two separate checkboxes. The first was an unticked marketing checkbox, to be ticked if the user wished to receive third-party advertising. The second, however, was a pre-ticked box allowing Planet49 to set cookies to track the user’s behavior online. The GFCO argued that this practice was illegal, since the authorization to set cookies did not involve explicit consent from the user.

The CJEU agreed with the GFCO, finding that Planet49 is required to obtain active consent from its users, and that such consent cannot be given by means of a pre-selected checkbox. This active consent, ruled the Court, is required without any further differentiation between strictly necessary cookies, reach measurement cookies or tracking cookies; the CJEU thereby adopted the view that the cookie consent requirement applies regardless of whether or not the information accessed through the cookie is personal data within the meaning of the GDPR.

Furthermore, according to the CJEU, it would “appear impossible” to objectively ascertain whether a user has provided informed consent by not deselecting a pre-ticked checkbox, as the user may simply not have noticed the checkbox, or read its accompanying information, before continuing with his or her activity on the website. Further to that, the CJEU held that active consent is expressly set out in the GDPR, where recital 32 precludes “silence, pre-ticked boxes or inactivity” from constituting consent.

In view of the above reasoning, it seems that consent obtained for placing cookies with the help of pre-ticked boxes, or through inaction or action without intent to give consent, even prior to the GDPR entering into force, has been unlawfully obtained. It now remains to be seen whether supervisory authorities will take action to tackle some of those data collection practices relying on unlawfully obtained consent.

In any case, following years of disparate approaches by national transposition laws and supervisory authorities, the ruling in Planet49 has introduced much-needed clarity on how the “cookie banner” and “cookie consent” provisions in the ePrivacy Directive should be applied.

In this regard, the Planet49 case is likely to have an impact on the ongoing negotiations on the ePrivacy Regulation, which is set to regulate cookie usage in the not-so-distant future. Until then, website owners wishing to avoid any “kitchen accidents” would be well advised to request cookie consent for all cookies other than those that are technically required to properly operate their website. That is, marketing, tracking, and analytics cookies may only be used with explicit, clear, informed and prior consent, provided by means of a consent management tool, as illustrated in the sketch below.
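
By way of illustration only, the following minimal TypeScript sketch shows one way a website could gate non-essential cookies behind an active, unambiguous user choice, in line with the Planet49 reasoning (no pre-ticked boxes, nothing set by default). The identifiers used here (cookie_consent, setAnalyticsCookies, loadMarketingTags, saveConsent) are hypothetical placeholders, not the API of any actual consent management tool.

    // Illustrative sketch only: defer all non-essential cookies until the user
    // has actively opted in. The names below (cookie_consent, setAnalyticsCookies,
    // loadMarketingTags) are hypothetical placeholders, not a real consent tool's API.

    type ConsentChoices = {
      analytics: boolean;
      marketing: boolean;
    };

    const CONSENT_KEY = "cookie_consent";

    // Placeholders for whatever analytics / marketing initialisation a site uses.
    function setAnalyticsCookies(): void { /* e.g. initialise analytics */ }
    function loadMarketingTags(): void { /* e.g. load advertising scripts */ }

    // Strictly necessary cookies (session handling, load balancing) may be set
    // without consent; everything else waits for an explicit, recorded choice.
    function getStoredConsent(): ConsentChoices | null {
      const raw = localStorage.getItem(CONSENT_KEY);
      return raw ? (JSON.parse(raw) as ConsentChoices) : null;
    }

    function applyConsent(choices: ConsentChoices): void {
      if (choices.analytics) { setAnalyticsCookies(); }
      if (choices.marketing) { loadMarketingTags(); }
    }

    // Called only when the user clicks "Save choices" in the banner; the
    // checkboxes in that banner are unticked by default (no pre-selection).
    function saveConsent(choices: ConsentChoices): void {
      localStorage.setItem(CONSENT_KEY, JSON.stringify(choices));
      applyConsent(choices);
    }

    // On page load: do nothing beyond strictly necessary cookies unless a
    // previous, affirmative choice has been stored.
    const stored = getStoredConsent();
    if (stored) {
      applyConsent(stored);
    }

The design point is simply that only strictly necessary cookies run before any choice has been recorded, and that saveConsent is wired to an affirmative click on unticked checkboxes, never to inactivity or a pre-selected default.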

Personal Data Protection in the Employment Context

The Article 29 Working Party has recently adopted Opinion 2/2017 on data processing at work. By elaborating nine hypothetical scenarios, the Opinion builds on Opinion 8/2001 and the 2002 Working Document on the surveillance of electronic communications in the workplace, and extends their guidance to newer monitoring technologies such as cloud services, vehicle tracking and smart devices.

Over the last couple of years, these technologies have made workplace monitoring possible at only a fraction of its former cost, posing significant new challenges to privacy and data protection. As a result, Opinion 2/2017 now attempts to strike a new balance between the legitimate interests of employers and the reasonable privacy expectations of employees.

This balance is struck in light of the Data Protection Directive and the General Data Protection Regulation. The golden rules confirmed therein are the following:

  • employers should always bear in mind the fundamental data protection principles, irrespective of the technology used;
  • the contents of electronic communications made from business premises enjoy the same fundamental rights protections as analogue communications;
  • consent is highly unlikely to be a legal basis for data processing at work, unless employees can refuse without adverse consequence;
  • performance of a contract and legitimate interests can sometimes be invoked, provided the processing is strictly necessary for a legitimate purpose and complies with the principles of proportionality and subsidiarity;
  • employees should receive effective information about the monitoring that takes place; and
  • any international transfer of employee data should take place only where an adequate level of protection is ensured.

In its concluding remarks, Opinion 2/2017 stresses that data processing at work must be a proportionate response to the risks faced by an employer. Internet misuse, for example, can be detected without the necessity of analysing website content. If misuse can be prevented (e.g., by using web filters) the employer has no general right to monitor.

Furthermore, a blanket ban on communication for personal reasons is impractical, and enforcement may require a level of monitoring that is disproportionate. Prevention should therefore be given much more weight than detection – the interests of the employer are better served by preventing internet misuse through technical means than by expending resources in detecting misuse.

With regard to data minimization, it is emphasized that the information registered from ongoing monitoring, as well as the information shown to the employer, should be minimized as much as possible. Employees, for example, should have the possibility to temporarily shut off location tracking, if justified by the circumstances. Employers, in turn, are required to take the principle of data minimization by design into account when deciding on the deployment of new technologies. The information should be stored for the minimum time needed, with a retention period specified, and deleted whenever it is no longer needed.

Hellenic Data Protection Authority rules on the “right to be forgotten”

Ulysses and His Companions in the Land of the Lotus-Eaters, etching and engraving by Theodoor van Thulden.

Following the path of Google Spain, in which the European Court of Justice ruled that European citizens can request commercial search firms to remove links to information deemed “inaccurate, inadequate, irrelevant or excessive” for the purposes of data processing, the Hellenic Data Protection Authority issued Decision 83/2016, dealing with a similar case of Greek interest.

The Decision came after a licensed obstetrician complained to the Greek data protection watchdog about Google’s refusal to remove a link to a criminal conviction against him for child adoption fraud.

In its reply to the contested removal request, Google considered: (a) the relevance and truthfulness of the data, (b) the fact that the applicant was practicing a regulated profession as a physician, and (c) the severity of the crime for which he was sentenced and its relevance to his profession (attempt, by proxy, at the illegal adoption of a minor for gain). The company’s reply read as follows:

“In this case it appears that the URL in question relates to matters of substantial interest to the public regarding your professional life. For example, this URL may be of interest to potential or current consumers, users, or participants of your services. Information about recent professions or businesses you were involved with may also be of interest to potential or current users, or participants of your services. Accordingly, the reference to this document in our search results for your name is justified by the interest of the general public in having access to it.”

Following the complaint lodged with the Hellenic Data Protection Authority, the authority examined whether Google’s negative response had met the de-listing criteria provided by the Article 29 Working Party. Finding that the company had failed to do so, it ordered Google to remove the contested link on the ground that the data it linked to was inaccurate. The inaccuracy lay in the fact that the criminal conviction had been replaced – though not entirely overruled – by a milder sentence handed down by the court of appeal at a later time.

Decision 83/2016 may open the door to complaints of a similar nature before the Hellenic Data Protection Authority and heralds the liability of search engines in Greece, with regard to privacy, for the content they link to. Forthcoming jurisprudence by Greek courts is eagerly anticipated, as a balance must now be struck between a novel “right to be forgotten” and other fundamental rights, such as the freedom of expression and the freedom of the press.

Generalized data retention not compatible with EU law

In 2006 the EU issued its Data Retention Directive. Under that Directive, EU Member States had to ensure that electronic telecommunications data were stored for at least six months and at most 24 months for the purpose of investigating, detecting and prosecuting serious crime.

The directive was invalidated by the CJEU with its Digital Rights Ireland judgment in 2014, where it held that the directive provided insufficient safeguards against interferences with the rights to privacy and data protection.

In the aftermath of the above judgment, two references for a preliminary ruling were made to the Court, in relation to the general obligation imposed, in Sweden and in the UK, on providers of electronic communications services to retain their clients’ data.

In its eagerly anticipated judgment in Joined Cases C-203/15 and C-698/15, the Court ruled that EU law precludes national legislation providing for the general and indiscriminate retention of traffic data and location data. Targeted retention of data may only be allowed as a preventive measure, said the Court, and solely for the purpose of fighting serious crime. Even in this exceptional case, however, such retention should be limited to what is strictly necessary, with respect to the categories of data retained, the means of communication affected, the duration and the persons concerned.

Finally, the Court held that access of national authorities to the retained data must be subject to certain conditions, including prior review by an independent authority and the data being retained within the EU.

In Greece, Law 3917/2011, which transposed the Data Retention Directive, is still in force and obliges providers of electronic communications services to identify and retain the source, destination, date, time, duration, type and equipment of a communication for 12 months. The data retained exclude only the content of the communication and can easily reveal a wide range of citizens’ social interactions, a situation that leaves their data vulnerable to uses potentially detrimental to privacy or, more broadly, fraudulent or even malicious.

The recent CJEU judgment is expected to trigger activity at both the judicial and the legislative level in Greece, leading to the annulment and/or amendment of the relevant law. Any such amendment, however, should be effected in such a way that public safety is effectively safeguarded, while at the same time no compromises are made to the rights of natural persons with regard to the processing of their personal data.

Law in the Age of Big Data

The following opening paragraphs could be from any contemporary data privacy journal:

“The creation of advanced computer technology has resulted in jurists having to face a range of new and awkward problems. Through interlinking, copying and other automated data processing, modern technology has made it possible to collect, compare, and combine enormous amounts of data about every person. Also data that in and of itself is not secret can, through its currency, quantity and internal correlation, place the individual under the magnifying glass and expose much of his private life …”

What makes the quote unique, though, is that it was written back in 1978, well before the internet started impacting culture and commerce, by Professor Michael Bogdan of Lund University, and published in the Swedish law journal “Svensk Juristtidning”.

The article addresses the world’s first national data privacy law, that of Sweden, elaborating private international law issues stemming from the complexities surrounding dataflykt (‘data drain’ or ‘data flight’).

Since 1978, technological advancements in the field of data processing have been breathtaking, creating challenges that were previously contemplated only by sci-fi novelists. The legal discourse, however, has not managed to keep pace. The continued relevance of the article referred to above highlights, above all, that the law will never manage to keep up with the pace of technological developments. How far behind, however, should we accept it to be?

In the end, perhaps, it would be more meaningful if we distinguished between “legal thinking and knowledge” on the one hand, and “legal principles” on the other. Professor Bogdan’s 1978 article shows that academic commentary on the relevant legal issues was already at an advanced stage back then. Looking, however, at modern technological applications such as facial recognition or the Internet of Things, it is striking in how much detail the legal issues arising from them are analyzed by academia and the international legal community. Realizing this, we may arguably assume that legal thinking and knowledge, as such, is not necessarily always the tortoise while technology is the hare disappearing over the horizon.

The question here, as posed by Dr Christopher Kuner, Editor-in-Chief of the journal International Data Privacy Law, in an editorial note published back in 2014, is how we can speed up this conversion of legal thinking and knowledge into appropriate legal principles and rules. This key challenge remains to be addressed.

Privacy Shield: stronger protection for transatlantic data flows

In the aftermath of the CJEU judgment in the Maximillian Schrems v Data Protection Commissioner case (Case C-362/14), and following the invalidation of the Safe Harbor in October 2015, the European Commission and the U.S. Government reached a political agreement on a new framework for transatlantic data transfers on Tuesday, 2 February 2016: the EU-U.S. Privacy Shield (IP/16/216).

The adoption procedure for the decision texts was finalized by the European Commission on 12 July 2016, after the relevant opinion of the Article 29 Working Party (national data protection authorities) and the European Parliament resolution of 26 May.

This new framework protects the fundamental rights of anyone in the EU whose personal data is transferred to the United States and offers legal clarity for businesses relying on transatlantic data transfers. The Shield has been drafted, among other things, to further clarify the position on bulk collection of data, strengthen the Ombudsperson mechanism, and set more explicit obligations on companies as regards limits on retention and onward transfers.

The EU-U.S. Privacy Shield summarily touches upon the following points:

  • Strong obligations on companies handling data: under the new arrangement, U.S. companies wishing to abide by the EU-U.S. Privacy Shield will be obliged to register in the Privacy Shield register and re-certify annually. Moreover, their privacy policies will have to be updated, so as to appropriately inform data subjects of their access rights and the available recourse mechanisms. In particular, for onward transfers to third-party service providers, companies will remain fully liable and will have to ensure that third parties processing the data provide the same level of protection in case of a transfer from a Privacy Shield company.
  • Clear safeguards and transparency obligations on U.S. government access: the US has given the EU assurance that access by public authorities for law enforcement and national security purposes is subject to clear limitations, safeguards and oversight mechanisms. Notably, the U.S. Government has ruled out indiscriminate mass surveillance of personal data transferred to the US. The U.S. Secretary of State has established a redress possibility in the area of national intelligence for Europeans through an Ombudsperson mechanism within the Department of State. It is notable that on 24 February 2016, the Judicial Redress Act was signed. The Act permits EU data subjects to seek remedies for violations concerning their personal data against U.S. agencies in U.S. courts.
  • Effective redress mechanisms: any citizen who considers that their data has been misused will benefit from several dispute resolution mechanisms. Ideally, the complaint will be resolved by the company itself, or through an Alternative Dispute Resolution (ADR) process, the costs of which shall not be borne by the individual concerned. Individuals can also go to their national Data Protection Authorities, which will work with the Federal Trade Commission to ensure that complaints by EU citizens are investigated and resolved. If a case is not resolved by any of the other means, as a last resort there will be an arbitration mechanism. Redress possibilities in the area of national security for EU citizens will be handled by an Ombudsperson independent from the US intelligence services.

For further information, you can have a look at the EU-U.S. Privacy Shield fact-sheet, published by the European Commission in July 2016 [link], as well as the European Commission’s practical guide to the EU-U.S. Privacy Shield [link].