URL has been copied successfully!
In potential reversal, European authorities say AI can indeed use personal data, without consent, for training
URL has been copied successfully!

Collecting Cyber-News from over 60 sources

In potential reversal, European authorities say AI can indeed use personal data, without consent, for training

The European Data Protection Board (EDPB) issued a wide-ranging report on Wednesday exploring the many complexities and intricacies of modern AI model development. It said that it was open to potentially allowing personal data, without owner’s consent, to train models, as long as the finished application does not reveal any of that private information.This reflects the reality that training data does not necessarily translate into the information eventually delivered to end users.The report was requested by the Irish Data Protection Authority (DPA) “with a view to seeking Europe-wide regulatory harmonisation,” the EDPB said in its statement. The group has now acknowledged that there are nuances to personal data. For example, the EDPB report said, it can make a difference if the personal data had been made publicly available and if “individuals are actually aware that their personal data is online.”Arguably the most significant part of the report acknowledges that sometimes personal data, even when no explicit consent is granted, can still be used in training and be in compliance with the European Union’s (EU’s) GDPR rules, provided that nothing especially sensitive emerges from the answers given by the final product.The report’s regulatory wording says: “If it can be demonstrated that the subsequent operation of the AI model does not entail the processing of personal data, the EDPB considers that the GDPR would not apply. “Hence, the unlawfulness of the initial processing should not impact the subsequent operation of the model. Further, the EDPB considers that, when controllers subsequently process personal data collected during the deployment phase, after the model has been anonymised, the GDPR would apply in relation to these processing operations. In these cases, the Opinion considers that, as regards the GDPR, the lawfulness of the processing carried out in the deployment phase should not be impacted by the unlawfulness of the initial processing.” The full report suggested various questions for companies to ask to determine their generative AI (genAI) compliance exposure. They include:

    “At what stage of the processing operations leading to an AI Model is personal data no longer processed?””How can it be demonstrated that the AI model does not process personal data?””Are there any factors that would cause the operation of the final AI Model to no longer be considered anonymous? If so, how can the measures taken to mitigate, prevent, or safeguard against these factors, so as to ensure the AI Model does not process personal databe demonstrated?”Anonymization, any of several processes to remove or hide sensitive information from end users, is also complex and tricky, the report said. “Claims of an AI model’s anonymity should be assessed by competent [authorities] on a case-by-case basis, since the EDPB considers that AI models trained with personal data cannot, in all cases, be considered anonymous,” the report noted. “For an AI model to be considered anonymous, both the likelihood of direct, including probabilistic, extraction of personal data regarding individuals whose personal data were used to develop the model and the likelihood of obtaining, intentionally or not, such personal data from queries, should be insignificant, taking into account all the means reasonably likely to be used by the controller or another person.”Then it explored the issue of what is in the heads of individuals whose personal information is being used to train the genAI models.

    Considerations for CIOs

    It said CIOs must consider “the role of data subjects’ reasonable expectations in the balancing test. This can be important due to the complexity of the technologies used in AI models and the fact that it may be difficult for data subjects to understand the variety of their potential uses, as well as the different processing activities involved. In this regard, both the information provided to data subjects and the context of the processing may be among the elements to be considered to assess whether data subjects can reasonably expect their personal data to be processed.”Among the considerations, the report said, would be:

      Whether or not the personal data was publicly available.The nature of the relationship between the data subject and the controller, and whether a link exists between the two.The nature of the service.The context in which the personal data was collected.The source from which the data was collected (for example, the website or service where the personal data was collected and the privacy settings they offer).The potential further uses of the model, and whether data subjects are actually aware that their personal data is online at all.The report also explored whether the data being used for genAI can be defined as a “legitimate interest.”Is there, it asked, “no less intrusive way of pursuing this interest? When assessing whether the condition of necessity is met, SAs [companies] should pay particular attention to the amount of personal data processed and whether it is proportionate to pursue the legitimate interest at stake, also in light of the data minimization principle.”

      Stop looking for excuses

      Various European business people reacted to the report in social media forums, including on LinkedIn. Peter Craddock, a partner in the Brussels law firm Keller and Heckman, said the report doesn’t explore what makes data personal, and simply assumes that all data is indeed personal until proven otherwise. “Nowhere does the EDPB seem to look at whether something is actually personal data for the AI model provider. It always presumes that it is, and only looks at whether anonymization has taken place and is sufficient,” Craddock wrote. “If insufficient, the SA would be in a position to consider that the controller has failed to meet its accountability obligations under Article 5(2) GDPR.”And in a comment on LinkedIn that mostly supported the standards group’s efforts, Patrick Rankine, the CIO of UK AI vendor Aiphoria, said that IT leaders should stop complaining and up their AI game.”For AI developers, this means that claims of anonymity should be substantiated with evidence, including the implementation of technical and organizational measures to prevent re-identification,” he wrote, noting that he agrees 100% with this sentiment. “This is not that hard, and tech companies need to stop being so lazy and looking for excuses. They want to do great things building tech, but then can’t be bothered treating the data they need for their great tech respectfully or responsibly.”He scoffed at other comments suggesting that these suggested rules would encourage high-tech firms to leave Europe.”You can’t be serious,” Rankine said. “It’s completely possible to protect the data of individuals and have a thorough set of useful data to train with. It requires a bit of effort and consideration. I think the legitimate interest clause needs an overhaul anyway. It’s becoming rife everywhere and is definitely being used for non-legitimate means. I’ve seen entire websites harvesting personal data and then claiming it’s operating on the basis of legitimate interest.”

      First seen on csoonline.com

      Jump to article: www.csoonline.com/article/3628060/in-potential-reversal-european-authorities-say-ai-can-indeed-use-personal-data-without-consent-for-training.html

Loading

Share via Email
Share on Facebook
Tweet on X (Twitter)
Share on Whatsapp
Share on LinkedIn
Share on Xing
Copy link