
Rise of the (Helpful) Machines

Agentic AI systems are expected to become commonplace in our personal and working lives in the coming years. These systems are envisioned to function as intelligent assistants, capable of autonomously managing both everyday and complex tasks with minimal human supervision.

However, along with this very autonomy and capacity to gather and process vast amounts of personal information will also come significant risks, including legal risks for Agentic AI providers and deployers.

Our Data & Technology team looks at how these systems are expected to operate based on current development trends, and provides a high-level overview of some key privacy and AI Act considerations that these new personal machine assistants will raise for both operators and users in the space.


What is Agentic AI?

AI agents are autonomous systems that can be built on top of generative AI large language models, or “LLMs”. The agents can operate independently and are designed to achieve specific objectives by orchestrating multiple actions in sequence. They can also incorporate user feedback and can be fine-tuned to hone their actions or responses over time.

In the coming years, AI agents are expected to become commonplace in our everyday personal and working lives as the next frontier of artificial intelligence products. These systems are expected to be capable of autonomously managing both everyday tasks like booking restaurants or football match tickets, and complex tasks like personalised customer service, software development and healthcare interactions, with minimal human supervision.

How do AI agents work?

Technology related to AI agents is still rapidly developing, and different AI developers and academics have their own distinct interpretations of what an AI agent is. However, some of the most common traits associated with agentic AI at this stage involve the AI agent being able to:

  • Perceive: The AI agent will collect and process data, such as text, voice or other input from a user making a request, drawn from different sources, including sensors, databases and, for more advanced agents, digital interfaces. The agent will format and structure this data to allow it to reason and plan an action.
  • Reason: Underpinned by an LLM, the AI agent will subsequently seek to understand tasks and generate potential solutions based on the task request from the user.
  • Plan: In the planning stage, the AI agent will evaluate the potential solutions identified in the reasoning step described above, then organise and sequence the actions needed to achieve the defined goal.
  • Memorise: Many AI agents will keep track of past interactions with their users. This form of memory allows an AI agent to store and retrieve context, both within a single interaction and across multiple sessions.
  • Act: Following the steps above, the AI agent will execute the plan and interact with the external environment, for example, by contacting a restaurant to make a reservation or discussing an issue with a customer via a chatbot interface. To achieve a specific task, the AI agent must have access to a defined set of tools, such as external systems or a booking interface, which it can use to accomplish the specific task.
  • Learn: The feedback loop enables the agent to evaluate the success of its actions and adjust its future behaviour dynamically. To do this, the agent will learn from user feedback and refine, and hopefully improve, its reasoning, planning, and execution over time.
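
The loop described above can be sketched in simplified form. The class and method names below are purely illustrative and do not correspond to any real agent framework; the reasoning step, which in practice would call an LLM, is stubbed out.

```python
# Illustrative sketch of the perceive-reason-plan-act-learn loop described above.
# All names are hypothetical; the "reason" step stands in for a real LLM call.

class AgentMemory:
    """Stores context within and across sessions (the 'memorise' trait)."""
    def __init__(self):
        self.history = []

    def remember(self, entry):
        self.history.append(entry)

    def recall(self):
        return list(self.history)


class SimpleAgent:
    def __init__(self):
        self.memory = AgentMemory()

    def perceive(self, raw_request):
        # Format and structure incoming data so the agent can reason over it.
        return {"task": raw_request.strip().lower(), "context": self.memory.recall()}

    def reason(self, observation):
        # A real agent would query an LLM here; this stub just names a goal.
        return f"resolve: {observation['task']}"

    def plan(self, goal):
        # Organise and sequence the actions needed to achieve the goal.
        return [f"gather options for '{goal}'", f"execute '{goal}'"]

    def act(self, steps):
        # Execute each step via the agent's tools (stubbed as strings here).
        return [f"done ({step})" for step in steps]

    def learn(self, results, feedback):
        # Feed outcomes back into memory so future runs can adjust.
        self.memory.remember({"results": results, "feedback": feedback})

    def run(self, request, feedback=None):
        observation = self.perceive(request)
        goal = self.reason(observation)
        steps = self.plan(goal)
        results = self.act(steps)
        self.learn(results, feedback)
        return results
```

Note that even in this toy version, the memory persists across calls to `run`, which is exactly the behaviour that raises the retention and minimisation questions discussed later in this article.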

What are some privacy risks associated with AI agents?

AI agents, with their ability to proactively learn, reason, and take autonomous actions for their users, will undoubtedly be powerful and helpful tools in both the workplace and in everyday life. However, it is very likely that this very autonomy and capacity to gather and process vast amounts of personal information about people will also pose significant privacy risks.

The GDPR does not reference agentic AI. However, the Regulation is designed to be technology-neutral and future-proofed, so the processing of personal data in the context of agentic AI will indeed fall within scope. Some GDPR concepts and principles that will be particularly important to consider include:

Automated decision making

Article 22 of the GDPR grants data subjects the right “not to be subject to a decision based solely on automated processing… which produces legal effects concerning him or her or similarly significantly affects him or her.”

Depending on the context in which an AI agent is deployed, it is possible that an AI agent’s actions may trigger this prohibition. For example, an AI agent may make a decision which produces a “legal effect” if it is involved in the cancellation of a contract or approval or denial of a visa. A “similarly significant” effect could result where an AI agent makes a decision related to a loan or employment application.

Where an AI agent's decisions may fall within Article 22, the deployer of the agent will need to consider whether:

  • Any of the exemptions under Article 22(2) GDPR apply, i.e. where the AI agent’s processing is necessary for entering into or performing a contract, where the processing is authorised under EU or Member State law or where the processing is based on explicit consent
  • Meaningful human review, i.e. a “human in the loop”, can be added to the decision-making chain, and
  • An individual can contest a decision where an exemption is relied upon.
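
As a purely illustrative sketch, a deployer could implement the "human in the loop" consideration as a gate that holds any decision with a legal or similarly significant effect until a human reviewer approves it. The decision categories and function names below are hypothetical assumptions, not drawn from the GDPR or any framework.

```python
# Hypothetical human-in-the-loop gate: decisions flagged as having a legal or
# similarly significant effect are routed for human review before they take
# effect. The category names and structure are illustrative only.

SIGNIFICANT_DECISIONS = {"loan_refusal", "contract_cancellation", "visa_denial"}

def requires_human_review(decision_type: str) -> bool:
    """Return True for decisions that should not be fully automated."""
    return decision_type in SIGNIFICANT_DECISIONS

def execute_decision(decision_type: str, payload: dict, human_reviewer=None):
    if requires_human_review(decision_type):
        if human_reviewer is None:
            # No reviewer available: hold the decision rather than act automatically.
            return {"status": "pending_review", "decision": decision_type}
        if not human_reviewer(decision_type, payload):
            return {"status": "rejected_by_reviewer", "decision": decision_type}
    # Routine actions (e.g. a restaurant booking) proceed without review.
    return {"status": "executed", "decision": decision_type}
```

For the review to count as "meaningful" under Article 22, the human reviewer would of course need real authority and information to overturn the agent's proposal, not merely a rubber-stamp prompt.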

Controllership and data processing roles

A novel question which AI agents will raise under the GDPR is who will be designated as a controller or processor for the processing operations carried out by the agent. In order to be considered a "controller" under the GDPR, an individual or entity must - alone or jointly with others - determine the purposes and means of the processing of personal data.

Organisations who develop and deploy agentic AI will need to consider how they can implement and maintain influence over the parameters of the AI agent in order to be able to evidence controllership.

Transparency

For agentic AI, compliance with the GDPR's transparency obligations will raise unique challenges. For example, Article 13(1)(c) of the GDPR requires controllers to inform data subjects of the purposes for which their personal data will be processed and of the legal basis for that processing.

As more sophisticated AI agents could make dynamic decisions to change their behaviour over time, they could end up processing personal data for a new purpose not originally intended. In light of this, the data controller of an agentic AI system will need to be alive to such changes and able to regularly update its external-facing transparency information, including its privacy policy, to remain compliant with these important GDPR obligations.

Purpose limitation and data minimisation

As outlined, agentic AI systems rely on real-time learning and continuous data ingestion and analysis to action tasks. Given the dynamic and ever-changing nature of these actions, this behaviour could clash with the GDPR's purpose limitation and data minimisation principles. "Purpose limitation" requires personal data to be collected for "specified, explicit and legitimate purposes", while "data minimisation" requires personal data to be "adequate, relevant and limited to what is necessary" for the purposes for which they are processed.

In light of these GDPR obligations, controllers of agentic AI systems will need to ensure that the operation and parameters of the underlying models can be both controlled and defined. Controllers will also need to ensure that data minimisation safeguards, such as time limits for erasure and de-identification measures, are considered.

Compliance documentation

As agentic AI systems may process vast quantities of personal data, potentially including sensitive information, to learn, make decisions, and take actions, controllers will need to consider whether the applicable processing reaches the threshold of being likely "high risk" under Article 35(1) of the GDPR. If so, a data protection impact assessment ("DPIA") will need to be carried out.

In addition, where controllers are relying on Article 6(1)(f) (legitimate interests) to process personal data collected through an AI agent, which may be the most feasible legal basis to process third-party personal data, a Legitimate Interest Assessment should be carried out.

How might the AI Act apply to AI agents?

As with the GDPR, the concept of "agentic AI" does not appear in the EU's AI Act. However, the legislation is similarly designed to be technology-neutral and future-proofed, which means its impact on agentic AI systems needs to be considered.

The EU AI Act adopts a risk-based approach to regulating AI in the EU. Depending on the context of how it is deployed, an AI agent could fall into the "high-risk" or potentially even a "prohibited" AI practice category.

Article 5 – prohibited practices: An agentic AI system that manipulates behaviour in a subliminal manner, or that exploits the vulnerabilities of natural persons, including due to their age, disability or a specific social or economic situation, in order to materially distort their behaviour, will be prohibited.

Article 6 – high risk: An agentic AI system will be high risk where it is deployed for one of the use cases listed in Annex III, such as recruitment, credit scoring, biometric identification, education or law enforcement, or where it is an Annex I system integrated into a physical product already regulated under an existing EU product safety regime, such as the Medical Devices Regulation.

For high-risk agentic AI, providers and deployers must, depending on their roles, take steps such as:

  • Providers conducting a pre-market conformity assessment and putting in place a post-market monitoring system
  • Maintaining technical documentation and logs of decisions made by the AI agent (Articles 18 and 19)
  • Ensuring human oversight (Article 14) and accuracy, robustness and cybersecurity (Article 15)
  • Maintaining an effective risk management system (Article 9)

Comment

The development of agentic AI is an exciting new frontier in artificial intelligence. However, like earlier stages of the AI journey, such as algorithms and LLMs, this iteration of the technology will be shaped by EU regulation, which will need to be front of mind for organisations that develop and release these systems.

For more information on the data protection implications of this technology, please contact a member of our Data & Technology team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


