
Brian McElligott and Conor Califf explore the risks and safeguards when engaging vendors in the EU for AI-powered services, covering data security, legal compliance, transparency, child data processing, the EU AI Act, and intellectual property concerns. Understanding these dynamics is crucial for businesses in the evolving AI landscape, ensuring responsible and compliant vendor partnerships. This article was first published by OneTrust.

Artificial intelligence (AI), or an "AI system" in the language of the proposed EU regulations, has revolutionized the operations of businesses in nearly every industry, including the legal field. The use of AI in consumer businesses, professional services, and product development is constantly advancing, with notable progress seemingly occurring daily. From manufacturing to marketing, retail to healthcare, AI plays an increasingly significant role in shaping business practices and is beginning to influence the operations of entire industries.

There are several benefits to utilizing AI within your business or organization, including increased efficiency and productivity, reduced costs, and improved decision-making. However, while AI can be a powerful asset, it can also present challenges to your organization's data privacy and security efforts, especially as AI technology and related laws continue to develop rapidly.

For instance, the European Union's Trilogue process, involving negotiations between the EU Parliament, the EU Council of Ministers, and the European Commission, is currently underway regarding the EU's Artificial Intelligence Act (AI Act), which is expected to be passed into law in Q4 2023.

One area to consider, in particular, is the potential risks that arise from partnering with third-party vendors (vendors) who offer services relying on or deploying AI systems. While integrating the services provided by these vendors, such as customer support chatbots powered by large language models, can bring immense benefits to your business, it is crucial to ensure that the practices of the vendors you engage with are fully compliant with all applicable privacy, data security, and other relevant laws.

With this in mind, this article aims to address two main aspects:

  • identifying some of the key risks that companies should consider when engaging with AI vendors in the EU; and
  • providing practical guidance on how to assess and ensure that vendors using AI are fully compliant with the necessary privacy, security, and anticipated AI laws in the EU.

Some of these risks and potential safeguards (non-exhaustive) include the following:

Security of Data

Risk

Depending on its use case, an AI-powered service offered by a vendor will often require access to large amounts of personal data to function effectively. If you share data with vendors, this data might include customer personal data, potentially sensitive customer personal data, or proprietary commercial or business data. If the vendor does not have robust internal data privacy and security measures in place, there will be a risk of data breaches, unauthorized access, or misuse of data, which could lead to legal and reputational consequences for your business or organization.

Failure to comply with applicable General Data Protection Regulation (GDPR) data security obligations (e.g., Article 32 of the GDPR) can attract significant fines (e.g., fines up to €10 million or, in the case of an undertaking, up to 2% of the total worldwide annual turnover of the preceding financial year).

Safeguards

Ensuring the security of any personal data shared with vendors is a key consideration. Some practical steps that businesses can take include:

Vendor risk assessment:

Conducting a vendor risk assessment is a primary data security measure that can help protect data shared with the vendor. This assessment should include a questionnaire that evaluates the vendor's privacy and security policies and any sub-processors that they will engage. Specifically, the questionnaire should confirm that the vendor's practices and policies align with your organization's own policies, especially regarding which sub-processors may process the data collected. Additionally, the questionnaire should ask what data the vendor will collect, how they will process it, and how long they will retain it.
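As an illustration, the core fields of such a questionnaire can be captured in a structured, machine-readable record so that responses can be compared and flagged consistently. The following is a minimal sketch in Python; the field names, thresholds, and flagging logic are illustrative assumptions, not a prescribed template:

```python
# Minimal sketch of a vendor risk assessment record (illustrative only).
# Field names, thresholds, and flagging logic are assumptions for
# demonstration; adapt them to your organization's own policies.
from dataclasses import dataclass, field


@dataclass
class SubProcessor:
    name: str
    location: str          # country/region where processing takes place
    service: str           # what the sub-processor does with the data


@dataclass
class VendorAssessment:
    vendor_name: str
    data_categories: list[str]        # e.g. ["name", "email", "usage logs"]
    processing_purposes: list[str]    # why the vendor processes the data
    retention_days: int               # how long the vendor retains the data
    has_security_policy: bool         # vendor provided a security policy
    has_privacy_policy: bool
    sub_processors: list[SubProcessor] = field(default_factory=list)


def flag_issues(a: VendorAssessment,
                max_retention_days: int = 365,
                approved_locations: frozenset[str] = frozenset({"EEA"})) -> list[str]:
    """Return human-readable flags for follow-up; not a compliance verdict."""
    flags = []
    if not a.has_security_policy:
        flags.append("No security policy provided")
    if not a.has_privacy_policy:
        flags.append("No privacy policy provided")
    if a.retention_days > max_retention_days:
        flags.append(f"Retention ({a.retention_days} days) exceeds internal limit")
    for sp in a.sub_processors:
        if sp.location not in approved_locations:
            flags.append(f"Sub-processor {sp.name} located outside approved regions")
    return flags
```

Flags produced by a screen like this are prompts for follow-up questions to the vendor; they do not replace legal review of the vendor's policies and agreements.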

Contract:

Negotiating a contract with the vendor that includes clear privacy and security requirements is a key protection measure. The contract should also incorporate provisions for auditing the vendor's compliance with the relevant contractual provisions.

Monitoring the vendor's compliance efforts on an ongoing basis:

Within the aforementioned contract, sufficient clauses should be in place to allow your organization to monitor the vendor's compliance efforts effectively. These may include granting you rights to review the vendor's privacy and security policies, conduct audits, and respond to security incidents as needed.

Pseudonymization or anonymization:

Another way to de-risk data sharing is to ensure that any data shared is effectively anonymized before being used in an AI model by a vendor. Recital 26 of the GDPR outlines that the GDPR does not apply to anonymous data. However, it is essential for your organization to carefully consider whether any data sets shared meet the high standard of anonymization as defined by the GDPR. If data has been pseudonymized rather than anonymized, all applicable GDPR obligations still apply, although pseudonymization can still serve as a helpful security measure.

Furthermore, if your intention is to anonymize or pseudonymize the data before sharing it with a vendor, consider including a clause in the data-sharing agreement that obligates the vendor not to attempt to re-identify data subjects in the data set under any circumstances.
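To make the distinction concrete, the sketch below shows a common pseudonymization technique: replacing direct identifiers with keyed hashes (HMAC) before sharing. Because the key holder can re-link the tokens to individuals, the output remains pseudonymized personal data under the GDPR, not anonymized data. The function names and field choices are illustrative assumptions:

```python
# Illustrative pseudonymization via keyed hashing (HMAC-SHA256).
# The output is still personal data under the GDPR because the key
# holder can re-link values to individuals; it is NOT anonymization.
import hashlib
import hmac


def pseudonymize(value: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, deterministic token."""
    return hmac.new(secret_key, value.encode("utf-8"), hashlib.sha256).hexdigest()


def pseudonymize_record(record: dict, identifier_fields: set[str],
                        secret_key: bytes) -> dict:
    """Pseudonymize the identifier fields of a record before sharing."""
    return {
        k: pseudonymize(v, secret_key) if k in identifier_fields else v
        for k, v in record.items()
    }


# Example: the secret key stays with your organization, never the vendor.
key = b"example-key-kept-in-your-kms"  # in practice, load from a secrets manager
shared = pseudonymize_record(
    {"email": "jane@example.com", "purchase_total": 42.50},
    identifier_fields={"email"},
    secret_key=key,
)
```

Note that deterministic tokens preserve joinability: the same input always maps to the same token across datasets. Where even that linkage is a risk, per-dataset keys or random tokens held in a lookup table may be preferable.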

DPIA:

Finally, your organization should consider whether a Data Protection Impact Assessment (DPIA) should be carried out to review and address risks caused by the processing of personal data. One crucial element required in any DPIA is to document "the measures envisaged to address the risks, including safeguards, security measures, and mechanisms to ensure the protection of personal data and to demonstrate compliance with [the GDPR] taking into account the rights and legitimate interests of data subjects and other persons concerned." Where the AI vendor is a processor, they are required to assist with completing a DPIA.

Data processing role

Risk

In addition to the aforementioned data security risks, organizations will need to define the specific data processing relationship with the vendor. Vendors often attempt to categorize themselves as the data processor while positioning the engaging organization as a data controller. This presents a risk for organizations because, under the GDPR, a data controller retains liability for the actions and omissions of its processor, unless the processor has acted outside or contrary to the lawful instructions of the controller. Furthermore, the controller organization must ensure that an appropriate data processing agreement under Article 28 of the GDPR is established with the vendor acting as a processor.

This assessment is particularly relevant for AI systems trained on or using datasets containing material scraped from the internet. The AI vendor will be the controller of those datasets by virtue of training its AI on that personal data. Customers will need to make sure that the AI vendor has the appropriate legal bases for processing personal data and the appropriate transparency disclosures. For example, the French supervisory authority has recently fined Clearview AI for failure to have a legal basis or proper transparency disclosures when scraping personal data.

Safeguards

Carrying out a careful controllership analysis before the processing begins is essential. Once the appropriate data processing relationship with the vendor has been established, you must determine the suitable data processing agreement to be entered into (e.g., if a controller-processor relationship is established, an Article 28 agreement should be considered). It is crucial to carefully review any such agreement with appropriate legal counsel.

Establishing a legal basis for the processing

Risk

If your organization is positioned as a data controller for processing involving the AI system, you will need to establish a legal basis under Article 6(1) of the GDPR for the processing facilitated by the vendor's service. Establishing a legal basis depends on identifying the purpose of data processing when utilizing the vendor's service.

This will require careful case-by-case analysis, as certain legal bases within Article 6 of the GDPR will be more suitable in specific scenarios. For instance, if the vendor's AI system is used to support decision-making concerning employees, relying on consent under Article 6(1)(a) of the GDPR may be difficult, as data protection regulators presume that employee consent is invalid due to the perceived power imbalance between the parties. Failure to establish an appropriate legal basis under the GDPR can result in substantial fines of up to €20 million or, in the case of an undertaking, up to 4% of the total worldwide annual turnover of the preceding financial year.

Safeguards

A business should determine which legal basis is the appropriate one for the processing carried out through the vendor's services and ensure that this is reflected in any privacy policy. This will be important where the business needs to defend that legal basis to a supervisory authority, in response to questions from a data subject, or in court, so it should be prepared to defend its choice robustly.

Transparency

Risk

Just as with any engagement of service providers, ensuring transparency for your employees or customers regarding data sharing with the vendor is crucial. The recent decision made by the Irish Data Protection Commission (DPC) regarding a messaging company's transparency practices has set the regulatory standard for furnishing necessary transparency details within the EEA.

In the transparency decision, the DPC outlined that, in compliance with Article 13(1)(e) of the GDPR, controllers are obligated to provide data subjects with information about the following (a structured sketch of such a disclosure record appears after the list):

  • named recipients or categories of recipients of personal data;
  • categories of personal data received by each recipient category;
  • the industry or sector to which the service provider belongs;
  • the location of the recipient; and
  • a brief description of the services to help users understand the purpose of transferring their personal data.
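One practical way to keep these disclosures consistent is to maintain a structured record per recipient mirroring the five items above, from which the privacy policy text can be generated. The sketch below is an illustrative assumption, not a format prescribed by the DPC:

```python
# Illustrative: a per-recipient disclosure record mirroring the Article 13(1)(e)
# items listed above. Field names are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class RecipientDisclosure:
    recipient: str            # named recipient or recipient category
    data_categories: tuple    # categories of personal data received
    sector: str               # industry/sector of the service provider
    location: str             # where the recipient processes the data
    service_description: str  # brief description of the service/purpose


disclosures = [
    RecipientDisclosure(
        recipient="AI chatbot vendor (category: customer support tooling)",
        data_categories=("name", "email", "support ticket content"),
        sector="Software / customer support",
        location="EEA",
        service_description="Hosts the AI assistant that answers support queries",
    ),
]

# The privacy policy section can then be generated from these records,
# keeping the public notice in sync with the actual vendor list.
for d in disclosures:
    print(f"- {d.recipient} ({d.sector}, {d.location}): "
          f"{', '.join(d.data_categories)}. {d.service_description}.")
```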

Transparency has emerged as an area of intense regulatory activity within the EU. Failure to provide adequate transparency, especially for processing related to innovative technologies like AI, can attract regulatory scrutiny.

Safeguards

The primary measure to consider when partnering with vendors is to ensure clarity within your organization's privacy policy regarding the sharing of personal data with vendors, aligning with the standards set out in the transparency decision mentioned earlier. This entails providing details about the categories of personal data shared with vendors and the specific purposes for which such data is shared. This is where getting the initial onboarding assessment of the AI vendor right will be key.

Additionally, you should consider other transparency measures that can be extended to customers or employees regarding your use of AI in your product or service, for instance, Help Centre articles linked from the privacy policy, or in-product notices.

Processing of children's data

Risk

The use of children's data has attracted significant regulatory attention in recent years, with multiple EU regulators signaling their intention to maintain focus on this area during enforcement activities. Specifically concerning AI, the Italian data protection authority (the Garante) issued enforcement measures against OpenAI in early 2023, mandating the implementation of more robust age-gating measures to prevent children from accessing ChatGPT.

In light of this, if your organization engages in AI-related processing (including through vendor-provided AI systems) it must carefully consider the chosen legal basis and also incorporate child-specific safeguards as part of the AI-powered processing.

Safeguards

Initially, it is essential to consider whether you want to make your service available to children and, where you do not intend it to be used by children, whether you can implement robust age verification measures in line with the latest children's data processing guidance. Where the AI system is made available to children, you should determine, based on the vendor impact assessment conducted earlier, whether the processing taking place as a result of the vendor's AI system might have a substantial influence on the risks associated with children's rights. If children's data is processed by any vendor-powered AI system, your organization should consider carrying out and documenting a "best interests of the child" assessment. This assessment will allow your organization to evidence that it is in the best interests of the child to use children's data for the purposes powered by the vendor's AI system. You can also consider building child-specific controls and transparency measures tailored to children's needs.

AI Act

Risks

Apart from privacy and security, significant new requirements under the EU AI Act will also shortly apply and will impact vendor relationships. The scope of application of the AI Act will be extensive, encompassing the deployment or usage of qualifying AI systems within the EU, irrespective of the provider's location or group structure. Two key factors that will significantly impact organizational risk are:

  • determining whether your vendor acts as a provider or deployer of AI; and
  • evaluating whether the AI procured falls into the high-risk category.

Is your organization a deployer or provider?

The AI Act introduces a groundbreaking approach, treating certain uses of AI systems similarly to hardware, thus imposing pre- and post-market conformity assessment requirements akin to the structure of EU product safety and medical device regulations. The majority of these obligations under the Act primarily apply to the creator, manufacturer, or provider of the AI system. A somewhat comparable framework applies to Generative AI providers, where compliance obligations are demanding, although without a formal conformity assessment regime in place.

Users, or deployers as they are known under the AI Act, also have obligations with regard to using high-risk and non-high-risk AI systems. This includes ensuring the presence of a suitably qualified human in or on the loop in relation to the AI system. In addition, users or deployers need to be aware of the scope of both providers' and deployers' obligations outlined in the AI Act in order to manage the scope of compliance and mitigate liability risk under the Act. It is essential to recognize that your vendor may be a user or deployer themselves, or they might be a hybrid provider/deployer, especially in the case of Generative AI. To minimize risks within your contractual relationship with them, it is imperative to understand their role and the corresponding compliance obligations they hold.

High-risk AI systems (including recommender systems of VLOPs)

As previously stated, significant compliance obligations apply to providers of high-risk AI systems, along with heightened risk and compliance obligations for users or deployers of this AI. High-risk applications encompass areas like recruitment, employment, HR, insurance, fintech, and energy.

It's also worth noting that the European Parliament's recent amendment to the AI Act classifies the recommender systems of Very Large Online Platforms (VLOPs) as high-risk AI systems, subjecting them to the most stringent level of compliance obligations.

Fundamental rights impact assessment

Where your organization is deploying high-risk AI, the latest draft of the AI Act has also introduced a completely new requirement that users or deployers of high-risk AI solutions must conduct a "fundamental rights impact assessment," which involves evaluating the potential negative impact on the environment and marginalized groups.

Safeguards

Pending the adoption of the AI Act, users and deployers are advised to address these emerging risks by incorporating suitable contractual terms when procuring AI, including potentially high-risk AI. This proactive approach may involve incorporating warranty and indemnity protections into those contracts.

Organizations will also need to carefully consider what AI Act-related obligations will be applicable depending on their role, such as deployers of high-risk AI solutions having to conduct a "fundamental rights impact assessment."

Intellectual property concerns

Risk

When entering into a partnership with a vendor offering AI-powered services, there is a possibility that the vendor might have access to your proprietary data, intellectual property, models, or algorithms. This scenario raises concerns regarding the protection of your intellectual property (IP) and trade secrets, as well as the potential misuse of these valuable assets.

Safeguards

IP protection clauses incorporated into agreements with a vendor can serve as a crucial safeguard to protect your business's proprietary information, data, algorithms, and other IP rights from being misused, shared, or exploited by the vendor or any third parties. Specialized IP counsel should be engaged when implementing these clauses, especially in light of the unique risks presented by AI and Generative AI in particular.

Bias and fairness issues

Risk

Lastly, it is important to always recognize that AI systems reflect the biases present in the data used for their training and testing. If the algorithms driving the vendor's AI systems contain built-in biases, this may lead to discriminatory or unfair outcomes that impact specific groups among your customers or employees. Depending on the use case of the AI system provided by the vendor, this can harm the reputation of your business and may also result in legal challenges. Depending on how the AI system is used or the type of service that uses it, it can also be subject to obligations under the Digital Services Act.

Safeguards

Organizations should review outputs delivered by vendors on an ongoing basis and be prepared to terminate a vendor relationship if harm is being caused by the AI system's outputs. Additionally, you can refer to the information mentioned above regarding contractual safeguards.
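As one hedged illustration of what ongoing review can look like in practice, the sketch below applies the widely used "four-fifths" (80%) rule of thumb to compare selection rates across groups in an AI system's outputs. The 0.8 threshold and the group labels are illustrative assumptions, and passing such a screen is not, by itself, evidence of legal compliance:

```python
# Illustrative disparate-impact screen using the "four-fifths" rule of thumb:
# a group whose selection rate falls below 80% of the highest group's rate is
# flagged for review. The 0.8 threshold and group labels are assumptions;
# this is a monitoring aid, not a legal compliance test.
from collections import Counter


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, was_selected) pairs from the AI system."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / totals[g] for g in totals}


def four_fifths_flags(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Return groups whose rate is below threshold * the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < threshold]


# Example with made-up data: group B's rate is well below 80% of group A's.
sample = [("A", True)] * 60 + [("A", False)] * 40 + \
         [("B", True)] * 30 + [("B", False)] * 70
rates = selection_rates(sample)   # {"A": 0.6, "B": 0.3}
print(four_fifths_flags(rates))   # ["B"]
```

A flag from a screen like this is a trigger for investigation with the vendor, and potentially for invoking the contractual audit and termination rights discussed above.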

Conclusion

Engaging with vendors and providers of AI systems introduces risks that derive from the innovative nature of these systems. Those risks currently centre on privacy, security, and IP concerns, viewed in particular through the lens of the EU's supervisory authorities, especially regarding the use of Generative AI.

Additional and new types of risk will arise upon the adoption of the AI Act, some of which could lead to new resource allocations for users and deployers of AI. This may include expertise related to AI, fundamental rights impact assessments, and ensuring a human presence in and on the loop. Existing risks need to be addressed prior to procuring this technology, and with the imminent adoption of the AI Act, expected in Q4 2023, preparations should commence to manage the new compliance requirements and the related liability risks that come with them.

For more information, please contact a member of our Artificial Intelligence team.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


