
Artificial Intelligence (AI) dominated headlines last year. This year has seen the introduction of the AI Act, the world’s first all-encompassing AI law, which aims to regulate this transformative technology. AI is not a new concept, but its use and presence have recently grown by orders of magnitude thanks to the usability, ease of access and low cost of Generative AI. As with the General Data Protection Regulation (GDPR), introduced nearly six years ago, EU policy makers want to be at the forefront of regulating this technology. As a result, the AI Act is being touted as equally, if not more, impactful than the GDPR. Despite this regulatory burden, we see clients continue to allocate sufficient budgets to invest in and adopt AI so that they can remain competitive and increase efficiencies. Most accept the privacy-focused culture of the EU and are not put off from doing business there.

The key takeaway from our Technology and Digital Disruption panel at our 2024 Annual Technology Conference was that organisations need to develop robust and appropriate governance structures before deploying AI internally or externally. AI governance is a practical means of helping organisations discharge their obligations under both the AI Act and the GDPR. In implementing an appropriate AI framework, organisations should look to what regulators are saying about the adoption of AI. The panel discussed how privacy regulators, in particular, focus on the importance of transparency and accountability. To demonstrate compliance with these principles, organisations must understand:

  • What AI systems they are utilising
  • What data is being used within these systems, and
  • The risks associated with deploying each AI system

Compliance with the AI Act is not a role that should be assigned to a single individual within an organisation. Due to the nature of the technology, the AI Act necessarily overlaps with other legislation, such as the GDPR, the Medical Devices Regulation and the Digital Services Act, as well as with other areas of legal risk such as cybersecurity. Our panel of experts recommended a cross-departmental and cross-functional approach to onboarding and managing AI in an organisation. Depending on the organisation’s size, it may make sense to create an AI committee. This committee of experts should:

  • Complete appropriate risk assessments
  • Establish AI guidelines for use, and
  • Create an AI use / development policy

The adoption of these measures sets a foundation for compliance with the AI Act which can be developed according to the risks associated with each new AI system developed or adopted.

Our panel of experts also discussed the role standards will play in the development and adoption of AI systems. A draft of the EU Commission-mandated standards under the AI Act is expected to be available sometime next year. Those standards could prove to be a very valuable and practical guide for organisations looking to create or use high-risk AI systems.

In summary, the panel’s view is that AI should be embraced given its potential to create significant value and efficiencies, and compliance efforts should be adopted and prioritised as part of that outlook. While regulation brings costs and demands resources, those costs are outweighed by the advantages. Failure to comply could render the technology unusable and lead to significant costs by way of fines under the AI Act.

Panel 1 Summary - Fintech: Money Meets Machines
Panel 2 Summary - Going Global – Scaling your Technology Company

The content of this article is provided for information purposes only and does not constitute legal or other advice.
