As we advance into 2024, the shaping of artificial intelligence (AI) regulation is taking centre stage in legal discourse. This presents challenges to legislators, and to businesses as they adapt to new governance landscapes. The AI arena is evolving rapidly, coloured by distinct regulatory philosophies in the European Union, the United Kingdom, China, and the United States, creating a fragmented and complex global regulatory landscape. As ever, there is tension between the direction of travel of the regulators and the entrepreneurs seeking to bring their technology to the global masses.
As we near the end of 2023, however, from a regulatory perspective at least, all eyes are on the EU AI Act: when will it be agreed, and what will the regulation of large language models (LLMs) and foundation models look like? Recent UK announcements make clear that the UK is now looking more closely at regulating AI sooner rather than later. The experience of the EU AI Act project will likely influence both the UK and US regulatory journeys to a great extent.
Key developments in 2023
2023 has been action-packed from an AI regulation perspective. The long list of developments includes:
- The UK AI White Paper
- The EU Parliament’s draft of the AI Act, with the first mention of foundation models and generative AI
- The progression of AI standards
- The Trilogue negotiations
- The US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
- The London AI Safety Summit
- Major industry statements calling for a pause on the development of foundation models
- Calls for more and tougher regulation of AI, and
- Calls for lighter-touch or no regulation of AI, for fear of over-regulating, regulating too soon and stifling innovation
UK AI White Paper
The UK published its White Paper, 'A pro-innovation approach to AI regulation', in March this year. Like other territories grappling with regulating AI, the UK government pointed out the link between trustworthy AI and the need for some form of regulation: “Public trust in AI will be undermined unless these risks, and wider concerns about the potential for bias and discrimination, are addressed. By building trust, we can accelerate the adoption of AI across the UK to maximise the economic and social benefits that the technology can deliver, while attracting investment and stimulating the creation of high-skilled AI jobs.”
To marry its regulatory approach with innovation, the government specifically shied away from legislation and opted instead for a softer, principles-based framework. However, in a curious turn of events in late November 2023, the House of Lords published the Artificial Intelligence (Regulation) Bill, which sets out a framework for an AI law based on internationally recognised trustworthy AI principles. It will be interesting to see how the tension plays out in 2024 between the UK government’s preference for a ‘light-touch’, pro-innovation approach to regulation and the Lords’ apparent direct path to AI legislation.
The US Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
While the EU was trying to finalise the AI Act under the Trilogue negotiations, President Joe Biden issued an executive order on Safe, Secure, and Trustworthy Artificial Intelligence to advance a coordinated, federal government-wide approach to the safe and responsible development of AI. The Order reflects the Biden administration’s desire to make AI more secure and to cement US leadership in global AI policy. The fact that it was published only days before the London AI Safety Summit undermined to an extent the global safe AI platform the UK was constructing for itself.
These major developments on the AI regulatory front seem to be the only stories with the clout to puncture the ever-present wall of AI lobbying reported in the media. These lobbying efforts persist, highlighting the perils of regulating AI at this early stage on one side, and the weakness of the regulatory proposals on the other. Little or no attention is given to the equally important issue of AI standards. Indeed, for companies facing compliance with anticipated laws like the EU AI Act, the standards-setting work being done by CEN/CENELEC, which is drafting standards in response to a standardisation request from the European Commission, is arguably more important. Companies will ultimately rely on these standards to help them operationalise trustworthy AI in compliance with the EU AI Act.
2023 will principally be remembered as the year of foundation models and generative AI. It will also be remembered for the significant advances in AI law, AI safety and responsible AI. This is demonstrated by the international race in 2023 by governments to set out their respective stalls on AI regulation.
As we face into 2024, however, it is becoming apparent that while these governments have distinct AI regulatory philosophies, they appear to be coalescing around more concrete proposals for regulating AI rather than steadfastly clinging to an exclusively light-touch approach. Could it be that the EU got this one right from the start? Bear in mind that the EU's AI legislation project first kicked off in June 2018 with the first convening of the High-Level Expert Group, which would go on to produce the policy recommendations that formed the basis for the AI Act. That five-year head start may prove to be crucial in 2024.
As a final note, stakeholders are urged not to forget about AI standards and the anticipated CEN/CENELEC publications, possibly as early as the end of 2024. 2023 was all about the technology; 2024 will likely focus on regulating it.
For more information and expert advice, contact a member of our Artificial Intelligence team.