AI in EU Securities Markets

Our Artificial Intelligence team sets out the key aspects of ESMA's recent article, which provides an overview of AI use across securities markets in the EU and assesses the extent of adoption of AI-based tools.


The use of artificial intelligence (AI) in finance is under increasing scrutiny from regulators and supervisors interested in examining its development and the related potential risks. ESMA recently published an article on Artificial intelligence in EU securities markets (the "ESMA Article"), which provides an overview of AI use across securities markets in the EU and assesses the extent of adoption of AI-based tools.

The ESMA Article explores the common applications of AI by entities operating in different sectors of the EU securities markets and assesses the prospects for increasing uptake of AI in these sectors, together with the associated risks and challenges. By providing a better view of the market, the ESMA Article aims to inform regulatory and supervisory prioritisation in this sphere and to facilitate an understanding of the interplay between industry practices and the regulatory framework.

The Article notes that the scale at which AI can be used, the speed at which AI systems operate, and the complexity of the underlying models may pose challenges to market participants intending to use them and to their supervisors. As a consequence, the use of AI in finance is under increasing scrutiny from regulators, who are beginning to develop AI-specific governance principles or guidance for financial firms.

In one of the major international efforts in this regard, the Commission presented its AI package in April 2021, including a proposal for a regulation laying down harmonised rules on AI (the ‘AI Act’) and a related impact assessment (European Commission, 2021). The AI Act is a cross-sectoral legislative proposal with the objective of ensuring the trustworthiness of AI following a risk-based approach: for riskier AI applications, stricter rules apply.

Interesting takeaways

The Article outlines how players in the asset management space are leveraging AI tools, how AI is used over the trading life cycle, and speaks to the AI tools used by credit rating agencies and proxy advisory firms. Some interesting takeaways include:

  • Portfolio managers are using a variety of AI tools to enhance fundamental analysis, and quant funds use them as part of systematic investment strategies. However, the use of these tools does not seem to be transforming portfolio managers’ practices in a disruptive or revolutionary fashion. AI can also be used as the backbone of portfolio risk management models and in early warning systems to predict market volatility and financial crises.
  • Investment funds’ use of AI is still constrained not only by technological and knowledge barriers, especially among smaller asset managers, but also by mixed feedback from clients. AI lacks widespread acceptance in this sector: the perceived risks of “black box” models and the challenge of explaining negative outcomes may deter certain investors. Indeed, most investment funds do not explicitly advertise the use of AI, with less than 0.2% of UCITS in the EU and around 0.03% of UCITS’ assets under management disclosing the use of AI.
  • In the trading life cycle, AI is also delivering concrete benefits, from the execution of trading orders to post-trade processes. In execution, AI allows brokers and large institutional investors to minimise the market impact of large orders by determining how to optimally split them across venues and time periods. In post-trade processes, AI is used by some central securities depositories (CSDs) and brokers to predict the probability of settlement failures. However, ESMA’s surveys suggest that most of these entities are not currently relying on AI models.

Potential risks

The Article notes the main risks that AI entails in the context of the securities markets as follows:

  • Explainability
  • Concentration, interconnectedness and systemic risk
  • Algorithmic bias
  • Operational risk, and
  • Data quality and model risk

It also notes that most of these risks are not inherent to AI models and algorithms themselves but can be amplified when AI is used, as AI systems typically operate at greater scale, complexity, and automation than traditional statistical models.

The AI Act and regulation of AI

The findings and risks identified in the Article somewhat mirror a common criticism of the path of AI regulation in the EU. Many argue that this technology is nascent and should be allowed to develop further before being curbed by the broad scope of the AI Act. There are genuine and reasonable concerns that the development of AI in the EU, especially in the start-up/scale-up Fintech space, could be significantly hampered, and even set back a number of years, by the introduction of the AI Act, particularly when compared to the US. Fintech AI will most likely fall into the category of “high-risk” AI, and the cost of compliance, from both a financial and a resource perspective, may prove prohibitive for many tech companies in this space. On the other hand, the risks identified in the Article are the very risks which the AI Act seeks to address for the purpose of building an EU ecosystem of trustworthy AI. The EU is, to an extent, playing the long game with trust, but what will be the short- or medium-term impact on innovation?

One way or another, the Article points to significant use of AI in the financial sector, and those creating or deploying AI in this space need to prepare for its regulation under the AI Act, which is expected to be signed into law later this year.

Comment

Overall, although market participants increasingly use AI to support certain activities and optimise specific phases of their business, this does not seem to be leading to a fast and disruptive overhaul of business processes. This is due to a variety of factors, among which are not only technological constraints but also clients’ preferences and regulatory uncertainty. Against this background, risks related to the use of AI in securities markets are material but still appear to be limited. However, as AI becomes increasingly embedded in the activities of these market participants, the associated risks will only increase. The AI Act, which will likely be introduced later this year, will specifically deal with these risks. Companies creating or deploying AI in this space should begin to consider what their own obligations may be when using this technology.

Please contact a member of our Fintech or Artificial Intelligence teams for more details.

The content of this article is provided for information purposes only and does not constitute legal or other advice.


