Sheffield United were cruelly denied a clear goal against Aston Villa in the first game of the Premier League’s return following the Covid-19 hiatus. The referee relied on Hawk-Eye goal-line technology, which failed to spot the ball crossing the line, and no goal was given. We take a look at how a technology howler like this might lead to issues for Hawk-Eye under proposed new EU AI laws.
Blades of Glory – No such luck
Sheffield United appeared to have opened the scoring just before half-time when the Aston Villa goalkeeper clearly carried the ball over the line as he tried to intercept a free kick under pressure from the Blades’ attack. To the astonishment of the attacking team, the referee waved play on because he received no goal alert from either his watch or earpiece, which are supposed to inform him if the ball has crossed the line.
Later that evening, Hawk-Eye posted a statement on Twitter owning up to a technical fault and apologising unreservedly to all involved. It provided the following explanation:
“The seven cameras located in the stands around the goal area were significantly occluded by the goalkeeper, defender and goalpost. This level of occlusion has never been seen before in over 9,000 matches that the Hawk-Eye Goal Line Technology system has been in operation.
The system was tested and proved functional prior to the start of the match in accordance with the IFAB Laws of the Game and confirmed as working by the match officials. The system has remained functional throughout. Hawk-Eye apologises unreservedly to the Premier League, Sheffield United and everyone affected by this incident.”
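A simplified way to picture why occlusion can silence a multi-camera system: triangulating the ball's 3D position requires a minimum number of cameras with an unobstructed view. The sketch below is purely illustrative; the seven-camera count comes from Hawk-Eye's statement, but the two-view threshold and decision logic are assumptions, not Hawk-Eye's actual (proprietary) algorithm.

```python
# Illustrative sketch only: not Hawk-Eye's real algorithm.
# Goal-line systems triangulate the ball's position from multiple camera
# views; if too few cameras can see the ball, no goal alert is sent.

MIN_CLEAR_VIEWS = 2  # assumption: at least two unobstructed views to triangulate


def can_signal_goal(camera_has_clear_view: list[bool]) -> bool:
    """Return True only if enough cameras can see the ball to locate it."""
    return sum(camera_has_clear_view) >= MIN_CLEAR_VIEWS


# Normal play: all seven cameras see the ball, so a goal can be signalled.
print(can_signal_goal([True] * 7))  # True

# Heavy occlusion (goalkeeper, defender, goalpost): only one clear view,
# so the system stays silent, producing a false negative rather than a crash.
print(can_signal_goal([False] * 6 + [True]))  # False
```

Note that on this model the system can remain "functional throughout", as Hawk-Eye's statement put it, while still failing to flag a goal: nothing breaks, the system simply lacks enough clear views to make a call.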
Proposed EU AI Regulation
Hawk-Eye is primarily a computer vision company, and it deploys artificial intelligence tools like machine learning and deep learning to create and continuously develop its technology. The regulation of AI tools like this is a priority for the recently appointed President of the EU Commission, Ursula von der Leyen. Her plans are set out in a recently published Commission paper that maps out proposals for the regulation of AI – the White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.
Under these plans, technology like Hawk-Eye’s could be subject to new laws that seek to prevent mistakes like this disallowed goal. The proposals aim to promote trustworthy AI so that consumers feel comfortable using it.
Technical Robustness and Accuracy
It is intended that AI systems, and certainly high-risk AI applications, must be technically robust and accurate in order to be trustworthy. That means such systems need to be developed responsibly and with proper consideration of the risks they may generate. Their development and functioning must ensure that AI systems behave reliably as intended, and all reasonable measures should be taken to minimise the risk of harm. To ensure this, the following elements should be considered:
Requirements ensuring that the AI systems are robust and accurate, or at least correctly reflect their level of accuracy, during all life cycle phases
Requirements ensuring that outcomes are reproducible, and
Requirements ensuring that AI systems can adequately deal with errors or inconsistencies during all life cycle phases
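To make the reproducibility and error-handling requirements above concrete, a supplier might be expected to demonstrate them with tests along the following lines. This is a hypothetical sketch: the `track_ball` function is a toy stand-in invented for illustration, not any real system or any requirement actually specified in the White Paper.

```python
# Hypothetical sketch of evidencing two White Paper requirements:
# reproducible outcomes, and defined handling of errors/inconsistencies.
# The "tracker" below is a toy stand-in, not a real goal-line system.
import math


def track_ball(frames: list[float]) -> float:
    """Toy deterministic 'tracker': average of the valid frame readings."""
    valid = [f for f in frames if not math.isnan(f)]  # drop inconsistent frames
    if not valid:
        # Fail loudly with a defined error rather than silently guessing.
        raise ValueError("no usable frames")
    return sum(valid) / len(valid)


readings = [0.9, 1.1, 1.0]

# Reproducibility: the same input yields the same output on every run.
assert track_ball(readings) == track_ball(readings)

# Error handling: wholly inconsistent input raises a defined error
# instead of producing a silent wrong answer.
try:
    track_ball([float("nan")])
except ValueError:
    print("handled inconsistent input")
```

The design point is that both properties are testable: determinism can be asserted by re-running the same input, and error behaviour by checking that bad input produces a defined failure rather than an undefined result.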
Hawk-Eye’s explanation of the issues on this occasion gives rise to uncertainty as to whether the above criteria would (hypothetically) be met. It is apparent that there was some issue with either the data underpinning the technology or the system’s ability to cope with a situation that was not, on its face, abnormal. Under the Commission proposals, Hawk-Eye may need to go through pre-market testing and conformity assessment to demonstrate that its systems meet the above requirements. It may need to show in future how its technology can cope with errors or inconsistencies like those that arose in the Sheffield United and Aston Villa game.
There are threshold criteria under these Commission proposals regarding the types of AI that will be targeted by the new laws. Initially, sectors like healthcare, transport, energy and parts of the public sector will be targeted, and only high-risk AI applications within those sectors. Whether Hawk-Eye would fall within those criteria is unclear. It may instead fall under a secondary, voluntary labelling regime.
The specific regulation of AI products and services in the EU is now advancing at pace, and manufacturers and suppliers in this space need to sit up and take notice. The EU is demonstrating a significant appetite for the regulation of all areas of technology, and it is no surprise that it is now tackling the technological revolution that is AI. For the moment, Hawk-Eye has time to work through its technical issues, but in time the EU may be looking under the hood of its technology and requiring it to conform to specific AI standards before being allowed back on the pitch.
For more information, contact a member of our Technology team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.