Five things you should know about the EU’s draft AI Regulation
The European Commission published its draft AI Regulation on 21st April 2021. The proposals are far-reaching. The regulation won’t form part of UK law, of course, but it does have extra-territorial effect and it will be influential. Here are five things you should know.
Some AI will be banned.
The draft regulation is ‘risk-based’: the greater the risk posed by an AI system, the stricter the rules. This is shown in Figure 1 (taken from the Commission’s accompanying literature).
Figure 1: The draft AI Regulation takes a ‘risk-based’ approach.
At the top of the pyramid is a limited category of AI practices which pose an ‘unacceptable risk’. These are banned. At a high level, the banned categories are:
Social scoring by governments.
The exploitation of vulnerabilities of children.
The use of subliminal techniques.
Live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes (subject to narrow exceptions).
Significant fines are proposed. A company failing to comply with the ban could be punished with a fine of up to EUR 30m or 6% of worldwide annual turnover (whichever is higher).
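The ceiling is the higher of a fixed sum and a turnover percentage, so for large companies the percentage limb is the one that bites. A minimal sketch of the arithmetic, using illustrative figures of our own choosing:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling proposed for the most serious infringements (prohibited AI
    practices): EUR 30m or 6% of worldwide annual turnover, whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# For a company with EUR 2bn turnover, the 6% limb (EUR 120m) exceeds the fixed floor.
print(max_fine_eur(2_000_000_000))  # 120000000.0

# For a company with EUR 100m turnover, 6% is only EUR 6m, so the EUR 30m floor applies.
print(max_fine_eur(100_000_000))  # 30000000
```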
‘High risk’ AI is permitted, subject to strict rules.
Working down the pyramid, the draft regulation sets out strict rules for ‘high risk’ AI systems. There are two key questions here: First, when is an AI system ‘high-risk’? Second, what rules will apply to ‘high-risk’ AI systems?
When is an AI system ‘high risk’?
The first question is when an AI system is ‘high risk’. The draft regulation sets out two routes:
‘High-risk’ areas referred to in technical annexes. Specific ‘high-risk’ areas are set out in the technical annexes which accompany the draft regulation. (The idea is that the text of the regulation is technology-neutral and future-proof, while the technology-specific annexes can be updated from time to time by the Commission as the technology moves on.) For now, AI systems in the following areas are considered ‘high-risk’:
Biometric identification and categorisation of natural persons.
Management and operation of critical infrastructure.
Education and vocational training.
Employment, workers management and access to self-employment.
Access to and enjoyment of essential private services and public services and benefits.
Law enforcement.
Migration, asylum and border control management.
Administration of justice and democratic processes.
‘High-risk’ by virtue of application of EU product safety rules. An AI system is also ‘high-risk’ if it is used in a product (or is itself a product) covered by certain EU product safety rules and those rules require the product to undergo a third-party conformity assessment.
What rules will apply to ‘high risk’ AI systems?
Extra rules apply to ‘high-risk’ AI systems. The key requirements are summarised below:
Risk management system
· ‘High-risk’ AI systems must have a risk management system.
· It must consist of a ‘continuous iterative process’ to identify, evaluate and manage the risks posed by the AI system.
Data and data governance
· Training data must meet specific quality criteria.
Technical documentation
· Technical documentation is required.
· It must conform to specific requirements, including a general description of the system, a description of its development process, etc.
Record-keeping
· Must have an automatic event logging feature.
Transparency and provision of information to users
· Must be “sufficiently transparent to enable users to interpret the system’s output and use it appropriately”.
· Must be accompanied by instructions for use.
Human oversight
· Must be capable of being “effectively overseen” by humans when in use.
· Human oversight aims to minimise risks to health, safety and fundamental rights.
Accuracy, robustness and cybersecurity
· Must be designed such that they achieve an appropriate level of accuracy, robustness and cybersecurity in light of their intended use.
The extent of these requirements for ‘high-risk’ AI systems (and the breadth of the ‘high-risk’ category itself) will be hotly debated. Expect to see significant differences of opinion as to the likely costs of compliance. A company failing to comply with these requirements could be punished with a fine of up to EUR 20m or 4% of worldwide annual turnover (whichever is higher).
‘Limited risk’ and ‘minimal risk’
To complete the picture, the draft regulation also envisages two lower-risk categories of AI systems: ‘limited risk’ and ‘minimal risk’ (see Figure 1). ‘Limited risk’ AI systems are subject to specific transparency requirements – e.g. an AI chatbot must disclose to a user that they are interacting with a machine. The draft regulation does not impose any obligations on ‘minimal risk’ AI systems, where existing legislation is deemed to be sufficient. The Commission is keen to point out that the “vast majority of AI systems currently in use in the EU fall into [the ‘minimal risk’ category].”
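With all four tiers now covered, the pyramid in Figure 1 reduces to a simple lookup from risk tier to headline consequence. The sketch below paraphrases the draft’s effect in each tier; the tier names follow the draft, but the wording of each consequence is our summary, not statutory language:

```python
# Summary of the draft AI Regulation's four-tier, 'risk-based' approach.
# Consequences are paraphrased for illustration, not quoted from the text.
RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "permitted, subject to strict requirements",
    "limited": "permitted, subject to specific transparency obligations",
    "minimal": "permitted, no new obligations (existing law applies)",
}

for tier, consequence in RISK_TIERS.items():
    print(f"{tier}: {consequence}")
```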
It has extra-territorial effect, like GDPR.
Again, the regulation won’t form part of UK law. But it does have extra-territorial effect, like GDPR, and it will be influential:
Extra-territorial effect. In keeping with the approach taken in GDPR, the draft regulation has an expansive geographical scope. If an AI system is placed on the EU market, or if its use affects people located in the EU, the draft regulation would be engaged – regardless of whether the actors involved are located inside or outside the EU. Thus, companies in the UK and further afield are potentially in scope (and at risk of sizeable fines).
Influence (first-mover advantage). Part of its significance is that the draft regulation is the world’s first comprehensive legal framework for AI. Its approach will prove influential in other countries as they come to regulate AI, including the UK. Equally, if other countries adopt different rulebooks, global tech firms may decide that it makes economic sense to roll out the European standard worldwide (assuming it is compatible with local positions).
One way or another, these rules will be important in the UK.
Get to know the new pan-European supervisory body: the European Artificial Intelligence Board.
In addition to the ‘risk-based’ categorisation of AI practices and systems, the draft regulation devotes much time to creating an EU governance framework for AI. This includes provisions on regulatory sandboxes, national supervisory authorities in EU member states, and a new pan-European supervisory body: the European Artificial Intelligence Board (“EAIB”).
The proposal is that the EAIB will be composed of senior representatives from each of the EU member states’ national supervisory authorities and the European Data Protection Supervisor. It would be chaired by the Commission and tasked with ensuring the regulation is applied consistently across the EU.
In time, the EAIB is likely to become as significant to AI lawyers as the European Data Protection Board is to privacy lawyers. Its guidance on interpreting the rules, complex technical points and novel legal issues will become familiar territory to those practising in the area.
What happens next?
Despite the excitement, we are still at a very early stage. The draft regulation published by the Commission is – in procedural terms – a ‘proposal’. Proposals formally kick off the EU’s complex legislative process.
Up to three readings in the European Parliament and the Council will now take place. Both bodies ultimately need to agree the text of the regulation. If/when this happens, it will be published in the EU’s Official Journal and the regulation will enter into force.
Even then, the draft regulation envisages a two-year implementation period between entry into force and most of the regulation’s provisions taking effect – much like GDPR between 2016 and 2018. However, a few governance-type provisions, including those establishing the EAIB and EU member states’ national competent authorities, would take effect earlier.
To put all this into context, it took over six years for GDPR to progress from an equivalent stage to its May 2018 implementation. If the draft regulation follows a similar timeline, it would not come into force fully until late 2027. Clearly we are still some way off.