
Serious Cybernetics: The Commission's AI White Paper and the US government's AI Memo

1. Introduction

This blog looks at the legal aspects of the regulatory framework for AI set out in the European Commission's AI White Paper, published on 19 February 2020. We also compare and contrast it with the US government's draft AI Memo from January 2020.

AI lawyers will be studying the White Paper with interest because it's an early articulation of the Commission's thinking in this area. But, unsurprisingly for a document that marks the start of an early-stage public consultation, the discussion is mainly indicative. We're still talking about high-level ideas and concepts, not a draft 'AI Regulation'.

The main White Paper proposals we’ll be considering in this blog are: (i) the two-limbed test to determine if an AI application is ‘high risk’ (para. 3.1), (ii) mandatory requirements for ‘high risk’ AI (para. 3.2), (iii) the conformity assessment regime (para. 3.3), and (iv) the AI governance framework (para. 3.4).

We also set out some of the context in para. 2, on both the European and US sides.

2. Context

2.1       Europe

The White Paper is part of a broader suite of documents (all released on 19 February) looking at the EU’s data strategy and other policy aspects of what the Commission calls the “human-centric development” of AI. The other key documents are:

  • A report on the safety and liability aspects of AI, IoT and robotics – this looks at the implications for, and potential gaps in, the EU's existing liability and product safety framework arising from AI, IoT and robotics.
  • A European strategy for data – a five-year strategy for policy and investment to promote the EU’s data economy.
  • A document entitled 'Shaping Europe's digital future' – a policy document setting out, among other things, key actions the Commission will take in the coming years, with dates.

The White Paper also builds on several Commission documents on AI published in recent years, including the High-Level Expert Group's (HLEG) Ethics Guidelines for Trustworthy AI.

Where these older documents dealt with the AI regulatory framework, the discussion focussed mainly on the ethical implications of AI (the HLEG's Guidelines in particular). What's new about the White Paper is its focus on binding legal requirements.

2.2       United States

The US government pipped the Commission to the post by publishing its own AI regulatory guidance in a draft Memo in early January 2020.

The Memo offers less in the way of regulatory detail than the White Paper. And, in its sometimes strongly worded desire to "avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth", it signals reasonably clear philosophical differences.

It will be interesting to see how the Commission's expansive regulatory field of vision plays into this dynamic. As it did with GDPR, the Commission is clear that an AI regulation should bite all over the world: "In the view of the Commission, it is paramount that the requirements are applicable to all relevant economic operators… whether they are in the EU or not."

3. Future Regulatory Framework

The Commission’s proposals are clearly in their early stages, and they might look very different after a few rounds of public consultation. But there is some interesting new detail and a clear direction of travel towards a standalone EU legal instrument on AI.

This section picks out the key details.

3.1       Risk-based approach – is the AI high risk?

The White Paper sets out a two-limbed test to determine if an AI application is ‘high risk’. If an application is ‘high risk’ then mandatory requirements would apply (see paragraph 3.2 below). The White Paper gives detail on the limbs:

  • Limb 1 – Sector. Is the AI application “employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur”? High risk sectors “should be specifically and exhaustively listed… For instance, healthcare; transport; energy and parts of the public sector” and the list would be updated from time to time.
  • Limb 2 – Use. Is the AI application “used in such a manner that significant risks are likely to arise”? For example, does the use give rise to legal (or similarly significant) consequences for an affected individual or company? Is there a risk of injury, death or significant material or immaterial damage? Does the use produce effects that are unavoidable?

The White Paper is clear that the rules should be proportionate, so the test would be cumulative: both limbs must be met for an application to be 'high risk'. But there might also be "exceptional instances" where an AI application is 'high risk' regardless of sector (e.g. facial recognition), and this would override the test.
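
To make that cumulative logic concrete, here is a minimal, purely illustrative sketch in Python. The sector list, the 'exceptional instances' set and every name below are placeholders of our own invention – the White Paper proposes no such code or categories:

    # Illustrative sketch only: placeholder sectors and uses, not from the White Paper.
    HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}  # Limb 1 list
    EXCEPTIONAL_USES = {"facial recognition"}  # high risk regardless of sector

    def is_high_risk(sector: str, use: str, use_creates_significant_risk: bool) -> bool:
        """Apply the two-limbed test: cumulative, with an exceptional override."""
        if use in EXCEPTIONAL_USES:
            return True  # exceptional instances override the two-limbed test
        limb_1 = sector in HIGH_RISK_SECTORS   # Limb 1: employed in a listed sector
        limb_2 = use_creates_significant_risk  # Limb 2: used in a risky manner
        return limb_1 and limb_2               # both limbs must be met

    print(is_high_risk("recruitment", "cv screening", True))    # False: sector not listed
    print(is_high_risk("healthcare", "triage", True))           # True: both limbs met
    print(is_high_risk("retail", "facial recognition", False))  # True: exceptional use

The point of the sketch is simply that, absent the exceptional override, failing either limb means an application is not 'high risk'.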

The test has already attracted US criticism as being unhelpfully simplistic. Speaking the day after the White Paper came out, the US Chief Technology Officer, Michael Kratsios, found "room for improvement" in an approach which "clumsily attempts to bucket AI-powered technologies as either 'high risk' or 'not high risk'." The US preferred a "spectrum of sorts", Kratsios said.

3.2       Mandatory requirements for ‘high risk’ AI

According to the White Paper, mandatory requirements would only be triggered for AI applications that are ‘high risk’, either because they satisfy the two-limbed test, or because they are otherwise “exceptional”.

The White Paper is more circumspect about the mandatory requirements themselves – it gives six “key features”, rather than concrete proposals. Perhaps this isn’t surprising, as the nature and extent of the requirements are likely to be keenly debated during the consultation phase.

Equally, there are plenty of open questions about where the mandatory requirements sit in the broader regulatory context: e.g., How far should they be captured in separate technical standards? How would they interact with other AI frameworks, particularly the HLEG’s Ethics Guidelines?

What the White Paper does say is that the mandatory requirements “could consist of the following key features” (and gives a few examples):

  1. Training data – e.g. "Requirements ensuring that AI systems are trained on data sets that are sufficiently broad".
  2. Data and record-keeping – e.g. keep "accurate records regarding the data set used to train and test the AI systems".
  3. Information to be provided – e.g. "Citizens should be clearly informed when they are interacting with an AI system and not a human being".
  4. Robustness and accuracy – e.g. "Requirements ensuring that outcomes are reproducible".
  5. Human oversight – e.g. "Monitoring of the AI system while in operation and the ability to intervene in real time and deactivate".
  6. Specific rules for some AI use cases, e.g. remote biometric identification (no example given).

3.3       Conformity Assessments

The Commission is currently considering a process of “objective prior conformity assessment” to ensure ‘high risk’ AI applications meet the mandatory requirements (see paragraph 3.2 above).

Conformity assessments are a familiar part of EU product legislation: an “ex ante” test to check that a product meets requirements before it is placed on the market. AI conformity assessments could set out procedures for testing, inspecting or certifying AI applications. And they could look further under the bonnet by assessing an AI application’s algorithms or training datasets.

The White Paper picks out several nuances that need to be addressed in a conformity assessment for AI applications:

  • It might be difficult to test conformity with some of the mandatory requirements – the “information to be provided” requirement is used as an example (see para. 3.2 ‘Key feature’ #3 above).
  • If the AI application evolves or learns from experience, do you need to retest it?
  • Training datasets, as well as programming and training methodologies, would need to be tested.
  • There would need to be a remediation process if an AI application failed its conformity assessment.

It seems like there is some overlap with the US approach on conformity assessments. The US government’s Memo notes that “targeted agency conformity assessment schemes… will be essential”. It may be that there’s room for a bridge between the regimes here, perhaps similar to what the EU-US Privacy Shield has done for transatlantic personal data sharing.

3.4       AI governance framework

The White Paper briefly suggests that the European AI governance framework might consist of “a network of national authorities, as well as sectoral networks and regulatory authorities, at national and EU level”. A “committee of experts”, possibly the HLEG or an AI equivalent of the European Data Protection Board, “could provide assistance to the Commission”.

Evidently there's a lot to be worked through here too: striking a balance between national and sectoral bodies that are close enough to the detail, and a higher-level 'guiding hand' to ensure the centre holds.

4. Conclusion

Public consultation on the White Paper ends on 31 May 2020. No doubt the Commission’s response, and the view from industry when submissions are published, will make for fascinating reading.

While the focus for now is clearly on new rules specifically for AI, our parting shot is to point out that many of the foundations for AI regulation are already in place. In Europe, GDPR is likely to play a key role. GDPR already: (1) tightly controls automated decision-making using personal data and (2) limits the processing of biometric data (including facial images) to identify individuals. While these restrictions will come under new pressures in an increasingly AI-driven world, industry commentators point out that data law is "a great baseline for achieving AI regulation".
