On 21 April 2021 the European Commission published a proposal for a Regulation to harmonise the rules on AI. The Commission's stated objectives for the proposed Regulation are to:
- ensure that AI systems used in the EU are safe and respect existing law on fundamental rights and EU values;
- ensure legal certainty to facilitate investment and innovation in AI;
- enhance governance and enforcement of the law on fundamental rights and applicable safety requirements; and
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
The proposal categorises AI systems by risk level, and proposes a different regulatory approach for each level:
- Unacceptable risk: AI systems considered a clear threat to people's safety, livelihoods and rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow 'social scoring' by governments.
- High-risk: AI systems identified as high-risk include AI technology used in:
- Critical infrastructures (e.g. transport), that could put the life and health of citizens at risk;
- Educational or vocational training that may determine access to education and professions;
- Safety components of products (e.g. AI application in robot-assisted surgery);
- Employment, workers management and access to self-employment (e.g. recruitment procedures);
- Essential private and public services (e.g. credit scoring);
- Law enforcement that may interfere with people's fundamental rights (e.g. evaluation of evidence);
- Migration, asylum and border control management (e.g. verification of travel documents);
- Administration of justice and democratic processes (e.g. applying the law to a set of facts).
- High-risk AI systems will be subject to strict obligations before they can be put on the market:
- Adequate risk assessment and mitigation systems;
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes;
- Logging of activity to ensure traceability of results;
- Detailed documentation providing information on the system and its purpose to assess its compliance;
- Clear and adequate information to the user;
- Appropriate human oversight measures to minimise risk;
- High level of robustness, security and accuracy.
- In particular, all remote biometric identification systems are considered high risk and subject to strict requirements.
- Limited risk: AI systems subject to specific transparency obligations. When using AI systems such as chatbots, users should be aware that they are interacting with a machine so they can make an informed decision to continue or step back.
- Minimal risk: The proposal allows free use of applications such as AI-enabled video games or spam filters, because these AI systems represent only minimal or no risk for citizens' rights or safety.
The Commission proposes that each member state should establish a competent market surveillance authority to supervise the new rules, as well as the creation of a European Artificial Intelligence Board to facilitate their implementation and drive the development of standards for AI. Additionally, it proposes voluntary codes of conduct for non-high-risk AI, as well as regulatory sandboxes to facilitate responsible innovation.
Organisations can be fined up to 6% of annual worldwide turnover (or €30 million, if higher) for breaching the prohibition on unacceptable-risk systems or the data governance requirements for high-risk systems. Most other breaches are subject to a fine of up to 4% of annual worldwide turnover (or €20 million, if higher), with a lower tier of 2% (or €10 million) for supplying incorrect, incomplete or misleading information to the authorities.
On 23 April the European Data Protection Supervisor (EDPS) published a statement welcoming the proposal, but expressing regret that its earlier calls for a moratorium on the use of remote biometric identification systems, including facial recognition, in publicly accessible spaces had not been addressed by the Commission. The statement says that the EDPS will continue to advocate a stricter approach to the automated recognition in public spaces of human features, such as faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals, whether used in a commercial or administrative context or for law enforcement purposes. It will be interesting to see whether and how the proposed Regulation is amended to reflect these concerns.
If the proposal is to result in a binding Regulation, it needs to go through the usual EU legislative procedure, involving the European Parliament and the member states. While the Regulation would not form part of UK law, it would apply to UK organisations that place AI systems on the EU market or whose systems produce outputs used in the EU. In addition, as we have reported in several recent issues of DWF Data Protection Insights, the UK government is currently scrutinising all aspects of AI. See the March 2021 issue, where we reported on DCMS's new national AI strategy, as well as flagging the Intellectual Property Office's call for views on the relationship between AI and intellectual property, and the Trades Union Congress's reports about the use of AI in employment relationships.
If your organisation is proposing to use AI, you will probably need to conduct a data protection impact assessment (DPIA) to identify any risks to individuals and how to mitigate them. Please contact one of our data protection specialists for advice on whether a DPIA is required and, if so, support in conducting the DPIA and addressing its findings.