UK government publishes public sector guidance on automated decision making

01 June 2021
The government has published a guidance document, Ethics, Transparency and Accountability Framework for Automated Decision-Making, which is designed to help public sector organisations use automated or algorithmic decision-making systems.  Read our overview of the key points.

On 13 May 2021 the Cabinet Office, the Central Digital and Data Office and the Office for Artificial Intelligence published a guidance document, Ethics, Transparency and Accountability Framework for Automated Decision-Making, for use by government departments.  The framework was created in response to a review which found that the government should produce clearer guidance on using artificial intelligence (AI) ethically in the public sector.

The guidance starts by distinguishing between:

  • Solely automated decision-making – decisions that are fully automated with no human judgment; and
  • Automated assisted decision-making – automated or algorithmic systems that assist human judgment and decision-making.

The guidance applies to both types of automated decision-making, but notes that the GDPR gives individuals the right (with limited exceptions) not to be subject to solely automated decisions which result in a legal or similarly significant effect.

The guidance sets out a seven-step framework process to follow when using automated decision-making:

1. Test to avoid any unintended outcomes or consequences - prototype and test your algorithm or system so that it is fully understood, robust, sustainable and delivers the intended policy outcomes (and unintended consequences are identified).  Conduct DPIAs (data protection impact assessments) where appropriate.
2. Deliver fair services for all users and citizens - involve a multidisciplinary and diverse team in the development of the algorithm or system to spot and counter prejudices, bias and discrimination.  Areas of potential bias overlap with special category data, e.g. race, ethnicity, sexual orientation and political or religious belief.
3. Be clear who is responsible - work on the assumption that every significant automated decision should be agreed by a minister and all major processes and services being considered for automation should have a senior owner.  Where the decision-making involves the use of personal data, the relevant DPO (data protection officer) should be involved.
4. Handle data safely and protect citizens’ interests - ensure that the algorithm or system adequately protects and handles data safely, and fully complies with data protection legislation.  The department needs to ensure that:

  • implementation aligns with the government Data Ethics Framework;
  • the algorithm and system keep data secure and comply with data protection law; and
  • data governance processes adhere to data protection law - this includes the core principle of data protection by design and default, and where required, completion of a DPIA.

5. Help users and citizens understand how it impacts them - under data protection law, for fully automated processes, you are required to give individuals specific information about the process.  The guidance states that you should work on the basis of a 'presumption of publication' for all algorithms that enable automated decision-making, notifying citizens in plain English when a process or service uses automated decision-making.
6. Ensure that you are compliant with the law – as well as data protection law, this includes the Equality Act 2010 and the Public Sector Equality Duty.
7. Build something that is future-proof - continuously monitor the algorithm or system, and institute formal review points (recommended at least quarterly) and end-user challenge, to ensure that it delivers the intended outcomes and mitigates any unintended consequences that may develop over time.

As well as setting out this framework, the guidance contains some general points to consider:

  • Algorithms are not the solution to every policy problem - because of the high risk associated with them, they should not be the go-to solution for the most complex and difficult issues; and
  • The risks are dependent on policy areas and context - senior owners should conduct a thorough risk assessment, exploring all options.  You should be confident that the policy intent, specification or outcome will be best achieved through an automated or algorithmic decision-making system.

If you need advice on implementing any form of automated decision making in your organisation, please contact JP Buckley.
