https://dataingovernment.blog.gov.uk/2025/12/18/building-trust-in-data-and-ai-the-new-data-and-ai-ethics-framework-and-self-assessment-tool/

Building trust in data and AI: the new Data and AI Ethics Framework and self-assessment tool

Illustration: one person types code on a laptop while another, wearing a lab coat and safety goggles, carefully lifts an orange hazard label.
Yasmin Dwiputri & Data Hazards Project / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

The use of data and artificial intelligence (AI) is helping government improve productivity and deliver better public services across many different areas. As data-driven technologies continue to evolve, however, it’s essential that guidance keeps pace to support their responsible development and use.

That’s why we’ve refreshed the Data Ethics Framework and developed a new self-assessment tool to support teams in making ethical decisions across the life cycle of their data and AI projects. To reflect this expanded scope, the framework has a new title: the Data and AI Ethics Framework.

The Responsible Data and AI team

Our team plays a central role in supporting the ethical and safe adoption of data-driven technologies and AI in the public sector. We help teams across the public sector to innovate responsibly by offering practical tools and guidance, such as the Algorithmic Transparency Recording Standard (ATRS) and the AI Playbook for the UK Government.

Why we updated the framework

The Data Ethics Framework was first published in 2016 and last updated in 2020. Since its inception, it has helped teams across the public sector and beyond deliver data projects in fair, transparent and accountable ways.

However, rapid advancements in AI are creating new and complex ethical challenges related to fairness, transparency, accountability, privacy, safety, societal impact and sustainability. As a result, we’ve updated the framework to ensure it remains relevant, and continues to support public trust in the government’s use of data and data-driven technologies.

Our methodology

We’ve refreshed the framework to reflect feedback from over 100 users across government, gathered through surveys, workshops and interviews. As part of our user-centred approach, we carried out:

  • a survey, workshops and deep dives with policy, delivery and technical teams
  • one-to-one interviews with data and AI practitioners, including data governance professionals and data analysts
  • iterative testing and drafting of content, which received feedback from government departments and external experts

This approach has helped us understand how teams currently engage with ethics guidance, where gaps exist, and what practical support is needed.

We also collaborated with external experts such as the Alan Turing Institute, and civil society organisations such as Connected by Data, to ensure the framework is grounded in real-world concerns and reflects diverse perspectives.

What’s new

The new Data and AI Ethics Framework includes several updates:

  • we’ve expanded the scope to cover AI and algorithmic technologies, and updated content on working with data
  • we’ve expanded the ethical principles to also cover privacy, environmental sustainability, societal impact and safety, in addition to fairness, accountability and transparency
  • we’re aligning the framework more closely with other government resources, such as the Model for Responsible Innovation, the AI Playbook and the ATRS, to provide a coherent suite of resources to support responsible innovation
  • we’ve included a self-assessment tool that teams can use alongside the framework to help identify risks and impacts in their data and AI projects

How to use the framework

The framework is written primarily for people who are designing, building, implementing, maintaining or updating projects that use data, AI and algorithmic technologies in the public sector. This may include developers, project managers, analysts, statisticians, policy, commercial or procurement professionals and more.

Users can work through the framework individually or as a team while planning, implementing and evaluating a new project. Each part of the framework is designed to be regularly revisited throughout a project’s life cycle, especially when changes are made. The self-assessment tool can be a helpful component in a robust governance process for data and AI tools within an organisation.

“This publication promises to have a profound and lasting impact at a critical juncture in the UK’s AI journey. At a time when ‘frontier AI’ systems present a new scope and scale of risks and opportunities, this effort by the GDS Responsible Data and AI Team will steward a trustworthy path forward for civil servants and AI developers as the rapid adoption of AI begins to radically transform government services and public administration across Great Britain. The framework will anchor and advance the UK’s pacesetting global leadership in responsible AI design, development, deployment and procurement for a long time to come.”

Professor David Leslie

Director of Ethics and Responsible Innovation Research, The Alan Turing Institute, and Professor of Ethics, Technology and Society, Queen Mary University of London.  

Plans going forward and how you can help

We’re launching the new Data and AI Ethics Framework in an iterative way, with plans to publish an updated version in the first half of 2026. Over the coming months, we’re:

  • doing more user testing to understand what works well, areas for improvement, and how people are using the framework
  • exploring options for developing an interactive online version to make the framework easier to use
  • planning information sessions to support use of the framework across government

We’re keen to hear how the Data and AI Ethics Framework and self-assessment tool work for you. If you have suggestions, feedback or questions, would like to be informed about upcoming events, or would like to participate in user research, please email us at GDS-Responsible-Data-AI@dsit.gov.uk.
