Elena Hess-Rheingans, Data Ethics Lead, GDS

Building trust in data and AI: the new Data and AI Ethics Framework and self-assessment tool

An illustration in a warm, cartoon style: one person types code on a laptop, while another, wearing a white lab coat and safety goggles, carefully lifts a book-sized orange hazard label marked with a black exclamation mark.

The Responsible Data and AI team has launched an updated Data and AI Ethics Framework and self-assessment tool to help public sector teams innovate responsibly.

Making the Algorithmic Transparency Recording Standard (ATRS) mandatory across government

Artwork playing on the bounding boxes that computer vision algorithms use to mark detected objects in an image: here the boxes are 3D-printed frames positioned around objects in the physical environment, with some objects sticking out of their frames.

Use of the Algorithmic Transparency Recording Standard (ATRS) became mandatory for central government in 2024. Read about how the GDS Data and AI ethics team have rolled out the mandate across government and updated the ATRS to reflect lessons learned along the way.