
https://dataingovernment.blog.gov.uk/2025/05/27/is-it-possible-to-build-responsible-ai-tools-in-human-resources/

Is it possible to build responsible AI tools in human resources?

A graphic representing numerous people profiles across a digital network

In an era where Artificial Intelligence (AI) is increasingly assisting people in the workplace, it is essential that any new technology is introduced in an ethical and responsible manner. This includes being mindful of the risks of biased outputs and ensuring that human judgement remains central to critical decision-making processes.

The IDEA Unit, part of the Data Directorate in the Government Digital Service (GDS), is actively identifying opportunities to embed AI and automation tools across other teams within GDS. This initiative aims to equip public servants with enhanced tools to support their work, in line with the six-point plan of the Blueprint for modern digital government.

In 2024, we were approached by GDS’ Capability Team seeking assistance with how they collected and managed their project data, and with exploring whether AI could be employed to streamline a time-consuming search task.

This team leads career development conversations for Senior Civil Servants (SCS) in the digital and data profession throughout government and collaborates with their line managers to assist in shaping career development pathways.

Our team recognises the challenges associated with handling sensitive data, particularly in Human Resources (HR), and has contributed to the AI Playbook for the UK Government to share guidance on the responsible use of AI.

In December 2024 we published an Algorithmic Transparency Recording Standard (ATRS) record outlining the AI solution we had built. 

In this blog, we demonstrate that addressing the risks associated with AI throughout data science development and testing enabled us to build a useful and responsible tool using a Large Language Model (LLM), even in a business area as sensitive as HR.

Identifying the need for change  

We reviewed the existing manual process for collecting career data on SCS and using it to assess people’s suitability for vacant digital roles, so they could be informed of potential career development opportunities. This analysis revealed an opportunity for us to assist our Capability colleagues in achieving significant efficiency savings.

Responsible development

Principle 4 of the AI Playbook addresses decision-making and emphasises the importance of maintaining ‘meaningful human control at the right stages’. To uphold this principle, we committed to the following:

  • A human would always retain the ability and responsibility for decision-making, ensuring that the process remained transparent. All information and insights underpinning AI outputs would be accessible to the user.
  • AI would be deployed only where it was justified and could deliver significant benefits, concentrating on processes where its capabilities are most suitable.
  • AI would not irreversibly replace any part of the process; the option to revert to existing manual activities would always remain available if needed.

To address other ethical considerations we ensured that:

  • Any use of personal data was pseudonymised to protect individual privacy.
  • Prompt engineering incorporated measures to mitigate bias, fostering fairness in AI outputs.
  • Testing was conducted to detect and address potential bias within the system.
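The blog does not describe the exact privacy technique used, but the pseudonymisation measure above can be illustrated with a minimal sketch. This is not the team’s actual implementation; it assumes a keyed hash (HMAC) replaces names with stable tokens, so records can still be matched consistently without exposing identities:

```python
import hashlib
import hmac


def pseudonymise(name: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a stable, non-reversible token.

    The same name always yields the same token (so records still join up),
    but the name cannot be recovered without the key-holder's mapping.
    """
    digest = hmac.new(
        secret_key,
        name.strip().lower().encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    return f"SCS-{digest[:8]}"
```

Keeping the key separate from the dataset is what distinguishes pseudonymisation from full anonymisation: an authorised key-holder can re-identify a shortlisted individual, while the tool itself only ever sees tokens.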

Succession Select: AI that supports rather than replaces 

The team developed a tool called Succession Select, utilising an LLM in a multi-step process. Here’s how it works:

  1. Create ideal candidate description: When searching for suitable candidates, Succession Select first creates an ideal candidate career description based on the role’s requirements, serving as a benchmark for fair comparison. This description is informed by the LLM’s training data, which includes typical job titles, skills, and responsibilities across various levels of seniority in a wide range of job roles. 
  2. Match candidates: The tool then compares this idealised description against an authorised database of pseudonymised profiles from Senior Civil Servants to identify potential matches. It evaluates factors such as career history, skills and grade, returning to the user a long list of career profiles that are relevant to the specific role vacancy. 
  3. Provide suggested shortlist: Finally, Succession Select generates a shortlist of candidates for human review, summarising each candidate’s profile and detailing why they could be a good fit for the role, alongside any concerns or issues (such as lack of experience in a specific area or length of time in post).
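The three steps above can be sketched in code. This is a simplified illustration, not the actual Succession Select implementation: the real tool uses an LLM to draft the benchmark description and judge matches, whereas here a simple skill-overlap score (Jaccard similarity) stands in for the LLM comparison, and all names (`Profile`, `ideal_profile`, and so on) are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Profile:
    profile_id: str      # pseudonymised token, never a real name
    skills: frozenset    # e.g. {"data strategy", "service delivery"}


def ideal_profile(role_requirements: set) -> Profile:
    # Step 1: build the benchmark. In the real tool an LLM drafts an
    # ideal candidate description from the vacancy; here the role's
    # requirements serve directly as the benchmark skill set.
    return Profile("IDEAL", frozenset(role_requirements))


def match(ideal: Profile, database: list) -> list:
    # Step 2: compare every pseudonymised profile against the benchmark.
    # Jaccard overlap of skills stands in for the LLM's judgement.
    def score(p):
        return len(ideal.skills & p.skills) / len(ideal.skills | p.skills)

    return sorted(((p, score(p)) for p in database),
                  key=lambda pair: pair[1], reverse=True)


def shortlist(ideal: Profile, scored: list, top_n: int = 2) -> list:
    # Step 3: hand the top candidates to a human reviewer, flagging
    # any gaps (missing skills) so the final judgement stays with a person.
    return [
        {"id": p.profile_id,
         "score": round(s, 2),
         "gaps": sorted(ideal.skills - p.skills)}
        for p, s in scored[:top_n]
    ]
```

The human reviewer receives the shortlist with scores and flagged gaps, keeping the final decision with a person rather than the model, as the principles above require.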

This streamlined process means the team can now focus their efforts on assessing candidate suitability, rather than sifting through data to find matching keywords. Additionally, by following a systematic, data-driven process, Succession Select promotes a more impartial method for suggesting internal candidates.

One user highlighted the transformative impact of the tool: "AI is transforming our internal processes by moving us from manual searching to strategic engagement with the best-fit talent. It enables faster identification of internal candidates by using data on skills, experience, and aspirations - giving us more time to focus on career development and decision-making."

A picture of building blocks being picked up representing individuals within the structure of an organisation

Looking ahead

The improved data pipeline we have established is now fully integrated into the customer’s workflow. The Succession Select search tool is regularly used to identify individuals across government to support important career conversations.

The IDEA unit continues to manage the tool, with plans to enhance the user experience by making it more intuitive and streamlined. Additionally, there are plans to expand the dataset to include more detailed career information, which will improve the accuracy of its suggestions.

Empowering government digital reform

The introduction of Succession Select illustrates that, by understanding the associated risks, AI can be successfully integrated into business workflows in a manner that supports rather than replaces human input. This approach ensures that the incorporation of AI is both responsible and transparent, enhancing productivity while maintaining essential human oversight.

Read more about Succession Select via its Algorithmic Transparency Recording Standard (ATRS) record. There you will also find the ATRS record for our other AI tool, a chatbot called AskOps.

Read the AI Playbook for the UK Government to see the framework and principles which guide safe, responsible and effective use of AI in government organisations.

Join the Artificial Intelligence community of practice to connect with people interested in AI across government, attend the monthly meet-ups and receive newsletters with all the latest AI news from across government.
