https://dataingovernment.blog.gov.uk/2017/09/04/introducing-agile-into-an-established-data-science-team/

Introducing agile into an established data science team

Agile working is often associated with newly created digital teams, but what happens when you introduce it into a more established team? This is what I’ve been leading within the Better Use of Data (BUOD) team at GDS. I’ve ended up learning easily as much as I’ve introduced, so this post is about sharing some of that new-found knowledge.  

The BUOD team was set up a few years ago to demonstrate the opportunity for data science to improve the way we work. At the time, there were very few data scientists working within government, so the team was mainly parachuting into different departments to deliver small-scale innovative projects that proved data science could add value.

A lot has changed since then: the govdatascience community of interest now has more than 750 members and the team’s projects are on a much larger scale, making it an ideal candidate for trying a more agile approach to getting things done.

[Image: The Better Use of Data team at GDS]

Building an agile data science team

I joined the team in March 2017 as their first product manager. When I started, we had 4 data scientists, 2 policy advisers, a developer and 2 delivery managers. We’ve since hired another developer, 2 junior data scientists, a user researcher and a content designer. With the team growing, we decided to take stock of how we were working and think about how we could have the biggest possible impact.

Many of the changes that we made were about moving us towards a more agile way of working. We didn’t rigidly adopt Scrum or other agile processes where they didn’t work for our needs. Instead, we took the principles of user-focused, iterative design and adapted them to suit our projects. One of those adaptations was the agile discovery, and that’s the focus of the rest of this post.

Data science discoveries

I started the ball rolling by running some training for the team on what usually happens in an agile discovery. We agreed that we liked the principles of clearly articulating the problem we’re looking to solve, identifying our user groups, their needs and some of the business and technology constraints we might be operating within. But we also felt that by answering all of these questions we wouldn’t necessarily have all of the information we needed to decide whether it was worth proceeding with the project.

So next, I ran a workshop with the team where we brainstormed the phases of a data science project and the questions we needed to answer before we could progress to the next phase. We also talked about what deliverables we might consistently expect to see upon completion of these phases.

For a data science discovery we came to the conclusion that the questions we needed to answer were:

  1. What is the problem we’re trying to solve?
  2. Who are our users?
  3. What are their needs?
  4. What constraints are we operating within?
  5. What is the technology legacy?
  6. Who else has solved this problem?
  7. What data sources will we have access to?
  8. What data science methodologies could we use?
  9. Are there any ethical implications to the work we’re proposing?
  10. How should we proceed?

The ‘new’ sections we added were points 7, 8 and 9 above: auditing the data sources, exploring possible methodologies and considering the ethical implications of the project. I go into more detail on why we added these sections below.

What data sources will we have access to?

Often, data is the main obstacle that stops a data science project: access to it, its quality or its availability. A discovery is supposed to help you make a decision about whether it’s worth proceeding with a project. For us, understanding the data sources available to us is therefore fundamental.

Some organisations don’t even know what data they hold. Others have lots of data, but it’s in such bad shape that it’s very hard to work with. Still others have good data, but it’s impossible to get access to it for security reasons. Understanding all of this in advance of starting a project greatly increases the chances of success.
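
To make that data audit concrete, here’s a minimal sketch in pandas of the kind of quick profiling a discovery might include. The records are entirely made up, but checks like these surface missing values, duplicates and mistyped fields before a project commits to the data.

```python
# A minimal data-audit sketch using pandas; the records are made-up
# placeholders standing in for whatever extract a department can share.
import pandas as pd

df = pd.DataFrame({
    "case_id": [101, 102, 102, 103],          # note the duplicate row
    "opened": ["2017-01-03", "2017-01-05", "2017-01-05", None],
    "outcome": ["resolved", None, None, "escalated"],
})

# How complete is each field? High missingness is an early warning sign.
print(df.isna().mean().sort_values(ascending=False))

# Are records duplicated? Duplicates inflate counts and skew any analysis.
print("Duplicate rows:", df.duplicated().sum())

# Do the types match expectations (dates as dates, numbers as numbers)?
print(df.dtypes)
```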

What data science methodologies could we use?

I’m not a data scientist, but I’ve learnt very quickly in this team that working with data science is not a straightforward process. Data science projects are experimental by nature. They fit somewhere between academic study, research and development, and software development. We often don’t know what is possible when we start. However, we can do some thinking about which approach would work best.

Sometimes the user needs match a well-known data science technique. For example, if you wanted to predict the outcome of an intervention, you would probably choose a machine learning method. Or if you’re trying to get meaning from a large amount of free text, you might consider topic modelling as a way of understanding themes in the text.

But within each of these broad methodological themes (and there are many more than I’ve mentioned here) there are myriad ways of tackling the specific problem. Each approach will have drawbacks as well as benefits, and so our data scientists often spend a lot of time brainstorming and experimenting with several before they find something that works.  
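
As a hedged illustration of the free-text example above, here’s a minimal topic modelling sketch using scikit-learn’s LDA implementation. The documents are invented placeholders, and in practice the team would trial several algorithms and parameter settings rather than settling on the first one that runs.

```python
# A minimal topic-modelling sketch using scikit-learn's LDA.
# The documents are placeholders; a real corpus would be much larger.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "renew my passport before travelling abroad",
    "apply for a passport for my child",
    "pay vehicle tax online for my car",
    "check when my vehicle tax is due",
]

# Turn free text into word counts, then fit two topics.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the most heavily weighted words in each topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-3:][::-1]]
    print(f"Topic {i}: {top}")
```

Even in a toy example like this, the choice of vectoriser, number of topics and stop words all change the output, which is why a discovery only needs to establish that an approach is plausible, not pick the final one.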

Are there any ethical implications to the work we’re proposing?

This is critical and something we particularly – as civil servants – need to be constantly aware of. It is not as simple as checking whether the data you’re working with is sensitive or contains information that can personally identify someone.

One of the most powerful ways we can use data is to combine it with other data sets to get new insights. But by combining two datasets it is possible to produce new information that could be used to identify someone. This is why there are strict controls around working with data in government.
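
A toy sketch of that mechanism, using entirely fabricated data: two tables that look harmless on their own, but joining them on shared quasi-identifiers ties a supposedly anonymous record back to a named individual.

```python
# A toy illustration (entirely made-up data) of re-identification risk
# when two datasets are combined on shared quasi-identifiers.
import pandas as pd

# An "anonymised" health extract: no names, just postcode area and age band.
health = pd.DataFrame({
    "postcode_area": ["SW1", "SW1", "N1"],
    "age_band": ["30-39", "40-49", "30-39"],
    "condition": ["asthma", "diabetes", "asthma"],
})

# A separate dataset that does carry names.
register = pd.DataFrame({
    "name": ["A. Example", "B. Example", "C. Example"],
    "postcode_area": ["SW1", "SW1", "N1"],
    "age_band": ["30-39", "40-49", "30-39"],
})

# Joining on the quasi-identifiers links conditions back to named individuals.
linked = health.merge(register, on=["postcode_area", "age_band"])
print(linked)
```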

There are also things to consider when we are using machine learning to make decisions previously made by humans. We train machine learning algorithms to make decisions by showing them historical data and outcomes. They ‘learn’ by spotting patterns in that data.

But if the historical data has bias built into it, the algorithm will continue to make that mistake over and over again. There’s a great TED talk by Cathy O’Neil with some real-world examples of this problem. So how do we ensure we don’t introduce bias into our algorithms?
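
Here’s a deliberately simplified sketch, again with fabricated data, of how that happens: if the historical decisions already penalised one group, a model trained on those decisions learns the same penalty.

```python
# A toy illustration (fabricated data) of a model learning historical bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Equally capable applicants from two groups...
group = rng.integers(0, 2, n)       # group membership, 0 or 1
score = rng.normal(50, 10, n)       # genuine ability, same for both groups

# ...but the historical decision penalised group 1.
hired = (score + np.where(group == 1, -10, 0) + rng.normal(0, 5, n)) > 50

# A model trained on those decisions learns the same penalty.
model = LogisticRegression().fit(np.column_stack([score, group]), hired)
print("Coefficient on group membership:", model.coef_[0][1])  # negative
```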

We also know that even with lots and lots of accurate training data, anomalies will occur, and so predictions made by computers will never be 100% accurate. But often the human prediction would also be inaccurate. So at what confidence level does an algorithm become preferable to a human? And are there any decisions we shouldn’t be outsourcing to a computer?

In the Better Use of Data team we use the data science ethics framework to help us ask the right questions. Work to gather feedback on the use of the framework is ongoing. If you want to contribute, contact Sarah Gates.

Testing, testing

Once we’d brainstormed the above additions, a couple of the data scientists in our team took these ideas away and used them to write their first data science discovery report. It worked well as a structure for the project we applied it to, so we’re going to continue using it as a template. If you’ve got ideas for how you think it could be improved, contact me.

Philippa Peasland is the Product Manager for the Better Use of Data team in GDS. You can follow her work-related musings on Twitter or Medium.

1 comment

  1. Comment by Marko Stojovic

    As you've mentioned Cathy O'Neil's TED talk as an instructive reference on ethical considerations, I felt compelled to make a few observations based on my viewing.

    The main point is obviously well made and brought home. However, the examples quoted weren't particularly well suited to demonstrating bias.

    The initial example on teacher grading showcased a model in which there was a complete absence of correlation between scores for the same teachers - a rubbish model rather than a biased one.

    There was a distinct point to be made about not relying SOLELY on algorithms. Some of the 'wrongful treatment' cases were really about this rather than about bias.

    And finally, the example of offender data being based predominantly on interventions in ethnic minority neighbourhoods is a case of ensuring application of an algorithm to the correct population. In actual fact, a model where a single race was very highly prevalent would NOT be particularly sensitive to variations by race, owing to the near-homogeneity of the race variable.

    Obviously, as Cathy says, the fight is a political one for algorithmic transparency, and the talk does work extremely well on that level and is well worth watching for that.

