Dart Charge - Service Assessment
Dart Charge is the new ‘free-flow’ charging scheme being developed for the Dartford Crossing. Users of the Dartford-Thurrock River Crossing of the Thames on the M25 are required to pay a charge to manage congestion of the bridge and tunnels, and Dart Charge is a new remote payment system intended to reduce congestion and ease traffic flow at the crossing.
Department / Agency:
DfT / HA
Date of Assessment:
18/11/2014
Assessment stage:
Beta
Result of Assessment:
Not Passed
Lead Assessor:
S. Wood
Service Manager:
I. Lewis
Digital Leader:
B. Etheridge
Assessment Report
The Dart Charge service is seeking permission to launch on service.gov.uk as a Beta service.
Outcome of service assessment
After consideration, the assessment panel concluded that the Dart Charge service should not be given approval to launch on the service.gov.uk domain as a Beta service. The service standard is grouped into themes, and the panel’s views on where the Dart Charge service currently stands against these themes are set out below.
Reasons
Assisted digital and digital take-up
The service team had not undertaken sufficient research into assisted digital (AD) users to understand their barriers, so could not show how the proposed support meets user needs. The service was too early in its development to meet the service standard for beta, but the service manager understands AD principles and has plans in place to undertake research with AD users over the next few weeks. The team has tested the digital service with people who are lower on the digital inclusion scale and made improvements based on the findings.
It was recognised that there are plans in place to test SMS messaging and automated telephone transactions. The panel would like to see the results of these tests, and actions based on these results, when the service returns for reassessment.
It was noted that the team had put in place telephone and face-to-face assistance. However, the team needs to carry out proper AD testing in order to demonstrate that these channels and providers meet the needs of the AD community. The panel would be looking for, amongst other things, evidence that users of these channels know how to make payments, and that the channels have low waiting times. This is particularly important because the payment window is short.
Finally, the panel had concerns about the call centre’s ability to handle initial volumes; AD users should not be penalised if that capacity falls short.
Technology and security
The assessment panel’s confidence in the security of the service rests on the service team’s confidence in the thoroughness of its formal accreditation process, including PCI accreditation and an end-to-end security architecture assessment by specialists in the team and the Highways Agency.
The service does not conform to the GDS policy on open standards and common government platforms. Commercial considerations may be hindering the process of making APIs open source. The panel would like more information on this when the service returns for reassessment.
The panel questions how the service would function if a user had JavaScript disabled. Furthermore, accessibility standards have not been implemented or tested with users. The cookie policy has not yet been written and, once written, should be published in accordance with EU legislation on cookies.
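By way of illustration only, the sketch below shows the kind of progressive enhancement the panel has in mind: the payment form is a standard HTML form that posts to the server and still works with JavaScript disabled, with script adding convenience (such as inline validation) only where it is available. The form and field ids and the message are hypothetical, not taken from the service.

```typescript
// A minimal progressive-enhancement sketch (illustrative only). The form must
// submit and validate server-side as normal; this script merely adds an
// inline check for users who do have JavaScript enabled.
const form = document.querySelector<HTMLFormElement>('#payment-form');   // hypothetical id
const amount = form?.querySelector<HTMLInputElement>('#amount') ?? null; // hypothetical id

form?.addEventListener('submit', (event) => {
  // Client-side convenience only: users without JavaScript get the same
  // outcome from server-side validation after a full page reload.
  if (amount && Number(amount.value) < 10) {
    event.preventDefault();
    amount.setCustomValidity('The minimum payment is £10');
    amount.reportValidity();
  }
});

// Clear the message as soon as the user edits the field, so the browser's
// built-in validation does not block a corrected resubmission.
amount?.addEventListener('input', () => amount.setCustomValidity(''));
```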
It is noted that the test environment was very nearly identical to the proposed beta service. However, the panel was not convinced that the testing regime is sound enough, nor that the expected loads are well enough understood. A service that is not available or has load issues would generate more work for the call centre.
User needs
The assessment team recognised the huge amount of work that the service team has carried out in research and in identifying user needs. Testing was carried out in the Dartford area, meaning that representative or ‘real’ users were taking part. Social media activity was analysed, and where research did not produce meaningful results (a pop-up session in a Dartford library did not identify enough drivers) an agency was used to improve the quality of user research.
The service manager explained the way stories are prioritised, adequately summarised a top-level user need in one sentence, and was clear that they owned the backlog.
The panel did have some concerns around decisions made in the early stages of the service's development. One is that the use of the Congestion Charge ‘C’ sign could confuse some users into thinking that the London Congestion Charge also covers the Dart Charge, and vice versa. The panel recommends that testing is carried out in this respect to ensure users are confident that this is not the case. The service has also not yet been tested with fleet car administrators, and it is recommended that this be carried out. A plan for addressing these concerns is in place and, provided that the team has the user research capacity going forward, the panel is reasonably confident these points will be addressed.
The team
The service team is split over multiple sites, which goes against the principles set out in the Government Service Design Manual. The assessment team is concerned that the development team (in New York) is too cut off from users of the service, despite efforts to share research with the New York team. The planned Google Analytics function is also to be based in New York (there is further comment on this below). Furthermore, the team is currently heavily reliant on GDS and DVLA for resource.
The service manager described a useful "steady state" model, although this is subject to a successful recruitment campaign and effective training. Funding appears to be in place, and the panel noted that the service manager is both highly competent and empowered to make decisions.
Improving the service
The service team is committed to improving the service, and the fact that the product is superior to what was presented at the Alpha review is evidence of this. Resources are not wholly under the service team's control at present: the team relies on a supplier (based in New York) with closed code, and on GDS researchers. It was, however, noted with approval that the service team aims to split the architecture so that it at least owns the web layer.
Design
The design of the web service appears sufficiently effective. Drop-offs take place at sign-up when users realise they have to pay a minimum of £10. The assessment panel realises that this is due to policy issues, however, the panel recommends the service team research this matter further in order to discover whether there is a figure that does not deter users. This evidence should then be used to push back on policy decisions which do not support user needs.
There are style issues within the service, with examples where wording could be improved, for example “A return journey counts as two crossings”. The name field could be a single input rather than being split into first and second names. The PIN question could be confusing, and the confirmation screen requires improvement.
Analysis and benchmarking
It was good to see that goals have been set up, and that an initial conversation has been held with the GDS Performance Platform team. The service team should follow up on this conversation and engage with the Performance Platform team to build a dashboard for the service. Doubts exist as to whether the Senior Information Risk Owner (SIRO) has authorised the use of Google Analytics; the Standard version of Google Analytics is currently in use, and it may impose a capacity limit once the service goes live.
The service's supplier will take on the analyst role. Given that this role is likely to be based in New York, there is a risk that Management Information (MI) will be provided rather than analysis.
The assessment panel recommends that the analyst works with the researchers and contributes to the creation of user stories in order to be properly effective. On-page events and funnels should be instrumented to provide better forward-looking analytical insight. Satisfactory plans are in place to provide benchmarking data, although several of the metrics will be gathered manually when they could be automated.
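To illustrate the kind of instrumentation meant here, the hedged sketch below uses standard Universal Analytics (analytics.js) calls to record funnel steps and on-page events. The step and event names are assumptions made for illustration, not the service's actual tracking plan.

```typescript
// Illustrative sketch of on-page event and funnel instrumentation using
// Universal Analytics (analytics.js). Names below are hypothetical examples.
declare function ga(...args: unknown[]): void; // global provided by analytics.js

// Record a funnel step as a virtual page view, so drop-off between steps of
// sign-up can be reported rather than inferred manually.
function trackFunnelStep(step: string): void {
  ga('send', 'pageview', `/sign-up/${step}`);
}

// Record a discrete on-page event, such as abandoning sign-up when the
// minimum £10 payment is shown.
function trackEvent(category: string, action: string, label?: string): void {
  ga('send', 'event', category, action, label);
}

// Example usage (hypothetical names):
trackFunnelStep('payment-amount');
trackEvent('sign-up', 'abandoned', 'minimum-payment-10');
```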
Testing with the minister
There is a plan in place to test with the minister this week.
Recommendations
The concerns raised above should be addressed before returning for beta re-assessment. In addition, the panel has further recommendations that should be considered before re-assessment. These include:
- Testing on how users relate to the 'C' Congestion Zone signage. This is to ensure users are confident that any London Congestion Charge they have paid does not cover the Dart Charge (and vice versa: that the Dart Charge does not cover the London Congestion Charge).
- Testing the service with fleet car administrators.
- Carrying out research to understand why 20% of users drop out of sign-up when they discover that the minimum charge is £10. Would a lower figure result in a higher take-up rate? If so, what is that figure? Any evidence for this should then be used to push back on policy.
- Working to ensure that users who have JavaScript disabled, especially those who also use assistive technologies, can still use the service.
- Ensuring that the digital analyst is not confined to producing MI reports. Instead, to be properly effective, the analyst should work with the researchers and contribute to the creation of user stories, discovering issues, and providing evidence.
- Publishing the list of cookies that are used by the service and why they are needed.
The above should not be seen as a checklist. When the service returns for reassessment, the assessment panel will be looking for changes in team and processes that will drive a deeper understanding of issues such as these.
Digital by Default Service Standard criteria
Criteria | Passed | Criteria | Passed
--- | --- | --- | ---
1 | Yes | 2 | No
3 | Yes | 4 | Yes
5 | Yes | 6 | Yes
7 | Yes | 8 | Yes
9 | No | 10 | No
11 | Yes | 12 | Yes
13 | No | 14 | Yes
15 | No | 16 | No
17 | No | 18 | No
19 | Yes | 20 | Yes
21 | Yes | 22 | Yes
23 | Yes | 24 | Yes
25 | No | 26 | Yes