
Service assessments: A welcome update – for government, and for suppliers

A decade has passed since the Government Digital Service (GDS) introduced the Digital by Default Service Standard. The service standard as we know it today has been updated many times in the years since, but the intent that underpins it remains unchanged: it provides a set of principles that all digital teams must follow, defining how to create and run great public services.

For an agency like Zaizi, the service standard has been a positive thing. The standard’s 14 points cover aspects like understanding users, making services simple, solving whole problems for users, and using the right tools and technology.

At Zaizi we specialise in designing and building digital public services. By planning and tracking our work against the standard, we can be sure we’re solving the right problems, both for our government clients and for the public at large. 

We’ve also seen how the service standard is a positive thing for our government clients. They are often new to agile ways of working, and are reassured by the layer of governance the service standard provides. 

A new amber rating

This governance takes the form of a service assessment, a milestone at which a service team’s work is reviewed by a panel of government assessors. A service can expect to go through several assessments over its lifespan, typically at the end of the alpha, private beta and public beta phases.

At the beginning of May, a change was introduced in the way that services are assessed. Instead of marking each of the 14 points as “met” or “not met”, assessors will now rate each point red, amber or green (RAG). A full set of greens means the service can move to the next phase. A single red means reassessment. And the new amber rating sits in between. It means, in effect: you’re not quite there, but carry on – address these points within three months and keep the assessors updated on your progress.

I believe this new RAG approach to service assessments will be a very positive update, both for suppliers like us and the government stakeholders who we work with. 


Acknowledging the messiness of agile

It can feel like a lot depends on the outcome of a service assessment. The official guidance is not to over-prepare, but the prospect of an intense four-hour assessment means it’s important to know your work back to front.

It’s also important to point out that things don’t always run smoothly in a project phase. Teams like ours are often working with complex subject matter, with the specific intention of discovering new and unexpected information. The nature of agile working is that things can, and often will, change. Change is good, if it’s for the right reasons. 

The rather binary “met”/“not met” rating didn’t always feel like it acknowledged the inherent messiness of agile working. The official guidance was that “not met” didn’t mean “fail” – but a negative consequence of the binary approach was that, over time, everyone came to think of it that way. “Did you pass or fail the service assessment?”

Not meeting the standard is one thing if you’re an internal team: you can continue the work and go for reassessment when you’re ready. But it means something different for suppliers like us, who often work to fixed-term contracts – and for the government stakeholders we work with, who have a budget to keep to.

If a service doesn’t meet the standard, that has often meant something like a follow-up alpha. This could be another 12 weeks of work, requiring further departmental sign-off. Not good for our clients, and not a good use of taxpayers’ money.

A more pragmatic approach

If implemented well, the new amber rating promises to bring more pragmatism to the process. An amber rating, accompanied by clear recommendations from the assessors, will give teams what they need to address outstanding issues in a lean and efficient manner, perhaps with a reduced headcount.

It also feels more in line with the way that agile teams operate – incrementally, acknowledging that research and iteration are an ongoing process. Think “We want to do three more weeks of research with users”, rather than “We need to go back to the drawing board”.

There is still room for improvement in the way that government works with suppliers. Right now, ministers make decisions and digital teams have to go out and prove they are the right thing to do – or not, as the case may be. More digital leadership earlier in the process would lead to better results: it would allow the problem to be defined more clearly, and empower teams to follow the evidence to the right solution. But for now, this is a welcome change and a step in the right direction.

If you have any questions or would like to find out more about our ways of work, please get in touch.
