Provocatype 2 – this time it’s personal

Managed to speak to a few people about the provocatype I’ve been working on. It’s still in Lucid and I’ve only got as far as stitching together a fairly simple journey – what’s available to someone who’s relatively healthy.

The fidelity is at the right level – people are engaging and want to give input. I can take their thoughts on board without any stress.

There are some limitations though. It’s too easy to think of this as a single journey and a single session. I need to think about how a user can return to a half‑complete journey. I also want to show the potential amount of branching, but that probably needs to be done at a lower level of detail.

I think making some working software will help with those limitations, but to do that I basically need to build a fairly complex prototype that can handle (fictional) data about people – their health, activity, preferences and so on – and match them to services. Maybe something hardcoded is more pragmatic, but it might not support us in thinking through problems in as much detail.
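To make that trade-off concrete, here’s a minimal sketch of what the hardcoded route could look like. Everything in it – the people, the services and the eligibility rules – is fictional and invented purely for illustration; it isn’t the real prototype.

```python
# A rough, hardcoded sketch of the matching idea. All people, services
# and rules here are fictional and made up for illustration only.

from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    age: int
    conditions: list[str] = field(default_factory=list)
    activity_level: str = "moderate"   # e.g. "low", "moderate", "high"
    preferences: list[str] = field(default_factory=list)

# Hardcoded catalogue of (fictional) services, each with a simple
# eligibility rule expressed as a function of the person.
SERVICES = {
    "walking group": lambda p: p.activity_level in ("low", "moderate"),
    "gym referral": lambda p: p.activity_level == "high" and p.age < 70,
    "diabetes check-in": lambda p: "diabetes" in p.conditions,
    "online coaching": lambda p: "digital" in p.preferences,
}

def match_services(person: Person) -> list[str]:
    """Return the names of services this person is eligible for."""
    return [name for name, rule in SERVICES.items() if rule(person)]

# Example: a relatively healthy (fictional) user, like the journey
# in the provocatype.
sam = Person(name="Sam", age=52, preferences=["digital"])
print(match_services(sam))  # ['walking group', 'online coaching']
```

Even something this crude makes the limits visible: every new branch means editing code, and there’s nothing here to hold a half‑complete journey between sessions.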

Edge cases

I can’t say much about this, but in one of our services, the users are hitting an edge case that was anticipated but nobody expected to happen. It’s benign and basically an admin issue.

I wasn’t part of the discussions for this service when the team was deciding how much of a risk it was and whether it was worth mitigating. I’ve had plenty of discussions like this though, and I find them really fun: sensing out where the edges of a problem lie, how likely it is that a user will hit them, and what the consequences are. They’re the crunchy problems of real-world service design. They’re also its dark matter: if you get the decisions right, no one will ever know.

Measuring success

I had a couple of chats with people about how we measure whether a team is working well. When you’re working in an environment where services are live and have real users, the measures are usually pretty obvious. Most of our services are in the early stages and at the mercy of externalities that make shipping services and getting to real users a slow process. In the meantime, how do we judge if a team is doing well?

There are some principles for the phase of work we’re in:

Move fast

Deliver quick services, not perfect ones. Workarounds and manual processes are good enough for testing.

Create the end‑to‑end for a few

Focus on complete journeys for specific cases rather than scalable component parts for everyone.

Learning not platforms

Worry about connecting systems and scaling later.

Don’t assume pilots will use the NHS app (although that is where our services will live eventually).

Stop if things aren’t working

The use cases are our best way of testing risky assumptions.

As long as the teams are embodying these, I want to give them the trust and room they need to do their work. But there is a balance between reporting and overhead, between support and interference. I’m still learning how to strike it.


Two of our teams published design history posts recently. They’re great summaries of what the teams are doing: