Case · 01 / 04
Government · HMRC

Redesigning how millions submit their self-assessment tax return

An end-to-end redesign of one of HMRC's most-used digital services — replacing opaque, form-heavy flows with a guided, conversational experience that works for everyone.

Role · Lead UX Designer
Sector · UK Government · HMRC
Duration · 14 months
Research · 8 rounds · 200+ participants
The Problem

A service designed for the system, not the person.

"I just don't know what they're asking me. The words don't make sense to me — I feel like I'm going to get it wrong and get in trouble."

Every year, millions of UK citizens are required to complete a self-assessment tax return. For many, it's one of the most stressful digital interactions they'll have — an experience built around HMRC's internal taxonomy rather than the mental models of the people actually using it.

The service suffered from dense, jargon-heavy language, a non-linear structure that forced users to hold too much in their heads at once, and no meaningful guidance when people got stuck. The result was a high drop-off rate, a flood of avoidable support contacts, and significant accessibility failures for users with lower digital literacy or higher cognitive access needs.

Discovery research identified three core failure modes:

  • 01 Language opacity. Tax terminology presented without plain-English alternatives caused widespread confusion and decision paralysis at key junctures.
  • 02 Structural overload. Users were presented with all sections simultaneously — with no indication of relevance to their specific situation — making the journey feel impossibly large.
  • 03 No safety net. When users made errors or got confused, the service provided no recovery path — leaving them to abandon or submit incorrect returns.
The Solution

From a form to a conversation.

1 · Situation-first routing

Instead of presenting the full return to every user, we introduced a short qualifying flow that established their situation — employment type, income sources, changes from last year. The service then surfaced only the sections relevant to them, reducing perceived complexity by over 60% for the majority of users.
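The routing idea described above can be sketched as a simple relevance filter: the qualifying flow produces a situation profile, and each section declares the situations it applies to. Every name and rule below is illustrative only, not HMRC's actual section taxonomy or routing logic.

```typescript
// Hypothetical sketch of situation-first routing.
// Section names and relevance rules are invented for illustration.

interface Situation {
  selfEmployed: boolean;
  rentalIncome: boolean;
  foreignIncome: boolean;
  employedPAYE: boolean;
}

interface Section {
  id: string;
  title: string;
  relevantIf: (s: Situation) => boolean; // rule kept as data, per section
}

const sections: Section[] = [
  { id: "self-employment", title: "Self-employment", relevantIf: s => s.selfEmployed },
  { id: "property", title: "UK property income", relevantIf: s => s.rentalIncome },
  { id: "foreign", title: "Foreign income", relevantIf: s => s.foreignIncome },
  { id: "employment", title: "Employment (PAYE)", relevantIf: s => s.employedPAYE },
];

// Surface only the sections relevant to this user's situation.
function relevantSections(situation: Situation): Section[] {
  return sections.filter(sec => sec.relevantIf(situation));
}

// Example: a sole trader with no other income sources sees one section, not four.
const soleTrader: Situation = {
  selfEmployed: true,
  rentalIncome: false,
  foreignIncome: false,
  employedPAYE: false,
};
console.log(relevantSections(soleTrader).map(sec => sec.id)); // ["self-employment"]
```

Keeping each relevance rule alongside its section, rather than in one central branching structure, means content designers can review which answers reveal which sections without reading through control flow.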

2 · Plain language throughout

Every question was rewritten in collaboration with content designers and plain English specialists. Legal and tax definitions were retained as expandable contextual help, not as the primary instruction. We tested language variations in 6 of 8 research rounds, iterating until 90%+ of participants could answer each question without assistance.

3 · Guided, one-thing-at-a-time flow

We restructured the experience as a single-question-per-page pattern (aligned with GOV.UK Design System principles), with clear progress signposting, contextual hints, and the ability to save and return at any point. Error states were redesigned to be specific, actionable, and non-judgmental.

4 · Accessibility by default

Every component was built and tested against WCAG 2.1 AA standards, with additional research sessions conducted with screen reader users, users with dyslexia, and users on low-bandwidth connections. Accessibility was a design constraint from day one — not a post-delivery audit.

The Results

Measurable impact, at scale.

↑34%
Completion rate

More users completing their return end-to-end without abandoning — a direct consequence of reduced cognitive load and clearer language.

↓50%
Support contacts

Halving avoidable calls and web chat requests by giving users the context and confidence to proceed independently.

0
Critical accessibility failures

First version of the service to pass an external accessibility audit with zero critical or serious failures across all tested scenarios.