The irony of automating empathy

Foundations and frontiers

In the UK, automated tools now help flag some welfare claims as “high risk”. Freedom of Information releases show that a Department for Work and Pensions (DWP) algorithm wrongly flagged around 200,000 housing benefit claims in recent years, creating unnecessary investigations and anxiety for claimants.

What was meant to speed up decisions instead creates confusion and mistrust.

Moments like this remind us why strong design principles are essential safeguards. For over a decade, the Government Digital Service (GDS) Service Standard and its Design Principles have been the bedrock of our profession, guiding us to build services that are simpler, clearer and fairer for everyone.

Principles like "Start with user needs" and "This is for everyone" remind us that the people who most need government services are often the ones who find them hardest to use.

The irony of automation

Now, a new frontier is upon us: artificial intelligence. The promise is intoxicating. AI tools can now augment almost every part of our design process, offering huge gains in productivity and efficiency. For a public sector constantly asked to do more with less, this is a powerful lure.

As we rush to embrace this technology, we risk stumbling into a trap identified decades ago by the human factors researcher Lisanne Bainbridge [1]. She called it the “ironies of automation”.

The central paradox is simple: systems designed to reduce human error can create new, more complex problems. By automating routine work, we can inadvertently degrade the very skills and situational awareness we need most when things go wrong.

This is the challenge we now face. As we begin to automate the core disciplines of user-centred design, are we creating our own set of ironies? In our quest for efficiency, are we at risk of designing the humanity out of our public services?

The consequences are not theoretical. In a benefits application or mental health support service, a flawed understanding of users will fail the most vulnerable, waste public money and erode trust between citizens and the state.

A pattern of paradoxes

This same pattern, this tension between automated efficiency and human-centred principles, appears across design disciplines.

Content design

The drive for AI-generated consistency risks creating content that is clear but lacks the specific compassion needed for users in distress.

Interaction design

The use of AI to generate standard user interfaces risks a bland homogenisation, stifling the creativity needed to solve unique public sector challenges.

Service design

If we optimise only for the smoothest journeys, we leave behind those with complex lives. Yet they are often the ones who need our services the most.

This tension around automation is, arguably, most palpable in the user research discipline.

The importance of human judgement

At the heart of the GDS method is user research. It is how we “start with user needs”. The craft has always been patient and careful: sitting with people, listening, noticing what lies beneath the surface.

AI-powered tools now offer to do in minutes what once took hours, promising speed but also raising new questions about depth and meaning.

Yet here lies the first and most profound irony: in the rush to find insights faster, we risk losing the ability to find the deepest insights at all.

The manual process of reading, re-reading and coding transcripts is how we move beyond what users say to what they truly mean. This slow work builds a researcher's intuition. It is how we grasp unstated needs and subtle meanings.

An AI might identify "frustration" as a theme in interviews about a benefits application. A human researcher understands this frustration might be rooted in a feeling of powerlessness, a history of mistrust or the cognitive burden of managing a chronic illness. Short-circuit the process and we get the what but lose the why.

Automating tasks, not thinking

The efficiency gains from these tools are real and can free us from drudgery. The challenge is to approach AI with a critical, human-centred mindset. We should treat AI as a fallible assistant, augmenting our work while keeping our judgement at the centre.

Automation should be a series of deliberate choices. The model developed by Parasuraman, Sheridan and Wickens breaks any task into four stages: Information Acquisition, Information Analysis, Decision Selection and Action Implementation [3]. At each stage, we can decide the right level of automation, from fully manual to fully automatic.

Take a typical user research project with 12 hour-long interviews about a complex new service.
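
As a rough illustration, the sketch below maps that project onto the four stages and assigns each one a level of automation on a simple 1 to 10 scale, where 1 is fully manual and 10 is fully automatic. The stage names come from the model; the tasks and the levels chosen here are illustrative assumptions, not recommendations.

```python
# A minimal sketch, assuming a hypothetical 12-interview research project.
# The four stage names come from Parasuraman, Sheridan and Wickens [3];
# the tasks and the 1-10 automation levels are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class StagePlan:
    stage: str  # one of the model's four stages
    task: str   # what that stage means for this project
    level: int  # 1 = fully manual ... 10 = fully automatic

analysis_plan = [
    StagePlan("Information Acquisition",
              "Record and transcribe 12 hour-long interviews", level=8),
    StagePlan("Information Analysis",
              "Suggest candidate codes and themes from the transcripts", level=4),
    StagePlan("Decision Selection",
              "Decide which themes become findings and recommendations", level=2),
    StagePlan("Action Implementation",
              "Share findings and agree changes with the service team", level=1),
]

for plan in analysis_plan:
    print(f"{plan.stage}: {plan.task} (automation level {plan.level}/10)")
```

The specific numbers matter less than the habit they represent: each stage is a separate, deliberate choice. Transcription can be handed over almost entirely, analysis can be assisted, but deciding what the themes mean, and what we recommend, stays with the researcher.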

The work that cannot be automated

The example above is not the whole picture. Other questions such as AI-specific informed consent and how we handle personal data deserve articles of their own.

The high-risk flags in welfare claims show both the promise and the danger of automation. The speed is real, but when the reasoning is hidden, trust drains away and the system creates more work rather than less.

This is exactly where design principles matter most. They are the compass that helps us decide what to automate and what to leave to people.

Our future will be shaped by our uniquely human abilities: empathy, critical thinking, ethical judgement and creative problem-solving.

AI can give us a first draft, but the hard work of making it simple, making it human and making it for everyone is ours.

That is the work that truly matters and it cannot be automated.

Sources and further reading

  1. Bainbridge, L. (1983). Ironies of automation. Automatica, 19(6), 775–779.
  2. Prescott, M. R., Yeager, S., Ham, L., et al. (2024). Comparing the efficacy and efficiency of human and generative AI: Qualitative thematic analyses. JMIR AI, 3:e54482.
  3. Parasuraman, R., Sheridan, T. B., & Wickens, C. D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A, 30(3), 286–297.
