Two ways of seeing
When automation enters a service, the first question is often: does it work? We measure performance, throughput, and accuracy. These are system-centric criteria. They tell us how well the technology runs.
But services are not defined by throughput alone. They are defined by whether people can achieve their goals. User-centric criteria ask a different set of questions: is the process clear, is it fair, and does it build trust?
The lens we choose matters. It shapes whether automation becomes an enabler of better services, or an invisible barrier that quietly erodes them.
The system-centric view
System-centric measures dominate because they are easy to define and compare. Common criteria include:
- Accuracy: how accurate is the system against a baseline?
- Speed: how fast does it process?
- Cost: how much does it cost per transaction?
These are not trivial. A system that is unreliable or expensive will undermine confidence. Yet research shows that judging automation on these terms alone is insufficient. Parasuraman and Riley (1997) highlighted that the “misuse and disuse” of automation often comes not from performance failure but from how people experience its role and limits.1
The user-centric view
User-centric criteria shift the frame. They begin with the person, not the machine. They ask:
- Is the process transparent enough to be understood?
- Does it leave people with agency when things go wrong?
- Can it accommodate different contexts and needs?
Consider digital identity checks. From a system-centric perspective, automated verification increases speed and consistency. From a user-centric perspective, the same process can create uncertainty if it is opaque, or frustration if recovery from an error is difficult.
Studies of digital identity adoption underline this point. Whitley and Hosein (2010) argue that systems often succeed technically while struggling with inclusivity and transparency.2 More recent work by Janssen et al. (2020) shows that trust in digital identity is shaped as much by perceived fairness and user control as by raw system accuracy.3
The machine’s lens tells us what is efficient. The human lens tells us what is meaningful. Both are needed.
Towards thoughtful criteria
The challenge is not to discard system-centric measures but to pair them with user-centric ones in a deliberate way. Research identifies three criteria that matter for users: Transparency,4 Recoverability,5 and Fairness.3
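One way to make this pairing concrete is a combined scorecard that refuses to call a service successful unless both lenses clear a floor. The sketch below is purely illustrative: the field names, thresholds, and the idea of scoring each criterion on a 0–1 scale are assumptions, not something the research cited here prescribes.

```python
from dataclasses import dataclass

@dataclass
class ServiceScorecard:
    """Hypothetical scorecard pairing system-centric and user-centric criteria."""
    accuracy: float          # system-centric: share of correct decisions (0-1)
    latency_seconds: float   # system-centric: median processing time
    cost_per_check: float    # system-centric: cost per transaction
    transparency: float      # user-centric: share of users who understand the decision (0-1)
    recoverability: float    # user-centric: share of errors users can resolve themselves (0-1)
    fairness: float          # user-centric: parity of outcomes across user groups (0-1)

    def passes(self, system_floor: float = 0.95, user_floor: float = 0.8) -> bool:
        """The service 'succeeds' only if BOTH lenses clear their floors.

        Thresholds are illustrative placeholders, not recommended values.
        """
        system_ok = self.accuracy >= system_floor
        user_ok = min(self.transparency, self.recoverability, self.fairness) >= user_floor
        return system_ok and user_ok
```

The design choice to take the minimum of the user-centric scores reflects the argument above: a service that is transparent and fair but leaves users no way to recover from errors still fails people, however accurate it is.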
Redefining success
Automation is often judged by its own numbers: faster processing, higher accuracy, lower cost. Those measures matter, but they are not the whole story. A system that performs well on paper can still fail the people who rely on it.
Digital identity checks illustrate the point. A service might reject fewer fraudulent applications, yet still lock out thousands of genuine users who cannot resolve errors. From the inside, the system looks efficient. From the outside, it feels hostile and unaccountable.
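The arithmetic behind this gap is simple, and worth making explicit. With hypothetical numbers (a large genuine user base and a small false-rejection rate, both assumed for illustration), a check can look near-perfect on a dashboard while still excluding thousands of real people:

```python
# Illustrative arithmetic with assumed numbers, not real service data.
genuine_users = 1_000_000      # assumed: legitimate applicants per year
false_rejection_rate = 0.005   # assumed: 0.5% of genuine users wrongly rejected

wrongly_rejected = int(genuine_users * false_rejection_rate)
accuracy_on_genuine = 1 - false_rejection_rate

print(wrongly_rejected)      # 5000 genuine users locked out
print(accuracy_on_genuine)   # 0.995 -- looks excellent from the inside
```

From the system-centric view, 99.5% is a success; from the user-centric view, it is five thousand people who cannot prove who they are.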
The danger is not just exclusion at the individual level. When services are judged only by speed and accuracy, mistrust spreads across the whole system. What begins as a design choice becomes a reputational crisis.
Perception is not a side effect; it is the outcome. Ignore it, and you risk losing not just users, but the legitimacy of the service itself.
Sources and further reading
- Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors, 39(2), 230–253.
- Whitley, E., & Hosein, G. (2010). Global challenges for identity policies. Palgrave Macmillan.
- Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance: Organising data for trustworthy Artificial Intelligence. Government Information Quarterly, 37(3).
- Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
- Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics. Wiley.