Risk as an experience
We often treat risk as a technical issue, something managed by auditors or engineers. But through a user-centred lens, risk is also an experience. People feel it in moments of confusion, when an interface hides the next step, or when recovery paths are missing.
ISO 31000 defines risk as the “effect of uncertainty on objectives.” That effect is not only statistical. It is shaped by design choices that determine whether uncertainty is visible, manageable and recoverable for the people who rely on a service.1
Boundaries and responsibility
A system boundary sets out what is considered part of the system, and what is not. It is a deceptively simple act that shapes responsibility. Boundaries determine which risks are designed for, and which are pushed into the user’s world.
Take a public sector example. A digital service may treat data entry as “in scope” but assume policy interpretation is “out of scope”. When frontline staff encounter a grey area, such as eligibility for benefits, the risk of misinterpretation is externalised onto them and ultimately the claimant. From a UCD perspective, this is poor design. The boundary has been drawn too narrowly, ignoring the real-world context in which users act.
Defining a boundary well means capturing not only the technical system but also the people and processes needed to make it safe and usable. That includes sensors, data services, and algorithms, but also human approvals, escalation paths, and safeguards.
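One way to make that boundary explicit is to record it as data rather than prose, so that human approvals and escalation paths are named alongside the technical components. The sketch below is only illustrative; the field names and entries are assumptions, not a prescribed format.

```python
# Illustrative sketch: recording a system boundary as data so that human
# steps are named alongside technical components. Field names and example
# entries are assumptions for this example, not a standard.

system_boundary = {
    "in_scope": [
        {"component": "data entry form", "kind": "technical"},
        {"component": "eligibility rules engine", "kind": "technical"},
        {"component": "caseworker approval", "kind": "human"},
        {"component": "escalation to senior officer", "kind": "human"},
    ],
    "out_of_scope": [
        {"component": "policy interpretation in grey areas", "risk_owner": "policy team"},
        {"component": "claimant appeals process", "risk_owner": None},
    ],
}

# Anything excluded without a named owner is risk pushed onto users.
unowned = [item["component"] for item in system_boundary["out_of_scope"] if not item["risk_owner"]]
print("Out-of-scope risks with no named owner:", unowned)
```

Recording the boundary this way makes the gaps auditable: every excluded item either has an owner or is flagged as risk that falls on the user.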
Modelling risk in practice
For practitioners, risk modelling does not need to be intimidating. In human–AI systems the starting point is usually a four-step process: identify how the system can fail, assess how severe and how likely each failure is, prioritise the failures that matter most to users, and design mitigations and recovery paths.
The language of Failure Mode and Effects Analysis (FMEA) or risk matrices may sound technical, but at its core it is simply a structured way of making explicit what users already feel when a system fails.
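As a rough illustration, a failure-mode register can be kept as plain data and scored in a few lines of code. The sketch below assumes simple 1–5 scales for severity, likelihood, and detectability; the entries and scores are invented for the example, not drawn from any particular standard.

```python
from dataclasses import dataclass

# Minimal sketch of a failure-mode register in the spirit of FMEA.
# The 1-5 scales and the example entries are illustrative assumptions.

@dataclass
class FailureMode:
    description: str
    severity: int       # 1 (negligible) .. 5 (critical) impact on the user
    likelihood: int     # 1 (rare) .. 5 (frequent)
    detectability: int  # 1 (obvious to the user) .. 5 (hidden failure)

    def priority(self) -> int:
        """Risk priority number: higher means address sooner."""
        return self.severity * self.likelihood * self.detectability


register = [
    FailureMode("Eligibility rule misread by frontline staff", 4, 3, 4),
    FailureMode("AI suggestion shown without a confidence cue", 3, 4, 3),
    FailureMode("No recovery path after a rejected claim", 5, 2, 2),
]

# Walk the register from highest to lowest priority so mitigation effort
# goes to the failures users are most exposed to.
for fm in sorted(register, key=lambda f: f.priority(), reverse=True):
    print(f"{fm.priority():>3}  {fm.description}")
```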
Reducing cognitive burden
Poorly timed alerts or opaque suggestions increase the chance of error. A UCD approach asks not only whether an alert is accurate but also whether it arrives at the right moment, in the right format, and at the right level of detail. NIST’s AI Risk Management Framework highlights the importance of contextual transparency.2 The right information should surface at the right time, without overwhelming the user.
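To make that concrete, the small sketch below chooses how much of an alert to show based on the user's current workload. The function name, threshold, and presentation formats are assumptions for illustration only.

```python
# Illustrative sketch: accuracy alone is not enough; the presentation of an
# alert also depends on timing and the user's load. Thresholds and names
# here are assumptions, not recommendations from any framework.

def alert_presentation(accuracy_ok: bool, open_tasks: int) -> str:
    """Pick an alert format given reliability and current workload."""
    if not accuracy_ok:
        return "suppress"         # do not surface an unreliable alert
    if open_tasks > 8:            # user is under heavy load
        return "summary_banner"   # one line, details available on demand
    return "full_panel"           # room for rationale and supporting data


print(alert_presentation(accuracy_ok=True, open_tasks=12))  # -> summary_banner
```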
When designing AI in high-stakes contexts such as financial services or public sector delivery, risk cannot be handled only in governance documents. It must be designed into interfaces, interactions, and organisational workflows. This means (a brief sketch follows the list):
- Prioritising constraints that prevent unsafe actions before execution.
- Making rationale visible, with confidence cues users can act on.
- Keeping overlays minimal in demanding, high-load environments.
- Processing sensitive data with local storage or privacy-preserving defaults.
- Embedding recovery so that people always have a safe next step.
Recovery as a core journey
Errors are inevitable. What distinguishes resilient systems is whether recovery is treated as a first-class design path. Instead of leaving correction buried in admin portals or helpdesks, recovery should be available at the point of failure.
A rejected plan should show why. A worker notification should include a safe fallback. Usability research consistently shows that recoverability is central to trust.
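In code, that can be as simple as never returning a failure without a paired next step. The failure codes and recovery actions in the sketch below are invented for illustration.

```python
# Illustrative sketch: attaching recovery to the point of failure rather
# than leaving it in an admin portal or helpdesk. The failure codes and
# recovery actions are assumptions for this example.

RECOVERY_PATHS = {
    "plan_rejected": "Edit the flagged sections and resubmit, or request a human review.",
    "data_mismatch": "Compare the two records side by side and choose which to keep.",
    "timeout": "Your draft was saved. Reopen it from the task list to continue.",
}


def notify(failure_code: str, reason: str) -> str:
    """Build a user-facing message that pairs the reason with a safe next step."""
    next_step = RECOVERY_PATHS.get(failure_code, "Contact support quoting reference " + failure_code)
    return f"{reason}\nWhat you can do now: {next_step}"


print(notify("plan_rejected", "The plan was rejected because two eligibility checks failed."))
```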
Risk is not abstract. It becomes visible to users through friction, the absence of control, or the failure to provide recovery. Treating risk as a design material aligns with both international standards such as ISO 31000 and NIST RMF, and long-standing principles of human-computer interaction.
For UCD practitioners the challenge is not just to measure risk but to design systems where it is visible, manageable, and shared responsibly.
Sources and further reading
- ISO (2018). ISO 31000: Risk management – Guidelines.
- NIST (2023). AI Risk Management Framework (AI RMF 1.0).
- Reason, J. (1997). Managing the Risks of Organisational Accidents. Ashgate.
- Hollnagel, E., Woods, D. D. & Leveson, N. (eds.) (2006). Resilience Engineering: Concepts and Precepts. Ashgate.