Designing for MedTech: Key Moments, Your Analytic Anchor
- Michael Fergusson

The nurse didn’t look worried. She looked busy.
It was mid-shift on a step-down ward. A patient’s monitor flickered through its usual rhythm: numbers updating, waveforms unfurling, alarms mostly quiet. As she passed the patient’s bed, she glanced at the monitor for just half a second and kept moving.
Nothing dramatic happened. No alarm. No escalation. No page. The system did what it was built to do: it produced data, continuously, with impressive reliability.
And yet that half-second glance was doing more work than the sensor. The real question was not whether the patient's oxygen saturation changed. It was whether this nurse, right now, with three competing demands and a mental queue of unfinished tasks, should treat what she saw as meaningful enough to act on, document, or hand off. In other words: should she make a commitment?
Many digital health products treat measurement as the bottleneck: more data leads to better decisions, which lead to better outcomes. It’s a comforting story because it makes the hard part look like an engineering problem.
In clinical settings, though, that story often fails because while measurement matters, it is rarely the bottleneck. Decisions are.
Decisions Are Not Continuous
As you design sensors or clinical software, it is easy to picture “decision-making” as a constant process: clinicians always monitoring, always interpreting, always adjusting.
That’s not what we see on the floor, though.
Decisions happen at specific points in time, typically in response to new information. They happen under conditions of time pressure, partial knowledge, competing priorities, and sometimes unclear ownership. In those moments, someone chooses to do something, not do something, or push the decision to someone else. People, resources, and accountability shift. Alternative futures become possible, or are closed off.
Your new product or service changes outcomes only if and when it changes what happens at those points.
I call these points Key Moments.
Key Moments: Where Information Becomes A Commitment
A Key Moment is an observable point in a workflow where new information becomes salient and someone must make a commitment.
The commitment can be a decision, a handoff, a documentation step, an escalation, an override, or a deliberate choice not to act. In every case, though, something is set in motion. This matters because commitments constrain what happens next: they narrow options, trigger downstream work, and establish expectations about who is responsible for carrying it forward.
Key Moments are where values stop being abstract. “Safety,” “autonomy,” “efficiency,” “liability,” “workload,” “patient experience” are easy words to agree with in a meeting. At Key Moments, however, they collide. The system—through its defaults, thresholds, timing, and handoffs—makes some commitments easy, expected, and defensible, and makes others hard or socially risky.
That is what your design is doing, whether intentional or not.
“More Data” Isn’t A Product Strategy
In many projects, the design conversation starts with the signal: accuracy, resolution, latency, thresholding, false positives and negatives. Those questions matter. But they are upstream of the product’s real surface area.
The real surface area is the path from data to decision to action. If your product doesn’t reliably shape what happens at Key Moments, “continuous” becomes just “more.” More numbers. More noise. More judgement calls pushed onto the busiest person in the room.
State Changes Versus Commitments
I think it helps to keep one distinction clear:
A change in a patient’s SpO₂ is a state change in the world. It exists whether or not anyone notices it. It is neither negotiated nor committed to. On its own, it carries no accountability, liability, or organisational meaning.
A nurse attending to a monitor display is not a state change. It is a situated action in a socio-technical system. It is the moment when accountability can become active, when a justification is formed, and when values are settled—implicitly or explicitly—in a choice.
That is the moment design can shape.
Key Moment Example: Pulse Oximetry On A Ward
Imagine you are designing a continuous pulse oximetry system for a general medical or step-down ward: a device, display, alerting policy, and perhaps a centralised monitoring service. A typical framing treats changes in oxygen saturation as the key “event,” because physiological change is intrinsically meaningful and objectively measurable.
In that framing, design effort concentrates on sensing, thresholds, and notifications. The clinician’s movement between monitoring, bedside care, documentation, escalation, and handoff becomes “workflow context.” Important, but background.
A Key Moments framing changes the unit of work. It shifts attention away from the signal itself and toward the point where a clinician encounters that information within their workflow and must decide whether, how, and on what basis to respond.
The Key Moment is not “SpO₂ dipped.” The Key Moment is “a nurse glances at the display during rounds and decides whether to treat what they see as actionable.”
That judgement is not incidental. It is where information becomes action, and where responsibility is either assumed, deferred, or implicitly transferred. Yet many systems treat this moment as if it isn’t a moment at all. No explicit acknowledgement. No support for making the decision proportionate and defensible. No structured way to carry the commitment forward.
The system reliably produces data, but it leaves the resolution of responsibility, the justification of action or inaction, and the coordination with future actors to informal practice. In effect, it shapes clinical risk, workload, and liability without making those stakes explicit.
That is not a neutral omission. It is a design choice made by default.
What Becomes Designable When You Name The Key Moment
Once you treat the nurse’s glance as the moment that matters, different design questions come to the foreground:
Does the system clearly signal when attention is required, versus when it is merely reporting?
Are the options at that moment legible (recheck, reposition sensor, assess patient, document, notify, escalate)?
What defaults and thresholds encourage action, restraint, or delay—and are they aligned with the ward’s reality?
How does the system make it easier to explain and document “why I acted” or “why I didn’t”?
How does the commitment travel through handoffs so the next person doesn’t inherit ambiguity?
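One way to make these questions concrete is to represent the moment itself as a first-class object. The sketch below is a hypothetical illustration, not any real system’s schema: the class names, option lists, and roles are invented. It shows what it might mean for the options at a Key Moment to be legible, for the default to be explicit, and for “why I acted” to be captured at the point of commitment.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Commitment:
    action: str      # e.g. "escalate", "document", "defer"
    actor: str       # who is assuming responsibility
    rationale: str   # "why I acted" / "why I didn't"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class KeyMoment:
    trigger: str                 # what made attention required
    options: list[str]           # the legible choices at this moment
    default: str                 # what happens if no one commits
    commitment: Optional[Commitment] = None

    def commit(self, action: str, actor: str, rationale: str) -> Commitment:
        # An action outside the legible option set is a visible error,
        # not a silent judgement call.
        if action not in self.options:
            raise ValueError(f"{action!r} is not a legible option here")
        self.commitment = Commitment(action, actor, rationale)
        return self.commitment

# The nurse's glance, made explicit (all values illustrative):
moment = KeyMoment(
    trigger="SpO2 trend below threshold for 5 min",
    options=["recheck", "reposition sensor", "assess patient",
             "document", "notify", "escalate"],
    default="defer (no documentation)",   # today's implicit default
)
moment.commit("assess patient", actor="RN-ward-3",
              rationale="borderline trend, patient asleep, recheck in 10 min")
```

The point is not this particular schema. It is that an invalid or undocumented commitment becomes a visible event rather than informal practice.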
Notice what these questions are not. They are not primarily about data presentation. They are about shaping the conditions under which proportionate decisions are made under uncertainty.
This is also how evaluation changes.
A team might instrument this moment by tracking how often borderline changes lead to escalation, how frequently decisions are deferred without documentation, how long responsibility remains ambiguous across roles, or how clinicians report their trust in the monitoring system at the point of judgement. These measures don’t assess the sensor’s technical performance. They assess whether the system supports accountable decision-making in the place where care actually turns.
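As a sketch of what that instrumentation might look like, here is a minimal computation over a hypothetical log of Key Moment records. The field names and values are invented for illustration; a real system would draw these from its own event log.

```python
# Illustrative records: one per Key Moment, with invented fields.
records = [
    {"borderline": True,  "outcome": "escalated", "documented": True,  "ambiguous_min": 4},
    {"borderline": True,  "outcome": "deferred",  "documented": False, "ambiguous_min": 35},
    {"borderline": False, "outcome": "escalated", "documented": True,  "ambiguous_min": 2},
    {"borderline": True,  "outcome": "deferred",  "documented": True,  "ambiguous_min": 11},
]

# How often do borderline changes lead to escalation?
borderline = [r for r in records if r["borderline"]]
escalation_rate = sum(r["outcome"] == "escalated" for r in borderline) / len(borderline)

# How frequently are decisions deferred without documentation?
deferred = [r for r in records if r["outcome"] == "deferred"]
undocumented_deferrals = sum(not r["documented"] for r in deferred) / len(deferred)

# How long does responsibility remain ambiguous, on average?
mean_ambiguity = sum(r["ambiguous_min"] for r in records) / len(records)

print(f"escalation rate (borderline): {escalation_rate:.2f}")           # 0.33
print(f"deferred without documentation: {undocumented_deferrals:.2f}")  # 0.50
print(f"mean minutes of ambiguous responsibility: {mean_ambiguity:.1f}")  # 13.0
```

None of these numbers describes the sensor. All of them describe whether the system supports accountable decision-making.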
How I Use Key Moments: Designing for MedTech
Key Moments are not a replacement for journey maps, service blueprints, or Jobs-to-Be-Done. I still use those because they are excellent tools for describing flows, experiences, and motivations across time.
Key Moments operate at a different resolution. They isolate the points where responsibility is assumed, deferred, or transferred. They are focal points where trade-offs become operational.
In practice, I find Key Moments easiest to work with when layered onto existing journey or service maps as anchor points for deeper design and evaluation. You map the flow, then you mark the commitments: “Here is where someone must decide. Here is where it becomes someone’s problem.”
Once those moments are named, we can start designing the conditions of specific commitments.
A Running Example: Intermittent Labs Versus Continuous Data
Consider an inpatient cardiac use case where clinicians manage a patient with acute decompensated heart failure using IV diuretics. Under current standard practice, potassium is sampled intermittently by blood draw, often once or twice per day, as part of a safety envelope around dosing.
One challenge is that the clinical situation evolves between these draws, especially for patients with compromised kidney function. Medication changes, fluid shifts, renal function, diet, stress responses, and circadian rhythms can all move potassium “in the background.” Point-in-time labs tell you what the value was at each draw, but they don’t always give a coherent picture of direction and rate of change.
Now imagine introducing a continuous potassium monitor. It changes the information environment. It may add trends, trajectories, and earlier detection. It also adds new questions that are not “adoption details.” They are the mechanism of value creation:
What counts as meaningful change in a trend?
Who is responsible for monitoring this new stream, in which hours, within which workload?
How does it integrate into the working environment—display, alerts, rounds, handoffs?
How is interpretation documented?
What happens when the signal is ambiguous, conflicting with labs, or missing?
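To make the first question concrete, here is one hedged sketch of a “meaningful change” test: flagging a sustained rate of change over a window rather than a single threshold crossing. The window size and slope limit are illustrative placeholders, not clinical guidance, and the function name is invented.

```python
def meaningful_trend(samples, window=6, max_slope=0.1):
    """samples: list of (minutes, potassium_mmol_per_L), oldest first.
    Returns True if the slope over the last `window` samples exceeds
    `max_slope` mmol/L per hour in magnitude."""
    if len(samples) < window:
        return False          # not enough data: stay quiet, don't guess
    recent = samples[-window:]
    t0, k0 = recent[0]
    t1, k1 = recent[-1]
    if t1 == t0:
        return False
    slope_per_hour = (k1 - k0) / ((t1 - t0) / 60.0)
    return abs(slope_per_hour) > max_slope

# A slow drift that a once-daily lab draw would miss:
stream = [(0, 4.1), (10, 4.1), (20, 4.0), (30, 3.95), (40, 3.9), (50, 3.8)]
meaningful_trend(stream)   # True: roughly 0.36 mmol/L per hour downward
```

Even this toy version forces the design questions above into the open: who chose the window, who owns the alert it produces, and what happens when it disagrees with the next lab draw.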
A technically accurate sensor can still fail if it leads to unreliable decisions and fragmented action—or worse, over-reaction, alert fatigue, or false reassurance. “More data” is not a care strategy. A coherent data-to-decision-to-action pathway is.
Key Moments make that pathway designable. If we can identify the specific decision points where continuous data must do its work, we can design the conditions in and around those moments so preferred outcomes become more likely and more defensible. We can make explicit what choices exist, what values are being traded off, and what success looks like at the moment of commitment.
This is where design reifies intention. It takes the project’s aspirations and turns them into defaults, thresholds, prompts, artefacts, and handoff structures that shape real behaviour.
And over time, it produces more than better screens. It produces an auditable understanding of where a product must succeed to be used safely and sustainably. It also reveals where competitors often fail: by treating the hard moments as incidental and leaving the burden of settlement to individuals at the edge of the system.
Where This Series Is Going Next
This article is the foundation: a sensor doesn’t improve care by measuring more; it improves care by changing what happens at Key Moments—those points where information becomes commitment.
The next article starts where the real friction begins. Because when you design Key Moments in healthcare, you run straight into value conflict. Safety versus speed. Autonomy versus standardisation. Signal versus noise. Patient benefit versus staff burden. Everyone is right, and the project still needs a decision.
That second piece is about how teams can work with those conflicts without reducing everything to compliance: how to make trade-offs explicit, how to avoid “balanced” compromises that satisfy nobody, and how to build feedback loops that let you revise decisions based on evidence rather than ideology.
If you are designing for medtech or digital health, you’ve probably seen a version of this: a product that measured more, but didn’t reliably improve outcomes because the hard part wasn’t sensing… it was commitments. If you have an example, I’d like to hear it. The useful cases are rarely dramatic. They’re usually a half-second glance, a deferred note, a handoff that didn’t land, or an alert that everyone learned to ignore.
Those are the moments that decide whether your product improves care, or is just more data in the room.