Everyone Is Right: Why Value Conflict Is The Work Of Designing For MedTech
- Michael Fergusson
- 5 days ago
- 7 min read

The meeting was going well until it wasn’t.
We were reviewing a “simple” feature change: when a patient’s status crosses a threshold, the system would send an alert and prompt a next step. The clinician in the room wanted the alert early: “If we wait, we miss the window to respond.” The nurse manager wanted it later: “If you fire this during transitional times, it will be ignored or resented.” The informatics lead wanted it documented: “If it isn’t charted, it doesn’t exist.” The product manager wanted it smooth: “If it’s annoying, it won’t be used.” Then someone asked, with real fatigue: “Can we just pick the safest option?”
The obvious problem is that everyone is right.
Because what looked like a single design decision was actually a settlement between competing goods: speed versus safety, autonomy versus standardisation, signal versus noise, patient benefit versus staff burden, clinical judgement versus protocol. Those goods weren’t abstract. They were tied to real constraints: staffing ratios, liability, accreditation, fragmented systems, trauma from prior rollouts, and the simple fact that healthcare teams are already at capacity.
We left the meeting with a compromise that satisfied nobody. The alert would trigger in a narrower set of conditions, with a softer tone, and with an optional note. It was “balanced”. It was also unclear who owned it, how success would be measured, and what we’d do if it failed in practice. We had reduced a value conflict to a design compromise and called it “alignment”.
In my experience, that pattern is common in complex healthcare projects: well-intentioned teams end up working at cross-purposes, not because they disagree about the goal, but because they disagree about what must be protected while pursuing it.
Value Conflict Is Not A Failure Of Alignment
Healthcare teams often talk about “alignment” as if it’s the cure: get stakeholders in a room, harmonise priorities, write requirements, ship the solution. When alignment fails, we blame communication, personalities, or change resistance.
In reality, value conflict is not a failure state. Value conflict is the work.
Complex healthcare projects force trade-offs between legitimate values that cannot all be maximised at once. If you design for speed, you can harm safety. If you design for safety, you can slow care and increase workload. If you optimise for clinician control, you can undermine standardisation. If you optimise for standardisation, you can undermine judgement and local adaptation.
The uncomfortable truth is that many conflicts cannot be resolved, but they can be settled. You don’t “solve” them. You choose a settlement that favours some values more often, in some contexts, and you accept the downstream consequences.
The problem is not that these settlements happen. The problem is that they happen implicitly, in fragments, at different layers of the system—policy, workflow, interface, training, metrics—without anyone naming the settlement or instrumenting its impact.
Why Good Teams End Up Working At Cross-Purposes
People argue from different accountability regimes
A clinician’s primary risk is patient harm. A nurse manager’s risk is missed care and burnout. Informatics carries integration and data integrity. Quality carries auditability and standard work. Legal carries liability. Product carries adoption and usability.
These are not “perspectives.” They are job-shaped survival strategies. Each role is trained—often painfully—by the consequences of past failures. When they speak, they’re not just stating preferences; they’re defending a boundary of responsibility.
If you don’t name those accountability regimes, you will misread value arguments as opinion or politics.
The system punishes the wrong failures
Healthcare frequently punishes visible failures more than invisible ones.
A documented adverse event triggers investigation. A thousand micro-delays that degrade outcomes over months rarely do. An alert that is missed and leads to harm is visible. A badly designed alert that creates constant cognitive load and quietly erodes trust is hard to pin to an outcome.
So teams bias toward what is defensible, not what is effective. They choose options that reduce institutional risk even when those options increase operational burden.
The project lacks a shared theory of success
A theory of success is a plain-language hypothesis about how this intervention will verifiably produce better outcomes in the real world, through specific people, workflows, decisions, incentives, and constraints.
Most projects have a theory of benefit (“more data will improve care”) but not a theory of success (“we are successful when a clinician sees our data, trusts it, and acts on it, with minimal disruption to existing workflows”). Without a theory of success, teams fill the gaps with their own local theories. They talk past each other while believing they’re talking about the same thing.
Decisions get made at the wrong resolution
Teams try to “decide” at the level of features (“add an alert”) instead of at the level of commitments (“who is responsible to act, under what conditions, and what counts as action?”).
In complex domains, you don’t ship features. You ship commitments embedded in socio-technical systems. Features are just the visible edge of a commitment.
Design Governance Versus Production Engineering: The Feedback Loop Gap
In production engineering, weak decisions are punished quickly. You ship code, it breaks, telemetry spikes, incidents occur, post-mortems happen, rollbacks follow. There are tight feedback loops: tests, monitoring, error budgets, SLOs, performance regressions, alerts that wake someone up at 3 a.m.
Design governance often has the opposite:
- Slow feedback: you discover failures months later, if at all
- Low observability: you see screens, not the lived workflow
- Weak enforcement: standards exist, but drift is tolerated
- Misleading signals: “usability testing passed” becomes a proxy for “this will work”
In design systems, I’ve seen how entropy wins when governance relies on documents, councils, and good intentions. Standards degrade because deviations are easy and detection is hard.
Healthcare design suffers from the same physics. If you do not build feedback loops around Key Moments, your value settlements will drift. Local workarounds will become the real system. And your beautifully balanced design will quietly turn into something else.
Key Moments: Where Value Conflicts Become Real
A Key Moment is the point in a workflow where new information becomes salient and a decision or action locks in downstream consequences. It is where values stop being discussed and start being enforced.
In the earlier example, the Key Moment was not “the alert exists.” The Key Moment was: a clinician is notified during a busy shift, interprets the signal, decides whether to escalate, and becomes accountable for that choice.
If you want to understand value conflicts, don’t start with stakeholder positions. Start with Key Moments. Map where decisions actually happen: handoffs, thresholds, escalations, documentation, overrides, defaults. Then ask: what values are being protected at each Key Moment, and at whose expense?
This is also where teams can make progress. You can’t reconcile values in the abstract. You can only negotiate them at the moment a real commitment is made.
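If it helps to see the shape of that mapping, here is a minimal sketch of a Key Moment captured as a record, using the alert example from earlier. The field names are my own illustration, not an established schema.

```python
# A minimal sketch of a Key Moment as structured data. Field names are
# illustrative assumptions, not an established schema.
from dataclasses import dataclass

@dataclass
class KeyMoment:
    name: str                    # where in the workflow the decision happens
    decision_owner: str          # the role that becomes accountable
    values_protected: list[str]  # what this moment is designed to defend
    values_at_risk: list[str]    # what it trades away, and at whose expense
    locks_in: str                # the downstream consequence once decided

# The alert example, expressed as a Key Moment rather than a feature.
alert_moment = KeyMoment(
    name="deterioration alert acknowledged mid-shift",
    decision_owner="bedside clinician",
    values_protected=["early response to deterioration"],
    values_at_risk=["attention during handoffs", "trust in alerts"],
    locks_in="the clinician owns the decision to escalate or not",
)
```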
Value Settlements: Making Trade-Offs Explicit And Revisable
A value settlement is an explicit statement of how a conflict will be balanced in a specific Key Moment, along with what will be monitored to detect when the settlement is failing.
It sounds like this:
“At the moment an alert fires for X, we prioritise avoiding missed deterioration over minimising interruptions. Therefore the default is to notify role Y within Z minutes, with an acknowledgement required. We accept an increase in interruptions, and we will monitor alert burden and response latency. If burden exceeds threshold A or response latency does not improve, we will revise the rule.”
That is honesty.
A settlement makes three things visible that are usually hidden (there’s a code sketch after this list):
- The value trade-off (what wins here, what loses here)
- The decision rights (who owns the commitment)
- The evidence plan (what will convince us to revise)
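Here is a minimal sketch of a settlement captured as a testable record, following the example above. The class shape, signal names, and thresholds are assumptions for illustration, not recommended values.

```python
# A sketch of a value settlement as a revisable record. Signal names and
# thresholds are illustrative assumptions, not recommended values.
from dataclasses import dataclass

@dataclass
class Settlement:
    key_moment: str              # where the commitment is made
    trade_off: str               # what wins here, what loses here
    owner: str                   # who owns the commitment (decision rights)
    monitors: dict[str, float]   # signal -> revision threshold (evidence plan)

    def needs_revision(self, observed: dict[str, float]) -> bool:
        """True when any monitored signal crosses its agreed threshold."""
        return any(
            observed.get(signal, 0.0) > limit
            for signal, limit in self.monitors.items()
        )

alert_settlement = Settlement(
    key_moment="deterioration alert fires for condition X",
    trade_off="avoiding missed deterioration beats minimising interruptions",
    owner="nurse manager for the affected unit",
    monitors={"interruptions_per_shift": 6.0, "median_response_minutes": 10.0},
)
```

The useful property of `needs_revision` is that it’s boring: the conditions for reopening the settlement are agreed before anyone has a status quo to defend.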
Settlements also prevent “compliance theatre.” In many organisations, compliance becomes a moral shield: “We followed the rule.” But rules are not outcomes. A settlement ties the rule to a success claim and makes that claim testable.
Continuous Monitoring: Instrumenting Your Value Settlements
When people hear “monitoring,” they often think of surveillance or compliance. That’s a category error.
The point is not to police our customers or stakeholders. The point is to instrument the system so we can detect when our value settlements are failing, and to do it early, with humility, before harm or burnout accumulates.
Practically, this means four moves.
Choose a small set of Key Moments to instrument. Not everything. The moments that carry the highest stakes: escalation thresholds, handoffs, overrides, alarm acknowledgements, documentation triggers, discharge decisions. Pick the ones where the effects of a bad settlement will amplify downstream.
Define value signals, not just performance metrics. Performance metrics are about outputs (time-to-action, readmissions, length of stay). Value signals are about trade-offs (interruptions per shift, override rates, time spent charting, confidence in decisions, perceived ownership, trust calibration).
Some signals are quantitative; some are qualitative. In complex work, you need both.
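As an illustration of the split, again using the alert settlement; every name here is an assumption:

```python
# Performance metrics describe outputs; value signals describe trade-offs.
# All names here are illustrative assumptions.
performance_metrics = {
    "time_to_action_minutes": "is the response actually faster?",
    "readmission_rate": "are outcomes improving?",
}
value_signals = {
    "interruptions_per_shift": "quantitative: the burden the settlement adds",
    "override_rate": "quantitative: how often clinicians reject the default",
    "trust_in_alerts": "qualitative: interviews and surveys, not a dashboard",
}
```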
Maintain a decision log that links settlements to evidence. Treat settlements like production changes: record what you changed, why, what you expected, what you watched, what happened, and what you revised.
This turns “stakeholder feedback” into organisational learning.
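A decision-log entry can stay small and still do this job. A sketch, with the date, wording, and signal names invented for illustration:

```python
# A sketch of one decision-log entry, treated like a production change record.
# The date, wording, and signal names are invented for illustration.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionLogEntry:
    when: date
    settlement: str     # which settlement this entry belongs to
    change: str         # what we changed
    rationale: str      # why we changed it
    expectation: str    # what we expected to happen
    watched: list[str]  # which signals we agreed to watch
    observed: str       # what actually happened (filled in at review)
    revision: str       # what we revised as a result

entry = DecisionLogEntry(
    when=date(2025, 3, 1),
    settlement="deterioration alert fires for condition X",
    change="required acknowledgement within Z minutes",
    rationale="missed deterioration outweighs interruption cost here",
    expectation="response latency improves without alert fatigue",
    watched=["interruptions_per_shift", "median_response_minutes"],
    observed="",  # left blank until evidence arrives
    revision="",
)
```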
Build a cadence for revision that respects reality. Monthly is often too slow for high-risk workflows and may be too heavy for small teams. A better pattern is event-driven review: review when signals cross thresholds, when a critical incident occurs, or when frontline feedback clusters around a Key Moment.
This mirrors how mature engineering teams operate: not constant meetings, but clear triggers for attention.
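Here is a sketch of what event-driven review can look like as logic, using the trigger conditions above. The thresholds and the feedback cluster size are assumptions:

```python
# Event-driven review: attention fires on clear triggers, not a calendar.
# Thresholds and the feedback cluster size are illustrative assumptions.
def review_triggers(observed: dict[str, float],
                    thresholds: dict[str, float],
                    critical_incident: bool,
                    feedback_on_moment: int,
                    cluster_size: int = 5) -> list[str]:
    """Return the reasons, if any, that a settlement needs review now."""
    reasons = [
        f"signal '{name}' crossed its threshold"
        for name, limit in thresholds.items()
        if observed.get(name, 0.0) > limit
    ]
    if critical_incident:
        reasons.append("a critical incident touched this Key Moment")
    if feedback_on_moment >= cluster_size:
        reasons.append("frontline feedback is clustering on this Key Moment")
    return reasons  # an empty list means no review is needed yet
```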
A Note Of Tension: Rigour Versus Reach In Designing For MedTech
Clearly, we wish to avoid creating a new methodology tax: more artefacts, more meetings, more jargon. Likewise, there’s no point in building a beautiful framework that cannot survive resource constraints. I believe the way through is to keep the unit of work small and concrete: one Key Moment, one settlement, a thin monitoring plan, and a short review loop. If it doesn’t reduce confusion and rework within weeks, it’s not helping.
A Hopeful Path Forward: Design Systems For Complex Domains
Healthcare doesn’t need more alignment workshops. It needs better ways to make value conflicts explicit, settle them where commitments happen, and revise them based on evidence rather than ideology.
A mature design system for complex domains would look less like a style guide and more like an operating system for decisions:
- Key Moments mapped to real workflow
- Settlements stated plainly, owned clearly, and monitored lightly
- A shared theory of success that links design choices to human behaviour and organisational constraints
- Feedback loops that are tight enough to learn, but respectful enough to sustain
That is the hopeful claim: not that we can eliminate value conflict, but that we can stop pretending it isn’t there. When everyone is right, the goal is not consensus. The goal is a settlement you can defend, measure, and improve, without turning care into a checklist.
If you want a starting point, take one contentious feature request and ask a different question: “What is the Key Moment this feature is trying to change, and what value settlement are we actually making there?” Then write the settlement down. Make it testable. And treat revision as success, not failure.
Because when designing for MedTech, progress isn’t the absence of conflict. Progress is the ability to learn from it.






