In the highly regulated world of pharmaceuticals and life sciences, risk isn’t just a numerical score - it’s a story of what might happen. Regulatory frameworks like ICH Q9 and ISO 31000 anchor our approaches to quality risk management (QRM), but even with all this guidance, a silent disruptor often creeps in: subjectivity.
Subjectivity is both a threat and an opportunity. If unchecked, it clouds judgement, introduces bias, and can lead to decisions that fail to prevent harm to patients. But when understood and managed properly, subjectivity becomes a source of creativity, revealing hidden hazards and unlocking more effective risk controls. The delicate balance of subjectivity management has been acknowledged in the recent FDA and ICH Q9 R1 updates, which bring subjectivity into the spotlight as a factor that can undermine the effectiveness of QRM if not properly addressed.
How subjectivity can lead to catastrophe
In life sciences, risk management is about anticipating and preventing harm to the patient. Here are some challenges we often face:
- Risk is abstract: we are trying to imagine what might happen in the future.
- Subjectivity thrives in the absence of hard data, especially in novel or complex systems.
- Group collaboration, while necessary, often amplifies rather than mitigates this subjectivity.
Ultimately, a failure to imagine what can go wrong may lead to a catastrophe when something does go wrong. And imagination is a subjective process by nature.
Some common subjectivity pitfalls during risk assessment
In theory, risk assessments should be rational and evidence-based, but in reality they are often subjective and shaped by our biases. Let’s look at some examples of how these cognitive bias traps show up in risk assessment:
Anchoring bias
“We’ve always done it this way.”
Imagine a risk assessment session for a new lab information management system (LIMS). A participant immediately brings the supplier qualification assessment to the table, and this becomes the focal point. Even if the need for customization is actually the more pressing risk to discuss, the discussion never drifts far from that first anchor. As a result, mitigations are focused on the frequency of supplier re-qualification instead of addressing deeper issues like system configuration or customization errors.
Groupthink
“Nobody wanted to challenge the plan.”
A team is evaluating a cloud-based eQMS implementation. Everyone agrees it's low risk because the vendor is reputable in other industries. One junior IT analyst hesitates but stays quiet—the group seems united. Later, the company experiences a regulatory citation due to inadequate audit trail capabilities, which the analyst had noticed but didn’t flag. The desire to avoid conflict trumped risk identification.
Loudest voice bias
“Mandy said it was fine, so we moved on.”
During a supplier qualification session, Mandy—the head of site quality—dominates. She focuses on GMP documentation compliance, pushing aside logistics risks raised by a new supply chain team member. The result? Supplier delivery failures that impact production timelines—risks that were overlooked because one voice overpowered the rest.
Confirmation bias
“We found what we were looking for.”
A team assesses a legacy system and starts with the assumption that it’s still compliant. They selectively reference older validation reports and skip over emerging vulnerabilities like obsolete encryption protocols. The risk assessment validates their starting belief rather than challenging it. Meanwhile, new vulnerabilities remain unaddressed.
Conjunction fallacy
“It’ll only fail if A, B, and C happen, so it's low risk.”
During a data migration project, the team assumes that system failure would require a cascade: the new system crashing, backups failing, and the restore process being misconfigured. They rate the risk as negligible. But in reality, even one of these failure points would severely disrupt operations. The illusion of complexity makes the risk seem less likely than it is.
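The arithmetic behind this fallacy is easy to check. Below is an illustrative sketch using hypothetical failure probabilities (the 5% figures are assumptions for demonstration, not real data): requiring all three events to coincide yields a tiny joint probability, but the probability that at least one safeguard fails - which is enough to disrupt operations - is three orders of magnitude larger.

```python
# Illustrative only: assumed, hypothetical probabilities for each
# failure point in the data migration scenario.
p_system_crash = 0.05       # new system crashes
p_backup_fail = 0.05        # backups fail
p_restore_misconfig = 0.05  # restore process is misconfigured

# The team's framing: total loss requires ALL THREE to happen at once.
p_all_three = p_system_crash * p_backup_fail * p_restore_misconfig

# The overlooked view: even ONE failure severely disrupts operations.
# P(at least one) = 1 - P(none), assuming independent events.
p_none = (1 - p_system_crash) * (1 - p_backup_fail) * (1 - p_restore_misconfig)
p_at_least_one = 1 - p_none

print(f"P(all three fail together): {p_all_three:.6f}")    # 0.000125
print(f"P(at least one fails):      {p_at_least_one:.6f}")  # 0.142625
```

Framing the risk as the conjunction makes it look negligible; framing it as the disjunction shows a roughly 14% chance of disruption under the same assumptions.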
Sunk-cost fallacy
“We’ve already invested so much - let’s keep going.”
At the start of a complex project to upgrade a clinical data capture system to a significantly newer software version, the team chose to review the existing risk assessment rather than perform a new one. The opportunity to bring in fresh perspectives, informed by more recent experience and data, was missed. The project's complexity led to unexpected and significant issues, raising concerns about compliance and patient safety risks as the system limped toward go-live.
Biases are everyday barriers to effective decision-making, and they usually operate silently. When these biases go unchecked, risk assessments can fail to uncover the real hazards that could compromise product quality or patient safety. Without structure and awareness, teams don’t realize they’ve been swayed by bias until something goes wrong.
Promoting creative hazard identification
The revised ICH Q9 R1 highlights that organizations are vulnerable to human bias. But instead of trying to eliminate subjectivity, the opportunity lies in harnessing creativity through structured collaboration. We use "working together alone", a structured approach built around deliberate collaboration cycles:
- Diverge – Individuals think independently to generate a wide range of hazards.
- Converge – The team brings those ideas together to align and make sense of them.
- Decide – The group prioritizes and selects which hazards to carry forward for full risk assessment.
1. Diverge – Think Independently
Each team member begins by identifying potential hazards alone, drawing from their own experience. Without influence from colleagues, everyone is encouraged to tap into their own domain expertise, surface concerns others might not see, and consider risks without fear of being wrong or dismissed.
This prevents anchoring, groupthink, and the dominance of senior voices. A QA lead might identify risks related to audit trail integrity in violation of 21 CFR Part 11, while an IT specialist might flag risks around privileged access that go beyond GAMP 5 category expectations. Everyone contributes equally, regardless of role or seniority.
2. Converge – Align and Analyze as a Team
The group reconvenes to share, group, and clarify their hazards. Common themes emerge, gaps are revealed, and insights compound. This step creates shared understanding while preserving the diversity of thought generated during divergence. The group collectively decides which hazards are most relevant for further analysis.
3. Decide – Prioritize and Move Forward
Finally, the team selects which hazards to assess further—based on potential impact, relevance, and urgency. The decision is made through structured facilitation, sometimes using techniques like dot voting or assigning a decider, ensuring bias doesn’t derail consensus.
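As a concrete illustration of the Decide step, here is a minimal dot-voting tally sketch. The participants, hazards, and vote counts are all hypothetical; the point is that every dot counts the same regardless of who cast it, so a loud or senior voice carries no extra weight.

```python
from collections import Counter

# Hypothetical example: each participant spends three "dots" across
# the hazards surfaced during the converge step.
votes = {
    "QA lead":      ["audit trail integrity", "customization errors",
                     "audit trail integrity"],
    "IT specialist": ["privileged access", "customization errors",
                      "audit trail integrity"],
    "Supply chain":  ["supplier delivery failures", "customization errors",
                      "privileged access"],
}

# Tally all dots anonymously, so no single voice dominates the count.
tally = Counter(dot for dots in votes.values() for dot in dots)

# Carry the top-ranked hazards forward to full risk assessment.
top_hazards = [hazard for hazard, _ in tally.most_common(3)]
print(top_hazards)
```

In this made-up round, "audit trail integrity" and "customization errors" lead the tally, so they would be carried forward even though they were raised by different roles.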
This cycle helps teams imagine what could go wrong before it does go wrong, while reducing the noise and bias that can dominate traditional group discussions.
Why this works in GxP contexts
In regulated environments like GMP and GCP, the complexity of systems and processes often makes it hard to see the whole risk picture.
The diverge/converge method:
- exposes domain-specific blind spots,
- encourages consideration of low-probability, high-impact hazards (e.g., loss of traceability, compliance lapses during upgrades), and
- balances subjective input with structured collaboration—exactly what ICH Q9 R1 prescribes.
Whether you are qualifying a new supplier, assessing a new computerized system, or conducting a periodic re-evaluation, a method like working together alone provides a disciplined yet creative structure that ensures risks are imagined, surfaced, and addressed.
It transforms subjectivity from a threat into an asset - provided it is managed with intention.
Conclusion
Risk management is not just analytical - it’s imaginative. It’s about asking “What could go wrong?” in ways that statistics alone can’t answer.
Subjectivity is not just a negative. When bias is managed, creative thinking can emerge to reveal hazards others overlook.
Hazard identification benefits from subject matter experts working together alone in cross-functional teams, enabling more inclusive participation and deeper insights.
Catastrophes are preventable when hazards are identified in advance. Creative risk assessments lead to better controls, which protect patients, products, and your organization.
As ICH Q9 R1 rightly puts it, risk-based decision-making is a cornerstone of GxP compliance. But in today’s dynamic landscape - with AI, cloud, SaaS, and more - it’s not enough to follow templates. You need risk maturity, not just risk metrics. With the right tools, mindsets, and methodologies, human factors and creativity can be leveraged for good, especially so in the age of AI.