Why SaaS Vendors Are Racing to Add GenAI in Life Sciences: An Analysis of MIT’s State of AI in Business 2025

Written by Ben O'Brien | Sep 1, 2025 11:22:30 AM

Introduction: The Growing Impact of AI on Life Sciences GxP SaaS Solutions

Artificial Intelligence has evolved from buzzword to business reality for life sciences companies. GxP SaaS vendors are rapidly integrating AI into core systems—from quality management to manufacturing execution—creating both opportunities and challenges for IT quality directors. AI promises greater efficiency and deeper insights, but it also brings new complexities around validation, data integrity, and regulatory compliance.

(A structured approach to AI transparency and trust is critical in GxP environments. Learn more about how model cards can simplify AI compliance in our article AI Meets GxP: Model Cards for Trust, Transparency and Compliance.)

Recent guidance updates, such as the FDA’s AI guidances, the draft Annex 22 on Artificial Intelligence, planned revisions to Annex 11, and the recently released ISPE GAMP Guide on Artificial Intelligence, confirm regulators are sharpening their focus on how AI will be governed in GxP environments. The core message is straightforward—vendors supply the AI tools, but regulated companies are responsible for ensuring they work as intended and the data and decisions are fit for purpose.

For a comprehensive deep dive into GxP computerized systems management trends and challenges, be sure to download our exclusive and detailed 2025 State of GxP Computerized Systems Validation in Life Sciences report.

What the Research Says: Vendor-Driven AI Implementation Trends in Life Sciences GxP SaaS

A new MIT report, The State of AI in Business 2025, reveals that 66% of AI implementations across industries come from external vendors rather than internal development. This means most organizations are not building AI themselves, but receiving it as new or newly embedded functionality in their existing solutions.

ERA Sciences’ own research aligns with this: in a review of 174 of the most widely used GxP SaaS products, more than 44% have already introduced or are planning to introduce AI features in the coming months.

The takeaway is clear: AI will increasingly appear inside the validated systems life sciences organizations are already running today.

Why This Matters for Life Sciences Companies

For IT and quality leaders, these developments are significant:

  • AI is being delivered through existing solutions
    Vendors are rolling out AI capabilities, such as AI-driven deviation triage or audit trail anomaly detection, within the SaaS systems already in use.

  • Immediate access to innovation
    Companies can benefit from AI without setting up dedicated data science teams.

  • Oversight responsibility remains
    Under 21 CFR Part 11, Annex 11, and the draft Annex 22, life sciences organizations (not the vendor) are responsible for ensuring AI functionality has been validated for its intended purpose.

  • Regulators will expect explainability
Auditors will not be satisfied with blanket acceptance of “black box” outputs. As defined in the ISPE GAMP Guide on Artificial Intelligence, “Explainability is the degree to which a basis for a decision or action can be explained or how an output or result was reached, in a way that a person can understand.” Organizations will be expected to provide clear explanations for all AI used within their processes, including AI offered as part of existing SaaS solutions.

Preparing Your Organization for AI Integration in GxP Environments

1. Build Internal AI Literacy

Organizations cannot delegate all responsibility to vendors. Senior Management, IT, and Quality teams should:

  • Build an AI Literacy Framework that can be applied across GxP environments
  • Develop a working understanding of how AI generates outputs and its limitations
  • Educate staff on bias, error rates, and the concept of “hallucinations”
  • Establish a common framework for discussing AI across IT, Quality, Business, and Operations

2. Establish Evaluation and Validation Processes

Evaluating AI features requires going beyond traditional software testing. Here are some examples:

  • Understand the confusion matrix: Measure true positives, false positives, true negatives, and false negatives for classification and anomaly detection. Vendors may supply these figures, but not always. Either way, measure the metrics against your own data: there is no guarantee that the data the vendor used to train the model aligns with your intended use case, or that it is even of good quality.

Use these numbers to translate the output into plain English (e.g. a true positive rate of 0.75 means the AI misses a real trigger 1 in 4 times; is that acceptable for your organization?). This helps quantify whether an AI feature is actually fit for its intended purpose in a regulated process, or whether a vendor is pushing out a feature before it has been robustly developed. A minimal sketch of this kind of evaluation appears after this list.

  • Define intended use cases: AI features must be validated for the specific process in which they will be used, not broadly accepted. Measure outputs directly against your current, non-AI process: if the feature does not improve your process output, it is not fit for your intended use. Establish acceptance metrics up front.

  • Challenge edge cases for AI-driven features: Test how the system behaves under atypical conditions to identify and mitigate hidden risks. Vendors will almost never share the data used to train their models, so you have no oversight of its quality. Structured evaluation with your own internal data gives you a far better picture of how these “black box” models actually perform.

  • Start validation documentation early: Capture your evaluation work as part of the validation package, including the key metrics and thresholds that must be met before the feature is accepted into production. This keeps you inspection-ready and gives a clear justification for your decision.
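As a concrete illustration, here is a minimal sketch in Python of how a team might compute these metrics from its own labeled records and check them against pre-agreed acceptance thresholds. The labels, threshold values, and function names are hypothetical assumptions for illustration, not prescribed values.

```python
# Minimal sketch: evaluating a vendor AI feature against your own labeled data.
# All records, labels, and thresholds below are hypothetical placeholders.

def confusion_counts(y_true, y_pred):
    """Count true/false positives and negatives for a binary task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def evaluate(y_true, y_pred, min_tpr=0.90, max_fpr=0.05):
    """Compare measured metrics against pre-agreed acceptance thresholds."""
    tp, fp, tn, fn = confusion_counts(y_true, y_pred)
    tpr = tp / (tp + fn) if (tp + fn) else 0.0  # sensitivity: real triggers caught
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # false alarms on clean records
    # Plain-English readout, e.g. a TPR of 0.75 means 1 in 4 real triggers missed.
    print(f"TPR={tpr:.2f} (misses {1 - tpr:.0%} of real triggers), FPR={fpr:.2f}")
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn, "tpr": tpr, "fpr": fpr,
            "accepted": tpr >= min_tpr and fpr <= max_fpr}

# Hypothetical example: QA ground-truth labels vs. AI outputs on 12 historical
# deviation records (1 = genuine deviation, 0 = none).
truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
ai    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(evaluate(truth, ai))  # fails the 0.90 TPR threshold -> not accepted
```

Run against your own historical records, the printed metrics and the accept/reject outcome can be attached directly to the validation package as objective evidence.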

(ERA Sciences shares practical strategies in 5 Best Practices for Improving AI Literacy in a GxP Environment to help teams understand AI’s impact on compliance.)

3. Retain the Right to Say No: Ensuring AI Features Meet Compliance and Business Needs

Not all AI features will be relevant to your use case. IT and quality leaders must feel empowered to reject or disable AI functionality (a simple sketch of such a gate follows this list) if evaluation reveals:

  • Unacceptable levels of false positives or negatives
  • Insufficient explainability or reproducibility
  • Non-alignment with intended use
  • No clear benefit against existing processes
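To make the right to say no operational, some teams codify their rejection criteria as an explicit gate. The sketch below is a hypothetical illustration; the criteria names and limits are assumptions that each organization would set for itself.

```python
# Hypothetical sketch: codifying the "right to say no" as an explicit gate.
# Criteria names and limits are illustrative assumptions, not regulatory text.

REJECTION_CHECKS = {
    "fpr_too_high":    lambda m: m["fpr"] > 0.05,               # too many false alarms
    "fnr_too_high":    lambda m: m["fnr"] > 0.10,               # too many missed triggers
    "not_explainable": lambda m: not m["explainable"],          # no usable rationale
    "no_benefit":      lambda m: m["uplift_vs_baseline"] <= 0,  # no gain over current process
}

def review_feature(metrics):
    """Return (accept, reasons) for a vendor AI feature under evaluation."""
    reasons = [name for name, failed in REJECTION_CHECKS.items() if failed(metrics)]
    return len(reasons) == 0, reasons

# Example: figures measured during evaluation against your own data and
# against the existing non-AI process (all values hypothetical).
measured = {"fpr": 0.02, "fnr": 0.25, "explainable": True, "uplift_vs_baseline": 0.08}
accept, reasons = review_feature(measured)
print("Accept feature" if accept else f"Disable feature: {reasons}")
```

The point is not the code itself, but that the acceptance criteria are written down and agreed before evaluation begins, so the decision to reject is documented rather than discretionary.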

(For a deep dive into validating AI models in pharma with Annex 22 and GxP compliance, see Validating AI Models in Pharma: Annex 22 & GxP Compliance.)

Conclusion: Navigating AI-Driven Compliance with Proactive Oversight and Validation Readiness

AI is becoming a built-in component of GxP SaaS solutions. With two-thirds of implementations delivered via vendors and nearly half of leading GxP SaaS platforms adding AI features, the era of AI-driven tools is here. But while vendors deliver the features, the responsibility for validation, data integrity, and regulatory readiness remains with you.

By building internal AI literacy, incorporating model metrics such as confusion matrices into validation, and asserting the right to decline non-compliant features, life sciences companies can adopt AI responsibly and maintain compliance with regulatory expectations, including those of the US FDA, EudraLex Volume 4, and Annexes 11 and 22.

The path forward is not passive adoption but proactive oversight. More likely than not, AI will be part of your next audit. The only question is whether you will be prepared.

Frequently Asked Questions: AI and GxP Compliance in Life Sciences

Q: What is GxP in the context of life sciences and pharmaceutical compliance?
GxP refers to a collection of “good practice” guidelines and regulations that ensure products are consistently safe, effective, and meet their intended purpose across all life sciences sectors, including GMP (Good Manufacturing Practice), GLP (Good Laboratory Practice), and GCP (Good Clinical Practice).

Q: How does AI impact GxP compliance and data integrity?
AI can automate quality management tasks, streamline data analysis, and support predictive monitoring. However, life sciences companies must validate their outputs, maintain robust audit trails, and ensure that all data meets the ALCOA+ principles (Attributable, Legible, Contemporaneous, Original, Accurate, Complete, Consistent, Enduring, and Available).
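As a purely illustrative sketch (the field names are hypothetical, not a standard schema), an AI-assisted decision might be captured with ALCOA+-aligned metadata like this:

```python
# Illustrative only: recording an AI-assisted decision with ALCOA+-aligned
# metadata. Field names are hypothetical, not a standard schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable once written, supporting "Enduring"
class AIDecisionRecord:
    user_id: str        # Attributable: who reviewed the AI output
    model_version: str  # Accurate: which model produced the output
    input_ref: str      # Original: pointer to the unaltered source record
    ai_output: str      # Legible: the output exactly as shown to the user
    human_action: str   # what the human reviewer actually decided
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                   # Contemporaneous: captured at the time of the event

record = AIDecisionRecord(
    user_id="qa.reviewer.01",
    model_version="vendor-triage-2.3.1",
    input_ref="DEV-2025-0042",
    ai_output="classified: minor deviation",
    human_action="accepted",
)
print(record)
```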

Q: What are the regulatory expectations for AI-enabled systems?
Organizations must demonstrate that their AI solutions meet the requirements of FDA 21 CFR Part 11, Annex 11, EU EudraLex, GAMP 5, and other relevant global compliance frameworks. This includes documented validation, audit trails, electronic signatures, and transparency on how AI-driven decisions are made.

Q: What processes should IT quality directors follow to ensure GxP compliance when deploying AI features?

  • Build internal AI literacy for staff and management
  • Assess vendor claims and validate AI for your specific intended use case
  • Document performance metrics (confusion matrix, accuracy, false positive/negative rates)
  • Challenge edge cases using your own data during testing
  • Retain the right to accept, disable, or reject AI features based on compliance outcomes

References

The GenAI Divide: State of AI in Business 2025. Preliminary Findings from AI Implementation Research, Project NANDA. https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf

Ready to stay ahead of regulators and vendors? Get your copy of The 2025 State of GxP Computerized Systems Validation in Life Sciences benchmarking report and gain the insights and practical guidance you need to prepare your organization for the future of compliance. Fill in the form below to get your copy now!