Articles

Strategic insights on life sciences technology, regulatory compliance, and industry best practices. Our thought leadership combines practical experience with strategic guidance to help organizations navigate complex challenges.

Also published on Driftpin Substack.


AI Strategy & Governance

A five-part series on building a deliberate AI strategy for regulated life sciences organizations.


AI Foundations

A technical taxonomy of AI approaches and what they mean for validation in regulated environments.


Validation & CSA

Risk-based validation approaches, CSA implementation, and the shift from traditional CSV.


AI Readiness & Implementation

Practical guidance on getting AI projects off the ground in regulated settings.


ISO Certification & Quality Systems

Gap analysis, certification readiness, and quality management for regulated software.


Compliance & Supplier Management

HIPAA, supplier risk, and compliance strategy for regulated organizations.


Implementation & Discovery

Getting technology implementations right from the start.


Industry Perspectives

  • Machine Learning Validation: The Accuracy Problem

    Introduction

    In Part 4A, we covered what machine learning (ML) is, how training data works, and why the shift from deterministic to probabilistic outputs fundamentally changes validation. We ended with a question: what does “accurate” actually mean for ML — and was it ever clearly defined for the processes it replaces?

    ALCOA — Attributable, Legible, Contemporaneous, Original, Accurate — is the bedrock of data integrity in regulated environments. Four of those five principles are process controls you can enforce: you can attribute a record, make it legible, timestamp it, preserve the original. The fifth — Accurate — is different. It’s the one that assumes a relationship to truth. And it’s the one that breaks first when AI enters the picture.

  • Machine Learning: Where Validation Gets Interesting

    This is Part 4 of the AI Foundations Series, which explores how different categories of AI technology map to validation decisions in regulated life sciences environments.

    This article is where the series shifts gears. The topic, Machine Learning (ML), is large enough — and the implications differ enough from rule-based systems — that it naturally splits into two parts. Part 4A covers what ML is, how it works, and why the shift from deterministic to probabilistic changes the validation conversation. Part 4B takes on the accuracy problem — what “accurate” actually means, why it’s harder to define than you’d expect, and what to do about it.

  • Rules-Based Systems: The Baseline for AI Validation

    Introduction

    In the first article, we outlined seven categories of AI and why life sciences organizations need a deliberate strategy before adoption. The second article focused on Retrieval-Augmented Generation (RAG)—a technique that grounds AI responses in verified documents.

    This isn’t a complexity ladder where you start simple and work up to advanced AI. It’s a toolkit. Each category fits different GxP applications. We’re covering them as they apply to real validation challenges, not in textbook order. RAG came first because document generation tools are flooding the market. Rule-based systems come now because they’re the baseline—the simplest category in the taxonomy, and the easiest to validate.

  • RAG: Retrieval-Augmented Generation for Regulated Environments

    In the taxonomy article, we identified hallucination as a predictable failure mode of LLMs — they generate plausible text without inherent mechanisms for truth verification. We mentioned RAG (Retrieval-Augmented Generation) as the most common architectural response to this problem, but didn’t explain the mechanics.

    This article — and the others that follow in this series — flesh out concepts the taxonomy introduced. The term “RAG” will come up in vendor conversations, product evaluations, and audit discussions. It’s important to understand what it is, how it works, where it fails, and what it means for validation. That’s what this article covers.

  • AI Foundations for Life Sciences — A Taxonomy

    A reader reached out recently after working through our AI Readiness series. The feedback was direct: the strategic content had value, but applying it was harder than it needed to be. The missing piece wasn’t more frameworks—it was foundational grounding. Definitions. Descriptions. How the pieces relate to each other.

    It’s a fair point. I’ve been writing for an audience I assumed had the same exposure I’ve accumulated—through partnerships with AI systems that included implementation and usage, through client work, and through the steady drip of industry noise. That assumption created a gap. This taxonomy exists to map AI implementations to validation and oversight decisions—not to debate definitions.

  • You Have More Suppliers Than You Think

    Supplier lifecycle management has always been a bear. The task is critical, regulations require it, audits test it, and validation packages depend on qualified supplier documentation. The issue is that execution has never had good tooling. You end up with spreadsheets, email chains, annual fire drills, and gaps you discover at the worst possible time.

    This article describes an approach to move from periodic qualification to continuous lifecycle management, leveraging infrastructure that makes it operationally feasible.

  • Before You Can Validate AI, Your Project Needs to Not Fail

    80% of AI projects fail. That’s not a GxP problem—it’s an AI problem. But in life sciences, the stakes are higher, the data messier, and the consequences more serious. The industry’s unique complexity—and its regulatory and compliance burden—turn common failure modes into structural risks. This article explores the patterns, examines the problems, and suggests where the real value lies.

    AI is making significant contributions across many disciplines, including life sciences. There are too many positive indications to think otherwise. It’s this potential that makes addressing the issues identified here so important.

  • Intended Use: Foundation for Risk-Based Validation

    Executive Summary

    Risk-based validation is a cornerstone of the FDA’s recent CSA Final Guidance. It depends on an accurate Intended Business Use statement—or more simply, intended use. Before you can identify Critical Data Elements (CDEs), create an objective risk assessment, or determine validation testing scope, a clear intended use statement is essential.

    For vendors, comprehensive intended use statements tied to URS, baseline configuration, and OQ testing enable clients to leverage validation work and reduce implementation time by months.

  • How a Dedicated LMS Ensures Your QMS is Effective

    The Question That Defines Your QMS

    It’s not uncommon during an audit to be asked very specific questions about training records and system access: “Show me that only qualified personnel accessed your batch record system during this manufacturing run.”

    You have training records showing employees completed GMP training. You have system access logs showing who logged into the manufacturing execution system. But can you prove the connection?

    The auditor wants to see that training records and system access align perfectly: that every person who touched that batch was qualified at the time they performed the work. This isn’t a training question. It’s a quality system control question. Your LMS is supposed to provide this evidence: the documented proof that your Quality Management System isn’t just written procedures in a binder but an operational reality that prevents unqualified personnel from affecting product quality.

  • Part 1: Bridging the Gap: How Software Vendors Can Leverage CSA to Streamline Client Validation

    Executive Summary

    The FDA’s new Computer Software Assurance guidance creates a strategic opportunity for GxP software vendors who understand how to make their validation work leverageable by clients. This article explains why multi-tenant SaaS demands new validation approaches, how intended use and Critical Data Elements form the foundation for client validation efficiency, and why vendors who get this right become preferred suppliers. Part 1 covers the foundational framework; Part 2 addresses validation artifacts and competitive implementation.

  • AI Validation Isn’t One-Size-Fits-All

    Executive Summary

    Not all AI is created equal — and the path to responsible adoption begins with knowing the difference.

    One of the most important distinctions in regulated AI is between non-adaptive (static) systems and adaptive (learning) systems. Non-adaptive systems, once trained, do not change behavior without revalidation. Adaptive systems evolve as they interact with new data — introducing new risks, requirements, and regulatory uncertainty.

    For most life sciences organizations, non-adaptive AI is the right place to start. It enables value today while building the governance infrastructure necessary for adaptive systems tomorrow. It’s also the class of AI most aligned with current regulatory comfort levels on both sides of the Atlantic.

  • Part 5: Moving from AI Strategy to Implementation

    Executive Summary

    This final article in our AI Strategy series bridges the gap between thoughtful planning and practical execution. It builds on the frameworks introduced in Parts 1–4—strategy rationale, organizational alignment, governance models, and adaptive validation—and turns the focus toward operational readiness.

    To recap:

    • Part 1 framed the strategic necessity of AI, urging life sciences organizations to shift from reactive adoption to deliberate integration.
    • Part 2 addressed misalignment between company roles, risk appetites, and AI roadmaps, emphasizing fit-for-purpose strategies.
    • Part 3 introduced governance models for AI oversight, cautioning against performative structures and stressing clarity in ownership.
    • Part 4 reimagined validation for dynamic systems, offering a pathway to evolve traditional CSV methods without sacrificing rigor.

    Now, in Part 5, we examine the messy, human, operational terrain where AI strategies succeed or stall. We explore what early implementation requires—planning, readiness assessments, handoff structures, and cultural integration.

  • Part 4 – Adapting Validation for AI

    Executive Summary

    This is Part 4 in our series on pragmatic AI implementation in life sciences. We’ve established why organizations need deliberate AI strategies (Part 1), how those strategies must align with organizational role and risk tolerance (Part 2), and what governance frameworks can handle AI’s post-deployment evolution (Part 3).

    Part 4 addresses the operational challenge of validating AI systems that evolve after deployment. Key insights: Traditional CSV/CSA frameworks provide the foundation but require extension for statistical evidence and continuous monitoring. Organizations should strengthen validation capabilities for current static AI before tackling adaptive systems. Success depends on building cross-functional expertise, evolving risk management approaches, and investing in tools that support continuous validation. The goal is to enable AI innovation while maintaining compliance standards through progressive capability building.

  • Implementation Readiness: A New Foundation for GxP System Success

    In a recent article, I argued that a structured discovery phase—what we now frame as readiness—is essential for success in regulated system implementations. In this context, readiness refers to the coordinated set of activities—implementation planning, validation setup, user training, data preparation, and change management—that together enable a system to go live in a controlled, auditable, and business-aligned state. Done well, readiness defines scope, clarifies roles, and aligns expectations.

    But in today’s life sciences landscape, that’s no longer enough.

  • Part 1: The Strategic Imperative: Why Life Sciences Organizations Need a Deliberate AI Strategy

    Originally published 08 April 2025. Updated May 2025 in light of recent FDA announcements expanding the agency’s use of AI in regulatory operations.


    Introduction

    AI is no longer an experimental add-on in life sciences — it’s becoming infrastructure. On May 8, 2025, the FDA issued a public statement confirming that AI-assisted scientific review tools will now be used across all centers, with full deployment expected by June 30.¹

    This shift makes one thing clear: organizations that fail to define and control their own use of AI — from model behavior to oversight structures — will be reacting to regulators rather than leading with strategic clarity.

  • Part 3: Governance Models for AI in Regulated Teams

    This is the third installment in our series on AI strategy in life sciences.

    “Part 1: The Strategic Imperative: Why Life Sciences Organizations Need a Deliberate AI Strategy,” argued that AI is a strategic imperative.

    “Part 2: Fit for Purpose: Aligning AI Strategy to Your Company,” explored how to tailor AI strategy to your organization’s structure, function, and risk exposure.

    This article focuses on how to operationalize that strategy — through practical, risk-aligned governance.

  • Part 2: Fit for Purpose: Aligning AI Strategy to Your Company

    This article is the second in a series on AI strategy in life sciences. It follows “The Strategic Imperative: Why Life Sciences Organizations Need a Deliberate AI Strategy,” which outlined why AI cannot be adopted ad hoc in regulated environments. This article focuses on tailoring your strategy based on what kind of company you are — and what kind of AI you use.


    Introduction: The Myth of the Single AI Strategy

    If you’re looking to “develop an AI strategy,” you’re already ahead of many. But here’s the reality: most life sciences organizations don’t need one AI strategy — they need several.

  • A Hidden Barrier to CSA Implementation: Why Digital Validation Is the Missing Link

    The FDA’s Computer Software Assurance (CSA) guidance, released in 2022, introduced a welcome shift in validation strategy: risk-based, right-sized assurance that prioritizes fitness for intended use over exhaustive documentation.

    But two years later, most life sciences organizations still struggle to realize its benefits. The promise of faster, smarter, and more focused validation remains just that—a promise.

    The issue isn’t intent. It’s infrastructure. Too many teams are trying to implement CSA using traditional CSV tools and workflows.

  • Evolving Validation for AI-Enhanced Systems

    As life sciences organizations begin integrating AI capabilities into regulated systems and processes, I’ve been contemplating how our validation approaches might need to evolve. This is an ongoing process, and the following represents my current thinking, recognizing that the field is rapidly developing and many perspectives will shape its future.

    Why This Matters

    At its core, our work in life sciences validation serves a singular purpose: ensuring that the technologies and processes we develop and implement genuinely improve people’s lives and protect them from harm. As AI capabilities expand across software systems, manufacturing processes, medical devices, and beyond, this mission becomes more promising and more challenging. The potential to accelerate research, improve diagnoses, enhance manufacturing quality, and personalize treatments offers tremendous benefits to patients, but only if we can confidently validate these systems.

  • The Hidden Costs of Manual Validation in Life Sciences

    The Moment Every Validation Professional Prepares For

    “Can you show me the traceability for that requirement across all your systems?”

    Has an auditor ever asked you this seemingly straightforward question, only to trigger an immediate sinking feeling in your stomach? You know the validation was performed. You know the documentation exists. But in that moment, can you confidently navigate to exactly what’s needed without hours of searching through binders, network drives, and SharePoint sites?

  • Risk-Based Validation in Cloud-Based, Multi-Tenant GxP Systems

    Executive Summary:

    • Multi-tenant cloud systems benefit from a new validation approach as vendors face increasing pressure to minimize client validation burden
    • The Computer Software Assurance (CSA) guidance provides the regulatory framework to implement risk-based validation that balances vendor and client responsibilities with critical thinking and pragmatism
    • Vendors should establish Core Intended Use Statements and validate default configurations (IQ/OQ)
    • Clients should focus validation (PQ/UAT) on specific configurations and business processes that differ from the vendor’s default
    • This strategic alignment reduces implementation time (time to go-live), prevents redundant testing, and creates sustainable validation processes even with frequent cloud updates that Agile development enables
    • Future validation approaches will increasingly leverage AI and automation to maintain compliance in rapidly evolving systems

    Validation Challenges of Multi-Tenant Cloud Systems

    The first article in this series, From Risk-Based Monitoring to Risk-Based Validation, explored how life sciences organizations are shifting from comprehensive, homogeneous validation practices to pragmatic, risk-based approaches within Computer System Validation (CSV), following the path their ClinOps colleagues took with Risk-Based Monitoring (RBM) and, more broadly, Risk-Based Quality Management (RBQM).

  • From Risk-Based Monitoring to Risk-Based Validation

    In 2012, clinical operations teams faced a stark reality: 30% of trial budgets were routinely consumed by site monitoring. A primary on-site activity, Source Document Verification (SDV), had been shown to yield minimal improvement in data quality. Clinical Data Managers (CDMs) bore the burden of ensuring data integrity, while Clinical Research Associates (CRAs) spent countless hours in transit and on-site manually verifying data listings. The revelation that SDV contributed so little to data accuracy while consuming substantial resources sparked a transformative shift—Risk-Based Monitoring (RBM). With FDA endorsement, organizations learned to focus monitoring efforts where they mattered most, prioritizing critical data and high-risk sites rather than verifying every data point equally.

  • HIPAA Compliance for SaaS Systems: PHI Risk, ISO Alignment, and Practical Steps Forward

    Introduction: HIPAA & the Challenge of Managing PHI in SaaS

    The Health Insurance Portability and Accountability Act (HIPAA) establishes security and privacy requirements for handling Protected Health Information (PHI) in the United States.

    📌 What is PHI? PHI (Protected Health Information) refers to any individually identifiable health data that is created, received, stored, or transmitted by a healthcare provider, insurer, or business associate. This includes:

    • Patient names, addresses, or contact details
    • Medical records, diagnoses, or treatment histories
    • Health insurance information, billing details, or payment records
    • Biometric identifiers (fingerprints, retinal scans, etc.)
    • Any other data that can be linked to an individual’s health status

    While HIPAA applies directly to healthcare providers, insurers, and clearinghouses (“Covered Entities”), it also extends to third-party vendors (“Business Associates”) that process, store, or transmit PHI on behalf of Covered Entities.

  • Compliance for GxP Life Sciences Software—A Strategic Advantage

    The HIPAA Challenge for Life Sciences Software

    If you’re developing GxP-compliant software for pharma, biotech, or clinical research, you’ve likely encountered the question: Is your system HIPAA-compliant?

    For software vendors already operating under ISO 27001, ISO 9001, and Computer System Validation (CSV) frameworks, HIPAA compliance may seem like an entirely separate burden. In reality, many of the security, data integrity, and validation controls needed for HIPAA already exist in your current processes.

  • How AI Enhances (but doesn't replace) Human Expertise in GxP Regulated Processes

    Introduction

    Many in life sciences are asking the same question: “Will AI eventually take over human roles?” (Ref1, Ref2) It’s a fair concern, especially in fields like ours where patient safety, accuracy, and strict compliance are non-negotiable. But while AI offers significant transformative potential, at present, its greatest strength lies in augmenting, not replacing, human expertise.

    In regulated environments, AI is best used as a powerful assistant, taking on repetitive tasks and processing data in real time so we can focus on tasks that require human judgment. Think of AI as a tool that boosts consistency and efficiency while freeing up professionals to apply their knowledge where it matters most. The theme here is simple but important: AI can handle the “heavy lifting” in data and compliance work, but it’s not a substitute for the nuanced, high-stakes decisions that allow life sciences to add value to people’s lives.

  • ISO 27001 Gap Analysis: Implementing Solutions and Tools to Address Identified Gaps

    In our previous article, “ISO 27001 Gap Analysis: Managing and Addressing the Results,” we outlined how categorizing gaps into specific areas can streamline your path to compliance. In this article, we’ll guide you through strategies for closing these gaps, prioritizing risks, and implementing tailored tools to support your organization’s unique needs and ensure long-term compliance.

    At Driftpin, we specialize in helping organizations not only navigate their gap analysis results but implement practical, tailored solutions that support long-term compliance with ISO 27001 standards. Part of this involves integrating these gaps into your risk management processes—a critical step for tracking, mitigating, and maintaining compliance.

  • The Importance of a Discovery Phase in Clinical Technology Implementations

    Introduction

    A key to the success of a technical system implementation is a step often referred to as a Discovery Phase. This is especially true when planning to implement a cloud-based GxP system. This structured preparatory step helps define scope, business and user requirements, configuration settings, validation strategy, and workflow processes while addressing specific challenges unique to regulated environments like pharma, biotech, and CROs. In our experience, the discovery phase is the foundation for a smooth and compliant implementation, significantly reducing the risks that typically arise later in the project.

  • Foundations of Audit Trails in GxP Software Systems: Ensuring Data Integrity and Compliance in Clinical Environments

    Introduction

    Audit trails are essential for maintaining data integrity, regulatory compliance, and patient safety in the life sciences. As GxP (Good Practice) regulations evolve, pharmaceutical and biotech companies and clinical technology providers must ensure that all changes to electronic records are tracked and auditable. This is crucial for complying with standards such as FDA’s 21 CFR Part 11, EU Annex 11, ISO 9001/27001, and the GAMP 5 guidelines.

  • Introduction to Installation Qualification (IQ) in GxP-Regulated Cloud-Based SaaS Systems

    Introduction

    As life sciences organizations increasingly transition to cloud-based, multi-tenant SaaS systems, ensuring compliance with regulatory frameworks like GxP (Good Practice guidelines) becomes critical. In this context, Installation Qualification (IQ) plays a key role in validating these systems and ensuring that they are installed properly, function as intended, and meet regulatory requirements.

    This article provides an introduction to IQ in cloud-based SaaS platforms and explains how IQ adapts to client-specific needs. We will define key terms like tenant, environment, and instance, and address how data security is managed between tenants. We’ll also cover how SaaS providers can serve clients with both GxP and non-GxP systems while maintaining distinct validation paths. Finally, we’ll discuss how IQs are developed, tested, verified, and rolled back in case of errors.

  • The Impact and Continued Relevance of Pragmatic Clinical Trials

    Pragmatic Clinical Trials (PCTs) were introduced to address the gap between randomized clinical trials (RCT) and real-world clinical practice. A key document, likely “The Pragmatic Clinical Trials” report produced several years ago, laid the foundation for this approach, emphasizing trials designed to be more inclusive, practical, and reflective of routine clinical settings.

    Key Impacts of Pragmatic Clinical Trials

    1. Increased Real-World Applicability:

  • How ISO 9001 Unlocks Streamlined GxP Software Development for Life Sciences

    Introduction

    For life sciences companies developing software in GxP-regulated environments, compliance is paramount. But ensuring that your software products are functional and validatable—capable of meeting rigorous regulatory standards—requires more than just great code. It demands a well-structured approach to quality management, software development, and system validation.

    Enter ISO 9001, the international quality management system (QMS) standard. ISO 9001 provides a framework for improving quality and serves as the foundational blueprint for aligning your software development processes with the stringent requirements of GxP compliance.

  • Why ISO 27001 Certification is Critical for GxP Software Providers

    Who should read this article?

    This article is important for anyone contemplating ISO 27001 certification, but it is essential reading for GxP software providers, regulated software manufacturers, and SaaS developers who create solutions for life sciences organizations that must adhere to stringent regulatory frameworks like 21 CFR Part 11. It is also highly relevant for companies involved in developing compliant, validated software for pharmaceutical, biotech, clinical trials, and medical device industries. Readers from quality assurance, compliance, and IT teams working to implement or maintain ISO 27001 certification will gain insights into how this standard strengthens data security, regulatory compliance, and client trust in GxP environments.

  • ISO 27001 Gap Analysis: Managing and Addressing the Results

    We received a question about the best way to organize and attack the results of your ISO 27001 Gap Analysis.

    The best way to address your identified gaps efficiently and effectively is to categorize them into a defined set of buckets. Organizations that categorize the results of an ISO 27001 gap analysis this way can systematically address each area, prioritize actions, and ensure a comprehensive approach to achieving compliance.

    Please refer to our Substack article that describes the most common categories and issues.

  • Determining your organization's Company Context

    Background

    What is Company Context?

    The term “company context” refers to the combination of internal and external factors and conditions that affect an organization’s approach to its operations, strategy, and objectives. It encompasses a wide range of influences that shape and define the environment in which an organization operates, including internal elements within the organization itself and the external forces in the broader environment.

    Company Context is a critical component of ISO certification. Describing the organization’s company context is integral to satisfying several ISO standards. While the term is not exclusive to ISO (ITIL and PMBOK are two examples), for our purposes it is called out explicitly in ISO 9001 and ISO 27001 (Quality Management and Information Security, respectively).

  • Preparing for ISO 9001 Certification: A Risk-based Approach

    Life science technology companies—for example, those that build GxP software systems for use by Clinical R&D teams at pharmas and biotechs or at CROs—can benefit from becoming ISO certified. A formally established Quality Management System that complies with ISO 9001 can yield multiple benefits to companies in the GxP space.

    The ISO 9001 certification process is typically a months-long project, which can be time-consuming, costly, and resource-intensive. Without proper preparation and assistance, it has the potential to negatively impact your operations.

  • Computer System Validation - Overview

    Purpose of Validation

    Introduction

    This article provides an overview of why validation is a necessary and critical part of software development, implementation, and maintenance in the life sciences. It reviews key drivers behind the significant cost of validation and identifies some of the challenges that software manufacturers and system implementers/users face in achieving it.

    This is a complex and important topic. Because of that complexity, regulatory bodies (FDA, EMA, etc.) have issued guidances about validation that are decidedly non-prescriptive in detailing how to achieve it. This is mainly because computer systems vary so widely, and their capabilities advance so quickly, that a prescriptive guidance would be out of date by the time it was approved. Because there are so many grey areas, a number of follow-up articles will explore how best to navigate them.

  • Using Clinical Technology...but in a good way

    But before we get to that

    A bit about me: I’ve been involved with life sciences technology for more than 20 years. I started writing about how to use the systems, then got into testing, determining how a system should look and feel, training users on how to use it…and so on. Along the way, I learned on the job and via coursework about software development, database administration, clinical trials, QC, QA, user interfaces, user experience, and software validation.