Building trust in AI for drug development: A roadmap for FDA-ready innovation

Christopher McSpiritt | 2025-06-25 | 12 min read


Co-authored by: Manish Srivastava, CEO of Pravartan Technologies

Artificial intelligence is no longer just a buzzword in life sciences. It serves as a catalyst for the discovery, development, and delivery of new life-saving therapies. But as its influence grows, so does the responsibility to ensure these systems are ethical, transparent, and aligned with regulatory expectations.

That’s why the U.S. Food and Drug Administration (FDA) recently published draft guidance outlining how sponsors, CROs, and technology providers should use AI in regulatory contexts. Pravartan Technologies distilled these guidelines in a new white paper, offering a framework for life sciences organizations looking to confidently adopt AI while staying in lockstep with FDA expectations.

In this blog, we unpack the key takeaways from the white paper and share Domino’s perspective on how to operationalize AI in a way that is compliant, scalable, and scientifically rigorous.

Why the FDA’s AI guidance is a turning point

The FDA’s draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, isn’t regulation, but it does set the tone for what’s expected. It emphasizes transparency, data quality, lifecycle monitoring, and contextual risk assessment.

“While these recommendations are not binding,” notes Manish Srivastava, CEO of Pravartan Technologies, “they represent the FDA’s evolving thinking and signal what regulators will look for in AI-based submissions. Organizations that treat this as a blueprint for responsible innovation will be better positioned for success.”

The FDA guidance encourages:

  • Transparency in how AI models are designed and validated
  • Reliability and traceability of data used throughout the AI lifecycle
  • Continuous monitoring to detect drift, bias, or performance degradation
  • Contextual risk assessment based on the model’s role and influence

Making AI credible: The FDA’s 7-step model

The FDA outlines a seven-step credibility framework to ensure AI models are trustworthy and fit for use in regulated settings. The steps are:

  1. Define the question of interest. Clearly articulate what the AI model is intended to support, whether it's predicting risk, identifying trends, or assisting in decision-making.
  2. Clarify the context of use. Determine how the model’s output will be used — standalone or alongside human judgment.
  3. Assess model risk. Evaluate how influential the model is in the decision-making process and the consequences if it gets things wrong.
  4. Create a credibility assessment plan. Outline how the model will be validated, including the data sources, testing methods, performance benchmarks, and strategies for addressing bias or uncertainty.
  5. Execute and validate. Train the model, test it against real-world data, and refine it based on observed performance.
  6. Document and analyze. Capture the full scope of development and validation activities, including performance metrics, deviations from the original plan, and any modifications made along the way.
  7. Assess model suitability. Evaluate whether the model’s performance and reliability are sufficient for its intended role based on evidence gathered. If not, revisit assumptions, add human oversight, or explore alternative approaches.

This structured approach allows sponsors to navigate complexity while demonstrating due diligence. As Srivastava notes, “This process helps organizations not just meet regulatory expectations, but make better, more defensible decisions with AI.”
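
To make the framework concrete, here is a minimal sketch of how a team might capture the seven steps as a structured record in Python. This is purely illustrative: the FDA guidance does not prescribe any schema or code, and the class, field names, example model, and benchmark below are assumptions made for the sketch.

```python
# Illustrative only: a lightweight record of the FDA's seven credibility
# steps for a hypothetical trial-site quality model. The field names and
# example values are assumptions for this sketch, not an FDA-mandated schema.
from dataclasses import dataclass, field


@dataclass
class CredibilityAssessment:
    question_of_interest: str   # Step 1: what the model is meant to support
    context_of_use: str         # Step 2: standalone vs. human-in-the-loop
    model_risk: str             # Step 3: influence x consequence of error
    assessment_plan: list[str]  # Step 4: data sources, tests, benchmarks
    validation_results: dict = field(default_factory=dict)  # Step 5
    deviations: list[str] = field(default_factory=list)     # Step 6
    suitable_for_use: bool | None = None                    # Step 7

    def record_result(self, metric: str, value: float) -> None:
        """Capture a validation metric (step 5) for documentation (step 6)."""
        self.validation_results[metric] = value

    def assess_suitability(self, benchmarks: dict) -> bool:
        """Step 7: suitable only if every recorded metric meets its benchmark."""
        self.suitable_for_use = all(
            self.validation_results.get(m, float("-inf")) >= threshold
            for m, threshold in benchmarks.items()
        )
        return self.suitable_for_use


# Example: a hypothetical model that flags trial sites for quality review.
plan = CredibilityAssessment(
    question_of_interest="Which trial sites warrant early quality review?",
    context_of_use="Advisory; outputs reviewed by a human monitor",
    model_risk="Medium: influences, but does not decide, site audits",
    assessment_plan=["Hold-out test set", "Subgroup bias checks", "AUC >= 0.80"],
)
plan.record_result("auc", 0.84)
print(plan.assess_suitability({"auc": 0.80}))  # True: fit for its stated context
```

Keeping the plan in a versioned, machine-readable form like this makes it straightforward to show reviewers what was promised up front and what was actually measured.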

Domino’s perspective: Engineering for scientific and regulatory rigor

As the FDA calls for structured, risk-based approaches to AI validation and monitoring, Domino offers life sciences organizations a way to operationalize these principles at scale. The FDA’s guidance emphasizes the need for model transparency, auditability, ongoing lifecycle management, and fit-for-purpose use — all areas where Domino’s platform helps.

With Domino, organizations can embed FDA-aligned best practices directly into how AI is developed, tested, and deployed.

  • Reproducibility by default: Every model version, training dataset, and experiment is automatically tracked, enabling full traceability during regulatory review or inspection.
  • Integrated validation workflows: Model development includes pre-defined checkpoints for performance testing, bias evaluation, and documentation, aligning with the FDA’s credibility assessment framework.
  • Continuous lifecycle monitoring: Built-in tools detect performance degradation, data drift, or unexpected behaviors, supporting the FDA’s emphasis on ongoing risk mitigation (a minimal drift-check sketch follows this list).
  • Centralized governance: Organizations can define policies for access control, audit logging, and role-based responsibilities to support cross-functional oversight and help ensure FDA compliance.
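
As one concrete illustration of the monitoring concept, the sketch below runs a two-sample Kolmogorov-Smirnov test to flag when production inputs drift away from the training-time distribution. It is a generic example of the technique, not Domino’s API; the significance threshold and simulated data are assumptions.

```python
# A minimal drift-check sketch using a two-sample Kolmogorov-Smirnov test.
# Generic illustration of the monitoring concept, not Domino's API; the
# alpha threshold and the simulated feature values are assumptions.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> bool:
    """Flag drift when the live feature distribution differs from the
    training-time reference distribution at significance level alpha."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha


# Example: compare a training-time feature snapshot to recent production inputs.
rng = np.random.default_rng(seed=42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training distribution
live = rng.normal(loc=0.4, scale=1.0, size=1_000)       # shifted production data

if detect_drift(reference, live):
    print("Drift detected: trigger revalidation and document it per the plan.")
```

In practice a check like this would run on a schedule against each monitored feature, with detections feeding back into the credibility assessment record described above.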

“The FDA is asking for transparency and traceability across the AI lifecycle, and that’s exactly what Domino was built to provide,” says Chris McSpiritt, Vice President of Life Sciences Strategy at Domino. “We make it easier for life sciences organizations to move fast with AI while satisfying the rigorous demands of regulatory oversight.”

Domino’s platform brings together data scientists, domain experts, and compliance stakeholders in a collaborative environment, enabling them to co-develop AI models that are technically rigorous, contextually appropriate, and audit-ready.

“Organizations don’t just need tools. They need an infrastructure that ensures every AI model is built, tested, and monitored in a compliant and transparent way,” McSpiritt adds. “Domino makes it possible to scale innovation while meeting the FDA’s call for governance and continuous evaluation.”

By bridging data science and regulatory best practices, Domino helps organizations move from experimentation to production-grade AI that can withstand FDA scrutiny and deliver real-world impact.

Best practices from the field

With FDA-ready infrastructure in place, life sciences leaders can now focus on applying AI in ways that deliver measurable impact and stay within regulatory guardrails. Here are three essential best practices to guide the journey from early experimentation to enterprise-wide AI transformation.

1. Start with low-risk, high-value applications

The most effective way to begin is by applying AI to operational areas that offer high business value with minimal regulatory risk. Functions like procurement and vendor management are ideal starting points for AI adoption. For example, AI can automate time-consuming tasks such as generating RFQs, evaluating supplier proposals, or analyzing vendor qualifications. These types of use cases build early momentum, generate ROI, and help teams develop fluency in working with AI systems. Srivastava advises, “Start small, prove value, then scale with confidence. Every successful AI deployment is a building block for broader transformation.”

Once confidence and capabilities are established, organizations can progressively expand into more regulated domains. Areas like predictive modeling for adverse drug reactions, automated clinical trial data review, or quality assurance in pharmaceutical manufacturing are high-impact opportunities. But they also require deeper alignment with FDA guidance, especially around model transparency, risk assessment, and lifecycle monitoring.

2. Build cross-functional AI teams

Successful AI programs depend on collaboration. Data scientists alone can’t ensure a model’s regulatory readiness, and domain experts alone can’t validate model performance. Cross-functional teams, including quality, regulatory, clinical, data science, and compliance, should be involved from the start. Quality and regulatory stakeholders help define the problem, identify risks, and interpret outputs. Data scientists ensure models are methodologically sound, while IT and compliance teams ensure data privacy, traceability, and security controls are in place.

“When you bring domain experts and data scientists into the same workflow, that’s where the magic happens,” explains McSpiritt. “It’s not just about building models, it’s about making sure they solve the right problems in the right way, with the right controls.”

This interdisciplinary approach ensures that AI solutions are not only technically robust but also appropriate for the context in which they’ll be used, a point the FDA underscores throughout its guidance.

3. Prioritize ethical AI and transparency

Ethical considerations should be embedded in AI initiatives from the outset. That means using diverse and representative datasets, minimizing algorithmic bias through testing and validation, and maintaining transparency around how models are trained and how they reach their conclusions. Human oversight remains critical. AI should support and accelerate decision-making, not replace clinical or regulatory judgment.
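
One simple, widely used bias check is to compare a model’s accuracy across patient subgroups before deployment. The sketch below is a minimal illustration under assumed data: the group labels, toy predictions, and the gap tolerance are hypothetical, and real programs would apply established fairness metrics on representative datasets.

```python
# Illustrative subgroup check: compare a model's accuracy across groups to
# surface potential bias. The labels, data, and gap tolerance are assumptions
# for this sketch, not a regulatory standard.
import numpy as np


def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy per group so reviewers can spot performance gaps."""
    return {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }


# Hypothetical predictions for two patient subgroups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 1])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

per_group = subgroup_accuracy(y_true, y_pred, groups)
print(per_group)  # {'A': 0.8, 'B': 0.6}
gap = max(per_group.values()) - min(per_group.values())
if gap > 0.05:  # assumed tolerance; in practice set by the assessment plan
    print(f"Accuracy gap of {gap:.0%} across groups: investigate before deployment.")
```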

As AI adoption matures, governance frameworks must evolve accordingly. These frameworks provide the accountability, documentation, and oversight that regulatory bodies like the FDA increasingly expect, especially as AI models become more complex and influential.

To help life sciences organizations build ethical and sustainable AI systems, here is a time-bound action plan that mirrors the FDA’s lifecycle focus:

  • Short term (0-6 months): Identify high-value, low-risk AI use cases and launch pilot projects that demonstrate measurable impact. These early wins build organizational momentum and lay the foundation for structured governance.
  • Mid term (6-18 months): Establish formal internal governance policies, begin proactive engagement with regulatory authorities, and validate AI models using FDA-aligned frameworks.
  • Long term (18+ months): Operationalize AI across regulatory workflows, implement continuous lifecycle monitoring, and adapt governance practices to remain aligned with evolving FDA guidance.

With these steps, organizations can move from ethical intent to operational execution, ensuring that AI remains not only powerful but also safe, auditable, and aligned with industry standards at every stage of adoption.

Operationalize responsible AI today

AI is already reshaping the drug development landscape, but its long-term success depends on the responsible, transparent, and compliant deployment of this technology. Whether you're piloting your first use case or embedding AI into clinical and regulatory workflows, now is the time to ensure your systems are designed for FDA compliance.

“AI will transform the drug lifecycle, but only if we build it responsibly. The FDA isn’t a barrier; it’s a guide.”
- Manish Srivastava, CEO of Pravartan Technologies

By combining Pravartan’s regulatory insight with Domino’s Enterprise AI Platform, life sciences organizations can confidently unlock the full potential of AI, accelerating development timelines, improving quality, and ultimately delivering better outcomes for patients.

For a deeper dive, read the full white paper.


As the VP of Life Sciences Strategy at Domino Data Lab, Christopher leads the company’s go-to-market and product strategy for the pharmaceutical industry. He plays a key role in driving adoption of Domino’s enterprise-scale data science platform, empowering pharmaceutical companies to harness AI, machine learning, and advanced analytics to unlock insights from vast data sets and make more data-driven decisions.