What You Need to Know about the FDA’s Draft Guidance on AI to Support Regulatory Decision Making

On January 6, 2025, the FDA issued draft guidance titled Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products. The announcement recognizes that artificial intelligence (AI) is becoming more prominent in drug development and discovery, and it establishes a risk-based credibility assessment framework for evaluating whether an AI model is trustworthy and reliable for a specific use.

For instance, if doctors plan to use AI to diagnose diseases, the AI model needs to be tested first to ensure it provides accurate and consistent results. More specifically, the guidance suggests evaluating how the AI was trained, how well it performs in real-world situations, and whether it follows strict safety and ethical guidelines.

While this draft guidance might seem premature, AI is already being used in various ways to accelerate drug development and enhance patient care. This is exciting, but it also presents challenges. Research shows that some AI models have built-in biases, which can raise questions about the accuracy of AI-driven results. Furthermore, AI models are increasingly complex, making it difficult to determine exactly how they generate results.

What the new draft guidance says

The FDA’s draft guidance aims to address some of these concerns and assist sponsors in their use of AI for regulatory decision-making, by focusing on three key areas:

  • Creating a risk-based credibility assessment framework for using AI in the drug product life cycle
  • Maintaining the credibility of AI model outputs through their life cycle in certain contexts of use
  • Engaging with the FDA early, especially when sponsors are unsure whether their AI use falls under this guidance

The guidance does not apply to the use of AI in drug discovery or for operational efficiencies, or to other uses of AI that are not related to regulatory decision-making.

A risk-based credibility assessment framework

In its guidance, the FDA lays out a seven-step process to assess the credibility of AI models (a sketch of how a sponsor might document these steps appears after the list). The steps include:

  1. Define the question of interest that will be addressed by the AI model.
  2. Define the context of use (COU) for the AI model.
  3. Assess the AI model risk.
  4. Develop a plan to establish the credibility of the AI model output within the COU.
  5. Execute the plan.
  6. Document the results of the credibility assessment plan and discuss deviations from the plan.
  7. Determine the adequacy of the AI model for the COU.
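
To make these steps concrete, below is a minimal, hypothetical sketch in Python of how a sponsor's team might record a credibility assessment internally. The guidance does not prescribe any format or tooling; every field name and risk level here is an illustrative assumption, not part of the FDA's framework.

```python
from dataclasses import dataclass, field

@dataclass
class CredibilityAssessment:
    """Illustrative record of the FDA's seven-step credibility process.

    Field names, types, and risk levels are assumptions for this sketch;
    the draft guidance does not prescribe any particular format.
    """
    question_of_interest: str   # Step 1: what the AI model will address
    context_of_use: str         # Step 2: the model's specific role and scope (COU)
    model_risk: str             # Step 3: e.g., "low", "medium", "high"
    credibility_plan: list[str] = field(default_factory=list)  # Step 4: planned activities
    results: list[str] = field(default_factory=list)           # Steps 5-6: executed work and findings
    deviations: list[str] = field(default_factory=list)        # Step 6: departures from the plan
    adequate_for_cou: bool | None = None                       # Step 7: final determination

# Hypothetical example of documenting an assessment
assessment = CredibilityAssessment(
    question_of_interest="Can the model flag patients at elevated risk of an adverse event?",
    context_of_use="Secondary screen supporting, not replacing, clinician review",
    model_risk="medium",
    credibility_plan=[
        "Validate on held-out, multi-site data",
        "Audit performance across demographic subgroups",
    ],
)
```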

The draft guidance also provides recommendations on life cycle maintenance, which refers to the continual assessment of the AI model to ensure its performance and suitability throughout its use for the COU. This ongoing evaluation is crucial because continuous monitoring increases the likelihood of identifying potential issues, allowing for timely retraining or adjustments.
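
This kind of monitoring can be automated. The sketch below shows a hypothetical periodic check that flags a model for review when its accuracy on newly labeled data drops below a documented baseline; the metric, baseline, and tolerance are assumptions a sponsor would set for its own context of use, not values from the guidance.

```python
def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    """Fraction of predictions that match the reference labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def check_model_performance(y_true: list[int], y_pred: list[int],
                            baseline: float, tolerance: float = 0.05) -> bool:
    """Return True if the model still meets its documented baseline.

    baseline and tolerance are assumed values a sponsor would set
    as part of its own credibility assessment plan.
    """
    current = accuracy(y_true, y_pred)
    if current < baseline - tolerance:
        print(f"ALERT: accuracy {current:.3f} is below {baseline - tolerance:.3f}; "
              "flag the model for review and possible retraining.")
        return False
    print(f"OK: accuracy {current:.3f} is within tolerance of baseline {baseline:.3f}.")
    return True

# Hypothetical monthly check against newly labeled production data
recent_labels      = [1, 0, 1, 1, 0, 1, 0, 0]
recent_predictions = [1, 0, 1, 0, 0, 1, 1, 0]
check_model_performance(recent_labels, recent_predictions, baseline=0.90)
```

In practice, the appropriate metric and thresholds would follow from the model risk and context of use established in the framework's earlier steps.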

Share your thoughts with the FDA

Since AI is becoming ubiquitous and its use will have lasting impacts on our industry, the FDA has asked for public feedback through a request for comments. The deadline to share your thoughts is April 7, 2025. Notably, the FDA wants to know how well the draft guidance aligns with industry experience and whether the options available to sponsors to engage with the FDA on the use of AI are sufficient. The agency will take all public comments into account before finalizing its guidance.

Submit your comments to the FDA here.

Get Started Today

Discover how Harbor Clinical can assist your company.
