Considerations for the Use of AI in Regulatory Decision-Making for Drugs and Biological Products
Introduction
Artificial intelligence (AI) is increasingly being integrated into the drug development and regulatory landscape, offering potential advancements in efficiency, accuracy, and innovation. However, its use in regulatory decision-making presents unique challenges and necessitates a structured approach to ensure credibility, safety, and compliance. The U.S. Food and Drug Administration (FDA) has issued draft guidance to provide a framework for sponsors and other stakeholders using AI-generated data or insights in regulatory submissions. This post summarizes the key takeaways from the FDA’s guidance and outlines best practices for establishing AI credibility.
Scope of the Guidance
The FDA’s guidance focuses on the application of AI in generating data or insights that support regulatory decision-making related to drug safety, efficacy, and quality. Importantly, it does not cover AI applications in drug discovery or operational efficiencies such as workflow automation. The guidance provides a risk-based credibility assessment framework designed to help sponsors validate and document AI model outputs in regulatory submissions.
Background
AI’s role in drug development has expanded significantly, with applications ranging from reducing animal testing in preclinical studies to integrating real-world data for regulatory submissions. However, these advancements come with challenges, including variability in the quality and representativeness of training data, limited transparency of complex models, and the risk that model performance degrades as data or conditions change over time.
To address these challenges, the FDA recommends a structured credibility assessment framework.
A Risk-Based Credibility Assessment Framework
The FDA outlines a seven-step process for assessing and validating AI models used in regulatory decision-making. The level of oversight and documentation required depends on the model's risk level.
Step 1: Define the Question of Interest
The first step involves specifying the exact question that the AI model aims to answer. For example, a sponsor might ask whether the model can reliably identify which patients are at high risk of a specific adverse reaction, or whether a manufacturing batch meets a defined quality attribute.
Step 2: Define the Context of Use (COU)
The COU clarifies the AI model’s role and how its outputs will be used. It should include a description of the model’s specific function, the scope of its use, and whether its output will stand on its own or be combined with other evidence.
For example, an AI model predicting patient risk for adverse reactions should specify whether it is the sole determinant or used alongside traditional clinical assessments.
Step 3: Assess the AI Model Risk
Model risk is assessed based on two factors: model influence (how much the AI model’s output contributes to the decision relative to other evidence) and decision consequence (the significance of an adverse outcome if the decision is incorrect).
A high-risk model, such as one determining patient safety measures, requires more stringent credibility assessment than a low-risk model used as a secondary verification tool in manufacturing.
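The guidance frames model risk as a function of model influence and decision consequence. As a minimal illustrative sketch (the specific matrix below is an assumption, not the FDA's), the two factors can be combined conservatively, letting the worse of the two dominate:

```python
# Illustrative sketch only: combining the two risk factors from the FDA
# framework (model influence and decision consequence) into an overall tier.
# The max-based matrix is an assumption chosen for conservatism.
LEVELS = {"low": 0, "medium": 1, "high": 2}
TIERS = {0: "low", 1: "medium", 2: "high"}

def model_risk(influence: str, consequence: str) -> str:
    """Map model influence and decision consequence to an overall risk tier.

    Conservative rule: the worse of the two factors dominates.
    """
    score = max(LEVELS[influence], LEVELS[consequence])
    return TIERS[score]

# AI as sole determinant of a patient safety measure -> high risk
print(model_risk("high", "high"))    # high
# Secondary verification tool in manufacturing -> lower risk
print(model_risk("low", "medium"))   # medium
```

A conservative rule like this errs toward more scrutiny; a sponsor's actual risk determination would follow the framework described in the guidance rather than a fixed formula.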
Step 4: Develop a Credibility Assessment Plan
Sponsors should develop a credibility assessment plan that includes a description of the model and how it was developed, the data used for training and validation, and the performance metrics and acceptance criteria against which the model will be evaluated.
The plan should be tailored to the model’s risk level, with high-risk applications requiring more detailed validation and transparency.
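One practical way to keep a credibility assessment plan consistent and auditable is to capture it as a structured record. The sketch below is purely illustrative; the field names and example values are assumptions, not terms defined in the guidance:

```python
from dataclasses import dataclass

# Illustrative sketch: a credibility assessment plan as a structured record,
# so it can be versioned and carried into submission documentation.
# All field names and example values below are hypothetical.
@dataclass
class CredibilityPlan:
    question_of_interest: str
    context_of_use: str
    model_risk: str            # e.g. "low", "medium", or "high"
    data_description: str
    performance_metrics: dict  # metric name -> acceptance criterion

plan = CredibilityPlan(
    question_of_interest="Which patients are at high risk of an adverse reaction?",
    context_of_use="Used alongside clinical assessment, not as sole determinant",
    model_risk="high",
    data_description="Retrospective cohort data used for training and validation",
    performance_metrics={"sensitivity": ">= 0.90", "specificity": ">= 0.80"},
)
print(plan.model_risk)  # high
```

Encoding the plan this way makes it easy to check that high-risk entries carry the more detailed validation evidence the guidance expects.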
Step 5: Execute the Plan
The plan should be implemented, with real-time validation and documentation of AI performance. Engaging with the FDA early can help refine execution strategies and ensure regulatory expectations are met.
Step 6: Document Results and Deviations
Sponsors must document the results of the credibility assessment and any deviations from the planned approach, along with the impact of those deviations on the model’s credibility.
This documentation may be included in regulatory submissions or retained for inspections.
Step 7: Determine Model Adequacy
If the AI model does not meet the required credibility standards, sponsors may incorporate additional supporting evidence, narrow the context of use, reduce the model’s influence on the decision, or improve the model itself.
Special Considerations: Lifecycle Maintenance
AI models may evolve over time due to new data inputs and environmental shifts. Sponsors should implement lifecycle maintenance strategies, including ongoing performance monitoring, periodic re-evaluation against the established acceptance criteria, and re-assessment of credibility whenever the model or its inputs change materially.
In pharmaceutical manufacturing, for example, lifecycle maintenance ensures that AI-based quality control systems remain accurate despite process changes.
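A core element of lifecycle maintenance is detecting when performance has drifted far enough to trigger re-validation. The following is a minimal hypothetical sketch; the baseline, tolerance, and function names are assumptions for illustration, not values from the guidance:

```python
import statistics

# Hypothetical sketch: flag re-validation when a model's recent performance
# drifts below an acceptance threshold. The 0.90 baseline and 0.05 tolerance
# are illustrative assumptions; real criteria come from the credibility plan.
def needs_revalidation(recent_scores, baseline=0.90, tolerance=0.05):
    """Return True when mean recent performance falls below baseline - tolerance."""
    return statistics.mean(recent_scores) < baseline - tolerance

print(needs_revalidation([0.91, 0.89, 0.90]))  # stable performance -> False
print(needs_revalidation([0.82, 0.80, 0.79]))  # drifted performance -> True
```

In practice a monitoring scheme would also log each check, so that the record of stable performance (or of a triggered re-validation) is available for inspection.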
Early FDA Engagement
Sponsors are encouraged to engage with the FDA early in AI model development. Various engagement pathways exist depending on the AI model’s application, ranging from formal sponsor meetings to program-specific channels suited to the development context.
Early engagement helps clarify regulatory expectations and refine credibility assessment plans before formal submission.
Conclusion
AI has the potential to revolutionize drug development and regulatory processes, but its credibility must be established through rigorous validation, risk assessment, and lifecycle monitoring. The FDA’s draft guidance provides a structured framework to ensure AI models meet regulatory standards while fostering innovation in the field.
By adhering to this risk-based framework, sponsors can confidently integrate AI into regulatory decision-making, ultimately enhancing drug safety, effectiveness, and quality.