As artificial intelligence (AI) continues to grow, the health care industry is beginning to explore the benefits it can bring, including the potential to advance medical product development, improve patient care, and augment the capabilities of health care practitioners. The US Food and Drug Administration’s (FDA’s) Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), Center for Devices and Radiological Health (CDRH), and Office of Combination Products (OCP) are collaborating to safeguard public health while fostering responsible and ethical innovation in medical devices and pharmaceuticals.
AI management requires a risk-based regulatory framework built on robust principles, standards, and best practices. With the use of state-of-the-art regulatory science tools, the risk-based framework can be applied across AI applications and tailored to the relevant medical product. Because the development, deployment, use, and maintenance of AI technologies involve complex and dynamic processes, AI applications benefit from careful end-to-end management throughout the product life cycle. The process starts with ideation and design and progresses through data acquisition and preparation, model development and evaluation, deployment, monitoring, and maintenance. This approach can help address ongoing model performance, risk management, and regulatory compliance of AI systems in real-world applications.
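To make the monitoring and maintenance stage of that life cycle concrete, below is a minimal sketch in Python of how a deployed model’s real-world performance might be tracked against a validated baseline. The window size, baseline accuracy, and alert threshold are illustrative assumptions, not FDA-prescribed values; a real program would derive them from validation data and a documented risk assessment.

```python
from collections import deque

class PerformanceMonitor:
    """Track rolling accuracy of a deployed model and flag degradation.

    Baseline accuracy, tolerance, and window size are hypothetical
    values chosen for illustration only.
    """

    def __init__(self, baseline_accuracy=0.90, tolerance=0.05, window=500):
        self.baseline = baseline_accuracy     # accuracy established at validation
        self.tolerance = tolerance            # allowed drop before alerting
        self.outcomes = deque(maxlen=window)  # rolling window of recent results

    def record(self, prediction, ground_truth):
        """Log one prediction/outcome pair as it becomes available."""
        self.outcomes.append(prediction == ground_truth)

    def rolling_accuracy(self):
        """Accuracy over the current window, or None if no data yet."""
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def needs_review(self):
        """True when observed accuracy falls below the acceptable band."""
        acc = self.rolling_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

# Usage: log real-world cases as outcomes are confirmed, then check for drift.
monitor = PerformanceMonitor()
monitor.record(prediction=1, ground_truth=0)
if monitor.needs_review():
    print("Performance drift detected; trigger risk-management review.")
```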
CBER, CDER, CDRH, and OCP have identified four areas of focus for the development and use of AI across the medical product life cycle, helping to align AI use with the FDA’s already established good manufacturing practice (GMP) guidelines.
The Focus Areas
- Foster Collaboration to Safeguard Public Health – Cultivate a patient-centered regulatory approach that emphasizes collaboration and health equity.
  - Collect input from interested parties to consider critical aspects such as transparency, governance, bias, cybersecurity, and quality assurance.
  - Promote the development of educational initiatives to support regulatory bodies, health care professionals, patients, and researchers to ensure safe and responsible use of AI in medical product development.
  - Work closely with global collaborators to promote international cooperation on standards, guidelines, and best practices to encourage global consistency.
- Advance the Development of Regulatory Approaches That Support Innovation – FDA intends to develop policies that provide regulatory predictability and clarity for the use of AI.
  - Monitor and evaluate trends and emerging issues to detect potential knowledge gaps and opportunities in the current FDA guidelines.
  - Support efforts to evaluate AI algorithms for robustness and resilience within the current FDA regulatory framework.
  - Build upon existing initiatives for the evaluation and regulation of AI use in medical product development, including in manufacturing.
  - Issue guidance on the use of AI in medical product development and in medical products.
- Promote the Development of Standards, Guidelines, Best Practices, and Tools for the Medical Product Life Cycle – Uphold safety and effectiveness standards across AI-enabled medical products while building on the Good Machine Learning Practice Guiding Principles.
  - Refine and develop considerations for evaluating the safe, responsible, and ethical use of AI in the medical product life cycle.
  - Identify and promote best practices for long-term safety and real-world performance monitoring.
  - Develop best practices for documenting and ensuring that data used to train and test AI models are fit for use (a minimal sketch follows this list).
  - Develop a framework and strategy for quality assurance of AI-enabled tools and systems.
- Support Research Related to the Evaluation and Monitoring of AI Performance – Gain valuable insights into AI’s impact on medical product safety and effectiveness.
  - Identify projects that highlight the points where bias can be introduced in the AI development life cycle and how it can be addressed.
  - Support projects that consider health inequities associated with the use of AI to promote equity and ensure data representativeness, leveraging ongoing diversity, equity, and inclusion efforts.
  - Support the ongoing monitoring of AI tools in medical product development within demonstration projects to ensure adherence to standards and maintain performance and reliability.
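As a minimal illustration of the fit-for-use and data-representativeness checks referenced above, the sketch below compares subgroup proportions in a training dataset against a reference population. The subgroup labels, reference proportions, and tolerance are hypothetical placeholders; in practice they would come from the applicable study protocol and population data.

```python
# Hypothetical reference population and tolerance, for illustration only.
from collections import Counter

REFERENCE_POPULATION = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
TOLERANCE = 0.05  # maximum acceptable absolute deviation per subgroup

def representativeness_report(subgroup_labels):
    """Compare observed subgroup shares in training data to a reference."""
    counts = Counter(subgroup_labels)
    total = sum(counts.values())
    report = {}
    for group, expected in REFERENCE_POPULATION.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "expected": expected,
            "observed": round(observed, 3),
            "flagged": abs(observed - expected) > TOLERANCE,
        }
    return report

# Example: a dataset that under-represents group_c relative to the reference.
labels = ["group_a"] * 70 + ["group_b"] * 28 + ["group_c"] * 2
for group, stats in representativeness_report(labels).items():
    print(group, stats)
```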
CBER, CDER, CDRH, and OCP plan to tailor their regulatory approaches to the use of AI in medical products in a manner that protects patients and health care workers, ensures the cybersecurity of medical products, and promotes innovation.