New IMDRF Guidance on AI Medical Device Development

The IMDRF has recently published key concepts and principles for the development of high-quality medical devices that incorporate AI. The guidance sets out 10 guiding principles for the responsible development of AI and machine learning-enabled medical devices, giving developers a robust framework for ensuring that devices placed on the market are safe, effective and of high quality.
What you need to know
- Guiding principles have been established to ensure the responsible development of artificial intelligence (AI) and machine learning-enabled medical devices.
- Continuous monitoring and retraining recommendations relating to the device are set out in detail.
- Clinically relevant testing requirements are outlined including the need to simulate real-world conditions.
- Multi-disciplinary expertise is essential as development teams must have knowledge of patient risks, clinical workflows and integration of the device.
- Cybersecurity measures must be implemented to mitigate the risks associated with AI and machine learning-enabled medical devices.
The International Medical Device Regulators Forum (IMDRF) has published guidance titled ‘Good Machine Learning Practice for Medical Device Development: Guiding Principles’. The guidance reviews key concepts and principles for the promotion and development of safe, effective and high-quality medical devices that incorporate the use of AI. The IMDRF sets out 10 guiding principles for the responsible development of AI and machine learning (ML)-enabled medical devices that highlight the need for a robust and well-defined framework for the design, development, testing, deployment, and monitoring of medical devices powered by AI and ML:
- Multi-Disciplinary Expertise: Development teams must have a deep understanding of clinical workflows, patient risks, and the integration of the AI/ML device. This includes not only the technical aspects of the model but also its clinical relevance and usability.
- Good Software Engineering & Security: Robust software engineering principles are foundational. This includes rigorous data quality assurance, strong data management, adequate cybersecurity measures and meticulous documentation of design and risk management decisions. The IMDRF highlights the need to ‘ensure data authenticity and integrity.’
- Representative Data: Data used for training and testing must accurately reflect the intended patient population.
- Independent Training & Test Sets: Training and test datasets must be strictly independent of each other, to prevent overfitting and to ensure realistic evaluation. All sources of dependence must be carefully considered and addressed.
- Best Available Reference Datasets: Reference data should be developed using the best available methods, so that the data collected are clinically relevant and well characterised. The limitations of the data must be understood.
- Tailored Model Design: The model design should be appropriate for the available data and should address known risks such as overfitting or security concerns. The design must support mitigation of risks and allow for the safe and effective use of the device.
- Human-AI Team Performance: For models with human interaction, the focus should be on the performance of the entire Human-AI team, not just the AI model in isolation. Human factors and model interpretability are key considerations.
- Clinically Relevant Testing: Testing must simulate real-world clinical conditions with appropriately designed test plans independent of the training datasets. Factors such as the patient population, clinical environment, and interaction of the Human-AI team should be considered.
- Clear User Information: Users must have ready access to clear information about the device, its performance characteristics, the limitations of the model, and how to raise concerns. This information must be appropriately tailored to different user groups like health care providers and patients.
- Ongoing Monitoring & Retraining: Models need continuous monitoring in real-world use to ensure performance and safety. Where retraining is implemented, robust controls are needed to manage risks like overfitting, unintended bias, and performance degradation due to dataset drift.
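To illustrate the principle of independent training and test sets, the sketch below shows one common way a "source of dependence" is addressed in practice: splitting health data at the patient level rather than the record level, so that no patient contributes data to both sets. This is an illustrative example only, not taken from the IMDRF guidance; the field names and parameters are hypothetical.

```python
import random

def patient_level_split(records, test_fraction=0.2, seed=0):
    """Split records at the patient level, not the record level,
    so no patient appears in both the training and the test set.
    `records` is a list of dicts with a hypothetical 'patient_id' key."""
    patient_ids = sorted({r["patient_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(patient_ids)
    n_test = max(1, int(len(patient_ids) * test_fraction))
    test_ids = set(patient_ids[:n_test])
    train = [r for r in records if r["patient_id"] not in test_ids]
    test = [r for r in records if r["patient_id"] in test_ids]
    return train, test

# Example: 10 patients, 3 scans each. A naive record-level split could
# leak the same patient into both sets; a patient-level split cannot.
records = [{"patient_id": p, "scan": i} for p in range(10) for i in range(3)]
train, test = patient_level_split(records)
train_patients = {r["patient_id"] for r in train}
test_patients = {r["patient_id"] for r in test}
assert train_patients.isdisjoint(test_patients)
```

A record-level split of the same data would make the model's test performance look unrealistically good, which is precisely the overfitting risk the guidance asks developers to address.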
Conclusion
In this guidance, the IMDRF has highlighted the need for a rigorous and iterative approach to software device development that encompasses diverse expertise, strong software engineering practices, and careful attention to data quality, bias, and clinical relevance.
Looking to the future, the Medical Device Coordination Group (MDCG) is expected to publish guidance on the interplay between the MDR/IVDR and the EU AI Act in Q2 2025, which should clarify the regulatory landscape for medical devices incorporating AI.
Read the IMDRF guidance online.
For more information, contact a member of our Product Regulation & Consumer team.
The content of this article is provided for information purposes only and does not constitute legal or other advice.