EFPIA Statement on the use of AI in the medicinal product lifecycle in the context of the AI Act
EFPIA believes in the potential of AI to deliver benefit for patients, life-science companies and society. AI will play an ever more critical role in the research, development and manufacturing of medicinal products, enabling the discovery, development and delivery of new, safer, more effective treatments to patients faster than ever before.
It is critical that the regulatory frameworks governing the use of AI in research, development and manufacture be fit-for-purpose, risk-based, non-duplicative, globally aligned, and adequately tailored. Such rules would enable, rather than hinder, the development of safe and effective treatments that reach patients faster and more efficiently.
Medicine development may be facilitated by a range of methodologies and drug development tools, including those incorporating AI. When used solely for the purpose of medicines R&D, such AI-enabled tools are exempt from the requirements of the EU AI Act. Even if this exemption were not considered to apply, the majority of these tools would in any case not be classified as high-risk under the AI Act and would thus not be subject to CE marking.
The development and authorisation of safe and effective medicines is already governed by a well-established regulatory framework, which includes laws, guidance documents and other policies.
EFPIA supports the European Medicines Regulatory Network’s (EMRN) considered approach to AI, which builds on existing methods, good research practices, and the requirements applied to other drug development tools (such as traditional statistical methods and approaches, including model-informed drug development). EFPIA looks forward to collaborating with the EMRN on upcoming guidance for the use of AI in medicines development.
EFPIA believes the following five considerations are critical for the use and governance of AI across the medicine development lifecycle:
1. The EU AI Act exemption for AI dedicated to scientific research
The EU AI Act supports innovation and freedom of science, and should not undermine research and development activity. For this reason, AI systems and models specifically developed and put into service for the sole purpose of scientific research and development are excluded from its scope (Recital 25 and Articles 2(6) and 2(8)). EFPIA considers that this exemption applies to AI-based drug development tools, because such tools are used solely in the research and development of medicines.
2. The majority of AI uses in the development of medicines cannot qualify as high-risk AI under the current EU AI Act
Even if the exemption were not to apply, the majority of AI uses in medicines research and development involve AI-enabled software that is not regulated under any of the legal frameworks listed in Annex I (including those for medical devices), nor does it fall under the high-risk uses listed in Annex III. Such uses therefore cannot legally qualify as high-risk under the AI Act.
3. The area of medicines development is already a highly regulated space in Europe
Medicines development in Europe is a highly regulated space, which ensures the development and approval of safe and effective medicines, including many that employ innovative technologies. These existing EU legal frameworks, together with other regulatory frameworks and policies for medicines, set standards that ensure a high level of public health protection. They facilitate access to medicines and their use while encouraging innovation, and they are sufficiently flexible to provide the right foundation for incorporating AI uses in the development of medicines.
4. Upcoming EMA guidance on the use of AI in the medicines development lifecycle will provide a new layer of AI oversight to complement the existing regulatory and legislative landscape for medicines
EFPIA welcomes the EMA’s proactive, risk-based approach to assessing the use of AI in medicines through its consultation on a reflection paper on AI and its multi-annual AI workplan, which includes plans to draft guidance on the use of AI in the medicine lifecycle in 2024. We believe that this upcoming AI guidance, which factors in the potential risks associated with the use of AI, will, in conjunction with the established, well-functioning legislative and regulatory frameworks for medicines, ensure appropriate governance of AI use in the development of medicines.
5. The ultimate goal for governance of AI should be fit-for-purpose, risk-based guidance for oversight which is calibrated to the regulatory status and context of use
Traditional policy instruments, such as legislation and guidance, may struggle to keep pace with rapid advancements in highly innovative technologies like AI. For this reason, the pharmaceutical industry needs dynamic, flexible, future-proof guidance that takes into account the specifics of intended uses and context and includes appropriate human oversight. Distinctions must be made based on the role the AI plays, the stage of development in which it is used, the impact on the benefit-risk evaluation of a medicine or on associated regulatory decision-making, and the level of human oversight and control over decision-making processes.
The AI policy landscape in Europe is evolving, including the finalisation of the EU AI Act and the work of the EMRN, which is adapting to the increasing use of AI-based drug development tools by developing guidance and provisions for oversight.
EFPIA members look forward to continuing to work with the European Commission, the EMA, the broader EMRN, patient groups and other stakeholders in the healthcare space to unlock the potential of AI while ensuring its adherence to fundamental rights, safety, and ethical principles.
Background
The final text of the AI Act sets out a horizontal framework that will apply to all AI systems and models placed on the EU market, with specific requirements depending on the intended use case of the system or model. It does not address how AI can be applied at the sectoral level. To gain greater clarity on the use and governance of AI across the medicine development lifecycle, EFPIA has engaged with the European Medicines Agency (EMA) in the context of its artificial intelligence workplan.
The AI Act takes a risk-based approach and classifies AI systems into four risk categories – prohibited, high, limited, and minimal risk. The regulation does not apply to AI systems and models, including their output, developed and put into service for the sole purpose of scientific research and development.
References:
1. EMA/HMA Reflection paper on the use of artificial intelligence in the lifecycle of medicines, 19 July 2023; https://www.ema.europa.eu/en/news/reflection-paper-use-artificial-intelligence-lifecycle-medicines
2. EMA/HMA multi-annual AI workplan 2023-2028, 18 December 2023; https://www.ema.europa.eu/en/news/artificial-intelligence-workplan-guide-use-ai-medicines-regulation