In a digital era where technology is rapidly evolving, ensuring responsible and ethical use of artificial intelligence (AI) has become paramount. The recent approval of the AI Act in the European Union (EU) marks a significant milestone in regulating AI systems, impacting various sectors, including healthcare. As we delve into the intricacies of this legislation, it’s essential to understand its implications, particularly in the healthcare context, where AI holds immense promise and potential.[1]
The AI Act: A Brief Overview
The AI Act, recently approved by the EU on 13 March 2024, aims to regulate AI systems designed, developed, and deployed within the Union to ensure they meet high safety, reliability, and ethical conduct standards. Adopting a risk-based approach, the AI Act delineates various categories of AI systems based on their potential risks, with stricter requirements for those considered high-risk. This comprehensive framework addresses crucial aspects such as transparency, accountability, data governance, and human oversight, reflecting a commitment to harnessing AI for societal benefit while mitigating potential harms.[2]
Healthcare in the AI Era
Healthcare benefits significantly from AI advancements, which are revolutionizing diagnostics, personalized treatment plans, patient care, and administrative processes.[3] From predictive analytics to robotic surgery and drug discovery, AI applications are reshaping the healthcare landscape, promising improved outcomes and efficiency gains. However, the deployment of AI in healthcare also raises complex ethical, legal, and regulatory challenges, necessitating clear guidelines and oversight mechanisms.
Intersecting Regulations: AI Act and Medical Device Regulation (MDR)
The intersection between the AI Act and the Medical Device Regulation (MDR) has been widely scrutinized in academic and regulatory circles. Both frameworks govern technologies used in healthcare, albeit with distinct focuses and requirements. While the MDR primarily addresses the safety and performance of medical devices, the AI Act extends its scope to broader AI systems, including those integrated into medical devices. In a couple of our publications, Professor Goffin and I examine in detail how these two regulations influence one another.[4] However, given the recent approval of the AI Regulation, the aspects pertaining to its intertwining with the MDR have not yet been the object of institutional attention.
Generative AI Regulation: A New Frontier
One of the most contentious aspects of the AI Act is the regulation of generative AI, or general-purpose AI models, which encompass AI systems capable of creating new content such as images, text, or even entire works of art.[5] The AI Act defines general-purpose AI models as those ‘trained with a large amount of data using self-supervision at scale’ that display ‘significant generality’, are ‘capable of competently performing a wide range of distinct tasks’, and ‘can be integrated into a variety of downstream systems or applications’. The AI Act further defines general-purpose AI systems as systems based on a general-purpose AI model ‘which can serve a variety of purposes, both for direct use and for integration in other AI systems’.[6] To these AI systems, the AI Act applies transparency requirements and copyright protection measures.

Stricter requirements apply to generative AI identified as posing systemic risks. AI systems with ‘high-impact capabilities’ could have significant adverse effects on public health, safety, security, fundamental rights, or society. Providers of general-purpose AI models must therefore inform the European Commission if their model is trained using a cumulative amount of computation exceeding 10^25 FLOPs (floating-point operations). When a model meets this threshold, it is presumed to be a GPAI model posing systemic risks. In addition to the transparency and copyright requirements, providers of generative AI presenting systemic risks must continuously assess and mitigate the risks their models pose and ensure cybersecurity protection. This requires, inter alia, keeping track of, documenting, and reporting serious incidents (e.g., violations of fundamental rights) and implementing corrective measures.
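The notification threshold described above amounts to a simple comparison against cumulative training compute. As a purely illustrative sketch (the function and constant names below are hypothetical, not part of the Act):

```python
# Illustrative sketch of the AI Act's systemic-risk presumption for
# general-purpose AI models: a model is presumed to pose systemic risk
# when its cumulative training compute exceeds 10^25 floating-point
# operations. Names here are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model's training compute exceeds the
    threshold, triggering notification to the European Commission
    and the presumption of systemic risk."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS


# A model trained with ~5 x 10^25 FLOPs crosses the threshold;
# one trained with 10^24 FLOPs does not.
print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```

Note that the figure refers to total floating-point operations accumulated over training, not operations per second, so the comparison concerns cumulative compute rather than hardware speed.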
Generative AI in Healthcare
In healthcare, generative AI is already demonstrating its potential in virtually all areas of care.[7] Consider, for example, a scenario where generative AI is used to produce synthetic medical images for training diagnostic algorithms. While this approach could augment data availability and diversity, ensuring the authenticity and reliability of the generated images becomes paramount. The promises of generative AI in healthcare are accompanied by significant risks associated with its use. In this regard, the WHO, in its document “Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models,” has identified several major areas of concern, including inaccurate, incomplete, biased, or false responses; data quality and data bias; automation bias; skills degradation; informed consent and privacy issues; and degradation of the doctor-patient relationship.
Compliance with regulatory and legal requirements
Despite the enactment of new laws specifically targeting the regulation of AI, it’s essential to recognize that existing legal frameworks, notably data protection laws, product regulation, and international human rights agreements, apply to the development, provision, and deployment of Generative AI. For example, as already explored by the EU Parliament, the GDPR has significant implications for Generative AI concerning data protection and automated decision-making.[8]
In healthcare, understanding how the regulation of Generative AI provided by the AI Act will interact with other existing or forthcoming regulations, such as the MDR and the AI liability directive, is of particular interest.
In particular, liability cases might arise from the use of (generative) AI in healthcare despite the safety requirements enacted by the AI Act. To resolve potential medical liability cases, it is pivotal to understand where the AI Liability Directive will stand with regard to generative AI. While the draft directive already provides some answers for “traditional AI”, open questions remain about the convergence of generative AI regulation and the AI Liability Directive, exposing healthcare professionals who use general-purpose AI to potential liability cases.[9]
At the same time, the intersection between the AI Act and the MDR has been explored, highlighting some legal gaps in the scope of the AI regulation and in its requirements. This matching exercise has not yet been done with the “new” rules that the AI Act introduces to regulate generative AI. It is therefore time to explore the role of the MDR in classifying and applying requirements to general-purpose AI. In my view, one central question remains to be tackled: will the MDR allow us to classify most general-purpose AI used in healthcare as medical devices, thereby subjecting it to the full spectrum of MDR requirements while, at the same time, ensuring clinical safety through a thorough evaluation under those requirements?
Sofia Palmieri, 9 April 2024
[1] The European Parliament, ‘Artificial Intelligence in Healthcare: Applications, Risks, and Ethical and Societal Impacts’ (2022).
[2] Michael Veale and Frederik Zuiderveen Borgesius, ‘Demystifying the Draft EU Artificial Intelligence Act — Analysing the Good, the Bad, and the Unclear Elements of the Proposed Approach’ (2021) 22 Computer Law Review International 97 <https://www.degruyter.com/document/doi/10.9785/cri-2021-220402/html> accessed 8 November 2021.
[3] The European Parliament (n 1).
[4] Sofia Palmieri and Tom Goffin, ‘A Blanket That Leaves the Feet Cold: Exploring the AI Act Safety Framework for Medical AI ’ (2023) 30 European Journal of Health Law 406; Sofia Palmieri, Paulien Walraet and Tom Goffin, ‘Inevitable Influences: AI-Based Medical Devices at the Intersection of Medical Devices Regulation and the Proposal for AI Regulation’ (2021) 28 European Journal of Health Law 341.
[5] Robert Pearl, ‘Will Generative AI Wreck Or Rekindle The Doctor-Patient Relationship?’ (Forbes, 8 May 2023).
[6] Tambiana Madiega, ‘Briefing AI Act’ (2024).
[7] World Health Organization, ‘Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models’ (2024).
[8] Francesca Lagioia and Giovanni Sartor, ‘The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence’ (2020).
[9] Claudio Novelli and others, ‘Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity’ [2024] SSRN Electronic Journal.