EU Artificial Intelligence Act and its impact on biomedical and life sciences businesses

16 Jul, 2024
Quentin Golder
The EU Artificial Intelligence Act (the Act) was passed by the European Parliament on 13 March, and unanimously approved by the EU Council on 21 May, writes Quentin Golder, Partner with law firm Birketts LLP.

It introduces a comprehensive set of rules to regulate the use of artificial intelligence (AI) in the EU. It will almost certainly have a significant impact on the use of AI within the healthcare and life sciences sectors, both within Europe and more broadly.

While the Act does not have force outside the EU, it will still have significant extraterritorial effect: it may be enforced (inside the EU) against providers, wherever they are in the world, if they place on the market or put into service any AI system within the EU.

Accordingly, any business that develops or supplies AI systems to which the Act applies, and that wishes to do business within the EU (or to supply others who do), will have to ensure it complies with the Act.

The Act imposes obligations on AI system providers based on the degree of risk involved with their systems. It applies to four categories of AI system, graded according to their potential risk.

Prohibited AI: as the name suggests, these applications are prohibited (subject to very limited exceptions). They include systems that use subliminal techniques or exploit vulnerabilities, as well as systems used for biometric categorisation, social scoring, predictive policing, the building of facial recognition databases or emotional inference.

High risk AI: these are systems in key areas (e.g. healthcare, transportation and education, among others) where comprehensive obligations are placed on providers and deployers, covering governance measures and technical interventions at all stages of the development and deployment process, including CE marking, conformity assessments, provision of technical documentation, human oversight and cybersecurity.

Limited risk AI: these systems are subject to transparency and identification requirements, and synthetic content must be clearly labelled in a way that machines can recognise as being artificially created or altered (a simple illustration of such machine-readable labelling is sketched after this list of categories). Providers are responsible for ensuring such labelling mechanisms are effective.

Minimal risk AI: these are general purpose AI systems, in respect of which the Act focuses on transparency and accountability. Providers need to make technical documentation and summaries of training data available, and adhere to copyright and IP safeguards.
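
The Act does not prescribe a particular labelling technology for synthetic content, and approaches such as provenance metadata and watermarking are still maturing. Purely as an illustration of the kind of machine-readable marker the limited risk category contemplates, the short Python sketch below embeds, and then reads back, an “AI-generated” flag in an image's metadata using the Pillow library; the field names are invented for the example and are not drawn from the Act.

# Illustrative only: one possible way to attach a machine-readable
# "artificially created" marker to an image, using Pillow's PNG text chunks.
# The field names ("ai_generated", "generator") are hypothetical examples,
# not labels mandated by the EU AI Act.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (64, 64), color="white")  # stand-in for a synthetic image

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")
image.save("labelled_output.png", pnginfo=metadata)

# A downstream system can then detect the marker programmatically.
with Image.open("labelled_output.png") as img:
    print(img.text.get("ai_generated"))  # -> "true"

In practice providers are more likely to rely on standardised provenance or watermarking schemes than on ad hoc metadata fields, but the principle is the same: the marker must be detectable by software rather than merely visible to a human reader.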

In terms of the healthcare sector, and with respect to high risk AI, there is a degree of overlap between the Act and the EU Medical Devices Regulation 2017 (MDR) and the EU In Vitro Diagnostic Medical Devices Regulation (IVDR).

The Act requires a conformity assessment by a notified body to confirm that the relevant AI system meets the requirements of the Act, and such an assessment will form part of the conformity assessment procedure under the MDR and the IVDR.

The Act also provides that medical device notified bodies can carry out AI conformity assessments, so long as their AI competence has been assessed under the MDR and IVDR.

However, there are additional requirements for AI systems that are not covered under the MDR and IVDR, including:-

• Data governance and management requirements for training, validation and testing data
• Transparent design requirements to allow deployers to interpret outputs
• Human oversight design obligations
• New record keeping requirements
• Accuracy requirements

One area where the Act does differ from the MDR and IVDR is in relation to the treatment of deployers of AI systems. While the MDR and IVDR impose responsibilities on suppliers within the supply chain, the Act additionally imposes obligations on the end users who operate the systems concerned (e.g. in the case of healthcare, hospitals, and clinicians). Those obligations include:-

• Establishing technical and organisational measures to make sure AI systems are used in accordance with operating instructions
• Monitoring and surveillance obligations
• Keeping system logs while they are under the deployer's control (a hypothetical sketch of such record keeping follows this list)
• Assigning trained, competent persons to maintain human oversight of the AI systems
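
The Act leaves it to deployers to decide how such logs are captured and retained in practice. Purely as a hypothetical sketch, assuming a simple append-only audit file (the field names and file name are illustrative, not prescribed by the Act), the snippet below records one structured, timestamped entry per AI-assisted decision:

# Hypothetical sketch of deployer-side log keeping: append one structured,
# timestamped record per AI-assisted decision to an append-only JSON Lines file.
# Field names and retention approach are illustrative, not prescribed by the Act.
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, system_id: str, input_summary: str,
                    output_summary: str, reviewed_by: str) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,          # which AI system produced the output
        "input_summary": input_summary,  # what was fed in (summarised, no raw patient data)
        "output_summary": output_summary,
        "reviewed_by": reviewed_by,      # the human exercising oversight
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with made-up values
log_ai_decision("ai_audit_log.jsonl", "triage-model-v2",
                "chest X-ray, patient ref 0042", "flagged for radiologist review",
                "Dr. Example")

In a hospital setting, the same idea would more realistically sit inside existing clinical audit and information governance systems rather than a flat file, but the obligation is the same: a retrievable record of what the system did while under the deployer's control.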

Many AI systems used in MedTech and the life sciences sector will no doubt not be considered medical devices. For such systems, which will generally be limited risk or minimal risk, the Act imposes relatively lightweight obligations on providers.

There is also an exemption for AI systems specifically developed and put into service for the sole purpose of scientific research and development, although it is not yet clear how widely the “scientific research” exemption will be interpreted and, in particular, whether it will extend beyond academic research to commercial research.

There is significant incentive for all those who might potentially fall within the ambit of the Act to ensure compliance. Fines imposed for use of prohibited AI applications can be up to 7 per cent of global turnover or €35 million, whichever is the higher.

For other breaches the fines can be up to 3 per cent of global turnover or €15m, while the supply of incorrect information can give rise to fines of €7.5m or 1.5 per cent of global turnover.
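
To make the structure of those caps concrete, the short sketch below expresses the “percentage of worldwide turnover or fixed amount, whichever is the higher” mechanic in code. The turnover figures and companies are hypothetical, and the sketch ignores nuances such as the more favourable treatment the Act gives to SMEs and start-ups.

# Minimal sketch of the fine ceilings described above: for an undertaking,
# the maximum fine is the higher of a fixed amount and a percentage of
# worldwide annual turnover. Tiers and figures are simplified for illustration.
def max_fine_eur(turnover_eur: float, pct: float, floor_eur: float) -> float:
    return max(pct * turnover_eur, floor_eur)

# Hypothetical company with €1bn worldwide annual turnover
print(max_fine_eur(1_000_000_000, 0.07, 35_000_000))  # prohibited practices tier -> 70,000,000
print(max_fine_eur(1_000_000_000, 0.03, 15_000_000))  # other breaches tier -> 30,000,000

# Hypothetical smaller company with €100m turnover: the fixed floor dominates
print(max_fine_eur(100_000_000, 0.07, 35_000_000))    # -> 35,000,000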

The Act is subject to a phased implementation and transition period before it becomes fully enforceable. For example, the obligations relating to high-risk AI systems embedded in products already covered by EU product legislation (such as medical devices) will only apply 36 months after the Act comes into force.

While that does give businesses time to prepare for the implementation, it is important for those who currently incorporate, or plan to incorporate, AI into their products or services to start considering the implications of the Act immediately if they have not already done so.

Existing products and services, and those in development, which may be destined for use in the EU, are going to have to meet the requirements of the Act in the not-too-distant future.