Europe: Digital Health and AI Regulation Is Catching Up

Maarten Meulenbelt, Tatjana Sachse, Josefine Sommer, Zina Chatzidimitriadou, and Eva von Mühlenen look at key trends in data sharing, structural embedding of digital tech, cybersecurity, and reimbursement for digital health tech, and examine the current overhaul of the UK system.

Digital health and artificial intelligence (AI) are fast becoming the main drivers transforming the life sciences industry, from biopharmaceutical research and development (R&D) to clinical trial recruitment, diagnostics, supply chain optimization, and pharmacovigilance. Here, we examine the top six trends.

Prepare for Obligatory Data Sharing (EHDS)

For several years, pharmaceutical companies have been using global data sharing and analytics platforms such as Vivli to share clinical data on a voluntary basis. In 2023, the EU will finalize the conditions for obligatory sharing of electronic health data under the proposed European Health Data Space (EHDS) Regulation. Even though the EHDS will take a few years to become applicable and operational, companies are already exploring its risks, in particular to protect intellectual property and trade secrets (e.g., the risk of reverse engineering of AI algorithms). But companies are also seeing an upside and are studying opportunities to obtain access to valuable data from the EHDS. In the first half of 2023, discussions between EU institutions and industry will focus on the EHDS user conditions, set out in a “data permit,” and the deferrals necessary to permit patent filings. Another hot topic in the legislative discussions will be whether EHDS data can be used by reimbursement authorities. Finally, the EU’s growing “digital protectionism” can affect where EHDS data can be stored and transferred.

Embed Digital Technologies in Contracts and Governance Structures

When the internet took off, EU regulators started a long process of adapting laws to the online world, and that process has now moved into an intense “second phase” with a slew of new proposals. Companies have had to restructure their networks of contracts with suppliers, customers, consumers, and insurers to adapt to contractual risks and liabilities. The result can be summarized crudely as “what applies offline applies online.” Now another major adaptation is underway for AI applications and products. When a company places a self-learning product on the market, with components sourced from suppliers, and customers and end users can all affect the product’s output (e.g., by running the model on their own data), how should risks and liabilities be distributed? The EU is putting a heavy finger on the scales through the revised Product Liability Directive (PLD) and the new AI Liability Directive (AILD). Both directives seek to adapt existing civil liability rules to the digital age and extend existing rules on strict liability for harm caused by defective products. Companies will need to follow these directives as they move through the legislative process, review the contracts and general conditions affected by new obligations and liabilities, and prepare to enhance their governance structures for risk signaling and mitigation.

New Authorization Requirements: AI and Cybersecurity

Ongoing product development for the EU market will need to take account of the EU’s rapidly developing, world-first legal framework on AI. The key piece of future legislation, the AI Act, will classify AI applications into different “risk levels.” As with medical devices, the EU has opted to “privatize” compliance assessment: high-risk AI products will need to be assessed by specialized bodies that will themselves need to be designated (e.g., notified bodies designated for the assessment of medical devices can seek additional designation for assessing AI components). Because there is already a capacity crunch among notified bodies assessing medical devices in the EU, companies will need to consider how they will secure AI assessment capacity.

The standards to be met will likely include risk management, bias avoidance, traceability, interpretability, and explainability of output. The development of standards is in full swing, and the new deadline for the final joint report by the EU standardization bodies is January 31, 2025. Companies will want to watch the standard-setting processes taking place in the EU and globally (see the Joint EU-US Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (AI Roadmap) by the EU-US Trade and Technology Council (TTC)). In the meantime, the regulatory obligations on developers may be further blurred by the proposed postponement of the application of the EU Medical Device Regulation to 2027 or 2028, expected to be announced in early 2023. As the medical device and AI frameworks closely overlap, it will take some time before the timelines developers must meet become clear. Given that obligations are imposed on providers and users of AI systems alike, developers should be very clear on the parameters of use of these systems to avoid unintended liability.

As in the U.S., new product requirements will include cybersecurity. The recently published EU Directive on measures for a high common level of cybersecurity across the Union (NIS 2 Directive) is driven by the fact that many medical devices run on legacy systems, making them vulnerable to cyberattacks, and by the increasing interconnection of medical devices through the internet of medical things (IoMT). Operators of essential services, including manufacturers of critical medicinal products and pharmaceutical companies, should start preparing to comply with the security requirements set out in the directive.

Obtaining Reimbursement of Digital Health and AI Technologies

Current reimbursement systems in the EU are geared toward established technologies and are not set up to assess forward-looking or preventive benefits, for example, from improved screening and detection of diseases. Assessing real-world data is challenging, too. While developing their reimbursement strategies, companies will want to follow the rollout of the EU Regulation on Health Technology Assessment (HTA), which will include the development of EU-wide clinical and methodological criteria for the evaluation of digital medical devices for pricing and reimbursement decisions.

UK: a Fundamental Overhaul of the System

The UK’s National Health Service (NHS) is increasingly looking at R&D partnership models with digital health developers. The UK is developing new evidence standards frameworks for digital health and AI and is exploring an early value assessment pathway for digital technologies, which would enable interim reimbursement recommendations while further evidence is generated.

At the same time, the UK is revising its regulatory framework for medical devices, including software and AI as medical devices. During 2023, the government will work on a package of new legislation, expected to be published in 2024. Regulator capacity will remain a key concern: the 12-month postponement of the new UKCA (UK Conformity Assessed) mark regime reflects a lack of preparedness on both the industry’s and the regulator’s side. On AI regulation, the UK has stated its intent to take a light-touch approach, issuing sector-specific guidance where required in order to remain agile.

WHO: the Government Whisperer

Many companies underestimate the force of change brought about by discussions at the WHO, which has been strengthening its role in helping national regulators assess and regulate digital technologies. Most recently, the WHO launched its Global Strategy on Digital Health 2020-2025. This provides guidance to national regulators in their approach to virtual care, remote monitoring, AI, big data analytics, blockchain, smart wearables, and tools for (remote) data capture, exchange, and storage. Many countries use WHO recommendations as a blueprint for new laws. 

Tips

  • Companies need to continue or start assessing the risks and opportunities of obligatory data sharing under the European Health Data Space, both as a beneficiary and a supplier of data.
  • Companies need to assess their governance structures and their contracts with suppliers, customers, and consumers to accommodate new EU rules on product liability and AI. 
  • Companies with AI products need to prepare for additional assessments before products can be launched on the EU market, even as the underlying product and technology standards are still being developed. The capacity of designated private bodies assessing AI risks for the EU market is likely to be tight.
  • Companies need to assess whether they are well placed to benefit from the opportunities offered by the UK’s drive to create a system that is welcoming to new technologies and its light touch on AI. 
  • Companies need to closely watch the WHO as it draws up blueprints for national digital health regulations.
The views expressed in this article are exclusively those of the authors and do not necessarily reflect those of Sidley Austin LLP and its partners. This article has been prepared for informational purposes only and does not constitute legal advice. This information is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking advice from professional advisers.