There is no shortage of startups and established companies seeking to innovate and disrupt in the AI-enabled digital health space. From AI that optimizes the functionality of electronic health records to AI that identifies patients at risk for major diseases, health tech innovators and disruptors have the potential to transform the patient experience. At a corporate level, AI has the potential to revolutionize healthcare companies' revenue cycle management and claims submission processes. On the patient side, AI is proving valuable in areas ranging from genomic sequencing to the analysis of radiology scans. AI also has the potential to reduce physician burnout and error.
As a result of continued choppy credit markets, investors focused on AI in healthcare have been pursuing alternatives to a straight equity purchase. Healthcare platforms generally have a continued need to raise and refinance capital to meet operational needs. Over the past year, financing approaches have increasingly involved private credit, with the investor effectively serving as a lender, for example through shorter-term loans and debt transactions pursued in preference to equity. Investors utilizing debt instead of equity are also increasingly requesting warrants from sellers/borrowers to ensure they can meaningfully participate in potential upside. The value associated with such warrants can be difficult to determine, and the use of warrants therefore requires careful structuring with consideration of healthcare fraud and abuse laws.
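To illustrate why warrant valuation is difficult, the sketch below applies the Black-Scholes call formula, one common starting point for valuing a warrant. All inputs are invented for illustration, and the model deliberately ignores the dilution, vesting, and illiquidity adjustments (not to mention the fraud and abuse considerations noted above) that real deals require.

```python
# Hypothetical sketch: valuing a warrant with the Black-Scholes call formula.
# All inputs below are illustrative assumptions, not terms from any actual deal.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def warrant_value(spot: float, strike: float, rate: float, vol: float, years: float) -> float:
    """Black-Scholes value of a European call, a common starting point for
    warrant valuation (ignores dilution, vesting, and illiquidity)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# Illustrative inputs: $10M implied equity value per unit, 20% premium strike,
# 5% risk-free rate, 60% volatility (plausible for early-stage health tech), 5 years.
print(f"${warrant_value(10.0, 12.0, 0.05, 0.60, 5.0):.2f}M per unit")
```

Even in this simplified form, the indicated value is highly sensitive to the volatility assumption, which is one reason warrant valuations for early-stage health tech platforms are so often contested.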
Equity deals continue to close for the “best” companies, but investors must work hard to determine which companies qualify as the “best.” Many new market entrants, including in healthcare, will say that their business model is AI-based, but not all of them actually place the technology at the center of that model or own the AI platform they are using.
Diligence of AI algorithms is also a complicated and in-depth process, and it can be difficult to verify seller assertions. As such, where the financial means are present, there is a greater willingness to invest in hardware-based AI technology, which is easier to diligence than software-based technology. Strong management teams often serve as a “proxy” here, as strong executives tend to choose to work at companies with stronger products and platforms.
In addition, many underwriters of representations and warranties insurance are still determining their path forward with regard to AI coverage. Some underwriters say outright that they are not prepared to cover AI, although some larger players in the underwriting market have established criteria for covering transactions and platforms that involve AI components. As deal flow increases over the next year, underwriters will be keen to underwrite deals involving AI components, and the criteria for doing so will be fleshed out in more instructive detail.
Potential investors and acquirers do recognize that AI-enabled digital health (both hardware- and software-related investments) is a market with a strong first-mover advantage. Investors will have to balance that advantage against the risk of moving forward before there is clarity on how the AI healthcare space will be regulated.
In the U.S., it has become clear that innovation in AI-enabled digital health diagnostic and therapeutic technologies requires sophisticated planning at the earliest stages of development. Such planning should not focus merely on product development but should also consider, in tandem, (1) FDA approval, clearance, or authorization strategy and (2) payor coverage and reimbursement strategy. The most complex digital health products currently being developed include an AI component, often in a combined hardware/software configuration. These products are complex, in part, because many involve the transmission of real-time patient information to the patient’s care provider through associated software, such as the transmission of range-of-motion data while a patient wearing a hardware device plays a virtual reality physical therapy exercise game.
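To make that data flow concrete, the sketch below shows the kind of structured reading such a device might stream to a provider-facing application. Every field name and value here is hypothetical and not drawn from any actual product.

```python
# Purely illustrative: the kind of structured reading a VR physical therapy
# device might stream to the care provider's software. Every field name and
# value below is hypothetical.
import json
from datetime import datetime, timezone

reading = {
    "patient_ref": "anon-4821",              # pseudonymized patient identifier
    "session_id": "pt-session-0042",
    "captured_at": datetime.now(timezone.utc).isoformat(),
    "exercise": "shoulder_flexion_game",     # the VR exercise being played
    "range_of_motion_deg": 112.5,            # measured by the worn hardware
    "device_model": "vr-pt-headset-v2",
}

# Serialized payload as it might be sent to the provider-facing application.
print(json.dumps(reading, indent=2))
```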
Depending on the claims made for, and the associated risks of, these types of complex digital health products, in the U.S. they will likely require authorization through the Food and Drug Administration (FDA) de novo pathway, or clearance through the 510(k) pathway if a predicate device exists, as well as a coverage and reimbursement pathway. They can therefore be among the trickiest products to bring successfully to market. The products best positioned to accelerate through the U.S. payor landscape and optimize value for patients are those whose developers take the time and resources to obtain FDA approval, authorization, or clearance and then seek coverage under an existing benefit category, such as a physician service, medical device, or durable medical equipment (DME).
In the medical device category, we have recently seen various digital diagnostics and digital therapeutics being approved, authorized, or cleared. A subset of these products has also received Breakthrough Device designation from FDA, and a few have simultaneously pursued a coverage and reimbursement pathway. For example, earlier this year, Imvaria received FDA authorization for an AI-enabled diagnostic that analyzes computed tomography scans to identify patterns indicating that a patient may have idiopathic pulmonary fibrosis. Imvaria is unique among AI-enabled breakthrough technologies in that it has obtained both FDA authorization and Current Procedural Terminology (CPT) billing codes from the American Medical Association, paving the way for Medicare and commercial payor reimbursement once the product is more widely available on the market.
For technologies that involve both hardware and software such as a mobile app, the DME Medicare benefit category can be utilized to obtain payor coverage and reimbursement. Digital therapeutics recently gained a significant win when a virtual reality (VR) digital therapeutics company, AppliedVR, received a billing code under the DME category from the U.S. Centers for Medicare and Medicaid Services (CMS). The new code, E1905, is described as “[v]irtual reality cognitive behavioral therapy device (cbt), including pre-programmed therapy software.” This code paves the way for widespread adoption by other government and commercial payors, who often look to CMS when setting their own policies for patient access. AppliedVR has reported that the Department of Veterans Affairs is an early adopter and that it is piloting coverage with a number of national and regional health plans.
Payor adoption has been slow to date but continues to grow, with the best coverage and reimbursement potential for products with a harmonized FDA and CMS strategy. For example, for products approved, authorized, or cleared by the FDA as medical devices, health insurance provider Highmark has established a prescription digital therapeutics formulary. Under this formulary, Highmark provides coverage and reimbursement for “devices [that] are described as Software as a Medical Device (SaMD) by [the FDA]. Prescribed Digital Therapeutics are intended to be used as a part or whole of a treatment plan for appropriate health diagnoses, that fall within the scope of approved use of the digital health software.” Aetna, by contrast, has so far created a narrower policy for prescription digital therapeutics, covering only “FDA approved or cleared mobile apps for contraception based on fertility awareness (e.g., Natural Cycles) [as] medically necessary per federal preventive care mandates, when prescribed by a treating provider.” Another example is Massachusetts Medicaid, which now covers a certain on-demand cognitive behavioral therapy application that can be downloaded to the patient’s smartphone, without prior authorization or copayment, through a designated pharmacy.
To date, with few exceptions, digital health products have not involved unique billing codes. General billing codes that are not specific to a product often mean that reimbursement decisions are left to the regional Medicare Administrative Contractors on a case-by-case basis, creating uncertainty for investors.
Widespread adoption of complex digital diagnostics and therapeutics would likely accelerate if Congress passed legislation creating a Medicare “digital health” benefit category. Relatedly, in March 2023, a group of U.S. House Representatives introduced the Ensuring Patient Access to Critical Breakthrough Products Act of 2023. Currently under committee review, this bipartisan legislation would, if passed, require Medicare to cover devices designated as breakthrough devices by the FDA for four years following regulatory approval, authorization, or clearance, and to determine a permanent coverage policy for each device during that four-year period. The legislation would also require payment codes to be assigned within three months of device approval.
Outside of congressional action, we have recently seen CMS showing interest in establishing a coverage pathway for certain digital health and AI-enabled technologies. For example, CMS has been working with industry stakeholders to inform coverage process improvements, including its own development of an alternative coverage pathway to provide transitional coverage for emerging technologies (TCET). In June 2023, CMS issued a procedural notice on TCET in which it said that manufacturers of FDA-designated breakthrough devices that fall within a Medicare benefit category (and are not already subject to an existing Medicare national coverage determination) may self-nominate to participate in the TCET pathway on a voluntary basis. On that basis, CMS added, it “may conduct an early evidence review before FDA decides on marketing authorization for the device and discuss with the manufacturer the best available coverage pathways.”
For the time being, since digital health technologies still do not fit squarely into current regulatory structures, innovators in this space will need to continue to strategize early on regarding product development and marketing to secure coverage and reimbursement. Innovators that harmonize their FDA strategy with their coverage and reimbursement strategy will be the ones to watch.
New global standards for AI and digital health technologies are further boosting market confidence in AI-enabled digital health companies, following the adoption of trust-building frameworks in the EU and U.S. In particular, the passage of the EU AI Act in March 2024 and the proliferation of AI assurance labs in the U.S. mark the beginning of a critical trend supporting the long-term growth of markets for AI-enabled digital health technology. Beyond operational validation, these verification and assessment practices will increasingly play a critical role in building and maintaining trust among consumers, stakeholders, and regulatory bodies.
The EU and U.S. frameworks currently are underpinned by common ethical and technical principles and requirements, creating an environment that is ripe for investment and co-development opportunities in life sciences AI companies. These trust-building schemes can further support market confidence for AI-enabled digital health technologies.
The EU AI Act relies on semi-private Notified Bodies to carry out conformity assessments of certain “high-risk” AI-enabled digital health technologies, a category that includes AI used in medical devices. Under the EU AI Act, developers of technologies incorporating these high-risk AI systems are required to contract with a Notified Body, which in turn evaluates whether the AI systems conform with the applicable EU AI Act requirements. It will take some time for Notified Bodies to build up this expertise, but several regulators have already announced that they are dedicating resources towards doing so.
The aim of this conformity assessment is to create trust in AI-enabled digital health products placed on the EU market that may pose the greatest risk to users and patients in the event of bias or error. We expect government oversight of these categories of AI-enabled products to boost confidence and trust, allowing for their quicker adoption in standards of care.
For AI-enabled technologies that fall outside the EU AI Act’s high-risk classification, several private organizations have recently emerged to offer tools and certifications for AI systems in the EU. These services allow AI system developers, including those focused on digital health, to show that their systems meet high standards of transparency, reliability, and ethical deployment. Examples of private organizations specializing in AI-based digital health technologies include entities that certify compliance with ISO/IEC 42001, the international standard on AI management systems, which specifies requirements for establishing, implementing, maintaining, and continually improving an AI management system within organizations.
In the U.S., the Food and Drug Administration (FDA) already has authorized more than 880 AI/ML-enabled medical devices, but the regulation of certain types of AI, such as generative AI, is more challenging.
Moreover, comprehensive U.S. legislation addressing AI-enabled technologies has yet to be passed. In the meantime, third parties around the U.S. have attempted to address some of the challenges of AI regulation. This private market trend may serve as the basis for future legislation. For example, in December 2023, members of the Coalition for Health AI (CHAI) published an article proposing a public-private partnership with the potential to support a nationwide network of AI assurance labs.
The CHAI article argued for the need to ensure that health AI is “fair, appropriate, valid, effective, and safe,” and said that assurance labs would have a key role to play in achieving these goals. The article envisages that assurance labs could apply standards and validation procedures “to produce reports on model performance that can be widely shared.” It also suggested that, in the future, assurance labs could be on the front line of evaluating health AI models, reporting such evaluations through a publicly available registry, promoting regulatory guidance for such evaluations, and monitoring the ongoing performance of AI models.
The CHAI article also noted that “[a]ssurance labs could serve as a shared resource” for the validation of health AI models in order to accelerate “the pace of development and innovation, responsible and safe AI deployment, and successful market adoption” of the health AI technologies that are evaluated.
In the future, AI assurance labs could provide an important credentialing function, along with additional resources and expertise for the review of code and algorithms, which could reduce the resources FDA needs to review AI-related applications. Speaking to CHAI in March of this year, FDA Commissioner Robert Califf expressed support for the development of an assurance lab network as one part of a national strategy, stating: “[a]s AI continues to advance, and as more data become available and algorithms more sophisticated, the FDA’s approach to continuing to enable AI innovation includes the development and execution of a strategy with multiple components focused on building out infrastructure, methods, and tooling to identify safety operating parameters, standards, best practices, risk-based frameworks, and operational tooling for AI lifecycle management, including safety monitoring and management. There are a number of ways we’re working to achieve this. They include consideration of the creation of an assurance lab network to enable AI lifecycle management and governance model.”
With conformity assessments, certifications, and AI assurance labs becoming a consistent part of the conversation in both the EU and the U.S., we anticipate that they will provide valuable support for the adoption of health AI tools and, in turn, also accelerate innovation.
Generative AI (GenAI) has immense potential to transform drug discovery and development by streamlining both scientific functions and administrative processes. GenAI builds on technologies such as machine learning, neural networks, and natural language processing, but is distinct in its ability to create new, previously unseen outputs rather than simply analyze existing data. These potentially widespread use cases raise important regulatory and business considerations, including the need for companies to develop company-wide AI governance policies.
In drug and biologic discovery and development, GenAI can accelerate the identification of promising drug/biologic candidates and reduce the time and cost of preclinical testing. For example, GenAI can generate molecular structures that can be leveraged to simulate how drugs and biologics interact with various biological targets, providing insights into their mechanisms of action. This capability is important for identifying potential adverse effects early in the development process, which can improve the safety profile of drugs and biologics. Similarly, companies are exploring the use of AI for post-market surveillance and adverse event reporting for drugs, biologics, and medical devices.
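As a concrete, heavily simplified illustration of this workflow, the sketch below (assuming the open-source RDKit cheminformatics library) screens SMILES strings, as a generative model might emit them, for chemical validity and a crude drug-likeness filter before any costlier simulation step would run. The candidate strings and thresholds are illustrative only.

```python
# Illustrative sketch (assumes the open-source RDKit library): screening
# SMILES strings proposed by a generative model for chemical validity and
# simple drug-likeness before any costlier simulation step.
from rdkit import Chem
from rdkit.Chem import Descriptors

# Hypothetical candidates as a generative model might emit them.
generated_smiles = [
    "CCO",                          # ethanol
    "c1ccccc1C(=O)O",               # benzoic acid
    "not-a-molecule",               # an invalid string the model might produce
    "CC(C)Cc1ccc(cc1)C(C)C(=O)O",   # ibuprofen
]

for smi in generated_smiles:
    mol = Chem.MolFromSmiles(smi)   # returns None for invalid SMILES
    if mol is None:
        print(f"rejected (invalid): {smi}")
        continue
    mw, logp = Descriptors.MolWt(mol), Descriptors.MolLogP(mol)
    # Crude Lipinski-style filter; real pipelines apply far richer criteria.
    verdict = "keep" if mw < 500 and logp < 5 else "flag"
    print(f"{verdict}: {smi} (MW={mw:.1f}, logP={logp:.2f})")
```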
Over the past year, GenAI has continued to demonstrate its ability to optimize clinical trials. For example, AI-driven platforms have been used to refine patient recruitment by analyzing vast datasets to identify individuals who best match the specific criteria of a study. This optimization may also reduce the likelihood of participant dropout and improve the relevance of the clinical data collected. Additionally, AI has been instrumental in designing adaptive trial protocols that can dynamically adjust based on interim results, supporting faster and more decisive outcomes.
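The structured-criteria core of such recruitment matching can be reduced to a few lines. The sketch below (assuming pandas) filters a toy patient table against invented inclusion criteria; production systems layer natural language processing over unstructured records and far richer criteria on top of this basic step.

```python
# Minimal sketch of criteria-based patient matching using pandas (assumed
# available); field names and thresholds are invented for illustration.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age":        [54, 67, 45, 72],
    "hba1c":      [7.9, 6.2, 8.4, 7.1],
    "on_insulin": [False, True, False, False],
})

# Example inclusion criteria for a hypothetical type 2 diabetes trial:
# adults aged 50-70, HbA1c >= 7.0%, not currently on insulin.
eligible = patients[
    patients["age"].between(50, 70)
    & (patients["hba1c"] >= 7.0)
    & ~patients["on_insulin"]
]
print(eligible["patient_id"].tolist())  # -> [101]
```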
In personalized medicine, AI has been used to tailor treatments by analyzing genetic information, thereby increasing the likelihood of successful treatment. It has also been deployed to support genomic research by deciphering complex genetic data and facilitating advances in cell and gene therapies and genetic engineering. GenAI contributes specifically by proposing new gene-editing constructs learned from extensive biological data, refining the accuracy and speed with which genetic disorders can be targeted. This enables more precise modifications of DNA sequences associated with diseases, paving the way for more effective, customized therapies.
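By way of a toy illustration, the sketch below applies two simplified heuristics (a GC-content window and a homopolymer check) to candidate CRISPR guide sequences of the kind a generative model might propose. These heuristics stand in for the far richer on-target and off-target models used in practice; the sequences and thresholds are invented.

```python
# Hedged illustration: a toy filter over candidate CRISPR guide sequences.
# The heuristics below are simplified stand-ins for the on/off-target
# models used in real guide design; all candidates are invented.
def passes_basic_filters(guide: str) -> bool:
    """20-nt guide: GC fraction between 40-70%, no run of 4+ identical bases."""
    if len(guide) != 20 or set(guide) - set("ACGT"):
        return False
    gc = (guide.count("G") + guide.count("C")) / len(guide)
    has_homopolymer = any(base * 4 in guide for base in "ACGT")
    return 0.40 <= gc <= 0.70 and not has_homopolymer

candidates = [
    "GACGTTAGCCGTAACGGTCA",  # passes both checks
    "AAAAGGGGCCCCTTTTACGT",  # rejected: homopolymer runs
    "GCGCGCGCGCGCGCGCGCGC",  # rejected: GC content too high
]
print([g for g in candidates if passes_basic_filters(g)])
```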
In addition, companies are increasingly exploring the use of GenAI in manufacturing. Potential use cases include helping to maintain compliance and quality standards by predicting equipment failures, optimizing production processes, and ensuring batch quality through data-driven insights. For example, AI systems can analyze historical and real-time operational data to predict when machines are likely to fail or deviate from standard parameters, allowing for preemptive maintenance. These capabilities can help to ensure consistent batch quality and adherence to rigorous regulatory standards, significantly reducing the risk of costly production downtime and quality issues. However, these use cases raise questions for companies regarding the reliability and validation of the AI systems, the training and understanding of the personnel who will use them, and whether AI creates good manufacturing practice (GMP) records that may need to be retained.
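One common pattern behind such predictions is anomaly detection over sensor data. The sketch below (assuming scikit-learn and NumPy) trains an isolation forest on simulated normal-operation readings and flags a drifting live reading. The sensor features and values are invented for illustration, and a validated GMP deployment would require far more than this.

```python
# Sketch of anomaly-based failure prediction on manufacturing sensor data,
# assuming scikit-learn and NumPy; the sensor features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)
# Simulated history: [temperature_C, vibration_mm_s] under normal operation.
normal_ops = rng.normal(loc=[65.0, 2.0], scale=[1.5, 0.3], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_ops)

# Live readings: the second one drifts well outside the learned envelope.
live = np.array([[65.4, 2.1], [78.0, 5.6]])
flags = model.predict(live)  # +1 = normal, -1 = anomalous
for reading, flag in zip(live, flags):
    status = "schedule maintenance" if flag == -1 else "ok"
    print(reading, status)
```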
The use of AI also raises legal complexities around the data collected for and used by AI applications. In the coming year, data acquisition issues are likely to present a significant hurdle for life sciences companies seeking to deploy AI. The legal complexities around AI in life sciences often pertain to the types of data collected, such as clinical trial data, patient health records, and genetic information. These data types are highly sensitive and regulated under laws such as HIPAA in the U.S. and the GDPR in Europe. The acquisition, use, and retention of such data must comply with stringent privacy and security regulations, raising challenges in ensuring that AI applications adhere to these legal standards.
In the regulatory field, the European Medicines Agency (EMA) is currently paving the way for the creation of specific regulatory guidelines on the incorporation of AI throughout the drug and biologic lifecycle, including aspects of manufacturing. This guidance is part of a broader strategy to integrate AI into drug development and manufacturing in a manner that upholds the EU's rigorous regulatory standards. EMA is seeking public input on a draft reflection paper expected to be finalized in the second half of 2024. The draft emphasizes the necessity for AI model development and performance evaluations to be governed by quality risk management principles, while also indicating a forthcoming revision of existing GMPs. It also calls for adherence to the International Council for Harmonisation (ICH) standards. Additionally, the paper highlights the obligation of marketing authorization holders to ensure that their use of AI algorithms, models, and datasets complies with Good Practice (GxP) standards within the EU regulatory framework.
Similarly, in the United States, the Food and Drug Administration (FDA) has sought feedback as it considers the use of AI in drug and biologic development and manufacturing, and how such uses may fit under its regulatory paradigm. See, e.g., FDA, Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products: Discussion Paper and Request for Feedback (May 2023), https://www.fda.gov/media/167973/download; FDA, Artificial Intelligence in Drug Manufacturing: Discussion Paper (Feb. 2023), https://www.fda.gov/media/165743/download. FDA has noted that it has already seen drug applications in which AI was used to support drug and biologic development and manufacturing processes, and it expects the number of such applications only to increase. For its part, FDA’s Center for Devices and Radiological Health (CDRH) has already authorized more than 880 AI/ML-enabled medical devices and has issued a suite of guidance documents to help sponsors determine whether and how FDA regulates a particular software application.
Furthermore, antitrust authorities, in particular in the EU, the UK, and the United States, are currently assessing the risks that AI may present and how best to regulate its use. Potential market monopolization is a key risk under evaluation. Authorities are also concerned about the risk of collusion if AI systems were to develop anti-competitive practices, such as price-fixing or market allocation, independently and without explicit human direction.
As regulatory bodies start to adjust their policies and guidelines to accommodate the integration and oversight of AI, AI’s role as a fundamental element of the field generally, and of life sciences companies specifically, will be further cemented. And as AI technologies continue to prove their value in improving efficiency, accuracy, and outcomes in drug, biologic, and medical device discovery, development, and manufacturing, they are becoming more than just innovative tools used by pioneering companies. Increasingly, AI is seen as essential for staying competitive in the life sciences sector, helping companies to process and analyze large datasets, accelerate development timelines, and create more effective and personalized therapies.
FDA Commissioner Robert Califf recently stated: “The more information [AI] has, either the better it gets, or the worse it gets.” Data are now among the most valuable raw materials for developers of digital tools, particularly when combined with the vast promise of AI. Digital health companies need to consider a data strategy early on and make it a core part of concept development. Companies that are considering the use of AI need to ask, from the outset, what data they will need and on what terms they can lawfully access, use, and retain those data.
It is not new or surprising that data are of particular importance to the digital health industry. The industry is data-driven: from patients and clinicians to post-market surveillance and marketing strategies, huge amounts of data are generated every second. Approximately one third of all data produced each day is health-related, yet it has been reported that around 97% of the world’s healthcare data go unused. What is new is the urgency of using data appropriately, given the rapid adoption of AI.
Increasingly, there are partnerships between medical device companies and hospitals to develop and train AI tools. For example, Paige AI, an innovative company that received FDA authorization for an AI pathology application, “Paige Prostate,” partnered with Memorial Sloan Kettering to obtain access to a deep archive of millions of tissue slides. Companies that are able to partner with leading medical institutions will have an advantage in the development of AI tools because they will be able to access large numbers of representative data sets.
In addition to considering external data sources and strategies, companies need to consider internal frameworks for data development. Complexities are generated by the sheer number of data flows and actors often involved in the development and deployment of a digital health product.
For example, a vendor engaged to build medical apps leveraging AI models may struggle to accurately identify all of its data sources, particularly where it has not implemented appropriate data mapping processes. This, in turn, is likely to have a knock-on effect on a healthcare company’s ability to attest that its data are of good quality and representative of the relevant patient/disease population.
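A data mapping process need not be elaborate to be useful. The sketch below shows a minimal, hypothetical data-source register in Python; every field name and example value is invented, but even this much structure lets a company answer the basic diligence question of which populations each data type actually represents.

```python
# Minimal sketch of a data-source register, the kind of data-mapping record
# that lets a vendor say where its training data came from. All field names
# and values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    data_types: list[str]      # e.g. ["imaging", "EHR extracts"]
    legal_basis: str           # e.g. "consent", "de-identified under HIPAA"
    population: str            # cohort the data actually represents
    retention_until: str       # ISO date; drives deletion obligations

@dataclass
class DataMap:
    sources: list[DataSource] = field(default_factory=list)

    def coverage_report(self) -> dict[str, list[str]]:
        """Which populations each data type is drawn from -- among the first
        questions a diligence or quality review will ask."""
        report: dict[str, list[str]] = {}
        for source in self.sources:
            for data_type in source.data_types:
                report.setdefault(data_type, []).append(source.population)
        return report

dm = DataMap([DataSource("Hospital A archive", ["imaging"],
                         "de-identified under HIPAA",
                         "US adults, single urban center", "2030-01-01")])
print(dm.coverage_report())  # {'imaging': ['US adults, single urban center']}
```

In practice, such a register would also link to consent records and deletion workflows, but even a thin version makes the quality and representativeness attestations discussed above tractable.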
Managing health data flows and actors is therefore important for success. In the coming year, it will be essential for digital health companies to maximize their access to and use of health data in order to accelerate their development plans, and to take existing and new products to the next level. Doing so will require companies to develop a data governance strategy that allows them to both realize the value of their data and to safeguard them.
Legislators and regulators have also recognized the immense potential of health data and are beginning to help companies obtain access to specific health data. For example, in the EU, the upcoming creation of a European Health Data Space is intended to promote and facilitate the sharing and reuse of electronic health data for purposes ranging from research, innovation, and the training of algorithms to policymaking and regulatory activities. Under this new framework, companies will have both the right to request and the obligation to provide relevant electronic health data, under specific conditions.
Successful development of digital health products often requires the involvement of a multitude of stakeholders, including external partners such as data holders, co-developers, and others in the digital health ecosystem. A significant consideration before entering into a partnership, e.g., for the development of a digital biomarker, will be diligence on the partner’s data robustness and on whether those data can be used for development, regulatory, pricing, and reimbursement purposes. Digital health companies seeking partnerships will therefore need to adopt a strong data governance strategy and be ready to share information with their partners to build trust and provide assurance. The scale of such a strategy, and the form it takes, will vary depending on factors such as use cases, partnership readiness, and risk profiles, among others. However, any company producing and utilizing health data in the coming years, whether alone or in partnership, will be expected to understand its own and its partners’ data landscapes, identify its data objectives and priorities, and implement procedures that incentivize cooperation by increasing operational efficiency, driving innovation, and reducing or managing risk.