
Artificial Intelligence and Machine Learning-Based Medical Devices: A Products Liability Perspective

Artificial intelligence (AI), once little-known outside of academic circles and science fiction films, has become a household phrase. That trend will only accelerate as the public becomes more exposed to AI technology in everyday products, ranging from cars and home appliances to wearable devices capable of tracking the metrics of everyday routines. Perhaps no facet of AI has sparked observers’ imaginations more than machine learning (ML), which is precisely as it sounds: the ability of computer programs to “automatically improve with experience.” 1 Machine learning lies at the heart of the kind of independent and superhuman computing power most people dream of when they consider AI.

 

While the public’s imagination is free to run wild with the promises of ML—creating an appetite that will no doubt be met with an equal and opposite response from businesses around the world—traditional policy and law-making bodies will be left with the task of adapting existing legal and regulatory frameworks to it. It therefore bears considering how existing products liability norms might apply to AI/ML-based products, if at all, and what uncertainties may arise for product manufacturers, distributors, and sellers. No enterprise better illustrates the careful balance of AI’s endless potential against the unique risks of products liability than the medical device industry. This article discusses the uses and unique benefits of AI in the medical device context, while also exploring the developing products liability risks.

 

Why Medical Devices?

The medical industry, and in particular the field of diagnostic devices, has become fertile territory for AI/ML product development. This is no doubt because of the overlap between the data recognition and processing these diagnostic devices require and the similar capabilities of AI/ML systems. Making an accurate medical diagnosis is an incredibly complex task that requires a doctor to synthesize a large volume of often-contradictory data alongside individual patients’ subjective complaints, frequently under significant time constraints. To that end, some studies have estimated that as many as 12 million diagnostic errors occur every year in the U.S. alone. 2 Another symptom of the difficulty of diagnosis is the massive overuse of diagnostic testing; for example, researchers estimate that more than 50 CT scans, 50 ultrasounds, 15 MRIs, and 10 PET scans (125 tests altogether) are performed annually for every 100 Medicare recipients above the age of 65—many of which are medically unnecessary. 3 It requires no leap of the imagination to see how AI/ML-based medical devices, which promise significantly improved diagnostic outcomes while massively reducing cost, could soon become commonplace at the point of care. Current devices on the market include the IDx-DR, an AI-based diagnostic tool approved by the FDA in 2018, which independently analyzes images of the retina for signs of diabetic retinopathy, 4 and Medtronic’s Sugar.IQ Diabetes Assistant software, which pairs with a user’s continuous glucose monitor to offer personalized real-time advice on how certain foods or activities may affect blood glucose levels. 5

 

ML systems operate by analyzing data and, in turn, observing how various patterns and outcomes derive from those data. 6 Thus, in theory, ML algorithms can make predictions and decisions with increasing accuracy; as the quality, scope, and, in many cases, organization of the input data improve, so too does the accuracy of the corresponding output determinations. Increasingly complex ML systems have given rise to “deep learning”—a subset of ML in which data is passed through many layers of simple functions in order to accomplish a more advanced analysis. One common example is computer vision, which refers to a computer’s ability to process the many features of an image, from scale to color, size, and shape, in order to determine the content of the image. 7
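To make the idea of a model that “improves with experience” concrete, the following is a minimal sketch, not drawn from the article, in which the same small neural network is trained on progressively larger synthetic datasets and its accuracy on held-out cases tends to rise as more examples become available. The dataset, network size, and parameters here are illustrative assumptions, not a description of any cleared device.

```python
# Illustrative sketch: a model "automatically improves with experience" as
# the amount of training data grows. All data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for diagnostic data: 20 numeric features per case,
# labeled 0 (condition absent) or 1 (condition present).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

for n in (100, 500, 2000, len(X_train)):
    # A small "deep" model: data passes through two hidden layers of simple
    # functions (linear transforms followed by nonlinearities).
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>4} cases -> held-out accuracy {acc:.3f}")
```

The same layering principle, scaled up to millions of parameters and image inputs, is what underlies the computer vision systems described above.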

 

Toward a Working Regulatory Framework

In 2013, the International Medical Device Regulators Forum (IMDRF) chartered a working group to study this new class of emerging products, which it dubbed Software as a Medical Device (SaMD). 8 The IMDRF defines this category broadly to include all software (whether AI/ML-based or not) that is intended to be used for medical purposes and performs those purposes without being part of a traditional hardware medical device. 9 By 2014, the IMDRF’s working group had published a proposed framework for evaluating the risk and regulation of SaMD, which FDA adopted in 2017.

 

Since then, FDA has had little trouble adapting its preexisting regulatory models to the majority of SaMD devices. Manufacturers simply classify their SaMD according to predetermined risk categories and direct products into one of FDA’s traditional regulatory pathways: 510(k) clearance, de novo review, or premarket approval. 10 Yet problems arise with the distinction between SaMD that operates on “locked” algorithms, i.e., those that provide the same result each time the same input is applied and do not change with use, and SaMD that incorporates AI/ML technology to learn and adapt in real time to optimize performance. 11 To date, FDA has only cleared SaMD that functions on locked algorithms. 12
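The locked-versus-adaptive distinction can be illustrated with a short, hedged sketch. It is illustrative only; the model, data, and variable names are hypothetical and do not come from any regulatory text. A locked model is frozen after validation and returns the same result for the same input, while an adaptive model keeps updating its parameters as new field data arrives.

```python
# Hypothetical sketch of "locked" vs. adaptive SaMD behavior (synthetic data).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_initial = rng.normal(size=(200, 5))
y_initial = rng.integers(0, 2, 200)

# "Locked" algorithm: trained once during development, never modified in use.
locked_model = SGDClassifier(random_state=0).fit(X_initial, y_initial)

# Adaptive algorithm: starts from the same training data but continues to learn.
adaptive_model = SGDClassifier(random_state=0)
adaptive_model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

# New cases encountered after deployment.
X_field = rng.normal(size=(50, 5))
y_field = rng.integers(0, 2, 50)
adaptive_model.partial_fit(X_field, y_field)  # behavior can now drift with use
# locked_model is left untouched: identical input -> identical output, always.
```

It is this capacity for post-deployment drift that FDA’s traditional, snapshot-style review pathways were not designed to handle.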

 

In April 2019, however, FDA published its first proposed regulatory framework for true continuously learning (“unlocked”) AI/ML SaMD, centered on a total product lifecycle approach. 13 The goal of this approach is to avoid requiring manufacturers to go through extensive approval processes with each successive iteration of AI/ML SaMD (essentially an impossible task, given that AI/ML technology could change from one second to the next). Among other things, FDA proposes that manufacturers may submit a “predetermined change control plan” composed of SaMD Pre-Specifications (SPS) and an Algorithm Change Protocol (ACP), which set a finite region of potential adaptations in the SaMD and specify the criteria for how data is gathered and how the SaMD learns from those data over time. 14 So long as ongoing updates to the SaMD fall within the defined SPS and ACP ranges, additional 510(k) clearance will not be required. 15
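As a rough illustration of how a manufacturer might operationalize such a plan internally, the sketch below encodes a hypothetical SPS envelope and checks whether a proposed retraining stays inside it. The field names, thresholds, and schema are invented for illustration; FDA’s proposal describes these concepts at a policy level, not as a data format.

```python
# Hypothetical encoding of a "predetermined change control plan" check.
from dataclasses import dataclass

@dataclass
class PreSpecifications:            # SPS: the finite region of allowed change
    min_sensitivity: float
    min_specificity: float
    allowed_input_sources: set      # e.g., the imaging hardware covered by the plan

@dataclass
class ProposedUpdate:               # describes one retraining of the SaMD
    sensitivity: float
    specificity: float
    input_sources: set
    followed_acp: bool              # data gathered and validated per the ACP

def update_within_plan(sps: PreSpecifications, update: ProposedUpdate) -> bool:
    """True if the update stays inside the pre-cleared envelope;
    False would signal that a new regulatory submission is needed."""
    return (
        update.followed_acp
        and update.sensitivity >= sps.min_sensitivity
        and update.specificity >= sps.min_specificity
        and update.input_sources <= sps.allowed_input_sources
    )

sps = PreSpecifications(0.90, 0.85, {"fundus_camera_v1", "fundus_camera_v2"})
retrained = ProposedUpdate(0.93, 0.88, {"fundus_camera_v2"}, followed_acp=True)
print(update_within_plan(sps, retrained))  # True -> within the plan under the proposal
```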

 

Regardless of when and how FDA enacts these proposed guidelines, it is clear that applicants will face a host of new challenges in obtaining FDA approval for AI/ML SaMD. These include not only accurately defining the SPS and ACP but also providing sufficient transparency into the data used to train the operative algorithms. As is often the case with emerging technology, it will take years of trial and error for FDA and applicants alike to converge on a satisfactory approach. Moreover, until other standard-setting bodies, such as the American Society for Testing and Materials, weigh in with more robust guidance, manufacturers will bear the burden of formulating new solutions to meet FDA’s concerns.

 

Challenges in the Courtroom

In the meantime, there also remains significant uncertainty about how the most common products liability doctrines will apply to AI/ML SaMD. Products liability claims often sound in strict liability, meaning a plaintiff need not prove that a defendant was negligent, only that its product was defective or unreasonably dangerous. 16 Traditionally, however, strict liability claims arise only from “tangible” products. Thus, the threshold question is whether such software is even susceptible to products liability exposure. Courts have struggled with this issue and are likely to split in determining whether software qualifies as a product. 17

 

In Rodgers v. Christie, for example, plaintiffs attempted to assert strict products liability claims against the designer of New Jersey’s Public Safety Assessment, a risk-assessment algorithm used to aid courts in determining the threat posed by releasing pre-trial detainees back into the general public. 18 The basis of their claim was the allegation that the algorithm had erred in assigning a low score to a convicted felon, who allegedly murdered the plaintiffs’ son just days after he was released from detention. 19 The Court of Appeals for the Third Circuit grappled with the application of products liability law to the algorithm, ultimately dismissing the plaintiffs’ claims because “information, guidance, ideas, and recommendations are not products under the Third Restatement, both as a definitional matter and because extending strict liability to the distribution of ideas would raise serious First Amendment concerns.” 20 Despite this conclusion, the Court elected not to adopt a rule categorically barring the application of strict products liability to AI-based software, perhaps setting the stage for future challenges to come. Simply put, courts around the country have already begun to grapple with this question with varying outcomes, and the injection of potentially paradigm-shifting technology into their analyses will surely only further muddy the waters.

 

If AI/ML software is deemed to be a product for purposes of products liability claims, then many additional issues arise regarding the application of existing legal doctrines to this unique technology. Because AI/ML products are highly iterative and fluid, they are also likely to present novel challenges for both plaintiffs and defendants in litigating the three most common classes of products liability claims: design defect, manufacturing defect, and failure to warn. 21 Design and manufacturing defect claims, for example, operate on the assumption that a product is fixed. Indeed, such defects are evaluated as of the time of sale or distribution of the product. 22 In other words, in a traditional products liability action, a viable defense may be established by proving that the product was safe at the time it was sold. This well-settled legal paradigm could be entirely disrupted if a product, like AI/ML SaMD, has the capacity to constantly evolve and redesign itself over time, including after it is sold. As such, we can expect the focus of defect claims to shift toward the adaptations implemented by a piece of software over time, as opposed to its preliminary design concepts. These theories of liability will also require an increased focus on the testing and data sets used to “train” AI/ML software in order to establish the existence of defects. And, while traditional medical device testing targets worst-case scenarios, these new, fluid products will force manufacturers to constantly reevaluate the most extreme possible outcomes—if they can even be predicted—to ensure reliable testing.

 

The same can be said for the sufficiency of warnings: How can a product’s manufacturer adequately warn users of potential risks if the product controls its own function and, by extension, the risks it may present? In the context of medical devices, as discussed above, FDA has proposed that manufacturers define, through the ACP, a detailed and finite range of algorithm changes. It is likely, therefore, that product warnings will dovetail with these protocols to cover the limited range of possible adaptations and, by extension, foreseeable risks. Plaintiffs, for their part, may seek to expand the doctrine of “post-sale duty to warn” to argue that defendants must continually monitor their products and provide updated warnings to users even after the time of sale.

 

Conclusion

Technological innovation outpaces the law, and artificial intelligence/machine learning is no different. Regardless of how legal doctrines evolve with the introduction of AI/ML-based products, until firm legal and regulatory guidelines emerge, one thing is certain: there will be significant disagreement about how products liability law applies. So, while these products present a new and lucrative market for manufacturers, the drive to meet ever-increasing market demand must be balanced with a thorough design, testing, and monitoring process.

 

From: MDDI
