The need for a system view to regulate artificial intelligence/machine learning-based software as medical device
Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges, which regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing for AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently released a discussion paper to address some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective—from a product view to a system view—is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions for regulators to make this challenging but important transition.
Transcript: Presentation on Data Privacy Questions in the Digital Health World
The following is a transcript of The Digital Health and Technology Symposium presented at Cleveland-Marshall College of Law by The Journal of Law & Health on Friday, April 8, 2022. This transcript has been lightly edited for clarity.
“Nutrition Facts Labels” for Artificial Intelligence/Machine Learning-Based Medical Devices—The Urgent Need for Labeling Standards
Artificial Intelligence (“AI”), particularly its subset Machine Learning (“ML”), is quickly entering medical practice. The U.S. Food and Drug Administration (“FDA”) has already cleared or approved more than 520 AI/ML-based medical devices, and many more devices are in the research and development pipeline. AI/ML-based medical devices are not only used in clinics by health care providers but are also increasingly offered directly to consumers for use, such as apps and wearables. Despite their tremendous potential for improving health care, AI/ML-based medical devices also raise many regulatory issues.
This Article focuses on one issue that has not received sustained attention in the legal or policy debate: labeling for AI/ML-based medical devices. Labeling is crucial to prevent harm to patients and consumers (e.g., by reducing the risk of bias) and ensure that users know how to properly use the device and assess its benefits, potential risks, and limitations. It can also support transparency to users and thus promote public trust in new digital health technologies. This Article is the first to identify and thoroughly analyze the unique challenges of labeling for AI/ML-based medical devices and provide solutions to address them. It establishes that there are currently no standards of labeling for AI/ML-based medical devices. This is of particular concern as some of these devices are prone to biases, are opaque (“black boxes”), and have the ability to continuously learn. This Article argues that labeling standards for AI/ML-based medical devices are urgently needed, as the current labeling requirements for medical devices and the FDA’s case-by-case approach for a few AI/ML-based medical devices are insufficient. In particular, it proposes what such standards could look like, including eleven key types of information that should be included on the label, ranging from indications for use and details on the data sets to model limitations, warnings and precautions, and privacy and security. In addition, this Article argues that “nutrition facts labels,” known from food products, are a promising label design for AI/ML-based medical devices. Such labels should also be “dynamic” (rather than static) for adaptive algorithms that can continuously learn. Although this Article focuses on AI/ML-based medical devices, it also has implications for AI/ML-based products that are not subject to FDA regulation.
Health AI for Good Rather Than Evil? The Need for a New Regulatory Framework for AI-Based Medical Devices
Artificial intelligence (AI), especially its subset machine learning, has tremendous potential to improve health care. However, health AI also raises new regulatory challenges. In this Article, I argue that there is a need for a new regulatory framework for AI-based medical devices in the U.S. that ensures that such devices are reasonably safe and effective when placed on the market and will remain so throughout their life cycle. I advocate for U.S. Food and Drug Administration (FDA) and congressional actions. I focus on how the FDA could, with additional statutory authority, regulate AI-based medical devices. I show that the FDA incompletely regulates health AI-based products, which may jeopardize patient safety and undermine public trust. For example, the medical device definition is too narrow, and several risky health AI-based products are not subject to FDA regulation. Moreover, I show that most AI-based medical devices available on the U.S. market are 510(k)-cleared. However, the 510(k) pathway raises significant safety and effectiveness concerns. I thus propose a future regulatory framework for premarket review of medical devices, including AI-based ones. Further, I discuss two problems that are related to specific AI-based medical devices, namely opaque (“black-box”) algorithms and adaptive algorithms that can continuously learn, and I make suggestions on how to address them. Finally, I encourage the FDA to broaden its view and consider AI-based medical devices as systems, not just devices, and focus more on the environment in which they are deployed.
Digital Home Health During the COVID-19 Pandemic: Challenges to Safety, Liability, and Informed Consent, and the Way to Move Forward
We argue that changing how postmarket studies are conducted and who evaluates them might mitigate some concerns over the U.S. Food and Drug Administration’s (FDA’s) increasing reliance upon real-world evidence (RWE). Distributing the responsibility for designing, conducting, and assessing real-world studies of medical devices and drugs beyond industry sponsors and the FDA is critical to producing – and acting upon – more clinically useful information. We explore how the Drug Efficacy Study Implementation (DESI) program provides a useful model for the governance of RWE today. We explain why the FDA’s Center for Devices and Radiological Health is the most promising site for a new DESI initiative inspired by the challenges of regulating drugs in the past.
A Comprehensive Labeling Framework for Artificial Intelligence (AI)/Machine Learning (ML)-Based Medical Devices: From AI Facts Labels to a Front-of-Package AI Labeling System — Lessons Learned from Food Labeling
Medical Artificial Intelligence (AI) is rapidly transforming healthcare. The U.S. Food and Drug Administration (FDA) has already authorized the marketing of over one thousand AI/Machine Learning (ML)-based medical devices, and many more products are in the development pipeline. However, despite this rapid development, the regulatory framework for AI/ML-based medical devices could be improved. This Article focuses on the labeling for AI/ML-based medical devices, a crucial topic that needs to receive more attention in the legal literature and from regulators like the FDA. The current lack of labeling standards tailored explicitly to AI/ML-based medical devices is an obstacle to transparency in the use of such devices. It prevents users from receiving essential information about many AI/ML-based medical devices necessary for their safe use, such as details on their data sets. To ensure transparency and protect patients’ health, the FDA must develop labeling standards for AI/ML-based medical devices as quickly as possible.
This Article suggests a comprehensive labeling framework for AI/ML-based medical devices. It argues that valuable lessons can be learned from food labeling and applied in the context of AI/ML-based medical devices. In particular, it argues that there is not only a need for regulators to develop “nutrition facts labels,” called here “AI Facts labels,” for AI/ML-based medical devices, but also a “front-of-package (FOP) nutrition labeling system,” called here “FOP AI labeling system.” The use of FOP AI labels as a complement to AI Facts labels can improve users’ literacy by providing at-a-glance, easy-to-understand information about the AI/ML-based medical device and enable them to make better-informed decisions about its use. This Article is the first to establish a connection between FOP nutrition labeling systems and their promise for AI/ML-based medical devices and to make concrete suggestions on what such a system could look like. It also makes additional concrete proposals on other aspects of labeling for AI/ML-based medical devices, including the development of an innovative, user-friendly app based on the FOP AI labeling system as well as labeling requirements for AI/ML-generated content.
Privacy Shield 2.0—A New Trans-Atlantic Data Privacy Framework Between the European Union and the United States
This Article is the first to thoroughly examine the new adequacy decision for the Trans-Atlantic Data Privacy Framework (also known as “Privacy Shield 2.0”), including the relevant events and milestones ultimately leading to its adoption. The European Commission adopted the new Privacy Shield on July 10, 2023, to restore transatlantic data flows and commercial exchanges between the European Union and the United States. This Article first explores the holdings of the Court of Justice of the European Union in the groundbreaking cases Schrems I and Schrems II and elaborates on the reasons for the invalidation of the Safe Harbor Decision and the Privacy Shield Decision, respectively. It then examines the practical implications of the invalidation of the Privacy Shield Decision in Schrems II, including the recent decision of the Irish Data Protection Commissioner regarding Meta Platforms Ireland Limited (formerly Facebook Ireland Limited). This Article subsequently discusses the efforts of the United States government and the European Commission toward the adoption of Privacy Shield 2.0. It analyzes recent events, from the announcement of a new Trans-Atlantic Data Privacy Framework to the release of the Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities to the European Commission’s draft adequacy decision, the launch of its adoption process, and ultimately its adoption.
This Article argues that despite the excitement surrounding the new Trans-Atlantic Data Privacy Framework, it is improbable that the validity of Privacy Shield 2.0 would be upheld by the Court of Justice of the European Union in a possible Schrems III case. Although Privacy Shield 2.0 is a considerable improvement compared to the previously invalidated Privacy Shield Decision, it is likely that the Court of Justice of the European Union would consider the newly introduced safeguards for United States signals intelligence activities insufficient to comply with the General Data Protection Regulation’s requirements, read in the light of the Charter of Fundamental Rights of the European Union. This Article demonstrates the shortcomings of Privacy Shield 2.0 concerning the principles of necessity and proportionality as well as the right to effective judicial protection. It also argues for a comprehensive U.S. federal privacy law that ensures adequate protection of personal data for all data subjects in the United States.
Privacy Aspects of Direct-to-Consumer Artificial Intelligence/Machine Learning Health Apps
Direct-to-Consumer Artificial Intelligence/Machine Learning health apps (DTC AI/ML health apps) are increasingly being made available for download in app stores. However, such apps raise challenges, one of which is providing adequate protection of consumers’ privacy. This article analyzes the privacy aspects of DTC AI/ML health apps and suggests how consumers’ privacy could be better protected in the United States. In particular, it discusses the Health Insurance Portability and Accountability Act of 1996 (HIPAA), the Federal Trade Commission (FTC) Act, the FTC’s Health Breach Notification Rule, the California Consumer Privacy Act of 2018, the California Privacy Rights Act of 2020, the Virginia Consumer Data Protection Act, the Colorado Privacy Act, and the EU General Data Protection Regulation (2016/679 – GDPR). This article concludes that much more work is needed to adequately protect the privacy of consumers using DTC AI/ML health apps. For example, while the FTC’s recent actions to protect consumers using DTC AI/ML health apps are laudable, much more needs to be done to promote consumer literacy. Even if HIPAA is not updated, a U.S. federal privacy law that offers a high level of data protection—similar to the EU GDPR—could close many of HIPAA’s loopholes and ensure that American consumers’ data collected via DTC AI/ML health apps are better protected.
