
    Health AI for Good Rather Than Evil? The Need for a New Regulatory Framework for AI-Based Medical Devices

    Artificial intelligence (AI), especially its subset machine learning, has tremendous potential to improve health care. However, health AI also raises new regulatory challenges. In this Article, I argue that there is a need for a new regulatory framework for AI-based medical devices in the U.S. that ensures that such devices are reasonably safe and effective when placed on the market and will remain so throughout their life cycle. I advocate for U.S. Food and Drug Administration (FDA) and congressional actions. I focus on how the FDA could, with additional statutory authority, regulate AI-based medical devices. I show that the FDA incompletely regulates health AI-based products, which may jeopardize patient safety and undermine public trust. For example, the medical device definition is too narrow, and several risky health AI-based products are not subject to FDA regulation. Moreover, I show that most AI-based medical devices available on the U.S. market are 510(k)-cleared. However, the 510(k) pathway raises significant safety and effectiveness concerns. I thus propose a future regulatory framework for premarket review of medical devices, including AI-based ones. Further, I discuss two problems related to specific AI-based medical devices, namely opaque (“black-box”) algorithms and adaptive algorithms that can continuously learn, and I make suggestions on how to address them. Finally, I encourage the FDA to broaden its view, consider AI-based medical devices as systems rather than just devices, and focus more on the environment in which they are deployed.

    “Nutrition Facts Labels” for Artificial Intelligence/Machine Learning-Based Medical Devices—The Urgent Need for Labeling Standards

    Artificial Intelligence (“AI”), particularly its subset Machine Learning (“ML”), is quickly entering medical practice. The U.S. Food and Drug Administration (“FDA”) has already cleared or approved more than 520 AI/ML-based medical devices, and many more devices are in the research and development pipeline. AI/ML-based medical devices are not only used in clinics by health care providers but are also increasingly offered directly to consumers, for example as apps and wearables. Despite their tremendous potential for improving health care, AI/ML-based medical devices also raise many regulatory issues. This Article focuses on one issue that has not received sustained attention in the legal or policy debate: labeling for AI/ML-based medical devices. Labeling is crucial to prevent harm to patients and consumers (e.g., by reducing the risk of bias) and to ensure that users know how to properly use the device and assess its benefits, potential risks, and limitations. It can also support transparency to users and thus promote public trust in new digital health technologies. This Article is the first to identify and thoroughly analyze the unique challenges of labeling for AI/ML-based medical devices and provide solutions to address them. It establishes that there are currently no labeling standards for AI/ML-based medical devices. This is of particular concern as some of these devices are prone to biases, are opaque (“black boxes”), and have the ability to continuously learn. This Article argues that labeling standards for AI/ML-based medical devices are urgently needed, as the current labeling requirements for medical devices and the FDA’s case-by-case approach for a few AI/ML-based medical devices are insufficient. In particular, it proposes what such standards could look like, including eleven key types of information that should be included on the label, ranging from indications for use and details on the data sets to model limitations, warnings and precautions, and privacy and security. In addition, this Article argues that “nutrition facts labels,” known from food products, are a promising label design for AI/ML-based medical devices. Such labels should also be “dynamic” (rather than static) for adaptive algorithms that can continuously learn. Although this Article focuses on AI/ML-based medical devices, it also has implications for AI/ML-based products that are not subject to FDA regulation.
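
    As a concrete illustration of the proposed label, the sketch below models the five information types named in the abstract as a structured schema in Python. This is a minimal sketch under stated assumptions: the class name, field names, and the last_updated field for “dynamic” labels are illustrative inventions, not the Article’s specification, and the six remaining proposed information types are not enumerated in this abstract.

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    # Illustrative sketch only: the class and field names are assumptions based
    # on the five of the eleven proposed information types named in the abstract.
    @dataclass
    class AIDeviceLabel:
        indications_for_use: str       # intended clinical use of the device
        dataset_details: str           # provenance and composition of the data sets
        model_limitations: str         # known failure modes and scope limits
        warnings_and_precautions: str  # risks users should be aware of
        privacy_and_security: str      # data handling and security practices
        last_updated: date = field(default_factory=date.today)  # "dynamic" label support
        # ...the six further information types proposed in the Article are omitted here.

    # A "dynamic" label for an adaptive algorithm would be regenerated whenever the
    # model is retrained, refreshing last_updated and any fields that have changed.
    ```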

    Digital Home Health During the COVID-19 Pandemic: Challenges to Safety, Liability, and Informed Consent, and the Way to Move Forward

    We argue that changing how postmarket studies are conducted and who evaluates them might mitigate some concerns over the agency’s increasing reliance upon real-world evidence (RWE). Distributing the responsibility for designing, conducting, and assessing real-world studies of medical devices and drugs beyond industry sponsors and the FDA is critical to producing, and acting upon, more clinically useful information. We explore how the Drug Efficacy Study Implementation (DESI) program provides a useful model for the governance of RWE today. We explain why the FDA’s Center for Devices and Radiological Health is the most promising site for a new DESI initiative inspired by the challenges of regulating drugs in the past.

    The need for a system view to regulate artificial intelligence/machine learning-based software as medical device

    Artificial intelligence (AI) and machine learning (ML) systems in medicine are poised to significantly improve health care, for example, by offering earlier diagnoses of diseases or recommending optimally individualized treatment plans. However, the emergence of AI/ML in medicine also creates challenges that regulators must pay attention to. Which medical AI/ML-based products should be reviewed by regulators? What evidence should be required to permit marketing of AI/ML-based software as a medical device (SaMD)? How can we ensure the safety and effectiveness of AI/ML-based SaMD that may change over time as they are applied to new data? The U.S. Food and Drug Administration (FDA), for example, has recently published a discussion paper that addresses some of these issues. But it misses an important point: we argue that regulators like the FDA need to widen their scope from evaluating medical AI/ML-based products to assessing systems. This shift in perspective, from a product view to a system view, is central to maximizing the safety and efficacy of AI/ML in health care, but it also poses significant challenges for agencies like the FDA that are used to regulating products, not systems. We offer several suggestions to help regulators make this challenging but important transition.

    German Pharmaceutical Pricing: Lessons for the United States

    To control pharmaceutical spending and improve access, the United States could adopt strategies similar to those introduced in Germany by the 2011 German Pharmaceutical Market Reorganization Act. In Germany, manufacturers sell new drugs immediately upon receiving marketing approval. During the first year, the German Federal Joint Committee assesses each new drug and assigns it a score indicating its added medical benefit. New drugs comparable to drugs in a reference price group are assigned to that group and receive the same reimbursement, unless they are therapeutically superior. The National Association of Statutory Health Insurance Funds then negotiates with manufacturers the maximum reimbursement that applies from the 13th month onward, consistent with the drug’s added-benefit assessment and with price caps in other European countries. In the absence of agreement, an arbitration board sets the price. Manufacturers either accept the resulting price or exit the market. Thereafter, prices generally are not increased, even for inflation. US public and private insurers control prices in diverse ways, but typically obtain discounts by designating certain drugs as preferred and by restricting patient access or charging high copayments for nonpreferred drugs. This article draws 10 lessons for drug pricing reform in US federal programs and private insurance.
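
    To make the pricing sequence described above easier to follow, here is a minimal Python sketch of the decision flow. The function name, the Benefit categories, and the parameters are hypothetical constructs for illustration, not terms drawn from German law or from the article.

    ```python
    from enum import Enum, auto

    class Benefit(Enum):
        """Hypothetical added-benefit categories, simplified for illustration."""
        NONE = auto()      # no added medical benefit over the comparator
        ADDED = auto()     # some added medical benefit
        SUPERIOR = auto()  # therapeutically superior to the reference group

    def reimbursement_from_month_13(benefit: Benefit,
                                    reference_price: float | None,
                                    negotiated_price: float | None,
                                    arbitration_price: float) -> float:
        # Drugs comparable to an existing reference price group, and not
        # therapeutically superior, receive that group's reimbursement.
        if reference_price is not None and benefit is not Benefit.SUPERIOR:
            return reference_price
        # Otherwise the insurers' association negotiates a maximum reimbursement
        # consistent with the added-benefit assessment and European price caps.
        if negotiated_price is not None:
            return negotiated_price
        # Absent agreement, an arbitration board sets the price; the manufacturer
        # accepts it or exits the market.
        return arbitration_price
    ```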

    How Much Can Potential Jurors Tell Us About Liability for Medical Artificial Intelligence?

    Artificial intelligence (AI) is rapidly entering medical practice, whether for risk prediction, diagnosis, or treatment recommendation. But a persistent question keeps arising: what happens when things go wrong? When patients are injured and AI was involved, who will be liable, and how? Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, hospitals that implement AI tools for physician use, and developers who create those tools in the first place. If physicians are shielded from liability (typically medical malpractice liability) when they use AI tools, even if patient injury results, they are more likely to rely on these tools, even when the AI recommendations are counterintuitive. On the other hand, if physicians face liability for deviating from standard practice, whether or not an AI recommends something different, the adoption of AI is likely to be slower, and counterintuitive recommendations, even correct ones, are likely to be rejected. In this issue of The Journal of Nuclear Medicine, Tobia et al. offer an important empirical look at this question, which has significant implications for whether and when AI will come into clinical use.

    COVID-19 Antibody Testing as a Precondition for Employment: Ethical and Legal Considerations

    Employers and governments are interested in the use of serological (antibody) testing to allow people to return to work before there is a vaccine for SARS-CoV-2. We articulate the preconditions needed for the implementation of antibody testing, including the role of the U.S. Food and Drug Administration.