
    Privacy and Accountability in Black-Box Medicine

    Black-box medicine—the use of big data and sophisticated machine learning techniques for health-care applications—could be the future of personalized medicine. Black-box medicine promises to make it easier to diagnose rare diseases and conditions, identify the most promising treatments, and allocate scarce resources among different patients. But to succeed, it must overcome two separate, but related, problems: patient privacy and algorithmic accountability. Privacy is a problem because researchers need access to huge amounts of patient health information to generate useful medical predictions. And accountability is a problem because black-box algorithms must be verified by outsiders to ensure they are accurate and unbiased, but this means giving outsiders access to this health information. This article examines the tension between the twin goals of privacy and accountability and develops a framework for balancing that tension. It proposes three pillars for an effective system of privacy-preserving accountability: substantive limitations on the collection, use, and disclosure of patient information; independent gatekeepers regulating information sharing between those developing and verifying black-box algorithms; and information-security requirements to prevent unintentional disclosures of patient information. The article examines and draws on a similar debate in the field of clinical trials, where disclosing information from past trials can lead to new treatments but also threatens patient privacy.

    Nudging the FDA

    [Excerpt] The FDA’s regulation of drugs is frequently the subject of policy debate, with arguments falling into two camps. On the one hand, a libertarian view of patients and the health care system holds high the value of consumer choice. Patients should get all the information and the drugs they want; the FDA should do what it can to enforce some basic standards but should otherwise get out of the way. On the other hand, a paternalist view values the FDA’s role as an expert agency standing between patients and a set of potentially dangerous drugs and potentially unscrupulous or at least insufficiently careful drug companies. We lay out here some of the ways the FDA regulates drugs, including some normally left out of the debate, and suggest a middle ground between libertarian and paternalistic approaches focused on correcting information asymmetry and aligning incentives.

    Manufacturing Barriers to Biologics Competition and Innovation

    As finding breakthrough small-molecule drugs gets harder, drug companies are increasingly turning to “large molecule” biologics. Although biologics represent many of the most promising new therapies for previously intractable diseases, they are extremely expensive. Moreover, the pathway for generic-type competition set up by Congress in 2010 is unlikely to yield significant cost savings. In this Article, we provide a fresh diagnosis of, and prescription for, this major public policy problem. We argue that the key cause is pervasive trade secrecy in the complex area of biologics manufacturing. Under the current regime, this trade secrecy, combined with certain features of FDA regulation, not only creates high barriers to entry of indefinite duration but also undermines efforts to advance fundamental knowledge. In sharp contrast, offering incentives for information disclosure to originator manufacturers would leverage the existing interaction of trade secrecy and the regulatory state in a positive direction. Although trade secrecy, particularly in complex areas like biologics manufacturing, often involves tacit knowledge that is difficult to codify and thus transfer, in this case regulatory requirements that originator manufacturers submit manufacturing details have already codified the relevant tacit knowledge. Incentivizing disclosure of these regulatory submissions would not only spur competition but would also provide a rich source of information upon which additional research, including fundamental research into the science of manufacturing, could build. In addition to providing a fresh diagnosis and prescription in the specific area of biologics, the Article contributes to more general scholarship on trade secrecy and tacit knowledge. Prior scholarship has neglected the extent to which regulation can turn tacit knowledge not only into codified knowledge but into precisely the type of codified knowledge that is most likely to be useful and accurate.
The Article also draws a link to the literature on adaptive regulation, arguing that greater regulatory flexibility is necessary and that more fundamental knowledge should spur flexibility. A vastly shortened version of the central argument that manufacturing trade secrecy hampers biosimilar development was published at 348 Science 188 (2015), available online.

    Grants

    Innovation is a primary source of economic growth and is accordingly the target of substantial academic and government attention. Grants are a key tool in the government’s arsenal to promote innovation, but legal academic studies of that arsenal have given them short shrift. Although patents, prizes, and regulator-enforced exclusivity are each the subject of substantial literature, grants are typically addressed briefly, if at all. According to the conventional story, grants may be the only feasible tool to drive basic research, as opposed to applied research, but they are a blunt tool for that task. Three critiques of grants underlie this narrative: grants are allocated by government bureaucrats who lack much of the relevant information for optimal decision-making; grants are purely ex ante funding mechanisms and therefore lack accountability; and grants misallocate risk by saddling the government with all the downside risk and giving the innovator all the upside. These critiques are largely wrong. Focusing on grants awarded by the National Institutes of Health (NIH), the largest public funder of biomedical research, this Article delves deeply into how grants actually work. It shows that—at least at the NIH—grants are awarded not by uninformed bureaucrats, but by panels of knowledgeable peer scientists with the benefit of extensive disclosures from applicants. It finds that grants provide accountability through repeated interactions over time. And it argues that the upside of grant investments to the government is much greater than the lack of direct profits would suggest. Grants also have two marked comparative strengths as innovation levers: they can support innovation where social value exceeds appropriable market value, and they can directly support innovation enablers—the people, institutions, processes, and infrastructure that shape and generate innovation.
Where markets undervalue some socially important innovations, like cures for diseases of the poor, grants can help. Grants can also enable innovation by supporting its inputs: young or exceptional scientists, new institutions, research networks, and large datasets. Taken as a whole, grants do not form a monolithic, blunt innovation lever; instead, they provide a varied and nuanced set of policy options. Innovation scholars and policymakers should recognize and develop the usefulness of grants in promoting major social goals.

    Trust, Trustworthiness, and Misinformation Shared by the Government

    Where does trusted information come from? In a world of misinformation, where everyone is skeptical of everything, at least we can rely on expert, authoritative government agencies like the Environmental Protection Agency, the Centers for Disease Control, the Patent Office, and the Food and Drug Administration, right? Right.

    Maine Aquaculture, Atlantic Salmon, and Inertia: What is the Future for Maine's Net Pen Salmon Industry?

    The Environmental Protection Agency (EPA) has for years failed to create regulations that would govern discharges from aquaculture facilities under the Clean Water Act (CWA). As recent cases from Maine have shown, this failure caused salmon-producing aquaculture companies to do very little to reduce the effluent they released directly into the Atlantic. Under the Clean Water Act, however, such polluting is prohibited. Furthermore, under the Endangered Species Act (ESA), additional regulations probably would be imposed on these companies to protect the endangered wild Atlantic salmon that inhabit the rivers and ocean near these facilities. Recent regulations proposed by EPA, however, are probably not stringent enough to meet the statutory requirements of either the CWA or the ESA. While the cleanliness of our waters and the diversity of species should be maintained at the least, these goals can hopefully be reconciled with the growth of an important part of the local and national economy.

    Distributed Governance of Medical AI

    Artificial intelligence (AI) promises to bring substantial benefits to medicine. In addition to pushing the frontiers of what is humanly possible, like predicting kidney failure or sepsis before any human can notice, it can democratize expertise beyond the circle of highly specialized practitioners, like letting generalists diagnose diabetic degeneration of the retina. But AI doesn’t always work, it doesn’t always work for everyone, and it doesn’t always work in every context. AI is likely to behave differently in the well-resourced hospitals where it is developed than in poorly resourced frontline health environments where it might well make the biggest difference for patient care. To make the situation even more complicated, AI is unlikely to go through the centralized review and validation process that other medical technologies, like drugs and most medical devices, undergo. Even if it did go through those centralized processes, ensuring high-quality performance across a wide variety of settings, including poorly resourced settings, is especially challenging for such centralized mechanisms. What are policymakers to do? This short Essay argues that the diffusion of medical AI, with its many potential benefits, will require policy support for a process of distributed governance, where quality evaluation and oversight take place in the settings of application—but with policy assistance in developing capacities and making that oversight more straightforward to undertake. Getting governance right will not be easy (it never is), but ignoring the issue is likely to leave benefits on the table and patients at risk.

    Artificial Intelligence in the Medical System: Four Roles for Potential Transformation

    Artificial intelligence (AI) looks to transform the practice of medicine. As academics and policymakers alike turn to legal questions, a threshold issue involves what role AI will play in the larger medical system. This Article argues that AI can play at least four distinct roles in the medical system, each potentially transformative: pushing the frontiers of medical knowledge to increase the limits of medical performance, democratizing medical expertise by making specialist skills more available to non-specialists, automating drudgery within the medical system, and allocating scarce medical resources. Each role raises its own challenges, and an understanding of the four roles is necessary to identify and address major hurdles to the responsible development and deployment of medical AI.

    Problematic Interactions between AI and Health Privacy

    The interaction of artificial intelligence (“AI”) and health privacy is a two-way street. Both directions are problematic. This Article makes two main points. First, the advent of artificial intelligence weakens the legal protections for health privacy by rendering deidentification less reliable and by inferring health information from unprotected data sources. Second, the legal rules that protect health privacy nonetheless detrimentally impact the development of AI used in the health system by introducing multiple sources of bias: collection and sharing of data by a small set of entities, the process of data collection while following privacy rules, and the use of non-health data to infer health information. The result is an unfortunate anti-synergy: privacy protections are weak and illusory, but rules meant to protect privacy hinder other socially valuable goals. This state of affairs creates biases in health AI, privileges commercial research over academic research, and is ill-suited to either improve health care or protect patients’ privacy. The ongoing dysfunction calls for a new bargain between patients and the health system about the uses of patient data.