
    Punishing Artificial Intelligence: Legal Fiction or Science Fiction

    Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This Article explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

    Evidence and Extrapolation: Mechanisms for Regulating Off-Label Uses of Drugs and Devices

    A recurring, foundational issue for evidence-based regulation is deciding whether to extend governmental approval from an existing use with sufficient current evidence of safety and efficacy to a novel use for which such evidence is currently lacking. This extrapolation issue arises in the medicines context when an approved drug or device that is already being marketed is being considered (1) for new conditions (such as off-label diagnostic categories), (2) for new patients (such as new subpopulations), (3) for new dosages or durations, or (4) as the basis for approving a related drug or device (such as a generic or biosimilar drug). Although the logic of preapproval testing and the precautionary principle—first, do no harm—would counsel in favor of prohibiting extrapolation approvals until after traditional safety and efficacy evidence exists, such delays would unreasonably sacrifice beneficial uses. The harm of accessing unsafe products must be balanced against the harm of restricting access to effective products. In fact, the Food and Drug Administration's (FDA's) current regulations in many ways reject the precautionary principle because they largely permit individual physicians to prescribe medications for off-label uses before any testing tailored to those uses has been done. The FDA's approach empowers physicians, but overshoots the mark by allowing enduring use of drugs and devices with insubstantial support of safety and efficacy. This Article instead proposes a more dynamic and evolving evidence-based regime that charts a course between the Scylla and Charybdis of the overly conservative precautionary principle on one hand, and the overly liberal FDA regime on the other. Our approach calls for improvements in reporting, testing, and enforcement regulations to provide a more layered and nuanced system of regulatory incentives.
First, we propose a more thoroughgoing reporting of off-label use (via the disclosure of diagnostic codes and detailing data) in manufacturers' annual reports to the FDA, in the adverse event reports to the FDA, in Medicare/Medicaid reimbursement requests, and, for a subset of FDA-designated drugs, in prescriptions themselves. Second, we would substantially expand the agency's utilization of postmarket testing, and we provide a novel framework for evaluating the need for postmarket testing. Finally, our approach calls for a tiered labeling system that would allow regulators and courts to draw finer reimbursement and liability distinctions among various drug uses, and would provide the agency both the regulatory teeth and the flexibility it presently lacks. Together, these reforms would improve the role of the FDA in the informational marketplace underlying physicians' prescribing decisions. This evolutionary extrapolation framework could also be applied to other contexts.

    Syndromic and Point-of-Care Molecular Testing


    Treating the Health Care Crisis: Complementary and Alternative Medicine for PPACA

    The Patient Protection and Affordable Care Act (PPACA) intends to take American health care in a new direction by focusing on preventive medicine and wellness-based treatment. But, in doing so, it does not adequately take into account the potential contribution of complementary and alternative medicine (CAM). CAM is already used by a large and growing number of individuals in the United States, although to date there is limited scientific evidence to support the efficacy of most CAM treatments. This article proposes statutory reforms to PPACA to encourage CAM research and development (R&D), and the use of demonstrably effective CAM treatments. A hybrid system of limited intellectual property protection and government prizes based on regulatory approval may be the best option for incentivizing R&D on CAM, along with increased funding for research through the National Institutes of Health. PPACA should require health insurance plans to reimburse for evidence-based CAM and empower an existing government agency (NCCAM) to regulate CAM standards and to recommend evidence-based CAM services. Together these policy and funding mechanisms should help reduce U.S. healthcare costs and improve quality of life.

    Prototyping of the ILC Baseline Positron Target

    The ILC positron system uses novel helical undulators to create a powerful photon beam from the main electron beam. This beam is passed through a titanium target to convert it into electron-positron pairs. The target is constructed as a 1 m diameter wheel spinning at 2000 RPM to smear the 1 ms ILC pulse train over 10 cm. A pulsed flux-concentrating magnet is used to increase the positron capture efficiency. It is cooled to liquid nitrogen temperatures to maximize the flatness of the magnetic field over the 1 ms ILC pulse train. We report on prototyping efforts for this system.
    Comment: 7 pages, 9 figures, Proceedings of the International Workshop on Future Linear Colliders, Granada, Spain, 26-30 September 201
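    The quoted numbers in the abstract are mutually consistent: a 1 m diameter wheel spinning at 2000 RPM moves its rim roughly 10 cm during the 1 ms pulse train. A minimal arithmetic sketch, using only parameters stated in the abstract:

```python
import math

# Parameters quoted in the abstract (nothing here comes from the paper's
# detailed engineering analysis).
WHEEL_DIAMETER_M = 1.0   # 1 m diameter target wheel
WHEEL_RPM = 2000         # spin rate
PULSE_TRAIN_S = 1e-3     # 1 ms ILC pulse train

# Rim speed = circumference * revolutions per second.
rim_speed = math.pi * WHEEL_DIAMETER_M * WHEEL_RPM / 60.0   # m/s

# Distance the rim travels during one pulse train (the "smear" length).
smear_length = rim_speed * PULSE_TRAIN_S                    # m

print(f"rim speed   : {rim_speed:.1f} m/s")       # ~104.7 m/s
print(f"smear length: {smear_length * 100:.1f} cm")  # ~10.5 cm, matching "10 cm"
```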

    What do we know about rail suicide incidents?: analysis of 257 fatalities on the rail network in Great Britain

    There are over two hundred and fifty suicides on the railway in Great Britain (GB) each year. Descriptive statistics are compiled, producing national and international data. The industry knows how many and, to a limited extent, where these fatalities occur, but there is little in-depth analysis of events. There are therefore gaps in knowledge of these fatalities, which is a weakness when considering the best approaches to prevention. This paper reports on the analysis of data on 257 suicide events at or near 51 stations on three rail routes in Great Britain over a 20-year period. The analysis uses data from the industry Safety Management System (SMIS) database and produces simple descriptive statistics on a range of variables, including comparisons across the three rail routes. Additional data from staff and route-based documentation have been used to verify, supplement, and interpret information in the database. Examples of patterns of immediate and precursor behaviours during incidents are presented, illustrating the potential to explore both common and anomalous behaviours during events. The findings demonstrate the type of content that can be explored within the industry data and through other data available within the industry. Commentary is provided on the strengths and weaknesses of the data and on how findings from the analysis can be used to improve future data collection and the prevention of incidents.
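    The tabulation step the abstract describes (simple counts broken down by route, location, and other variables) can be sketched as below. The record structure is invented for illustration, since the SMIS schema is not public:

```python
from collections import Counter

# Hypothetical incident records; the real analysis used the industry SMIS
# database, so these field names and values are invented placeholders.
incidents = [
    {"route": "Route 1", "location": "station"},
    {"route": "Route 1", "location": "open line"},
    {"route": "Route 2", "location": "station"},
    {"route": "Route 3", "location": "station"},
    {"route": "Route 2", "location": "station"},
]

# Descriptive counts per variable, of the kind compared across the three routes.
by_route = Counter(rec["route"] for rec in incidents)
by_location = Counter(rec["location"] for rec in incidents)

print("events per route   :", dict(by_route))
print("events per location:", dict(by_location))
```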

    Evaluation of the Accuracy of 5 Digital Intraoral Scanners: In Vitro Analysis Using 3-Dimensional Computerized Metrology

    Purpose: To compare 5 dental intraoral scanners by evaluating their precision and trueness. Materials: A mixed-material master cast was scanned with 5 types of scanners, 5 scans per machine; test scans were compared with the master scan using software, and statistical analysis determined precision and trueness. Scanners: iTero Element, iTero Element 2, 3Shape TRIOS, 3Shape TRIOS 3, Dentsply Sirona Primescan. Master scanner: GOM ATOS Core 135. Software: GOM Inspect. Results: Posterior sextant: Primescan had the best precision. Anterior sextant: TRIOS 3 had the best trueness and precision. CoCr crown: Element and Element 2 had the best trueness and precision. Full arch (Telio CAD): TRIOS 3 had the best trueness. Cross-arch distance: Element 2 had the best trueness. CoCr crown adjacent to Telio CAD: Primescan had the best trueness; Element had the best precision. PEEK scanbody adjacent to Telio CAD: TRIOS 3 had the best trueness. Conclusions: The tested scanners can be appropriate for clinical use. Although clinicians may focus on trueness, the bigger differences may be in precision.
    Master of Science
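    The two accuracy measures are easy to conflate. In the usual ISO 5725 sense, trueness is how close test measurements sit to the reference (master) value, while precision is how closely repeated measurements agree with one another. A hedged sketch with made-up scalar distances (the study itself compared full 3D meshes in GOM Inspect, not single numbers):

```python
from itertools import combinations
from statistics import mean

def trueness(test_values, reference):
    """Mean absolute deviation of each test scan from the reference value."""
    return mean(abs(v - reference) for v in test_values)

def precision(test_values):
    """Mean absolute pairwise deviation among the test scans themselves."""
    return mean(abs(a - b) for a, b in combinations(test_values, 2))

# Hypothetical cross-arch distances (mm): 5 repeated scans vs a master value.
master = 35.000
scans = [35.012, 35.015, 35.009, 35.018, 35.011]

print(f"trueness : {trueness(scans, master):.4f} mm")   # 0.0130 mm
print(f"precision: {precision(scans):.4f} mm")          # 0.0044 mm
```

    A scanner can score well on one measure and poorly on the other: consistently offset scans have good precision but poor trueness, which is why the abstract's closing point distinguishes the two.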

    On the Computation of Secondary Electron Emission Models

    Secondary electron emission is a critical contributor to the charged-particle current balance in spacecraft charging. Spacecraft charging simulation codes use a parameterized expression for the secondary electron (SE) yield delta(Eo) as a function of the incident electron energy Eo. Simple three-step physics models of electron penetration, transport, and emission from a solid are typically expressed in terms of the incident electron penetration depth at normal incidence, R(Eo), and the mean free path of the SE, lambda. In this paper, the authors recall classical models for the range R(Eo): a power-law expression of the form b1*Eo^n1 and a more general empirical double power law, R(Eo) = b1*Eo^n1 + b2*Eo^n2. In most models, the yield is the result of an integral along the path length of incident electrons. An improved fourth-order numerical method to compute this integral is presented and compared to the standard second-order method. A critical step in accurately characterizing a particular spacecraft material is the determination of the model parameters in terms of the measured electron yield data. The fitting procedures and range models are applied to several measured data sets to compare their effectiveness in modeling the function delta(Eo) over the full range of incident particle energies.
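    The gain from a fourth-order over a second-order quadrature can be demonstrated on a toy version of the yield integral. Assuming constant SE attenuation, the integral over depth of exp(-z/lambda) up to the range R has the closed form lambda*(1 - exp(-R/lambda)), which serves as an exact benchmark. A minimal sketch, with illustrative constants that are not fitted to any material and a simpler integrand than the paper's:

```python
import math

def R_double_power(E, b1, n1, b2, n2):
    """Empirical double power-law range: R(Eo) = b1*Eo**n1 + b2*Eo**n2."""
    return b1 * E**n1 + b2 * E**n2

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule (second-order accurate)."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def simpson(f, a, b, n):
    """Composite Simpson rule (fourth-order accurate); n must be even."""
    h = (b - a) / n
    edges = f(a) + f(b)
    odd = 4 * sum(f(a + i * h) for i in range(1, n, 2))
    even = 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return (edges + odd + even) * h / 3

lam = 1.0                                   # SE mean free path (arbitrary units)
R = R_double_power(1.0, b1=1.5, n1=1.35, b2=0.5, n2=1.0)  # illustrative constants
f = lambda z: math.exp(-z / lam)            # SE escape attenuation from depth z

# Closed-form value of the benchmark integral for error comparison.
exact = lam * (1.0 - math.exp(-R / lam))
for n in (4, 8, 16):
    t_err = abs(trapezoid(f, 0.0, R, n) - exact)
    s_err = abs(simpson(f, 0.0, R, n) - exact)
    print(f"n={n:2d}  trapezoid err={t_err:.2e}  simpson err={s_err:.2e}")
```

    Halving the step size cuts the trapezoidal error by about 4x but the Simpson error by about 16x, which is the practical payoff of the fourth-order method the paper advocates.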

    A Randomized Controlled Trial of Tai Chi for Tension Headaches

    This study examined whether a traditional low-impact mind-body exercise, Tai Chi, affects health-related quality of life (HRQOL) and headache impact in an adult population suffering from tension-type headaches. Forty-seven participants were randomly assigned to either a 15-week intervention program of Tai Chi instruction or a wait-list control group. HRQOL (SF-36v2) and headache status (HIT-6™) were obtained at baseline and at 5, 10 and 15 weeks post-baseline during the intervention period. Statistically significant (P < 0.05) improvements in favor of the intervention were present for the HIT score and the SF-36 pain, energy/fatigue, social functioning, emotional well-being and mental health summary scores. A 15-week intervention of Tai Chi practice was effective in reducing headache impact and also effective in improving perceptions of some aspects of physical and mental health.