    Sophisticated Robots: Balancing Liability, Regulation, and Innovation

    Our lives are being transformed by large, mobile, sophisticated robots with increasingly high levels of autonomy, intelligence, and interconnectivity among themselves. For example, driverless automobiles are likely to become commercially available within a decade. Many people who suffer physical injuries from these robots will seek legal redress for their injuries, and regulatory schemes are likely to impose requirements on the field to reduce the number and severity of injuries. This Article addresses whether the current liability and regulatory systems provide a fair, efficient method for balancing the concern for physical safety against the need to incentivize the innovation that is necessary to develop these robots. This Article provides context for analysis by reviewing innovation and robots' increasing size, mobility, autonomy, intelligence, and interconnections in terms of safety—particularly in terms of physical interaction with humans—and by summarizing the current legal framework for addressing personal injuries in terms of doctrine, application, and underlying policies. This Article argues that the legal system's method of addressing physical injury from robotic machines that interact closely with humans provides an appropriate balance of innovation and liability for personal injury. It critiques claims that the system is flawed and needs fundamental change, and concludes that the legal system will continue to fairly and efficiently foster the innovation of reasonably safe sophisticated robots.

    Crashworthy Code

    Code crashes. Yet for decades, software failures have escaped scrutiny for tort liability. Those halcyon days are numbered: self-driving cars, delivery drones, networked medical devices, and other cyber-physical systems have rekindled interest in understanding how tort law will apply when software errors lead to loss of life or limb. Even after all this time, however, no consensus has emerged. Many feel strongly that victims should not bear financial responsibility for decisions that are entirely automated, while others fear that cyber-physical manufacturers must be shielded from crushing legal costs if we want such companies to exist at all. Some insist the existing liability regime needs no modernist cure, and that the answer for all new technologies is patience. This Article observes that no consensus is imminent as long as liability is pegged to a standard of “crashproof” code. The added prospect of cyber-physical injury has not changed the underlying complexities of software development. Imposing damages based on failure to prevent code crashes will not improve software quality, but will impede the rollout of cyber-physical systems. This Article offers two lessons from the “crashworthy” doctrine, a novel tort theory pioneered in the late 1960s in response to a rising epidemic of automobile accidents, which held automakers accountable for unsafe designs that injured occupants during car crashes. The first is that tort liability can be metered on the basis of mitigation, not just prevention. When code crashes are statistically inevitable, cyber-physical manufacturers may be held to have a duty to provide for safer code crashes, rather than no code crashes at all. Second, the crashworthy framework teaches courts to segment their evaluation of code and make narrower findings of liability based solely on whether cyber-physical manufacturers have incorporated adequate software fault tolerance into their designs. Requiring all code to be perfect is impossible, but expecting code to be crashworthy is reasonable.

    Privacy in Pandemic: Law, Technology, and Public Health in the COVID-19 Crisis

    The COVID-19 pandemic has caused millions of deaths and disastrous consequences around the world, with lasting repercussions for every field of law, including privacy and technology. The unique characteristics of this pandemic have precipitated an increase in the use of new technologies, including remote communications platforms, healthcare robots, and medical AI. Public and private actors alike are using new technologies, like heat sensing, and technologically influenced programs, like contact tracing, leading to a rise in government and corporate surveillance in sectors like healthcare, employment, education, and commerce. Advocates have raised the alarm about privacy and civil liberties violations, but the emergency nature of the pandemic has drowned out many concerns. This Article is the first comprehensive account of privacy in pandemic that maps the terrain of privacy impacts related to technology and public health responses to the COVID-19 crisis. Many have written on the general need for better health privacy protections, education privacy protections, consumer privacy protections, and protections against government and corporate surveillance. This Article, however, is the first to examine these problems of privacy and technology comprehensively in light of the pandemic, arguing that the lens of the pandemic exposes the need for both wide-scale and small-scale reform of privacy law. This Article approaches these problems with a focus on technical realities and social salience, and with a critical awareness of digital and political inequities, crafting normative recommendations with these concepts in mind. Understanding privacy in this time of pandemic is critical for law and policymaking in the near future and for the long-term goal of creating a future society that protects both civil liberties and public health. It is also important to create a contemporary scholarly understanding of privacy in pandemic at this moment in time, as a matter of historical record. By examining privacy in pandemic, in the midst of pandemic, this Article seeks to create a holistic scholarly foundation for future work on privacy, technology, public health, and legal responses to global crises.

    Undergraduate Research Excellence Awards 2021

    [No event held. This document was created from the webpage announcement.]

    Safe Social Spaces

    Technologies that mediate social interaction can put our privacy and our safety at risk. Harassment, intimate partner violence and surveillance, data insecurity, and revenge porn are just a few of the harms that bedevil technosocial spaces and their users, particularly users from marginalized communities. This Article seeks to identify the building blocks of safe social spaces, or environments in which individuals can share personal information at low risk of privacy threats. Relying on analogies to offline social spaces—Alcoholics Anonymous meetings, teams of coworkers, and attorney-client relationships—this Article argues that if a social space is defined as an environment characterized by disclosure, then a safe social space is one in which disclosure norms are counterbalanced by equally powerful norms of trust that are both endogenously designed in and exogenously backed by law. Case studies of online social networks and social robots show how both the design of and the law governing technosocial spaces today not only fail to support trust but actively undermine user safety by eroding trust and limiting the law’s regulatory power. The Article concludes with both design and law reform proposals to better build and protect trust and safe social spaces.

    From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation

    The increasing prominence of artificial intelligence (AI) systems in daily life and the evolving capacity of these systems to process data and act without human input raise important legal and ethical concerns. This Article identifies three primary AI actors in the value chain (innovators, providers, and users) and three primary types of AI (automation, augmentation, and autonomy). It then considers responsibility in AI innovation from two perspectives: (i) strict liability claims arising out of the development, commercialization, and use of products with built-in AI capabilities (designated herein as “AI artifacts”); and (ii) an original research study on the ethical practices of developers and managers creating AI systems and AI artifacts. The ethical perspective is important because, at the moment, the law is poised to fall behind technological reality—if it hasn’t already. Considering the liability issues in tandem with ethical perspectives yields a more nuanced assessment of the likely consequences and adverse impacts of AI innovation. Companies should consider both legal and ethical strategies when thinking about their own liability and ways to limit it, as should policymakers considering AI regulation ex ante.