    The Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles

    Rapid developments in sensors, computing, and robotics, including power, kinetics, control, telecommunication, and artificial intelligence, have presented opportunities to further integrate sophisticated automation across society. With these opportunities come questions about the ability of current laws and policies to protect important social values that new technologies may threaten. As sophisticated automation moves beyond the cages of factories and cockpits, the need for a legal approach suitable to guide an increasingly automated future becomes more pressing. This Article analyzes examples of legal approaches to automation thus far by legislative, administrative, judicial, state, and international bodies. The case studies reveal an interesting irony: while automation regulation is intended to protect and promote human values, by focusing on the capabilities of the automation, this approach results in less protection of human values. The irony is similar to those pointed out by Lisanne Bainbridge in 1983, when she described how designing automation to improve the life of the operator using an automation-centered approach actually made the operator's life worse and more difficult. The ironies that result from automation-centered legal approaches are a product of the neglect of the sociotechnical nature of automation: the relationships between man and machine are situated and interdependent, humans will always be in the loop, and reactive policies ignore the need for general guidance for ethical and accountable automation design and implementation. Like system engineers three decades ago, policymakers must adjust the focus of the legal treatment of automation to recognize the interdependence of man and machine, to avoid the ironies of automation law, and to meet the goals of ethical integration. The Article proposes that the existing models utilized for safe and actual implementation of automated system design be supplemented with principles to guide ethical and sociotechnical legal approaches to automation.

    Automotive automation: Investigating the impact on drivers' mental workload

    Recent advances in technology have meant that an increasing number of vehicle driving tasks are becoming automated. Such automation poses new problems for the ergonomist. Of particular concern in this paper are the twofold effects of automation on mental workload: novel technologies could increase attentional demand and workload; alternatively, one could argue that fewer driving tasks will lead to the problem of reduced attentional demand and driver underload. A brief review of previous research is presented, followed by an overview of current research taking place in the Southampton Driving Simulator. Early results suggest that automation does reduce workload, and that underload is indeed a problem, with a significant proportion of drivers unable to effectively reclaim control of the vehicle in an automation failure scenario. Ultimately, this research and a subsequent program of studies will be interpreted within the framework of a recently proposed theory of action, with a view to maximizing both the theoretical and applied benefits of this domain.

    Litigating Partial Autonomy

    Who is responsible when a semi-autonomous vehicle crashes? Automobile manufacturers claim that because Advanced Driver Assistance Systems (ADAS) require constant human oversight even when autonomous features are active, the driver is always fully responsible when supervised autonomy fails. This Article argues that the automakers’ position is likely wrong both descriptively and normatively. On the descriptive side, current products liability law offers a pathway toward shared legal responsibility. Automakers, after all, have engaged in numerous marketing efforts to gain public trust in automation features. When drivers’ trust turns out to be misplaced, drivers are not always able to react in a timely fashion to retake control of the car. In such cases, the automaker is likely to face primary liability, perhaps with a reduction for the driver’s comparative fault. On the normative side, this Article argues that the nature of modern semi-autonomous systems requires the human and machine to engage in a collaborative driving endeavor. The human driver should not bear full liability for the harm arising from this shared responsibility. As lawsuits involving partial autonomy increase, the legal system will face growing challenges in incentivizing safe product development, allocating liability in line with fair principles, and leaving room for a nascent technology to improve in ways that, over time, will add substantial safety protections. The Article develops a framework for considering how those policy goals can play a role in litigation involving autonomous features. It offers three key recommendations: (1) that courts consider collaborative driving as a system when allocating liability; (2) that the legal system recognize and encourage regular software updates for vehicles; and (3) that customers pursue fraud and warranty claims when manufacturers overstate their autonomous capabilities. Claims for economic damages can encourage manufacturers to internalize the cost of product defects before, rather than after, their customers suffer serious physical injury.

    Humans in the Loop

    From lethal drones to cancer diagnostics, humans are increasingly working with complex and artificially intelligent algorithms to make decisions which affect human lives, raising questions about how best to regulate these “human-in-the-loop” systems. We make four contributions to the discourse. First, contrary to the popular narrative, law is already profoundly and often problematically involved in governing human-in-the-loop systems: it regularly affects whether humans are retained in or removed from the loop. Second, we identify “the MABA-MABA trap,” which occurs when policymakers attempt to address concerns about algorithmic incapacities by inserting a human into a decision-making process. Regardless of whether the law governing these systems is old or new, inadvertent or intentional, it rarely accounts for the fact that human-machine systems are more than the sum of their parts: they raise their own problems and require their own distinct regulatory interventions. But how to regulate for success? Our third contribution is to highlight the panoply of roles humans might be expected to play, to assist regulators in understanding and choosing among the options. For our fourth contribution, we draw on legal case studies and synthesize lessons from human factors engineering to suggest regulatory alternatives to the MABA-MABA approach. Namely, rather than carelessly placing a human in the loop, policymakers should regulate the human-in-the-loop system.

    INDUSTRY 4.0: SOCIAL CHALLENGES AND RISKS

    Get PDF
    Industry 4.0 is a term first introduced by the German government during the Hannover Messe fair in 2011, when it launched an initiative to support German industry in tackling future challenges. It refers to the 4th industrial revolution, in which disruptive digital technologies, such as the Internet of Things (IoT), Internet of Everything (IoE), robotics, virtual reality (VR), and artificial intelligence (AI), are impacting industrial production. The new industrial paradigms of Industry 4.0 demand a socio-technical evolution of the human role in production systems, in which all working activities of the value chain will be performed with smart approaches. However, the automation of processes can have unpredictable effects. Nowadays, in a smart factory, the role of human operators is often only to control and supervise the automated processes. This new condition of workers has brought forth a paradox: malfunctions or irregularities in the automated production process are rare but challenging. This article discusses the challenges and risks that the 4th industrial revolution is bringing to society. It introduces the concept of the Irony of Automation, which holds that the more reliable an automated system, the less human operators have to do and, consequently, the less attention they pay to the system while it is operating. The authors go on to discuss the human-centered approach to automation, whose purpose is not necessarily to automate previously manual functions but, rather, to enhance user effectiveness and reduce errors.