
    The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

    Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using Information Theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that human comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and having meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step toward the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge regarding the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and policy decisions regarding human responsibility in intelligent systems and advanced automation.
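    The abstract does not state the model's equations. As a rough illustration of the kind of information-theoretic quantity such a model might use, the sketch below computes the share of outcome uncertainty resolved uniquely by the human, I(O; A | M) / H(O), over a made-up joint distribution of automation output M, human action A, and outcome O. Both the distribution and this specific ratio are assumptions for illustration, not the paper's actual ResQu measure.

    ```python
    import math

    # Hypothetical joint distribution P(m, a, o) over automation output m,
    # human action a, and outcome o. Values are illustrative, not from the paper.
    p = {
        (0, 0, 0): 0.30, (0, 0, 1): 0.05,
        (0, 1, 0): 0.05, (0, 1, 1): 0.10,
        (1, 0, 0): 0.10, (1, 0, 1): 0.05,
        (1, 1, 0): 0.05, (1, 1, 1): 0.30,
    }

    def marginal(dist, keep):
        """Marginalize the joint distribution onto the given index positions."""
        out = {}
        for k, v in dist.items():
            key = tuple(k[i] for i in keep)
            out[key] = out.get(key, 0.0) + v
        return out

    def entropy(dist):
        """Shannon entropy in bits."""
        return -sum(v * math.log2(v) for v in dist.values() if v > 0)

    def cond_entropy(dist, target, given):
        """H(target | given), computed as H(given, target) - H(given)."""
        return entropy(marginal(dist, given + target)) - entropy(marginal(dist, given))

    h_o = entropy(marginal(p, [2]))              # H(O)
    h_o_given_m = cond_entropy(p, [2], [0])      # H(O | M)
    h_o_given_ma = cond_entropy(p, [2], [0, 1])  # H(O | M, A)

    # Share of outcome uncertainty resolved uniquely by the human:
    # I(O; A | M) / H(O)
    responsibility = (h_o_given_m - h_o_given_ma) / h_o
    print(f"human comparative responsibility \u2248 {responsibility:.2f}")
    ```

    Even in this toy distribution, where the human's action is a major function, the human's unique informational contribution to the outcome comes out well below 1, echoing the abstract's observation that comparative responsibility is often low.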

    Driver behaviour with adaptive cruise control

    This paper reports on the evaluation of adaptive cruise control (ACC) from a psychological perspective. It was anticipated that ACC would have an effect upon the psychology of driving: it might make drivers feel they have less control, reduce their trust in the vehicle, and make them less situationally aware, but workload might be reduced and driving might be less stressful. Drivers were asked to drive in a driving simulator under manual and ACC conditions. Analysis of variance techniques were used to determine the effects of workload (i.e. amount of traffic) and feedback (i.e. degree of information from the ACC system) on the psychological variables measured (i.e. locus of control, trust, workload, stress, mental models and situation awareness). The results showed that locus of control and trust were unaffected by ACC, whereas situation awareness, workload and stress were reduced by ACC. Ways of improving situation awareness could include cues to help the driver predict vehicle trajectory and identify conflicts.

    Workload modeling using time windows and utilization in an air traffic control task

    In this paper, we show how to assess human workload for continuous tasks and describe how operator performance is affected by variations in break-work intervals and by different utilizations. A study was conducted examining the effects of different break-work intervals and of utilization as a factor in a mental workload model. We investigated operator performance, measured by operational errors, during continuous event-driven air traffic control tasks with multiple aircraft. To this end, we developed a simple air traffic control (ATC) model aimed at distributing breaks to form different configurations with the same utilization. The presented approach extends prior concepts of workload and utilization, which are based on a simple average utilization, by considering the specific patterns of break-work intervals. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
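    As a minimal sketch of the distinction the abstract draws, assuming utilization is defined as the fraction of total time spent on task, the two hypothetical schedules below have identical average utilization but very different break-work interval patterns. A simple average-utilization measure cannot tell them apart, which is the gap the presented approach addresses.

    ```python
    def utilization(schedule):
        """Average utilization of a schedule given as (work_seconds, break_seconds) pairs."""
        work = sum(w for w, _ in schedule)
        total = sum(w + b for w, b in schedule)
        return work / total

    # Two hypothetical configurations with the same total work and break time:
    bursty = [(48, 12)]                             # one long work bout, one long break
    paced = [(12, 3), (12, 3), (12, 3), (12, 3)]    # short, evenly spaced breaks

    # Both yield 80% utilization, yet impose different workload patterns.
    print(utilization(bursty), utilization(paced))
    ```

    The schedule lengths and the 80% figure are invented for illustration; the study's actual break-work configurations are not given in the abstract.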

    Regulating Highly Automated Robot Ecologies: Insights from Three User Studies

    Highly automated robot ecologies (HARE), or societies of independent autonomous robots or agents, are rapidly becoming an important part of much of the world's critical infrastructure. As with human societies, regulation, wherein a governing body designs rules and processes for the society, plays an important role in ensuring that HARE meet societal objectives. However, to date, a careful study of interactions between a regulator and HARE is lacking. In this paper, we report on three user studies which give insights into how to design systems that allow people, acting as the regulatory authority, to effectively interact with HARE. As in the study of political systems in which governments regulate human societies, our studies analyze how interactions between HARE and regulators are impacted by regulatory power and individual (robot or agent) autonomy. Our results show that regulator power, decision support, and adaptive autonomy can each diminish the social welfare of HARE, and hint at how these seemingly desirable mechanisms can be designed so that they become part of successful HARE.

    Comment: 10 pages, 7 figures, to appear in the 5th International Conference on Human Agent Interaction (HAI-2017), Bielefeld, Germany