
    The Responsibility Quantification (ResQu) Model of Human Interaction with Automation

    Intelligent systems and advanced automation are involved in information collection and evaluation, in decision-making, and in the implementation of chosen actions. In such systems, human responsibility becomes equivocal. Understanding human causal responsibility is particularly important when intelligent autonomous systems can harm people, as with autonomous vehicles or, most notably, with autonomous weapon systems (AWS). Using Information Theory, we develop a responsibility quantification (ResQu) model of human involvement in intelligent automated systems and demonstrate its application to decisions regarding AWS. The analysis reveals that the human's comparative responsibility for outcomes is often low, even when major functions are allocated to the human. Thus, broadly stated policies of keeping humans in the loop and maintaining meaningful human control are misleading and cannot truly direct decisions on how to involve humans in intelligent systems and advanced automation. The current model is an initial step toward the complex goal of creating a comprehensive responsibility model that will enable quantification of human causal responsibility. It assumes stationarity and full knowledge of the characteristics of the human and the automation, and it ignores temporal aspects. Despite these limitations, it can aid in the analysis of system design alternatives and in policy decisions regarding human responsibility in intelligent systems and advanced automation.
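    The abstract names the Information Theory framing but not the model's formulas. As a rough illustration only, the Python sketch below measures the human's unique contribution as the share of uncertainty in the final action A that remains after conditioning on the automation's indication Z, i.e. H(A|Z)/H(A). The joint distribution, the variable names, and the ratio itself are illustrative assumptions, not the published ResQu definition.

```python
import math
from collections import defaultdict

def entropy(p):
    """Shannon entropy (bits) of a distribution given as {value: prob}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def conditional_entropy(joint):
    """H(A | Z) from a joint distribution P(Z, A) given as {(z, a): prob}."""
    pz = defaultdict(float)
    for (z, _), p in joint.items():
        pz[z] += p
    h = 0.0
    for (z, a), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / pz[z])
    return h

# Toy joint distribution P(Z, A): Z = automation indication, A = final action.
# The operator mostly follows the indication but occasionally overrides it.
joint = {
    ("alert", "stop"): 0.40,
    ("alert", "go"): 0.05,    # override
    ("clear", "go"): 0.45,
    ("clear", "stop"): 0.10,  # override
}

pa = defaultdict(float)
for (_, a), p in joint.items():
    pa[a] += p

h_a = entropy(pa)                         # total uncertainty in the action
h_a_given_z = conditional_entropy(joint)  # uncertainty left after seeing Z

share = h_a_given_z / h_a if h_a > 0 else 0.0
print(f"H(A) = {h_a:.3f} bits, H(A|Z) = {h_a_given_z:.3f} bits, share = {share:.3f}")
```

    Under this toy measure, the more the operator's action is already determined by the automation's indication, the smaller the residual entropy and hence the smaller the human's quantified contribution, which is the qualitative point the abstract makes about humans "in the loop".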

    Objective and Subjective Responsibility of a Control-Room Worker

    When working with AI and advanced automation, human responsibility for outcomes becomes equivocal. We applied a newly developed responsibility quantification model (ResQu) to the real-world setting of a control room in a dairy factory to calculate workers' objective responsibility in a common fault scenario. We compared the results to the subjective assessments made by workers in different roles in the dairy. The capabilities of the automation greatly exceeded those of the human, and the optimal operator should have fully complied with the indications of the automation. Thus, in this case, the operator had no unique contribution, and the objective causal human responsibility was zero. However, outside observers, such as managers, tended to assign much higher responsibility to the operator, in a manner that resembled aspects of the "fundamental attribution error". This, in turn, may lead to unjustifiably holding operators responsible for adverse outcomes in situations in which they rightly trusted the automation and acted accordingly. We demonstrate the use of the ResQu model for the analysis of human causal responsibility in intelligent systems. The model can help calibrate exogenous subjective responsibility attributions, aid system design, and guide policy and legal decisions.
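    Continuing the same hedged sketch from above (again an illustration, not the paper's actual calculation), full compliance means the action is a deterministic function of the automation's indication, so the residual entropy H(A|Z), and with it the illustrative responsibility share, is exactly zero:

```python
import math
from collections import defaultdict

def conditional_entropy(joint):
    """H(A | Z) from a joint distribution P(Z, A) given as {(z, a): prob}."""
    pz = defaultdict(float)
    for (z, _), p in joint.items():
        pz[z] += p
    h = 0.0
    for (z, a), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / pz[z])
    return h

# Full compliance: each indication maps to exactly one action, so nothing
# about the action remains uncertain once the automation has spoken.
compliant = {("alert", "stop"): 0.45, ("clear", "go"): 0.55}
print(conditional_entropy(compliant))  # 0.0 -> zero unique human contribution
```

    This mirrors the abstract's conclusion: when the optimal policy is to always follow the automation, the operator's objectively quantified contribution vanishes, even though observers may still subjectively assign them substantial responsibility.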