
    Effects of alarms on control of robot teams

    Annunciator driven supervisory control (ADSC) is a widely used technique for directing human attention to control systems otherwise beyond their capabilities. ADSC requires associating abnormal parameter values with alarms in such a way that operator attention can be directed toward the involved subsystems or conditions. This is hard to achieve in multirobot control because it is difficult to distinguish abnormal conditions in the states of a robot team. For largely independent tasks such as foraging, however, self-reflection can serve as a basis for alerting the operator to abnormalities of individual robots. While the search for targets remains unalarmed, the resulting system approximates ADSC. The described experiment compares a control condition, in which operators perform a multirobot urban search and rescue (USAR) task without alarms, with ADSC (freely annunciated) and with a decision aid that limits operator workload by showing only the top alarm. No differences were found in area searched or victims found; however, operators in the freely annunciated condition were faster in detecting both the annunciated failures and victims entering their cameras' fields of view. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
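    The "top alarm" decision aid described in this abstract can be sketched briefly. The Python fragment below is a minimal, hypothetical illustration (the TopAlarmAid class and its methods are assumptions, not from the paper): annunciated failures accumulate in a priority queue, but the display surfaces only the single most urgent alarm, limiting operator workload relative to free annunciation.

    import heapq

    class TopAlarmAid:
        """Workload-limiting decision aid: display only the top alarm."""

        def __init__(self):
            # Entries are (priority, robot_id, condition); lower = more urgent.
            self._queue = []

        def annunciate(self, priority, robot_id, condition):
            # Each abnormal parameter value is queued as an alarm.
            heapq.heappush(self._queue, (priority, robot_id, condition))

        def acknowledge(self):
            # Operator clears the shown alarm; the next most urgent surfaces.
            if self._queue:
                heapq.heappop(self._queue)

        def display(self):
            # Unlike free annunciation, at most one alarm is ever shown.
            return self._queue[0] if self._queue else None

    # Usage: the stuck robot outranks the low-battery robot on the display.
    aid = TopAlarmAid()
    aid.annunciate(2, "robot_3", "low battery")
    aid.annunciate(1, "robot_7", "stuck in rubble")
    print(aid.display())  # (1, 'robot_7', 'stuck in rubble')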

    Connectivity Differences between Human Operators of Swarms and Bandwidth Limitations

    Human interaction with robot swarms (HSI) is a young field with very few user studies that explore operator behavior, and all of these studies assume perfect communication between the operator and the swarm. A key challenge in using swarm robotic systems for human-supervised tasks is to understand human-swarm interaction in the presence of limited communication bandwidth, a constraint that arises in many practical scenarios. In this paper, we present results of human-subject experiments designed to study the effect of bandwidth limitations on human-swarm interaction. We consider three levels of bandwidth availability in a swarm foraging task. The lowest bandwidth condition performs poorly, but the medium and high bandwidth conditions both perform well. In the medium bandwidth condition, we display useful aggregated swarm information (such as swarm centroid and spread) to compress the swarm state information. We also observe interesting operator behavior and adaptation of operators' swarm reactions.
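    As a concrete illustration of the medium-bandwidth aggregation this abstract describes, the sketch below compresses per-robot positions into a centroid and spread. It is a minimal example under assumed conventions (2-D positions in a NumPy array; the function name is illustrative, not from the paper):

    import numpy as np

    def aggregate_swarm_state(positions):
        """Compress an N x 2 array of robot positions into summary statistics.

        Returns the swarm centroid and spread (mean distance from the
        centroid): three floats instead of 2N coordinates.
        """
        centroid = positions.mean(axis=0)
        spread = np.linalg.norm(positions - centroid, axis=1).mean()
        return centroid, spread

    # Example: five robots reduce to a 2-D centroid plus a scalar spread.
    positions = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0], [0.5, 2.0], [1.5, 1.5]])
    print(aggregate_swarm_state(positions))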

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that impact human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy within the interaction to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and the automation's recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions: how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states are. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they give HSI designers an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia.
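    To make the arbitration question concrete, here is a minimal, hypothetical sketch of one way a mixed-initiative arbiter could blend a human preference with an automation recommendation. The linear weighting by workload is an assumption chosen for illustration, not the paper's method:

    def select_autonomy_level(human_pref, automation_rec, workload):
        """Blend the operator's preferred autonomy level with the automation's
        recommendation, leaning toward the automation as workload rises.

        Levels run 0 (fully manual) to 5 (fully autonomous); workload in [0, 1].
        """
        blended = (1 - workload) * human_pref + workload * automation_rec
        return max(0, min(5, round(blended)))

    # Example: an overloaded operator (workload 0.8) who prefers manual
    # control (level 1) while the automation recommends high autonomy
    # (level 4) lands at level 3.
    print(select_autonomy_level(human_pref=1, automation_rec=4, workload=0.8))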

    SARSCEST (human factors)

    People interact with the processes and products of contemporary technology; individuals are affected by these in various ways, and individuals shape them. Such interactions come under the label 'human factors'. For those to whom the term is relatively unfamiliar, its domain includes both an applied science and applications of knowledge: it means both research and development, with implications of research both for basic science and for development. It encompasses not only design and testing but also training and personnel requirements, even though some unwisely try to split these apart, both by name and institutionally. The territory includes more than performance at work, though concentration on that aspect, epitomized in the derivation of the term ergonomics, has overshadowed human factors interest in interactions between technology and the home, health, safety, consumers, children and later life, the handicapped, sports and recreation, education, and travel. Two aspects of technology considered most significant for work performance, systems and automation, and several approaches to these, are discussed.

    Regulating Highly Automated Robot Ecologies: Insights from Three User Studies

    Highly automated robot ecologies (HARE), or societies of independent autonomous robots or agents, are rapidly becoming an important part of much of the world's critical infrastructure. As with human societies, regulation, wherein a governing body designs rules and processes for the society, plays an important role in ensuring that HARE meet societal objectives. However, to date, a careful study of interactions between a regulator and HARE is lacking. In this paper, we report on three user studies which give insights into how to design systems that allow people, acting as the regulatory authority, to effectively interact with HARE. As in the study of political systems in which governments regulate human societies, our studies analyze how interactions between HARE and regulators are impacted by regulatory power and individual (robot or agent) autonomy. Our results show that regulator power, decision support, and adaptive autonomy can each diminish the social welfare of HARE, and hint at how these seemingly desirable mechanisms can be designed so that they become part of successful HARE.
    Comment: 10 pages, 7 figures, to appear in the 5th International Conference on Human Agent Interaction (HAI-2017), Bielefeld, Germany.

    Neglect Benevolence in Human-Swarm Interaction with Communication Latency

    In practical applications of robot swarms with bio-inspired behaviors, a human operator will need to exert control over the swarm to fulfill the mission objectives. In many operational settings, human operators are remotely located and the communication environment is harsh; hence there is some latency in transferring information (or control commands) between the human and the swarm. In this paper, we conduct human-swarm interaction experiments to investigate the effects of communication latency on the performance of a human-swarm system in a swarm foraging task. We develop and investigate the concept of neglect benevolence, wherein a human operator allows the swarm to evolve on its own and stabilize before giving new commands. Our experimental results indicate that operators exploited neglect benevolence in different ways to develop successful strategies in the foraging task. Furthermore, we show experimentally that the use of a predictive display can help mitigate the adverse effects of communication latency.
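    The predictive display this abstract mentions can be sketched very simply: instead of rendering the stale, latency-delayed telemetry, the interface extrapolates the last known swarm state forward by the estimated delay. The constant-velocity model and names below are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def predict_swarm_positions(positions, velocities, latency_s):
        """Extrapolate last-known robot states forward by the communication
        latency, so the operator sees an estimate of the swarm's current
        state rather than a snapshot that is latency_s seconds old.

        positions, velocities: N x 2 arrays from the most recent telemetry.
        """
        return positions + velocities * latency_s

    # Example: telemetry is 2 s old; show the extrapolated positions instead.
    pos = np.array([[0.0, 0.0], [1.0, 1.0]])
    vel = np.array([[0.1, 0.0], [0.0, -0.2]])
    print(predict_swarm_positions(pos, vel, latency_s=2.0))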

    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, `SITL = HITL + Social Contract.'
    Comment: (in press), Ethics of Information Technology, 2017.