
    Human Swarm Interaction: An Experimental Study of Two Types of Interaction with Foraging Swarms

    In this paper we present the first study of human-swarm interaction comparing two fundamental types of interaction, coined intermittent and environmental. These types are exemplified by two control methods, selection and beacon control, made available to a human operator to control a foraging swarm of robots. Selection and beacon control differ with respect to their temporal and spatial influence on the swarm and enable an operator to generate different strategies from the basic behaviors of the swarm. Selection control requires an active selection of groups of robots, while beacon control exerts an influence on nearby robots within a set range. Both control methods are implemented in a testbed in which operators solve an information foraging problem by utilizing a set of swarm behaviors. The robotic swarm has only local communication and sensing capabilities, and the number of robots in the swarm ranges from 50 to 200. Operator performance for each control method is compared in a series of missions in environments ranging from obstacle-free to cluttered with structured obstacles. In addition, performance is compared to simple and advanced autonomous swarms. Thirty-two participants were recruited for the study, and the autonomous swarm algorithms were tested in repeated simulations. Our results showed that selection control scales better to larger swarms and generally outperforms beacon control. Operators utilized different swarm behaviors with different frequency across control methods, suggesting an adaptation of strategy induced by the choice of control method. Simple autonomous swarms outperformed human operators in open environments, but operators adapted better to complex environments with obstacles, although human-controlled swarms fell short of task-specific benchmarks under all conditions. Our results reinforce the importance of understanding and choosing appropriate types of human-swarm interaction when designing swarm systems, in addition to choosing appropriate swarm behaviors.
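    The distinction between the two control methods can be illustrated with a minimal sketch. The function names, coordinates, and radius below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
import math

def beacon_influence(robot_positions, beacon, radius):
    """Environmental control: return indices of robots that happen to be
    within a set range of a beacon placed in the environment."""
    bx, by = beacon
    return [i for i, (x, y) in enumerate(robot_positions)
            if math.hypot(x - bx, y - by) <= radius]

def selection_influence(robot_positions, selection_box):
    """Intermittent control: return indices of robots inside a selection
    box actively drawn by the operator; a command applies once to this group."""
    (x0, y0), (x1, y1) = selection_box
    return [i for i, (x, y) in enumerate(robot_positions)
            if x0 <= x <= x1 and y0 <= y <= y1]

robots = [(0, 0), (3, 4), (10, 10)]
print(beacon_influence(robots, beacon=(0, 0), radius=5))    # [0, 1]
print(selection_influence(robots, ((9, 9), (11, 11))))      # [2]
```

    The key difference the study probes is visible even here: the beacon affects whichever robots pass within range over time, while selection requires the operator to explicitly pick a group at one moment.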

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. There are several important advantages of automation in surgery, including increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities for interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Embodied Evolution in Collective Robotics: A Review

    This paper provides an overview of evolutionary robotics techniques applied to on-line distributed evolution for robot collectives -- namely, embodied evolution. It provides a definition of embodied evolution as well as a thorough description of the underlying concepts and mechanisms. The paper also presents a comprehensive summary of research published in the field since its inception (1999-2017), providing various perspectives to identify the major trends. In particular, we identify a shift from considering embodied evolution as a parallel search method within small robot collectives (fewer than 10 robots) to embodied evolution as an on-line distributed learning method for designing collective behaviours in swarm-like collectives. The paper concludes with a discussion of applications and open questions, providing a milestone for past research and an inspiration for future research. Comment: 23 pages, 1 figure, 1 table
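    The core mechanism the review surveys, in which each robot evolves its controller on-line using only locally available neighbours, can be sketched as follows. The data structures, mutation operator, and fitness function are illustrative assumptions, not any specific algorithm from the review:

```python
import random

random.seed(0)  # deterministic for the example

def mutate(genome, sigma=0.1):
    # Gaussian perturbation of each gene of a real-valued genome
    return [g + random.gauss(0, sigma) for g in genome]

def embodied_evolution_step(robots, neighbours_of, fitness):
    """One decentralised update: each robot compares itself only with
    robots currently in communication range and adopts a mutated copy
    of the fittest neighbour's genome when that neighbour is fitter."""
    new_genomes = {}
    for r, genome in robots.items():
        candidates = [(fitness(robots[n]), robots[n]) for n in neighbours_of[r]]
        candidates.append((fitness(genome), genome))  # self is always a candidate
        _, best = max(candidates, key=lambda c: c[0])
        new_genomes[r] = mutate(best) if best is not genome else genome
    return new_genomes

# Three robots with 1-gene genomes; fitness favours genes near zero
robots = {"a": [1.0], "b": [0.1], "c": [2.0]}
neighbours = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
fitness = lambda g: -abs(g[0])
new = embodied_evolution_step(robots, neighbours, fitness)
```

    The point of the sketch is that there is no central population: selection pressure emerges from pairwise local exchanges, which is what distinguishes embodied evolution from off-line evolutionary robotics.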

    Escape from the factory of the robot monsters: agents of change

    Purpose: The increasing use of robotics within modern factories and workplaces not only sees us becoming more dependent on this technology but also introduces innovative ways by which humans interact with complex systems. As agent-based systems become more integrated into work environments, the traditional human team becomes more integrated with agent-based automation and, in some cases, autonomous behaviours. This paper discusses these interactions in terms of team composition and how a human-agent collective can share goals via the delegation of authority between human and agent team members.
    Design/methodology/approach: This paper highlights the increasing integration of robotics in everyday life and examines how novel teams may be constructed with the use of intelligent systems and autonomous agents.
    Findings: Areas of human factors and human-computer interaction are used to discuss the benefits and limitations of human-agent teams.
    Research limitations/implications: There is little research on human-robot (H-R) teamwork, especially from a human factors perspective.
    Practical implications: Advancing our understanding of the H-R team (and associated intelligent agent systems) will assist in the integration of such systems in everyday practices.
    Social implications: H-R teams raise a great deal of social and organisational issues that need further exploring. Only through understanding this context can advanced systems be fully realised.
    Originality/value: This paper is multidisciplinary, drawing on areas of psychology, computer science, robotics and human-computer interaction. Specific attention is given to the emerging field of autonomous software agents, which are growing in use. This paper discusses the uniqueness of the human-agent teaming that results when human and agent members share a common goal within a team.

    Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives:
    1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention.
    2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused.
    3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited.
    4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable, or find it very difficult, to determine a) the current state and changes of state of the automation or robot, b) the current state and changes of state of the system being controlled or acted on, and c) what actions by human or by system had what effects.
    5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and operating environments of both the human and the robotic agent, then effective human-robot coordination cannot be achieved.

    2012 Alabama Lunabotics Systems Engineering Paper

    Excavation will hold a key role for future lunar missions. NASA has stated that "advances in lunar regolith mining have the potential to significantly contribute to our nation's space vision and NASA space exploration operations." [1]. The Lunabotics Mining Competition is an event hosted by NASA that is meant to encourage "the development of innovative lunar excavation concepts from universities which may result in clever ideas and solutions which could be applied to an actual lunar excavation device or payload." [2]. Teams entering the competition must "design and build a remote controlled or autonomous excavator, called a lunabot, that can collect and deposit a minimum of 10 kilograms of lunar simulant within 10 minutes." [2]. While excavation will play an important part in lunar missions, there will still be many other tasks that would benefit from robotic assistance, and an excavator might not be as well suited for these tasks as other types of robots. For example, a lightweight rover would do well with reconnaissance, and a mobile gripper arm would be fit for manipulation, while an excavator would be comparatively clumsy and slow in both cases. Even within the realm of excavation it would be beneficial to have different types of excavators for different tasks, as there are on Earth. The Alabama Lunabotics Team at the University of Alabama has made it their goal to design and build a robot that could not only compete in the Lunabotics Mining Competition, but also serve as a multipurpose tool for future NASA missions. The resulting 2010-2011 robot was named the Modular Omnidirectional Lunar Excavator (MOLE). Using the Systems Engineering process and building on two years of Lunabotics experience, the 2011-2012 Alabama Lunabotics team (Team NASACAR) has improved the MOLE 1.0 design and optimized it for the 2012 Lunabotics Competition rules [1]. A CAD model of MOLE 2.0 can be seen below in Fig. 1.