
    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that applies a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather visual and chemical data in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers operating on it.

    An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications

    We propose a multi-step evaluation schema designed to help procurement agencies and others examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.

    Developing a Framework for Semi-Autonomous Control


    Command and Control Systems for Search and Rescue Robots

    The novel application of unmanned systems in the domain of humanitarian Search and Rescue (SAR) operations has created a need to develop specific multi-Robot Command and Control (RC2) systems. This societal application of robotics requires human-robot interfaces for controlling a large fleet of heterogeneous robots deployed in multiple domains of operation (ground, aerial and marine). This chapter provides an overview of the Command, Control and Intelligence (C2I) system developed within the scope of Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS). The life cycle of the system begins with a description of use cases and deployment scenarios developed in collaboration with SAR teams as end-users. This is followed by an illustration of the system design and architecture, the core technologies used in implementing the C2I, and the iterative integration phases with field deployments for evaluating and improving the system. The main subcomponents consist of a central Mission Planning and Coordination System (MPCS), field Robot Command and Control (RC2) subsystems with a portable force-feedback exoskeleton interface for robot-arm tele-manipulation, and field mobile devices. The distribution of these C2I subsystems and their communication links for unmanned SAR operations is described in detail. Field demonstrations of the C2I system with SAR personnel assisted by unmanned systems provide an outlook for integrating such systems into mainstream SAR operations in the future.

    Cooperative and Multimodal Capabilities Enhancement in the CERNTAURO Human–Robot Interface for Hazardous and Underwater Scenarios

    The use of remote robotic systems for inspection and maintenance in hazardous environments is a priority for all tasks potentially dangerous for humans. However, currently available robotic systems lack the level of usability that would allow inexperienced operators to accomplish complex tasks. Moreover, the task's complexity increases drastically when a single operator is required to control multiple remote agents (for example, when picking up and transporting large objects). In this paper, a system allowing an operator to prepare and configure cooperative behaviours for multiple remote agents is presented. The system is part of a human–robot interface that was designed at CERN, the European Organization for Nuclear Research, to perform remote interventions in its particle accelerator complex, as part of the CERNTAURO project. In this paper, the modalities of interaction with the remote robots are presented in detail. The multimodal user interface enables the user to activate assisted cooperative behaviours according to a mission plan. The multi-robot interface has been validated at CERN on its Large Hadron Collider (LHC) mockup using a team of two mobile robotic platforms, each equipped with a robotic manipulator. Moreover, great similarities were identified between the CERNTAURO and the TWINBOT projects, which aim to create usable robotic systems for underwater manipulation. Therefore, the cooperative behaviours were also validated in a multi-robot pipe-transport scenario in a simulated underwater environment, experimenting with more advanced vision techniques. The cooperative teleoperation can be coupled with additional assisted tools such as vision-based tracking, grasping determination of metallic objects, and communication protocol design. The results show that the cooperative behaviours enable a single user to carry out a robotic intervention with more than one robot in a safer way.

    SORA Methodology for Multi-UAS Airframe Inspections in an Airport

    Deploying Unmanned Aircraft Systems (UAS) in safety- and business-critical operations requires demonstrating compliance with applicable regulations and a comprehensive understanding of the residual risk associated with the UAS operation. To support these activities and enable the safe deployment of UAS into civil airspace, the European Union Aviation Safety Agency (EASA) has established a UAS regulatory framework that mandates the execution of a safety risk assessment for UAS operations in order to gain authorization to carry out certain types of operations. Driven by this framework, the Joint Authorities for Rulemaking on Unmanned Systems (JARUS) released the Specific Operation Risk Assessment (SORA) methodology, which guides the systematic risk assessment of UAS operations. However, existing work on SORA and its applications focuses mainly on single-UAS operations, offering limited support for assuring operations conducted with multiple UAS and with autonomous features. Therefore, the work presented in this paper analyzes the application of SORA to a multi-UAS airframe inspection (AFI) operation, which involves deploying multiple UAS with autonomous features inside an airport. We present the decision-making process of each SORA step and its application to a multiple-UAS scenario. The results show that the procedures and safety features included in the multi-AFI operation, such as workspace segmentation, the proposed independent multi-UAS AFI crew, and the mitigation actions, provide confidence that the operation can be conducted safely and can receive a positive evaluation from the competent authorities. We also present our key findings from the application of SORA and discuss how it can be extended to better support multi-UAS operations.
    Unión Europea 10101725

    Autonomous Wristband Placement in a Moving Hand for Victims in Search and Rescue Scenarios With a Mobile Manipulator.

    In this letter, we present an autonomous method for placing a sensorized wristband on victims in a Search-And-Rescue (SAR) scenario. For this purpose, an all-terrain mobile robot carries a mobile manipulator whose End-Effector (EE) is equipped with a detachable sensorized wristband. The wristband consists of two links with a shared shaft and a spring. This configuration allows the wristband to remain fixed to the EE while moving and to close around the victim's forearm once contact is made. The method has two differentiated phases: i) the visual moving-hand tracking phase, where a 3D vision system detects the victim's hand pose while the robotic manipulator tracks it with a Model Predictive Controller (MPC); ii) the haptic force-controlled phase, where the wristband is placed around the victim's forearm while the exerted forces are controlled. The wristband design is also discussed, considering the magnitude of the force needed for the attachment and the torque the wristband exerts on the forearm. Two experiments are carried out: one in the laboratory to evaluate the performance of the method, and a second in a SAR scenario, with the robotic manipulator integrated into the all-terrain mobile robot. Results show a 97.4% success rate in the wristband placement procedure and a good performance of the whole system in a large-scale disaster exercise.
    Plan Propio de la Universidad de Málaga, y Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, RTI2018-093421-B-I00. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
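    The tracking phase described above relies on a Model Predictive Controller following a moving hand pose. As a rough illustration of the receding-horizon idea only (a hypothetical 1D double-integrator toy, a sampled acceleration grid, and invented gains and limits, not the authors' implementation), each control step can roll candidate inputs forward over a short horizon and keep the cheapest:

    ```python
    def mpc_step(pos, vel, target, horizon=10, dt=0.05):
        """Naive sampling-based MPC: try a grid of constant accelerations,
        simulate each over the horizon, return the lowest-cost one."""
        accels = [i * 0.2 - 2.0 for i in range(21)]   # candidate inputs, -2..+2 m/s^2
        best_a, best_cost = 0.0, float("inf")
        for a in accels:
            p, v, cost = pos, vel, 0.0
            for _ in range(horizon):                  # forward-simulate the model
                v += a * dt
                p += v * dt
                cost += (p - target) ** 2 + 0.01 * a * a  # tracking error + effort
            if cost < best_cost:
                best_a, best_cost = a, cost
        return best_a

    # Track a slowly drifting target (a stand-in for the detected hand position).
    pos, vel, dt = 0.0, 0.0, 0.05
    for k in range(200):
        target = 0.5 + 0.001 * k
        a = mpc_step(pos, vel, target)   # apply only the first input, then re-plan
        vel += a * dt
        pos += vel * dt
    ```

    A real implementation would solve a constrained quadratic program over the full manipulator state at each step; the structure (predict over a horizon, apply the first input, re-plan) is the same.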