3 research outputs found

    A Multidimensional RSSI Based Framework for Autonomous Relay Robots in Harsh Environments

    Robotic tele-operation is essential for many dangerous applications, such as inspection and manipulation in environments hazardous to humans. The current state of the art in robotic tele-operation also shows the need to increase the distance between the operator and the robot while maintaining the safety of the operation. Nowadays, delicate manipulation in hazardous environments is mostly performed by robots designed for applications such as demining or military use, which provide the required level of safety but present a series of technological issues, for example in robot localization, cooperation, or multimodal human-robot interfaces. In fact, these commercial teleoperated robots normally require a point-to-point communication link between the robot and the base station, which limits their operating area. This limitation makes interventions in tunnel environments, such as those at CERN, especially difficult. In this paper, a framework for the design of autonomous relay robots is presented, which provides a chain of mobile relay stations to extend the communication range between the robot and the operator. The robots are able to navigate safely and to move according to the measured signal strength, in order to maximize the signal throughput between the operator and the robot. The framework is based on different dynamic filtering techniques, including Kalman-based ones, which make it possible to predict the signal strength while moving and to react safely to unpredictable environmental changes that might strongly affect the signal coverage. The proposed framework was first demonstrated in simulation and then validated and successfully deployed on different robotic platforms. Preliminary tests, implemented over the Wi-Fi communication layer, were carried out in the CERN facilities.
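The Kalman-based filtering of measured signal strength described above can be sketched with a minimal scalar filter. The noise variances and function name below are illustrative assumptions for the sketch, not parameters or code from the paper:

```python
# Minimal sketch: smoothing noisy RSSI samples (dBm) with a 1D Kalman filter.
# q and r are assumed, illustrative noise variances, not values from the paper.

def kalman_filter_rssi(measurements, q=0.05, r=4.0):
    """Smooth a sequence of RSSI samples with a scalar Kalman filter.

    q: process noise variance (how quickly the true signal may drift)
    r: measurement noise variance (RSSI readings are typically noisy)
    """
    estimate = measurements[0]   # initial state estimate
    p = 1.0                      # initial estimate variance
    smoothed = [estimate]
    for z in measurements[1:]:
        # Predict: the state is modeled as constant, so only uncertainty grows
        p += q
        # Update: blend the prediction with the new measurement
        k = p / (p + r)                  # Kalman gain in [0, 1)
        estimate += k * (z - estimate)   # move toward the measurement
        p *= (1.0 - k)                   # shrink the estimate variance
        smoothed.append(estimate)
    return smoothed
```

Because the gain stays below one, each estimate lies between the previous estimate and the new sample, which damps the fast fading typical of RSSI readings while still tracking slower trends.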

    Enhanced Human–Robot Interface With Operator Physiological Parameters Monitoring and 3D Mixed Reality

    Remote robotic interventions and maintenance tasks are frequently required in hazardous environments. In particular, missions with a redundant mobile manipulator in the world’s most complex machine, the CERN Large Hadron Collider (LHC), are performed in a sensitive underground environment with radioactive or electromagnetic hazards, bringing further challenges in safety and reliability. The mission’s success depends on the robot’s hardware and software, and when the tasks become too unpredictable to execute autonomously, the operators need to make critical decisions. Still, in most current human-machine systems, the state of the human is neglected. In this context, a novel 3D Mixed Reality (MR) human-robot interface with an Operator Monitoring System (OMS) was developed to advance safety and task efficiency with improved spatial awareness, advanced manipulator control, and collision avoidance. However, new techniques could increase the system’s sophistication and add to the operator’s workload and stress. Therefore, for operational validation, the 3D MR interface had to be compared with an operational 2D interface, which has been used in hundreds of interventions. With the 3D MR interface, the execution of precise approach tasks was faster, with no increased workload or physiological response. The new 3D MR techniques improved the teleoperation quality and safety while maintaining similar effects on the operator. The OMS worked jointly with the interface and performed well with operators of varied teleoperation backgrounds facing a stressful real telerobotic scenario in the LHC. The paper contributes to the methodology for human-centred interface evaluation incorporating the user’s physiological state (heart rate, respiration rate and skin electrodermal activity), and combines it with the NASA TLX assessment method, questionnaires, and task execution time. It provides novel approaches to operator state identification, the GUI-OMS software architecture, and the evaluation of the 3D MR techniques. The solutions can be practically applied in mission-critical applications, such as telesurgery, space robotics, uncrewed transport vehicles and semi-autonomous machinery.
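The NASA TLX assessment mentioned above aggregates six subjective workload subscales. A minimal sketch of the unweighted "Raw TLX" scoring variant (a common simplification; the paper does not state which variant it uses) looks like:

```python
# Sketch of Raw NASA-TLX scoring: the unweighted mean of six workload
# subscales, each rated on a 0-100 scale. The full NASA TLX procedure
# additionally derives per-subscale weights from pairwise comparisons.

SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings):
    """Return the overall workload score: the mean of the six subscale ratings."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)
```

Such a scalar workload score can then be compared across interface conditions alongside task execution time and the physiological measurements listed above.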

    Multimodal Multi-User Mixed Reality Human–Robot Interface for Remote Operations in Hazardous Environments

    In hazardous environments, where conditions present risks to humans, maintenance and interventions are often performed with teleoperated remote systems or mobile robotic manipulators to avoid human exposure to danger. The increasing need for safe and efficient teleoperation requires advanced environmental awareness and collision avoidance. Current screen-based 2D or 3D interfaces do not allow the operator to fully immerse in the controlled scenario. This problem can be addressed with emerging Mixed Reality (MR) technologies and Head-Mounted Devices (HMDs) that offer stereoscopic immersion and interaction with virtual objects. Such human-robot interfaces have not yet been demonstrated in telerobotic interventions in particle physics accelerators. Moreover, the operations often require several experts to collaborate, which increases the system complexity and requires sharing an Augmented Reality (AR) workspace. Multi-user mobile telerobotics in hazardous environments with shared control in AR has not yet been approached in the state of the art. In this work, the developed MR human-robot interface using an AR HMD is presented. The interface adapts to the constrained wireless networks in particle accelerator facilities and provides reliable high-precision interaction and specialized visualization. The multimodal operation uses hand, eye and user-motion tracking and voice recognition for control, and offers video, 3D point-cloud and audio feedback from the robot. Multiple experts can collaborate in the AR workspace locally or remotely, and share or monitor the robot’s control. Ten operators tested the interface in intervention scenarios at the European Organization for Nuclear Research (CERN), with complete network characterization and measurements, to determine whether operational requirements were met and whether the network architecture could support single- and multi-user communication load. The interface system proved to be operationally ready at Technology Readiness Level (TRL) 8 and was validated through successful demonstrations in single- and multi-user missions. Some system limitations and areas for further work were identified, such as optimizing the network architecture for multi-user scenarios, or high-level interface actions applying automatic interaction strategies depending on network conditions.