2,593 research outputs found

    Vehicle Teleoperation Interfaces


    Assigned responsibility for remote robot operation

    The remote control of robots, known as teleoperation, is a non-trivial task: the operator must make decisions based on the information the robot relays about its own status and its surroundings, which places the operator under significant cognitive load. One solution is to share this load between the human operator and automated operators. This paper builds on the idea of adjustable autonomy, proposing Assigned Responsibility, a way of clearly delimiting control responsibility over one or more robots between human and automated operators. An architecture for implementing Assigned Responsibility is presented.
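    The core idea of the abstract above, explicit delimitation of control responsibility rather than blended control, can be illustrated with a minimal sketch. All class and method names here are hypothetical, invented for illustration; the paper's actual architecture is not described in this listing.

```python
from dataclasses import dataclass, field
from enum import Enum

class OperatorKind(Enum):
    HUMAN = "human"
    AUTOMATED = "automated"

@dataclass
class Robot:
    name: str
    responsible: OperatorKind = OperatorKind.HUMAN

@dataclass
class ResponsibilityRegistry:
    """Tracks which operator is responsible for each robot.

    Each robot has exactly one responsible operator at a time, and
    responsibility is transferred explicitly rather than shared.
    """
    robots: dict = field(default_factory=dict)

    def register(self, robot: Robot) -> None:
        self.robots[robot.name] = robot

    def assign(self, robot_name: str, operator: OperatorKind) -> None:
        # Explicit, auditable hand-over of control responsibility.
        self.robots[robot_name].responsible = operator

    def command_allowed(self, robot_name: str, operator: OperatorKind) -> bool:
        # Only the currently responsible operator may command the robot.
        return self.robots[robot_name].responsible is operator
```

    The point of the sketch is the invariant: at any moment, exactly one operator, human or automated, holds responsibility for a given robot, so there is never ambiguity about who is in control.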

    Diverse applications of advanced man-telerobot interfaces

    Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential applications wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six-degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.

    Sensor Augmented Virtual Reality Based Teleoperation Using Mixed Autonomy

    A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enable temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared with other interfaces.
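    The mixed-autonomy behaviour described above, teleoperation until an obstacle forces a temporary autonomous detour, followed by automatic re-establishment of teleoperated control, amounts to a small state machine. The sketch below is an assumed illustration of that switching logic, not the paper's implementation; the threshold and signal names are invented.

```python
from enum import Enum

class Mode(Enum):
    TELEOP = "teleoperated"
    AUTONOMOUS = "autonomous"

class MixedAutonomyController:
    """Switches between teleoperated and autonomous modes.

    The vehicle runs teleoperated until the local collision risk
    (e.g. from an a priori risk map plus stereo obstacle detection)
    exceeds a threshold, detours autonomously, and automatically
    returns control to the operator once the obstacle is cleared.
    """

    def __init__(self, risk_threshold: float = 0.7):
        self.mode = Mode.TELEOP
        self.risk_threshold = risk_threshold

    def step(self, local_risk: float, obstacle_cleared: bool) -> Mode:
        if self.mode is Mode.TELEOP and local_risk > self.risk_threshold:
            self.mode = Mode.AUTONOMOUS      # take over for local avoidance
        elif self.mode is Mode.AUTONOMOUS and obstacle_cleared:
            self.mode = Mode.TELEOP          # re-establish teleoperation
        return self.mode
```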

    RUR53: an Unmanned Ground Vehicle for Navigation, Recognition and Manipulation

    This paper proposes RUR53: an Unmanned Ground Vehicle able to autonomously navigate through, identify, and reach areas of interest, and there recognize, localize, and manipulate work tools to perform complex manipulation tasks. The proposed contribution includes a modular software architecture in which each module solves specific sub-tasks and which can be easily extended to satisfy new requirements. Included indoor and outdoor tests demonstrate the capability of the proposed system to autonomously detect a target object (a panel) and precisely dock in front of it while avoiding obstacles. They show it can autonomously recognize and manipulate target work tools (i.e., wrenches and valve stems) to accomplish complex tasks (i.e., use a wrench to rotate a valve stem). A specific case study is described in which the proposed modular architecture allows easy switching to a semi-teleoperated mode. The paper exhaustively describes both the hardware and software setup of RUR53, its performance when tested at the 2017 Mohamed Bin Zayed International Robotics Challenge, and the lessons we learned when participating in this competition, where we ranked third in the Grand Challenge in collaboration with the Czech Technical University in Prague, the University of Pennsylvania, and the University of Lincoln (UK). Comment: This article has been accepted for publication in Advanced Robotics, published by Taylor & Francis.

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. Thus, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).

    Teleoperating a mobile manipulator and a free-flying camera from a single haptic device

    © 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
    The paper presents a novel teleoperation system that allows the simultaneous and continuous command of a ground mobile manipulator and a free-flying camera, implemented using a UAV, from which the operator can monitor the task execution in real time. The proposed decoupled position and orientation workspace mapping allows the teleoperation, from a single haptic device with a bounded workspace, of a complex robot with an unbounded workspace. When the operator reaches the position and orientation boundaries of the haptic workspace, linear and angular velocity components are respectively added to the inputs of the mobile manipulator and the flying camera. A user study in a virtual environment was conducted to evaluate the performance and the workload on the user before and after proper training. Analysis of the data shows that the system's complexity is not an obstacle to efficient performance. This is a first step towards the implementation of a teleoperation system with a real mobile manipulator and a low-cost quadrotor as the free-flying camera. Accepted version.
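    The workspace-mapping idea in the abstract above, position control inside the bounded haptic workspace and an added velocity component once the device reaches its boundary, can be sketched for the linear case as follows. This is an assumed illustration, not the authors' mapping; the workspace radius and gains are invented values.

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper).
HAPTIC_RADIUS = 0.10   # usable haptic workspace radius [m]
POS_SCALE = 5.0        # position scaling, haptic -> robot
VEL_GAIN = 2.0         # velocity gain at the boundary [1/s]

def map_command(haptic_pos, robot_origin, dt):
    """Map a bounded haptic position to an unbounded robot target.

    Inside the haptic workspace the device position maps directly to a
    robot offset. At the boundary, the excess penetration is converted
    into a velocity that drifts the robot origin, so the robot can keep
    moving indefinitely. Returns (robot_target, updated_origin).
    """
    r = np.linalg.norm(haptic_pos)
    if r <= HAPTIC_RADIUS:
        # Pure position control inside the bounded workspace.
        return robot_origin + POS_SCALE * haptic_pos, robot_origin
    # At the boundary: clamp the position part and accumulate a
    # velocity term along the penetration direction into the origin.
    direction = haptic_pos / r
    excess = r - HAPTIC_RADIUS
    robot_origin = robot_origin + VEL_GAIN * excess * direction * dt
    clamped = HAPTIC_RADIUS * direction
    return robot_origin + POS_SCALE * clamped, robot_origin
```

    The same construction would apply to the orientation channel, with angular velocity added at the rotational boundary of the device.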

    Co-design of forward-control and force-feedback methods for teleoperation of an unmanned aerial vehicle

    The core hypothesis of this ongoing research project is that co-designing haptic-feedback and forward-control methods for shared-control teleoperation will enable the operator to more readily understand the shared-control algorithm, better enabling him or her to work collaboratively with the shared-control technology. This paper presents a novel method that can be used to co-design forward control and force feedback in unmanned aerial vehicle (UAV) teleoperation. In our method, a potential field is developed to quickly calculate the UAV's risk of collision online. We also create a simple proxy for the operator's confidence, using the swiftness with which the operator sends commands to the UAV. We use these two factors to generate both a scale factor for a position-control scheme and the magnitude of the force feedback to the operator. Currently, this methodology is being implemented and refined in a 2D simulated environment. In the future, we will evaluate our methods with user-study experiments using a real UAV in a 3D environment. Accepted manuscript.
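    The two factors named in the abstract above, a potential-field collision risk and a command-swiftness confidence proxy, combined into a forward-control scale and a feedback-force magnitude, can be sketched as below. Every function, constant, and combination rule here is an assumption chosen for illustration; the paper's actual formulas are not given in this listing.

```python
import numpy as np

def collision_risk(uav_pos, obstacles, influence=3.0):
    """Repulsive-potential risk squashed into [0, 1).

    Obstacles beyond the influence radius contribute nothing; risk
    grows sharply as any obstacle gets close (classic repulsive
    potential-field shape).
    """
    potential = 0.0
    for obs in obstacles:
        d = np.linalg.norm(uav_pos - obs)
        if d < influence:
            potential += (1.0 / max(d, 1e-6) - 1.0 / influence) ** 2
    return potential / (1.0 + potential)

def operator_confidence(cmd_intervals, tau=0.5):
    """Confidence proxy: swift (short-interval) commands -> near 1."""
    return float(np.exp(-np.mean(cmd_intervals) / tau))

def shared_control_gains(risk, confidence, f_max=3.0):
    """Derive the forward-control scale and feedback-force magnitude.

    Commands are attenuated near obstacles, less so for a confident
    operator; feedback force pushes back harder when risk is high and
    confidence is low.
    """
    scale = (1.0 - risk) * (0.5 + 0.5 * confidence)
    force = f_max * risk * (1.0 - 0.5 * confidence)
    return scale, force
```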