
    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. In this way, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios. Comment: Accepted for publication in the 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
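
    The abstract does not include code; purely as a rough illustration of the demand-based handover idea it describes, the following Python sketch uses a hypothetical GroundRobot class, a simple offset-based viewpoint rule, and a nearest-requester assignment to show how one MAV might be reassigned among several ground robots and commanded to a third-person viewpoint.

```python
import math
from dataclasses import dataclass

@dataclass
class GroundRobot:
    name: str
    x: float        # global position (m), from the robot's own localization
    y: float
    heading: float  # rad
    needs_view: bool = False

def third_person_viewpoint(robot, distance=3.0, height=2.5):
    """Viewpoint behind and above the robot, looking along its heading."""
    vx = robot.x - distance * math.cos(robot.heading)
    vy = robot.y - distance * math.sin(robot.heading)
    return vx, vy, height, robot.heading

def assign_mav(robots, mav_xy):
    """Pick the closest ground robot that currently requests third-person support."""
    requesting = [r for r in robots if r.needs_view]
    if not requesting:
        return None
    return min(requesting,
               key=lambda r: math.hypot(r.x - mav_xy[0], r.y - mav_xy[1]))

# Example: the MAV transfers from robot A to robot B on demand.
robots = [GroundRobot("A", 0.0, 0.0, 0.0),
          GroundRobot("B", 10.0, 5.0, math.pi / 2, needs_view=True)]
target = assign_mav(robots, mav_xy=(1.0, 1.0))
if target is not None:
    print(target.name, third_person_viewpoint(target))
```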

    Teleoperating a mobile manipulator and a free-flying camera from a single haptic device

    The paper presents a novel teleoperation system that allows the simultaneous and continuous command of a ground mobile manipulator and a free-flying camera, implemented using a UAV, from which the operator can monitor the task execution in real time. The proposed decoupled position and orientation workspace mapping allows the teleoperation, from a single haptic device with a bounded workspace, of a complex robot with an unbounded workspace. When the operator reaches the position and orientation boundaries of the haptic workspace, linear and angular velocity components are respectively added to the inputs of the mobile manipulator and the flying camera. A user study in a virtual environment has been conducted to evaluate the performance and the workload on the user before and after proper training. Analysis of the data shows that the system complexity is not an obstacle to efficient performance. This is a first step towards the implementation of a teleoperation system with a real mobile manipulator and a low-cost quadrotor as the free-flying camera. Accepted version.
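
    As a minimal, hypothetical sketch of the boundary-triggered mapping described above (the paper's actual mapping and gains are not reproduced here), the following Python function maps one haptic axis directly to a position offset inside a soft boundary, and adds a velocity command proportional to how far the stylus has penetrated beyond that boundary, so the remote robot can keep moving even though the haptic workspace is bounded.

```python
import numpy as np

def map_haptic_axis(h, limit=0.08, soft=0.06, pos_scale=5.0, vel_gain=2.0):
    """Map one haptic axis (metres) to a position offset and a drift velocity.

    Inside the soft region the device position is scaled directly; once the
    stylus passes the soft boundary, a velocity term proportional to the
    penetration is added (rate control), extending the effective workspace.
    """
    h = float(np.clip(h, -limit, limit))
    position_offset = pos_scale * h
    penetration = max(abs(h) - soft, 0.0) * np.sign(h)
    drift_velocity = vel_gain * penetration
    return position_offset, drift_velocity

# Example: stylus pushed 7.5 cm along x, past the 6 cm soft boundary.
print(map_haptic_axis(0.075))   # direct offset plus a small drift velocity
```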

    Advanced virtual reality technologies for surveillance and security applications

    We present a system that exploits advanced Virtual Reality technologies to create a surveillance and security system. Surveillance cameras are carried by a mini blimp, which is tele-operated using an innovative Virtual Reality interface with haptic feedback. An interactive control room (CAVE) receives multiple video streams from airborne and fixed cameras. Eye-tracking technology turns the user's gaze into the main interaction mechanism; the user in charge can examine, zoom and select specific views by looking at them. Video streams selected at the control room can be redirected to agents equipped with a PDA. On-field agents can examine the video sent by the control center and locate the actual position of the airborne cameras on a GPS-driven map. The PDA interface reacts to the user's gestures: a tilt sensor recognizes the position in which the PDA is held and adapts the interface accordingly. The prototype we present shows the added value of integrating VR technologies into a complex application and opens up several research directions in the areas of tele-operation, multimodal interfaces, etc.

    The project PRISMA: Post-Disaster assessment with UAVs

    In the context of emergency scenarios, Unmanned Aerial Vehicles (UAVs) are extremely important instruments, in particular during monitoring tasks and in the Post-Disaster assessment phase. The current paper summarizes the work performed during PRISMA [1], a project focused on the development and deployment of robots and autonomous systems able to operate in emergency scenarios, with specific reference to monitoring and real-time intervention. Among other aspects, strategies for mapping and path following, for the implementation of Human-Swarm Interfaces, and for the coverage of large areas have been investigated, and they are summarized here.

    Teleoperated visual inspection and surveillance with unmanned ground and aerial vehicles

    This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities, (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking of the UAV, and (C) the architecture and hardware of the UAV.
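
    The RKI-XML schema itself is not reproduced in the abstract; purely as an illustration of the auto-configuration idea, the sketch below parses a hypothetical capability description (the element and attribute names are assumptions, not the real format) and builds a list of control widgets from it.

```python
import xml.etree.ElementTree as ET

# Hypothetical capability description; the real RKI-XML format may differ.
RKI_XML = """
<vehicle name="UGV-1">
  <capability id="drive"  type="joystick" axes="2"/>
  <capability id="camera" type="pan_tilt" axes="2"/>
  <capability id="lights" type="toggle"/>
</vehicle>
"""

def build_controls(xml_text):
    """Turn each <capability> element into a simple widget description."""
    root = ET.fromstring(xml_text)
    widgets = []
    for cap in root.findall("capability"):
        widgets.append({
            "id": cap.get("id"),
            "widget": cap.get("type"),
            "axes": int(cap.get("axes", "0")),
        })
    return root.get("name"), widgets

vehicle, controls = build_controls(RKI_XML)
print(vehicle, controls)
```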

    Evaluation of teleoperation system performance over a cellular network

    Cellular networks have become ubiquitous over the last half decade, making internet access a given in urban settings. On top of this, new technologies like 4G LTE provide higher transfer speeds than ever, permitting streaming of video and other high-bandwidth services. Though cellular networks are not new, few studies have leveraged this particular communications method when studying teleoperation, due to the significant bandwidth restrictions. As a result, this study seeks to understand whether teleoperation could be implemented over regular cellular networks, where the bandwidth load on each cell tower cannot be controlled by the teleoperation system. For this, a prototype system is built using a remote-controlled golf cart that hosts a multimedia link between the vehicle and a control station, which communicate over the internet. The system is tested by measuring teleoperation performance for three tasks of varying complexity. The results reveal that latency can be low enough to optimally control a remote vehicle. Nevertheless, performance greatly depends on the network conditions, which can vary significantly. The results also indicate that in-situ driving outperformed remote operation.
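
    The thesis does not publish its measurement code; as a simple illustration of how control-loop latency over a cellular link might be measured, the Python sketch below timestamps UDP probe packets and computes the round-trip time when the vehicle echoes them back (the address, port, and packet layout are placeholders, not the actual prototype).

```python
import socket
import struct
import time

VEHICLE_ADDR = ("192.0.2.10", 5005)   # placeholder address of the vehicle

def measure_rtt(sock, n=50, timeout=1.0):
    """Send timestamped probes and collect round-trip times in seconds."""
    sock.settimeout(timeout)
    rtts = []
    for seq in range(n):
        sent = time.monotonic()
        sock.sendto(struct.pack("!Id", seq, sent), VEHICLE_ADDR)
        try:
            data, _ = sock.recvfrom(64)
        except socket.timeout:
            continue                  # packet lost on the cellular link
        echo_seq, echo_ts = struct.unpack("!Id", data[:12])
        if echo_seq == seq:
            rtts.append(time.monotonic() - echo_ts)
    return rtts

if __name__ == "__main__":
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        samples = measure_rtt(s)
        if samples:
            print(f"mean RTT: {1000 * sum(samples) / len(samples):.1f} ms")
```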

    Digital Cognitive Companions for Marine Vessels: On the Path Towards Autonomous Ships

    As in the automotive industry, industry and academia are making extensive efforts to create autonomous ships. The solutions for this are very technology-intensive. Many building blocks, often relying on AI technology, need to work together to create a complete system that is safe and reliable to use. Even when the ships are fully unmanned, humans are still foreseen to guide the ships when unknown situations arise. This will be done through teleoperation systems. In this thesis, methods are presented to enhance the capability of two building blocks that are important for autonomous ships: a positioning system and a system for teleoperation. The positioning system has been constructed not to rely on the Global Positioning System (GPS), as this system can be jammed or spoofed. Instead, it uses Bayesian calculations to compare bottom-depth and magnetic-field measurements with known sea charts and magnetic-field maps in order to estimate the position. State-of-the-art techniques for this method typically use high-resolution maps. The problem is that there are hardly any high-resolution terrain maps available in the world. Hence, we present a method using standard sea charts and compensate for the lower accuracy by using other domains, such as magnetic field intensity and bearings to landmarks. Using data from a field trial, we showed that the fusion method using multiple domains was more robust than using only one domain. For the second building block, we first investigated how 3D and VR approaches could support the remote operation of unmanned ships over a data connection with low throughput, by comparing the respective graphical user interfaces (GUIs) with a Baseline GUI that follows the interfaces currently applied in such contexts. Our findings show that both the 3D and VR approaches significantly outperform the traditional approach. The 3D GUI and VR GUI users were better at reacting to potentially dangerous situations than the Baseline GUI users, and they could keep track of the surroundings more accurately. Building on this, we conducted a teleoperation user study using real-world data from a field trial in the archipelago, in which the users assisted the positioning system with bearings to landmarks. The users experienced the tool as giving a good overview and, despite the low-throughput connection, managed through the GUI to significantly improve the positioning accuracy.
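
    The abstract only names the Bayesian multi-domain fusion idea; the following sketch is a generic particle-filter update (not the thesis implementation) that weights position hypotheses by how well measured bottom depth and magnetic intensity match chart and map lookups, here represented by hypothetical stand-in functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical map lookups; a real system would query sea charts / magnetic maps.
def chart_depth(x, y):        return 20.0 + 0.3 * x - 0.1 * y
def magnetic_intensity(x, y): return 50_000.0 + 5.0 * x + 2.0 * y

def update(particles, weights, meas_depth, meas_mag,
           sigma_depth=1.0, sigma_mag=30.0):
    """Reweight particles by depth and magnetic likelihoods, then resample."""
    d_err = meas_depth - chart_depth(particles[:, 0], particles[:, 1])
    m_err = meas_mag - magnetic_intensity(particles[:, 0], particles[:, 1])
    weights = weights * np.exp(-0.5 * (d_err / sigma_depth) ** 2) \
                      * np.exp(-0.5 * (m_err / sigma_mag) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Example: 1000 position hypotheses over a 100 m x 100 m area.
particles = rng.uniform(0.0, 100.0, size=(1000, 2))
weights = np.full(1000, 1e-3)
particles, weights = update(particles, weights,
                            meas_depth=25.0, meas_mag=50_200.0)
print(particles.mean(axis=0))   # fused position estimate
```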

    Flexible Supervised Autonomy for Exploration in Subterranean Environments

    While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and reconfigurable communications. Comment: Field Robotics special issue: DARPA Subterranean Challenge, Advancement and Lessons Learned from the Final

    Aerial Remote Sensing in Agriculture: A Practical Approach to Area Coverage and Path Planning for Fleets of Mini Aerial Robots

    In this paper, a system that allows applying precision agriculture techniques is described. The application is based on the deployment of a team of unmanned aerial vehicles that are able to take georeferenced pictures in order to create a full map by applying mosaicking procedures in postprocessing. The main contribution of this work is practical experimentation with an integrated tool. Contributions in different fields are also reported. Among them is a new one-phase automatic task-partitioning manager, which is based on negotiation among the aerial vehicles, considering their states and capabilities. Once the individual tasks are assigned, an optimal path-planning algorithm determines the best path for each vehicle to follow. Also, a robust flight control law that improves the maneuverability of the quadrotors has been designed. A set of field tests was performed in order to analyze all the capabilities of the system, from task negotiation to final performance. These experiments also allowed testing control robustness under different weather conditions.
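
    As a simplified, hypothetical illustration of the partition-then-plan idea (not the negotiation protocol or the optimal planner used in the paper), the Python sketch below splits a rectangular field into equal strips, one per UAV, and generates a back-and-forth (boustrophedon) sweep for each strip.

```python
def partition_field(width, height, n_uavs):
    """Split a width x height field (metres) into vertical strips, one per UAV."""
    strip = width / n_uavs
    return [(i * strip, (i + 1) * strip) for i in range(n_uavs)]

def sweep_path(x_min, x_max, height, swath=10.0):
    """Back-and-forth (boustrophedon) waypoints covering one strip."""
    waypoints, down = [], False
    x = x_min + swath / 2.0
    while x < x_max:
        ys = (height, 0.0) if down else (0.0, height)
        waypoints += [(x, ys[0]), (x, ys[1])]
        down = not down
        x += swath
    return waypoints

# Example: three UAVs covering a 300 m x 200 m field.
for uav, (x0, x1) in enumerate(partition_field(300.0, 200.0, 3)):
    path = sweep_path(x0, x1, 200.0)
    print(f"UAV {uav}: {len(path)} waypoints, first {path[0]}, last {path[-1]}")
```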

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.