
    Cooperative monocular-based SLAM for multi-UAV systems in GPS-denied environments

    This work presents a cooperative monocular-based SLAM approach for multi-UAV systems that can operate in GPS-denied environments. The main contribution of the work is to show that, using visual information obtained from monocular cameras mounted onboard aerial vehicles flying in formation, the observability properties of the whole system are improved. This improvement is especially notable when compared with other related visual SLAM configurations. In order to improve the observability properties, measurements of the relative distance between the UAVs are included in the system; these relative distances are also obtained from visual information. The proposed approach is theoretically validated by means of a nonlinear observability analysis, and an extensive set of computer simulations is presented to validate it further. The numerical simulation results show that the proposed system provides good position and orientation estimates of the aerial vehicles flying in formation.
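
    The abstract does not spell out the measurement model, so the following is a minimal sketch, assuming a simple stacked-position state and illustrative noise terms (none of these names come from the paper), of how a vision-derived inter-UAV range measurement can be folded into an EKF update:

    ```python
    # Hedged sketch: folding an inter-UAV range into an EKF update.
    # State layout [p_i; p_j] and all values are illustrative assumptions.
    import numpy as np

    def range_measurement(p_i, p_j):
        """Predicted relative distance between two UAV positions."""
        return np.linalg.norm(p_i - p_j)

    def range_jacobian(p_i, p_j):
        """Jacobian of the range w.r.t. the stacked positions [p_i, p_j]."""
        d = p_i - p_j
        n = np.linalg.norm(d)
        return np.hstack([d / n, -d / n])  # shape (6,) for 3D positions

    def ekf_range_update(x, P, z, R):
        """EKF update with one scalar range measurement z (noise var R).
        x: stacked positions [p_i; p_j] (6,), P: covariance (6, 6)."""
        p_i, p_j = x[:3], x[3:]
        H = range_jacobian(p_i, p_j).reshape(1, 6)
        y = z - range_measurement(p_i, p_j)      # innovation
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x_new = x + (K * y).ravel()
        P_new = (np.eye(6) - K @ H) @ P
        return x_new, P_new
    ```

    Because the range couples both UAV positions through the Jacobian, the update correlates their estimates, which is the mechanism behind the improved observability the abstract describes.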

    Aerial-Ground collaborative sensing: Third-Person view for teleoperation

    Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between them in GPS-denied environments. Thus, one MAV can support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios.
    Comment: Accepted for publication in 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR).
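
    As a rough illustration of the transfer idea (the transform names below are assumptions, not the paper's API): because every ground robot is globally localized, the MAV pose tracked by visual servoing relative to one robot can be re-expressed relative to another without GPS:

    ```python
    # Illustrative sketch of a GPS-free handover between ground robots.
    # All frames and names are assumptions; poses are 4x4 homogeneous
    # transforms T_a_b mapping frame b into frame a.
    import numpy as np

    def handover_pose(T_world_robotA, T_robotA_mav, T_world_robotB):
        """Given the MAV pose tracked relative to robot A, compute its
        pose relative to robot B via the robots' global localization."""
        T_world_mav = T_world_robotA @ T_robotA_mav
        return np.linalg.inv(T_world_robotB) @ T_world_mav
    ```

    The MAV never needs its own global fix: the shared world frame of the ground robots bridges the two local servoing references during the handover.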

    LACI: Low-effort Automatic Calibration of Infrastructure Sensors

    Sensor calibration is usually a time-consuming yet important task. While classical approaches are sensor-specific and often need calibration targets as well as a widely overlapping field of view (FOV), in this work a cooperative intelligent vehicle is used as the calibration target. The vehicle is detected in the sensor frame and then matched with the information received from the cooperative awareness messages sent by the cooperative intelligent vehicle. The presented algorithm is fully automated as well as sensor-independent, relying only on a very common set of assumptions. Due to the direct registration in the world frame, no overlapping FOV is necessary. The algorithm is evaluated experimentally for four laser scanners as well as one pair of stereo cameras, showing a repetition error within the measurement uncertainty of the sensors. A plausibility check rules out systematic errors that might not have been covered by evaluating the repetition error.
    Comment: 6 pages, published at ITSC 201
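
    One plausible core step, sketched here under the assumption that the sensor-frame detections and the world-frame positions reported in the awareness messages form corresponding point sets, is rigid point-set registration via the Kabsch algorithm; the paper's exact formulation may differ:

    ```python
    # Hedged sketch: recover the sensor-to-world transform from matched
    # point pairs. No overlapping FOV between sensors is needed because
    # each sensor registers directly to the world frame.
    import numpy as np

    def kabsch(sensor_pts, world_pts):
        """Least-squares rigid transform (R, t) with world ~= R @ sensor + t.
        Both inputs are (N, 3) arrays of corresponding points."""
        cs, cw = sensor_pts.mean(axis=0), world_pts.mean(axis=0)
        H = (sensor_pts - cs).T @ (world_pts - cw)   # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                           # proper rotation
        t = cw - R @ cs
        return R, t
    ```

    Driving the cooperative vehicle through the FOV yields the point pairs, so no dedicated calibration target or manual alignment is required.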

    Cooperative Virtual Sensor for Fault Detection and Identification in Multi-UAV Applications

    This paper considers the problem of fault detection and identification (FDI) in applications carried out by a group of unmanned aerial vehicles (UAVs) with visual cameras. In many cases, the UAVs have cameras mounted onboard for other applications, and these cameras can be used as bearing-only sensors to estimate the relative orientation of another UAV. The idea is to exploit the redundant information provided by these sensors onboard each of the UAVs to increase safety and reliability, detecting faults in UAV internal sensors that cannot be detected by the UAVs themselves. Fault detection is based on the generation of residuals that compare the expected position of a UAV, considered as a target, with the measurements taken by one or more UAVs acting as observers that track the target UAV with their cameras. Depending on the number of available observers and the way they are used, a set of strategies and policies for fault detection is defined. When the target UAV is being visually tracked by two or more observers, it is possible to obtain an estimate of its 3D position that could replace damaged sensors. The accuracy and reliability of this vision-based cooperative virtual sensor (CVS) have been evaluated experimentally in a multi-vehicle indoor testbed with quadrotors, injecting faults into the data to validate the proposed fault detection methods.
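
    A minimal sketch of the residual idea, with an assumed unit-bearing-vector representation and illustrative threshold values (the paper's exact residual definition may differ): an observer compares the bearing it measures to the target against the bearing predicted from the target's own reported position, and a persistently large discrepancy flags a faulty onboard sensor.

    ```python
    # Hedged sketch: bearing-based residual generation for fault detection.
    # Threshold and window are illustrative tuning parameters.
    import numpy as np

    def bearing_residual(p_observer, p_target_reported, bearing_measured):
        """Angle (rad) between the predicted and the measured bearing
        from the observer to the target (bearings are unit vectors)."""
        d = p_target_reported - p_observer
        bearing_predicted = d / np.linalg.norm(d)
        cos_a = np.clip(bearing_predicted @ bearing_measured, -1.0, 1.0)
        return np.arccos(cos_a)

    def detect_fault(residuals, threshold=0.1, window=10):
        """Declare a fault when the recent mean residual stays above
        the threshold over a full window of samples."""
        recent = residuals[-window:]
        return len(recent) == window and np.mean(recent) > threshold
    ```

    With two or more observers, the measured bearings can additionally be triangulated into a 3D position estimate, which is what lets the CVS stand in for a damaged sensor.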

    Image-Aided Navigation Using Cooperative Binocular Stereopsis

    This thesis proposes a novel method for cooperatively estimating the positions of two vehicles in a global reference frame based on synchronized image and inertial information. The proposed technique, cooperative binocular stereopsis, leverages the ability of one vehicle to reliably localize itself relative to the other using image data, which enables motion estimation from tracking the three-dimensional positions of common features. Unlike popular simultaneous localization and mapping (SLAM) techniques, the method proposed in this work does not require that the positions of features be carried forward in memory. Instead, the optimal vehicle motion over a single time interval is estimated from the positions of common features using a modified bundle adjustment algorithm and is used as a measurement in a delayed-state extended Kalman filter (EKF). The developed system achieves improved motion estimation compared to previous work and is a potential alternative to map-based SLAM algorithms.
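
    One building block such a pipeline relies on is triangulating a common feature from the two vehicles' bearing observations; the sketch below uses simple midpoint triangulation under the assumption of a known relative pose, which is not necessarily the thesis' exact formulation:

    ```python
    # Hedged sketch: midpoint triangulation of a feature seen by both
    # vehicles, expressed in a common frame. All names are illustrative.
    import numpy as np

    def triangulate_midpoint(p1, d1, p2, d2):
        """Midpoint of the closest points on rays p1 + s*d1 and p2 + t*d2
        (d1, d2 are unit bearing vectors in a common frame)."""
        w0 = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b        # near zero for (near-)parallel rays
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        return 0.5 * ((p1 + s * d1) + (p2 + t * d2))
    ```

    Tracking such triangulated features across one time interval is what allows the motion estimate to be formed without keeping a persistent feature map in memory.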

    Synthesis and Validation of Vision Based Spacecraft Navigation


    Machine vision based teleoperation aid

    When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine-vision-based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators showed that task accuracies were significantly greater with the aid than without it.
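
    As a hedged sketch of the overlay computation (the intrinsic matrix and the pose source are assumptions, not the paper's system): once the object's pose is known in the camera frame, its coordinate axes can be projected with a pinhole model and drawn over the operator's video to convey depth and orientation.

    ```python
    # Illustrative sketch: project an object's pose axes into the
    # operator's camera view for a graphical overlay.
    import numpy as np

    def project_points(K, pts_cam):
        """Project 3D points in the camera frame to pixel coordinates.
        K: 3x3 intrinsic matrix, pts_cam: (N, 3) with positive depth."""
        uvw = (K @ pts_cam.T).T
        return uvw[:, :2] / uvw[:, 2:3]

    def pose_axes_overlay(K, R, t, length=0.1):
        """Pixel endpoints of the object's x/y/z axes. R (3x3) and t (3,)
        give the object pose in the camera frame; length is in metres."""
        origin = t.reshape(1, 3)
        tips = (R * length).T + t    # rows = scaled columns of R, shifted
        return project_points(K, np.vstack([origin, tips]))
    ```

    Drawing lines from the projected origin to each projected axis tip, for both the current and the desired pose, gives the operator the relative orientation cue the abstract describes.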