4,010 research outputs found

    Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision

    Get PDF
    In order to improve usability and safety, modern unmanned aerial vehicles (UAVs) are equipped with sensors to monitor the environment, such as laser scanners and cameras. One important aspect of this monitoring process is detecting obstacles in the flight path in order to avoid collisions. Since a large number of consumer UAVs suffer from tight weight and power constraints, our work focuses on obstacle avoidance based on a lightweight stereo camera setup. We use disparity maps, which are computed from the camera images, to locate obstacles and to automatically steer the UAV around them. For disparity map computation we optimize the well-known semi-global matching (SGM) approach for deployment on an embedded FPGA. The disparity maps are then converted into simpler representations, the so-called U-/V-maps, which are used for obstacle detection. Obstacle avoidance is based on a reactive approach that finds the shortest path around the obstacles as soon as they come within a critical distance of the UAV. One of the fundamental goals of our work was the reduction of development costs by closing the gap between application development and hardware optimization. Hence, we aimed at using high-level synthesis (HLS) to port our algorithms, which are written in C/C++, to the embedded FPGA. We evaluated our implementation of the disparity estimation on the KITTI Stereo 2015 benchmark. The integrity of the overall real-time reactive obstacle avoidance algorithm has been evaluated using hardware-in-the-loop testing in conjunction with two flight simulators.
    Comment: Accepted in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
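    The U-/V-map construction mentioned above can be illustrated with a short sketch: the U-map counts, for each image column, how many pixels share each disparity value, while the V-map does the same per image row, so vertical obstacles show up as high-count cells at a single disparity (i.e., a known distance). This is a minimal NumPy illustration with assumed array shapes and thresholds, not the authors' FPGA/HLS implementation.

```python
import numpy as np

def uv_maps(disparity, max_disp=128):
    """Histogram disparity values per column (U-map) and per row (V-map)."""
    h, w = disparity.shape
    d = np.clip(disparity.astype(int), 0, max_disp - 1)
    u_map = np.zeros((max_disp, w), dtype=np.int32)   # rows: disparity, cols: image column
    v_map = np.zeros((h, max_disp), dtype=np.int32)   # rows: image row, cols: disparity
    for col in range(w):
        u_map[:, col] = np.bincount(d[:, col], minlength=max_disp)
    for row in range(h):
        v_map[row, :] = np.bincount(d[row, :], minlength=max_disp)
    return u_map, v_map

def obstacle_cells(u_map, min_count=20):
    """Columns with many pixels at the same non-zero disparity indicate a
    (near-)vertical obstacle at a known distance; disparity 0 (no match) is ignored."""
    hits = np.argwhere(u_map > min_count)             # (disparity, column) pairs
    return hits[hits[:, 0] > 0]

# Synthetic disparity map with a frontal 'wall' at disparity 40 (illustrative only).
disp = np.zeros((240, 320), dtype=np.float32)
disp[60:180, 140:180] = 40.0
u_map, v_map = uv_maps(disp)
print(obstacle_cells(u_map)[:3])
```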

    Autonomous Navigation for Mobile Robots: Machine Learning-based Techniques for Obstacle Avoidance

    Get PDF
    Autonomous navigation of unmanned aerial vehicles (UAVs) poses several challenges due to limitations on the number and size of sensors that can be attached to the vehicles. Although sensors such as LiDARs, which directly obtain distance information about the surrounding environment, have proven effective for obstacle avoidance, their weight and cost restrict their use on UAVs, as recent trends demand ever smaller vehicles. One practical option is monocular vision sensors, which tend to be lightweight and relatively inexpensive; their main drawback is that it is difficult to derive explicit rules from the sensor data. Conventional visual navigation methods make use of features within the image data or estimate depth using techniques such as optical flow. These methodologies, however, still rely on hand-crafted rules and features, so robustness can become an issue. A more recent approach to vision-based obstacle avoidance exploits heuristic methods based on artificial intelligence, such as deep learning, which has shown state-of-the-art performance in fields such as image processing and voice recognition. These technologies automatically select the features relevant to classification or prediction tasks, hence allowing superior performance. Such heuristic methods have proven more efficient because the rules and features drawn from the image are determined automatically, unlike conventional methods where the rules and features are explicitly determined by humans. In this thesis, we propose an imitation learning framework based on deep learning that can be applied to obstacle avoidance for UAVs, where the neural networks in this framework are trained on flight data obtained from human experts, extracting the features and rules necessary to carry out the designated tasks. The system introduced in this thesis mainly consists of three parts: the data acquisition and preprocessing phase, the model training phase, and the model application phase. A CNN (convolutional neural network), a 3D-CNN, and a DNN (deep neural network) are each applied to the framework and tested with respect to collision ratios to validate the obstacle avoidance performance.
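    As a rough illustration of the training phase of such an imitation-learning pipeline, the sketch below performs behavior cloning: a small convolutional network is trained to reproduce a human pilot's commands from camera frames. PyTorch, the network size, the discrete three-command action space, and the synthetic data are all assumptions for illustration; the thesis' actual CNN/3D-CNN/DNN architectures and flight data are not reproduced here.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallCNN(nn.Module):
    """Tiny CNN mapping a grayscale camera frame to one of 3 steering commands
    (left / straight / right) -- an illustrative stand-in for the thesis models."""
    def __init__(self, n_actions=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Linear(32 * 4 * 4, n_actions)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Placeholder "expert" flight data: camera frames and the pilot's commands (assumed shapes).
frames = torch.randn(256, 1, 96, 96)
commands = torch.randint(0, 3, (256,))
loader = DataLoader(TensorDataset(frames, commands), batch_size=32, shuffle=True)

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                 # supervised imitation: match the expert's commands
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```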

    Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    Full text link
    We present a novel stereo vision algorithm that is capable of obstacle detection on a mobile CPU at 120 frames per second. Our system performs a subset of standard block-matching stereo processing, searching only for obstacles at a single depth. By using an onboard IMU and state estimator, we can recover the position of obstacles at all other depths, building and updating a full depth map at frame rate. Here, we describe both the algorithm and our implementation on a high-speed, small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system requires no external sensing or computation and is, to the best of our knowledge, the first high-framerate stereo detection system running onboard a small UAV.
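    The single-depth search can be sketched as follows: for one fixed disparity d, a block around each left-image pixel is compared against the block shifted by d in the right image, and a low sum of absolute differences marks a hit at exactly that depth; obstacles at other depths are then recovered by propagating earlier hits with the IMU/state estimator. The Python sketch below shows only the single-disparity test, with assumed block size and threshold (the paper's implementation runs on a mobile CPU, not in Python).

```python
import numpy as np

def hits_at_single_disparity(left, right, d=30, block=5, sad_thresh=200):
    """Flag pixels whose block matches at exactly disparity d, i.e. objects at one
    fixed depth; detections at other depths would come from propagating earlier
    hits with the IMU/state estimator (not shown here)."""
    h, w = left.shape
    half = block // 2
    hits = []
    for y in range(half, h - half):
        for x in range(half + d, w - half):
            l_blk = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            r_blk = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(int)
            if np.abs(l_blk - r_blk).sum() < sad_thresh:      # sum of absolute differences
                hits.append((x, y))
    return hits

# Synthetic pair: a textured scene shifted by exactly 30 pixels between the views.
rng = np.random.default_rng(1)
right = rng.integers(0, 255, size=(60, 120)).astype(np.uint8)
left = np.roll(right, 30, axis=1)
print(len(hits_at_single_disparity(left, right)))  # many hits: the whole scene sits at depth(d)
```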

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    Get PDF
    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach enables the deployment of compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots and on a simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method consists of representing the entire 3D formation as a convex hull projected along the desired path that the group has to follow. Such an approach provides a collision-free solution and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified by simulations and hardware experiments presented in the paper.
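    One ingredient of the approach, treating the whole formation as a convex hull swept along the desired path, can be illustrated with a simple footprint check: bound the hull by a radius around its centroid and verify that obstacles stay clear of that radius at every waypoint. This is a conservative 2D sketch assuming NumPy/SciPy, not the paper's MPC formulation.

```python
import numpy as np
from scipy.spatial import ConvexHull

def formation_clear_along_path(members_xyz, path_xy, obstacles_xy, clearance=1.0):
    """Conservative check that the formation footprint, swept along the desired
    path, keeps every obstacle outside the footprint plus a clearance margin."""
    footprint = np.asarray(members_xyz, dtype=float)[:, :2]
    hull_pts = footprint[ConvexHull(footprint).vertices]
    radius = np.linalg.norm(hull_pts - hull_pts.mean(axis=0), axis=1).max()
    for waypoint in np.asarray(path_xy, dtype=float):         # slide the footprint along the path
        dists = np.linalg.norm(np.asarray(obstacles_xy, dtype=float) - waypoint, axis=1)
        if np.any(dists < radius + clearance):
            return False                                       # clearance violated at this waypoint
    return True

# Three team members, a short path, and one obstacle (all coordinates illustrative).
members = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 2.0], [0.5, 1.0, 2.0]])
path = np.array([[0.5, 0.3], [2.0, 1.0], [4.0, 1.5]])
obstacles = np.array([[4.2, 1.4]])
print(formation_clear_along_path(members, path, obstacles))  # False: obstacle too close
```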

    An Innovative Mission Management System for Fixed-Wing UAVs

    Get PDF
    This paper presents two innovative units linked together to build the main frame of a UAV Mission Management System. The first unit is a Path Planner for small UAVs able to generate optimal paths in a three-dimensional environment, producing flyable and safe paths with the lowest computational effort. The second unit is the Flight Management System based on Nonlinear Model Predictive Control, which tracks the reference path and exploits a spherical camera model to avoid unpredicted obstacles along the path. The control system solves on-line (i.e., at each sampling time) a finite-horizon (state horizon) open-loop optimal control problem with a Genetic Algorithm. This algorithm finds the command sequence that minimizes the tracking error with respect to the reference path, driving the aircraft away from sensed obstacles and towards the desired trajectory.
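    A minimal sketch of the genetic-algorithm NMPC idea follows: candidate command sequences (here, turn rates over the state horizon) are rolled out through a simple planar kinematic model, scored by tracking error plus an obstacle penalty, and evolved by elitism and mutation. The vehicle model, the point-obstacle penalty standing in for the spherical camera model, and all tuning constants are assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, HORIZON, SPEED = 0.2, 10, 15.0           # sampling time, state horizon, airspeed (assumed)

def rollout(cmds, state):
    """Simple planar kinematic model (assumption): state = (x, y, heading)."""
    x, y, psi = state
    traj = []
    for turn_rate in cmds:
        psi += turn_rate * DT
        x += SPEED * np.cos(psi) * DT
        y += SPEED * np.sin(psi) * DT
        traj.append((x, y))
    return np.array(traj)

def cost(cmds, state, ref, obstacles, safe_dist=20.0):
    traj = rollout(cmds, state)
    track = np.linalg.norm(traj - ref[:len(traj)], axis=1).sum()    # tracking error
    penalty = 0.0
    for obs in obstacles:                                           # push away from sensed obstacles
        d = np.linalg.norm(traj - obs, axis=1)
        penalty += np.sum(np.maximum(0.0, safe_dist - d)) * 10.0
    return track + penalty

def ga_step(state, ref, obstacles, pop=60, gens=30, elite=10):
    """Genetic search for the turn-rate sequence minimizing the horizon cost."""
    population = rng.uniform(-0.3, 0.3, size=(pop, HORIZON))
    for _ in range(gens):
        costs = np.array([cost(c, state, ref, obstacles) for c in population])
        elites = population[np.argsort(costs)[:elite]]
        children = elites[rng.integers(elite, size=pop - elite)] \
                   + rng.normal(0.0, 0.05, size=(pop - elite, HORIZON))  # mutation
        population = np.vstack([elites, children])
    return population[0]   # best of the last evaluated generation; only its first command is applied

ref_path = np.column_stack([np.linspace(3.0, 30.0, HORIZON), np.zeros(HORIZON)])
best_cmds = ga_step(state=(0.0, 0.0, 0.0), ref=ref_path, obstacles=[np.array([15.0, 1.0])])
print(best_cmds[0])
```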

    Decentralized 3D Collision Avoidance for Multiple UAVs in Outdoor Environments

    Get PDF
    The use of multiple aerial vehicles for autonomous missions is becoming commonplace. In many of these applications, the Unmanned Aerial Vehicles (UAVs) have to cooperate and navigate in a shared airspace, making 3D collision avoidance a relevant issue. Outdoor scenarios impose additional challenges: (i) accurate positioning systems are costly; (ii) communication can be unreliable or delayed; and (iii) external conditions like wind gusts affect UAVs' maneuverability. In this paper, we present 3D-SWAP, a decentralized algorithm for 3D collision avoidance with multiple UAVs. 3D-SWAP operates reactively without high computational requirements and allows UAVs to integrate measurements from their local sensors with positions of other teammates within communication range. We tested 3D-SWAP with our team of custom-designed UAVs. First, we used a Software-In-The-Loop simulator for system integration and evaluation. Second, we ran field experiments with up to three UAVs in an outdoor scenario with uncontrolled conditions (i.e., noisy positioning systems, wind gusts, etc.). We report our results and our procedures for this field experimentation.
    European Union's Horizon 2020 research and innovation programme No 731667 (MULTIDRONE)
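    A generic decentralized reactive rule in the same spirit (explicitly not the 3D-SWAP algorithm itself, whose details are not given in the abstract) combines attraction toward the goal with repulsion from teammates reported over the communication link and from locally sensed obstacles. All gains and distances below are illustrative assumptions.

```python
import numpy as np

def reactive_velocity(own_pos, goal, teammates_in_range, sensed_obstacles,
                      safe_dist=5.0, cruise=2.0):
    """Head toward the goal and add repulsion from teammates within communication
    range and from locally sensed obstacles (generic reactive rule, not 3D-SWAP)."""
    own_pos = np.asarray(own_pos, dtype=float)
    v = np.asarray(goal, dtype=float) - own_pos
    v = cruise * v / (np.linalg.norm(v) + 1e-9)              # attraction toward the goal
    for p in list(teammates_in_range) + list(sensed_obstacles):
        diff = own_pos - np.asarray(p, dtype=float)
        d = np.linalg.norm(diff)
        if 1e-6 < d < safe_dist:
            v += ((safe_dist - d) / safe_dist) * (diff / d)  # repulsion, stronger when closer
    return v

# One UAV at 10 m altitude avoiding a nearby teammate while flying toward its goal.
print(reactive_velocity([0, 0, 10], [50, 0, 10],
                        teammates_in_range=[[3, 1, 10]], sensed_obstacles=[[6, -1, 9]]))
```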

    Vehicle to Vehicle (V2V) Communication for Collision Avoidance for Multi-Copters Flying in UTM -TCL4

    Get PDF
    NASA's UAS Traffic Management (UTM) research initiative is aimed at identifying the requirements for safe autonomous operation of UAS in dense urban environments. For fully autonomous operations, vehicle-to-vehicle (V2V) communication has been identified as an essential tool. In this paper we simulate complete urban operations in a high-fidelity simulation environment. We design a V2V communication protocol over which all participating vehicles communicate. We show how V2V communication can be used to find feasible, collision-free paths for multi-agent systems. Different collision avoidance schemes are explored, and an end-to-end simulation study demonstrates the use of V2V communication for UTM TCL4 deployment.
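    As an illustration of how such a V2V exchange can support collision avoidance, the sketch below defines a hypothetical state-broadcast message and a constant-velocity closest-point-of-approach check between two vehicles. The message fields, reference frame, and separation threshold are assumptions, since the paper's actual protocol is not described in the abstract.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class V2VStateMessage:
    """Hypothetical V2V broadcast payload: vehicle id, position, velocity, timestamp."""
    vehicle_id: str
    position: np.ndarray   # metres, shared local frame (assumption)
    velocity: np.ndarray   # metres per second
    stamp: float           # seconds

def predicted_conflict(a: V2VStateMessage, b: V2VStateMessage,
                       horizon=30.0, min_sep=10.0):
    """Constant-velocity closest-point-of-approach check between two vehicles."""
    dp = a.position - b.position
    dv = a.velocity - b.velocity
    t_cpa = 0.0 if np.dot(dv, dv) < 1e-9 else -np.dot(dp, dv) / np.dot(dv, dv)
    t_cpa = float(np.clip(t_cpa, 0.0, horizon))
    miss_distance = float(np.linalg.norm(dp + dv * t_cpa))
    return miss_distance < min_sep, t_cpa, miss_distance

msg_a = V2VStateMessage("uav_1", np.array([0.0, 0.0, 50.0]), np.array([10.0, 0.0, 0.0]), 0.0)
msg_b = V2VStateMessage("uav_2", np.array([200.0, 5.0, 50.0]), np.array([-10.0, 0.0, 0.0]), 0.0)
print(predicted_conflict(msg_a, msg_b))   # conflict expected near t = 10 s
```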