
    LIDAR obstacle warning and avoidance system for unmanned aerial vehicle sense-and-avoid

    The demand for reliable obstacle warning and avoidance capabilities to ensure safe low-level flight operations has led to the development of various practical systems suitable for fixed- and rotary-wing aircraft. State-of-the-art Light Detection and Ranging (LIDAR) technology employing eye-safe laser sources, advanced electro-optics and mechanical beam-steering components delivers the highest angular resolution and accuracy in a wide range of operational conditions. The LIDAR Obstacle Warning and Avoidance System (LOWAS) is thus becoming a mature technology with several potential applications to manned and unmanned aircraft. This paper specifically addresses its employment in Unmanned Aircraft Systems (UAS) Sense-and-Avoid (SAA). Small-to-medium-size Unmanned Aerial Vehicles (UAVs) are particularly targeted, since they are very frequently operated in proximity to the ground and the risk of collision is further aggravated by the very limited see-and-avoid capabilities of the remote pilot. After a brief description of the system architecture, mathematical models and algorithms for avoidance trajectory generation are provided. Key aspects of the Human Machine Interface and Interaction (HMI2) design for the UAS obstacle avoidance system are also addressed. Additionally, a comprehensive simulation case study of the avoidance trajectory generation algorithms is presented. It is concluded that LOWAS obstacle detection and trajectory optimisation algorithms can ensure safe avoidance of all classes of obstacles (i.e., wire, extended and point objects) in a wide range of weather and geometric conditions, providing a pathway for possible integration of this technology into future UAS SAA architectures.
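    The kind of avoidance trajectory generation described above can be illustrated with a minimal 2D sketch: waypoints sampled along the nominal track are pushed laterally onto a safety circle around a detected obstacle. The function name, sampling scheme, and safety-radius model are illustrative assumptions, not the paper's actual algorithm:

```python
import math

def lateral_avoidance_waypoints(start, goal, obstacle, safe_radius, n=20):
    """Sample n+1 waypoints on the start->goal segment and push any point
    that falls inside the obstacle's safety circle out onto the circle,
    along the cross-track direction (2D illustrative sketch)."""
    sx, sy = start
    gx, gy = goal
    ox, oy = obstacle
    dx, dy = gx - sx, gy - sy
    length = math.hypot(dx, dy)
    ux, uy = dx / length, dy / length   # unit along-track direction
    px, py = -uy, ux                    # unit cross-track direction (left)
    pts = []
    for i in range(n + 1):
        t = i / n
        x, y = sx + t * dx, sy + t * dy
        a = (x - ox) * ux + (y - oy) * uy   # along-track offset from obstacle
        c = (x - ox) * px + (y - oy) * py   # cross-track offset from obstacle
        if abs(a) < safe_radius:
            c_min = math.sqrt(safe_radius**2 - a**2)
            if abs(c) < c_min:
                side = 1.0 if c >= 0 else -1.0
                # project onto the safety circle, keeping the along-track offset
                x = ox + a * ux + side * c_min * px
                y = oy + a * uy + side * c_min * py
        pts.append((x, y))
    return pts
```

Every pushed point ends up at exactly `safe_radius` from the obstacle, so the resulting path hugs the safety circle rather than detouring further than necessary.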

    Collision avoidance for unmanned surface vehicles based on COLREGS

    Unmanned surface vehicles (USVs) are becoming increasingly vital in a variety of maritime applications. The development of a real-time autonomous collision avoidance system is the pivotal issue in the study of USVs, in which reliable collision risk detection and the adoption of a plausible collision avoidance maneuver play a key role. Existing studies on this subject seldom integrate the guidelines of the International Regulations for Preventing Collisions at Sea 1972 (COLREGS). However, in order to ensure maritime safety, it is of fundamental importance that this regulation be obeyed at all times. In this paper, a real-time collision avoidance approach for USVs is presented that successfully integrates compliance with the COLREGS rules. The approach first judges the collision situation and then changes the velocity and heading angle of the USV to complete the avoidance of the obstacle. A reference-obstacle strategy is proposed to deal with situations involving multiple moving obstacles. A number of simulations have been conducted to confirm the validity of the theoretical results obtained. The results show that the algorithms can sufficiently deal with complex traffic environments and that the generated practical path is suitable for USVs.
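    The "judgment of the collision situation" step can be sketched as a coarse COLREGS encounter classifier driven by the target's relative bearing and the difference between the two vessels' headings. The thresholds and labels below are common textbook conventions used for illustration, not the values from the paper:

```python
def colregs_situation(rel_bearing_deg, heading_diff_deg):
    """Coarse COLREGS encounter classification (illustrative thresholds).

    rel_bearing_deg:  bearing of the target measured from own bow, clockwise.
    heading_diff_deg: own heading minus target heading.
    """
    rb = rel_bearing_deg % 360
    # fold heading difference into [0, 180]
    hd = abs((heading_diff_deg + 180) % 360 - 180)
    if hd > 165 and (rb <= 15 or rb >= 345):
        return "head-on"            # Rule 14: both vessels alter to starboard
    if 112.5 < rb < 247.5:
        return "overtaking"         # Rule 13: target lies abaft the beam
    if rb < 112.5:
        return "crossing-give-way"  # Rule 15: target on own starboard side
    return "crossing-stand-on"      # target on own port side
```

A real implementation would combine this label with a collision-risk check (e.g. closest point of approach) before committing to a velocity or heading change.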

    Real-time Motion Planning For Autonomous Car in Multiple Situations Under Simulated Urban Environment

    Advanced autonomous cars have revolutionary meaning for the automobile industry. While more and more companies have already started to build their own autonomous cars, no one has yet brought a practical autonomous car to market. One key problem is the lack of a reliable, active real-time motion planning system for the urban environment. A real-time motion planning system enables cars to drive safely and stably in an urban environment. The final goal of this project is to design and implement a reliable real-time motion planning system that reduces accident rates in autonomous cars compared with human drivers. The real-time motion planning system includes lane-keeping, obstacle avoidance, moving-car avoidance, adaptive cruise control, and accident avoidance functions. In this research, EGO vehicles will be built and equipped with an image processing unit, a LIDAR, and two ultrasonic sensors to detect the environment. These environment data make it possible to implement a full control program in the real-time motion planning system. The control program will be implemented and tested in a scaled-down EGO vehicle within a scaled-down urban environment. The project has been divided into three phases: building the EGO vehicles, implementing the control program of the real-time motion planning system, and improving the control program by testing in the scaled-down urban environment. In the first phase, each EGO vehicle will be built from an EGO vehicle chassis kit, a Raspberry Pi, a LIDAR, two ultrasonic sensors, a battery, and a power board. In the second phase, the control program of the real-time motion planning system will be implemented on the Raspberry Pi, building on the lane-keeping program. Python is the programming language that will be used to implement the program. Lane-keeping, obstacle avoidance, moving-car avoidance, and adaptive cruise control functions will be built into this control program. In the last phase, testing and improvement work will be completed. Reliability tests will be designed and carried out. The more data gathered from tests, the more stable the real-time motion planning system can be made. Finally, one reliable motion planning system will be built, which can be used in full-scale EGO vehicles to significantly reduce accident rates in urban environments.
    Academic Major: Electrical and Computer Engineering
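    The priority ordering among the listed functions (accident avoidance over obstacle avoidance over adaptive cruise control over lane-keeping) can be sketched as a simple behaviour arbiter over the sensor readings. All thresholds, units, and names here are hypothetical, not the project's actual parameters:

```python
def plan_action(lidar_dist_m, ultrasonic_dist_m, lead_speed_mps, own_speed_mps):
    """Select the active behaviour by fixed priority (illustrative sketch).

    Checks run from most to least safety-critical, so a nearer hazard
    always overrides a lower-priority comfort behaviour.
    """
    if min(lidar_dist_m, ultrasonic_dist_m) < 0.3:
        return "emergency-stop"       # accident avoidance: brake immediately
    if ultrasonic_dist_m < 1.0:
        return "obstacle-avoidance"   # steer around a close static obstacle
    if lidar_dist_m < 5.0 and lead_speed_mps < own_speed_mps:
        return "adaptive-cruise"      # slower lead vehicle ahead: match speed
    return "lane-keeping"             # default behaviour
```

On the real vehicle each returned label would dispatch to a dedicated controller (steering/throttle loop); the arbiter itself stays this small.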

    Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision

    In order to improve usability and safety, modern unmanned aerial vehicles (UAVs) are equipped with sensors to monitor the environment, such as laser scanners and cameras. One important aspect of this monitoring process is detecting obstacles in the flight path in order to avoid collisions. Since a large number of consumer UAVs suffer from tight weight and power constraints, our work focuses on obstacle avoidance based on a lightweight stereo camera setup. We use disparity maps, which are computed from the camera images, to locate obstacles and to automatically steer the UAV around them. For disparity map computation we optimize the well-known semi-global matching (SGM) approach for deployment on an embedded FPGA. The disparity maps are then converted into simpler representations, the so-called U-/V-maps, which are used for obstacle detection. Obstacle avoidance is based on a reactive approach which finds the shortest path around the obstacles as soon as they come within a critical distance of the UAV. One of the fundamental goals of our work was the reduction of development costs by closing the gap between application development and hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for porting our algorithms, which are written in C/C++, to the embedded FPGA. We evaluated our implementation of the disparity estimation on the KITTI Stereo 2015 benchmark. The integrity of the overall real-time reactive obstacle avoidance algorithm has been evaluated by using hardware-in-the-loop testing in conjunction with two flight simulators.
    Comment: Accepted in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
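    The U-/V-map representation mentioned above can be sketched directly: a U-map histograms the disparities in each image column and a V-map those in each image row, so vertical obstacles show up as strong entries in the U-map and the ground plane as a slanted line in the V-map. A minimal, unoptimized version assuming an integer-valued disparity image (the FPGA version would of course be streamed, not looped):

```python
import numpy as np

def uv_maps(disparity, max_disp):
    """Build U- and V-disparity histograms from an integer disparity map.

    u_map[d, c] counts pixels in column c with disparity d.
    v_map[r, d] counts pixels in row r with disparity d.
    Disparity 0 is treated as invalid and skipped.
    """
    h, w = disparity.shape
    u_map = np.zeros((max_disp, w), dtype=np.int32)
    v_map = np.zeros((h, max_disp), dtype=np.int32)
    for r in range(h):
        for c in range(w):
            d = int(disparity[r, c])
            if 0 < d < max_disp:
                u_map[d, c] += 1
                v_map[r, d] += 1
    return u_map, v_map
```

Obstacle detection then reduces to finding peaks in the U-map (obstacle columns) and deviations from the ground line in the V-map.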

    Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search

    Target search with unmanned aerial vehicles (UAVs) is a problem relevant to many scenarios, e.g., search and rescue (SaR). However, a key challenge is planning paths for maximal search efficiency given flight time constraints. To address this, we propose the Obstacle-aware Adaptive Informative Path Planning (OA-IPP) algorithm for target search in cluttered environments using UAVs. Our approach leverages a layered planning strategy using a Gaussian Process (GP)-based model of target occupancy to generate informative paths in continuous 3D space. Within this framework, we introduce an adaptive replanning scheme which allows us to trade off between information gain, field coverage, sensor performance, and collision avoidance for efficient target detection. Extensive simulations show that our OA-IPP method performs better than state-of-the-art planners, and we demonstrate its application in a realistic urban SaR scenario.
    Comment: Paper accepted for the International Conference on Robotics and Automation (ICRA 2019), to be held in Montreal, Canada
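    The information-gain versus cost trade-off driving the replanning can be sketched with a simple path utility, using the entropy of a Bernoulli occupancy estimate as a stand-in for the paper's GP-based information measure. The function names, the entropy proxy, and the weighting factor are illustrative assumptions:

```python
import math

def path_utility(path, occupancy_prob, travel_cost, lam=0.1):
    """Score a candidate path: summed cell entropy (information proxy)
    minus a weighted travel cost. Higher is better."""
    def entropy(p):
        # Bernoulli entropy; zero for fully certain cells
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    gain = sum(entropy(occupancy_prob[cell]) for cell in path)
    return gain - lam * travel_cost(path)
```

A planner would evaluate this utility over a set of sampled candidate paths and execute the best one, replanning as new measurements sharpen the occupancy estimates.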

    Safety-related Tasks within the Set-Based Task-Priority Inverse Kinematics Framework

    In this paper we present a framework that allows motion control of a robotic arm while automatically handling different kinds of safety-related tasks. The developed controller is based on a task-priority inverse kinematics algorithm that enables the manipulator's motion while respecting constraints defined either in the joint space or in the operational space, in the form of equality-based or set-based tasks. This makes it possible to define, among others, tasks such as joint limits, obstacle avoidance, or limiting the workspace in the operational space. Additionally, an algorithm for the real-time computation of the minimum distance between the manipulator and other objects in the environment using depth measurements has been implemented, effectively enabling obstacle avoidance tasks. Experiments with a Jaco² manipulator, operating in an environment where an RGB-D sensor is used for obstacle detection, show the effectiveness of the developed system.
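    The core of a two-level task-priority inverse kinematics scheme is the projection of the secondary task into the null space of the primary one. A minimal sketch for equality tasks only (the paper's framework additionally handles set-based tasks with activation/deactivation logic, which is omitted here):

```python
import numpy as np

def task_priority_qdot(J1, x1dot, J2, x2dot):
    """Joint velocities for two prioritized equality tasks.

    Task 1 (J1, x1dot) is satisfied exactly when feasible; task 2
    (J2, x2dot) is executed only in the null space of task 1, so it
    can never disturb the higher-priority task.
    """
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1      # null-space projector of task 1
    qdot1 = J1p @ x1dot
    # secondary task solved in the residual space left by task 1
    qdot2 = np.linalg.pinv(J2 @ N1) @ (x2dot - J2 @ qdot1)
    return qdot1 + N1 @ qdot2
```

With a redundant arm, a safety task (e.g. obstacle avoidance) placed at priority 1 is always honored, while the end-effector task at priority 2 is tracked as well as the remaining degrees of freedom allow.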

    J-MOD²: Joint Monocular Obstacle Detection and Depth Estimation

    In this work, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth for MAV flight applications. Most existing approaches rely either on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles. However, for the task of avoiding obstacles this level of complexity is not required. Recent works have proposed multi-task architectures that perform both scene understanding and depth estimation. We follow their track and propose a specific architecture to jointly estimate depth and obstacles, without the need to compute a global map, while maintaining compatibility with a global SLAM system if needed. The network architecture is devised to exploit the joint information of the obstacle detection task, which produces more reliable bounding boxes, with the depth estimation one, increasing the robustness of both to scenario changes. We call this architecture J-MOD². We test the effectiveness of our approach with experiments on sequences with different appearances and focal lengths and compare it to state-of-the-art multi-task methods that jointly perform semantic segmentation and depth estimation. In addition, we show the integration in a full system using a set of simulated navigation experiments where a MAV explores an unknown scenario and plans safe trajectories by using our detection model.
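    The joint-architecture idea (one shared encoder feeding both a depth head and a detection head) can be sketched structurally with a tiny random-weight forward pass. This is only an illustration of the two-head layout with made-up shapes, not the J-MOD² network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up shapes: 8x8 "image", 32-d shared feature, two task heads.
W_enc = rng.normal(size=(64, 32))    # shared encoder weights
W_depth = rng.normal(size=(32, 64))  # depth head: per-pixel depth map
W_det = rng.normal(size=(32, 5))     # detection head: one box (x, y, w, h, score)

def forward(img):
    """Shared features feed both heads, so each task's loss gradient
    shapes the representation the other task uses."""
    feat = np.maximum(img.reshape(-1) @ W_enc, 0.0)  # shared ReLU encoder
    depth = (feat @ W_depth).reshape(8, 8)           # depth estimation head
    box = feat @ W_det                               # obstacle detection head
    return depth, box
```

In the real architecture the encoder is convolutional and the heads are trained jointly, but the weight-sharing pattern that couples the two tasks is exactly this.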