
    Airborne collision scenario flight tests: impact of angle measurement errors on reactive vision-based avoidance control

    The future emergence of many types of airborne vehicles and unpiloted aircraft in the national airspace makes collision avoidance a primary concern in an uncooperative airspace environment. The ability to replicate a pilot’s see-and-avoid capability using cameras coupled with vision-based avoidance control is an important part of an overall collision avoidance strategy. Unfortunately, without range information, vision-based collision avoidance has no direct way to guarantee a level of safety. Collision scenario flight tests with two aircraft and a monocular-camera threat detection and tracking system were used to characterise the accuracy of image-derived angle measurements. The effect of image-derived angle errors on reactive vision-based avoidance performance was then studied in simulation. The results show that whilst large angle measurement errors can significantly affect minimum-range characteristics across a variety of initial conditions and closing speeds, the minimum range is always bounded and a collision never occurs.
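
    The central quantity in such a system is the bearing to the threat recovered from its pixel location. A minimal sketch of that conversion, assuming an ideal calibrated pinhole camera (the function name and parameter values below are illustrative, not taken from the paper):

        import numpy as np

        def pixel_to_angles(u, v, fx, fy, cx, cy):
            """Convert a pixel detection (u, v) into azimuth/elevation angles
            relative to the camera boresight under a pinhole model."""
            azimuth = np.arctan2(u - cx, fx)    # positive to the right
            elevation = np.arctan2(cy - v, fy)  # positive up (image v grows downward)
            return azimuth, elevation

        # With a focal length of ~800 px, a one-pixel detection error maps to
        # roughly 1/800 rad (~0.07 deg) of bearing error -- the kind of angle
        # noise whose effect on avoidance the paper studies in simulation.
        az, el = pixel_to_angles(652.0, 340.5, fx=800.0, fy=800.0, cx=640.0, cy=360.0)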

    Three-Dimensional, Vision-Based Proportional Navigation for UAV Collision Avoidance

    As the number of potential applications for Unmanned Aerial Vehicles (UAVs) rises steadily, the chances that these devices will operate in close proximity to static or dynamic obstacles also increase. Collision avoidance is therefore an important challenge for UAV operations. Electro-optical devices have several advantages, such as light weight, low cost, modest computational requirements and, potentially, night-vision capability, so vision-based UAV collision avoidance has received considerable attention. Although much progress has been made on collision avoidance systems (CAS), most approaches are restricted to two-dimensional environments; operating in complex three-dimensional urban environments requires three-dimensional collision avoidance. This thesis develops a three-dimensional, vision-based collision avoidance system that provides sense-and-avoid capability for UAVs operating in complex urban environments with multiple static and dynamic collision threats. The system is based on the principle of proportional navigation (Pro-Nav), which states that a collision will occur when the line-of-sight (LOS) angles to another object remain constant while the range decreases. Under this guidance law, monocular electro-optical devices mounted on UAVs can supply the LOS-angle measurements that indicate potential collision threats. The guidance laws were applied to a nonlinear, six-degree-of-freedom UAV model in two-dimensional and three-dimensional simulation environments with varying numbers of static and dynamic obstacles.
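
    For reference, a minimal planar sketch of the proportional-navigation relationship the thesis builds on; the variable names and the gain N are illustrative, not the thesis's implementation:

        import numpy as np

        def pn_lateral_acceleration(own_pos, own_vel, tgt_pos, tgt_vel, N=3.0):
            """Planar PN command a = N * Vc * lambda_dot, from 2D state vectors."""
            r = tgt_pos - own_pos                 # LOS vector
            v = tgt_vel - own_vel                 # relative velocity
            rng = np.linalg.norm(r)
            lambda_dot = (r[0] * v[1] - r[1] * v[0]) / rng**2  # LOS angular rate
            v_closing = -np.dot(r, v) / rng       # closing speed along the LOS
            return N * v_closing * lambda_dot

        # lambda_dot near zero with positive closing speed is the constant-bearing
        # condition that signals a collision course.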

    Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning

    Obstacle avoidance is a fundamental requirement for autonomous robots that operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collisions becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn to avoid obstacles in a simulator even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN learns twice as fast as a standard deep Q network and that models trained solely in virtual environments can be transferred directly to real robots, generalizing well to various new environments with previously unseen dynamic objects.
    Comment: accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics
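
    As one concrete illustration of the dueling mechanism, a PyTorch sketch under assumed layer sizes and action count (not the paper's exact network):

        import torch
        import torch.nn as nn

        class DuelingHead(nn.Module):
            """Dueling Q-head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
            def __init__(self, feature_dim=512, n_actions=5):
                super().__init__()
                self.value = nn.Sequential(
                    nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, 1))
                self.advantage = nn.Sequential(
                    nn.Linear(feature_dim, 256), nn.ReLU(), nn.Linear(256, n_actions))

            def forward(self, features):
                # features: (batch, feature_dim) from a convolutional encoder
                v = self.value(features)        # state value V(s)
                a = self.advantage(features)    # per-action advantages A(s, a)
                # Mean-subtract the advantages so V and A are identifiable.
                return v + a - a.mean(dim=1, keepdim=True)

    The double-Q part pairs such a head with a target network: the online network selects the greedy action and the target network evaluates it, which reduces Q-value overestimation.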

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs through a number of contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast nonlinear regression for monocular depth prediction. We empirically demonstrate the efficacy of the pipeline in real-world experiments covering more than 2 km of flight through dense trees with a quadrotor built from off-the-shelf parts. Moreover, the pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
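
    The receding-horizon loop can be pictured as scoring a fixed library of candidate trajectories against the predicted depth image and executing the best one for a single control step; a rough sketch, with all names and the clearance value being illustrative assumptions rather than the paper's implementation:

        import numpy as np

        def pick_trajectory(library, depth_map, project, clearance=1.5):
            """Return the candidate trajectory with the lowest obstacle cost.

            library:   list of candidate trajectories (arrays of 3D points)
            depth_map: per-pixel depth predicted from the monocular image
            project:   maps a 3D point in the camera frame to pixel (u, v)
            """
            best, best_cost = None, np.inf
            for traj in library:
                cost = 0.0
                for p in traj:
                    u, v = project(p)
                    # Free space left beyond the waypoint along its viewing ray.
                    free = depth_map[int(v), int(u)] - np.linalg.norm(p)
                    cost += max(0.0, clearance - free)  # penalise near misses
                if cost < best_cost:
                    best, best_cost = traj, cost
            return best  # executed for one step, then the horizon recedes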

    A modified model for the Lobula Giant Movement Detector and its FPGA implementation

    The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity and the proximity of an approaching object. It responds to looming stimuli very quickly, triggering avoidance reactions, and has been successfully applied in visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that additionally provides the depth direction of the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
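
    In spirit, the LGMD computation reduces to frame-difference excitation checked against spatially spread inhibition; a minimal single-step sketch, with the inhibition weight, kernel size and threshold chosen purely for illustration:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def lgmd_step(prev_frame, frame, w_i=0.6, size=3, threshold=0.1):
            """One LGMD-like update on two grayscale frames in [0, 255]."""
            # Excitation: luminance change between consecutive frames.
            excitation = np.abs(frame.astype(float) - prev_frame.astype(float))
            # Lateral inhibition: spatially spread copy of the excitation.
            inhibition = uniform_filter(excitation, size=size)
            s_layer = np.maximum(excitation - w_i * inhibition, 0.0)
            membrane = s_layer.sum() / (255.0 * s_layer.size)  # normalised sum
            return membrane, membrane > threshold  # spike on strong expansion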