
    Monocular Vision as a Range Sensor

    One of the most important abilities for a mobile robot is detecting obstacles in order to avoid collisions; building a map of these obstacles is the next logical step. Most robots to date have used sensors such as passive or active infrared, sonar, or laser range finders to locate obstacles in their path. In contrast, this work uses a single colour camera as the only sensor, so the robot must obtain range information from the camera images. We propose simple methods for determining the range to the nearest obstacle in any direction in the robot's field of view, referred to as the Radial Obstacle Profile (ROP). The ROP can then be used to determine the amount of rotation between two successive images, which is important for constructing a 360° view of the surrounding environment as part of map construction.
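Assuming the ROP is sampled at equally spaced bearings, the rotation between two frames can be recovered as the circular shift that best aligns the two profiles. The sketch below (function name, data, and bin count are illustrative, not the paper's implementation) uses a brute-force sum-of-squared-differences search:

```python
def estimate_rotation(rop_a, rop_b):
    """Estimate rotation (in bearing bins) between two Radial Obstacle
    Profiles, each a list of nearest-obstacle ranges sampled at equally
    spaced bearings. The rotation is the circular shift of rop_b that
    best aligns it with rop_a (minimum sum of squared differences)."""
    n = len(rop_a)
    best_shift, best_cost = 0, float("inf")
    for shift in range(n):
        cost = sum((rop_a[i] - rop_b[(i + shift) % n]) ** 2 for i in range(n))
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift

# A profile observed again after the robot turned by 3 bins:
rop = [1.0, 1.2, 2.0, 3.5, 2.2, 1.1, 0.9, 1.0]
turned = [rop[(i + 3) % len(rop)] for i in range(len(rop))]
```

With 8 bins covering 360°, the recovered shift converts directly to a heading change of shift × 45°.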

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs through several contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline in real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
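The receding-horizon scheme the abstract refers to can be sketched generically: at each control step, a library of candidate trajectories is scored, the cheapest is chosen, and only its first action is executed before re-planning. The toy cost function and trajectory library below are illustrative, not the paper's; in the paper's setting the cost would combine predicted monocular depth (collision risk) with progress toward the goal.

```python
def receding_horizon_step(state, trajectories, cost):
    """One receding-horizon iteration: score every candidate trajectory
    from the current state, pick the cheapest, and return only its first
    action. The rest of the chosen plan is discarded and re-planned on
    the next frame, which is what makes the scheme reactive."""
    best = min(trajectories, key=lambda traj: cost(state, traj))
    return best[0]

# Toy 1-D example: reach position 3; cost is the distance of a plan's
# endpoint from the goal. The plan [1, 2] ends exactly at the goal,
# so its first action (1) is what gets executed.
trajs = [[1, 2], [-1, -1], [2, 0]]
goal_cost = lambda s, t: abs(s + sum(t) - 3)
```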

    Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    We present a novel stereo vision algorithm that is capable of obstacle detection on a mobile CPU at 120 frames per second. Our system performs a subset of standard block-matching stereo processing, searching only for obstacles at a single depth. By using an onboard IMU and state estimator, we can recover the position of obstacles at all other depths, building and updating a full depth map at frame rate. Here, we describe both the algorithm and our implementation on a high-speed, small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system requires no external sensing or computation and is, to the best of our knowledge, the first high-framerate stereo detection system running onboard a small UAV.
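The single-depth search at the core of this idea can be illustrated with plain sum-of-absolute-differences (SAD) block matching restricted to one disparity. This is a toy sketch with made-up image sizes and threshold, not the paper's implementation; a full pushbroom system would then propagate detections to other depths using the IMU and state estimator as the vehicle moves.

```python
def detect_at_disparity(left, right, d, block=3, threshold=10):
    """Block matching at a single disparity d: flag pixels whose SAD
    cost is below threshold, i.e. candidate obstacles at the one depth
    that disparity corresponds to. left/right are 2-D lists of
    grayscale intensities."""
    r = block // 2
    h, w = len(left), len(left[0])
    hits = []
    for y in range(r, h - r):
        for x in range(r + d, w - r):
            sad = sum(abs(left[y + dy][x + dx] - right[y + dy][x + dx - d])
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1))
            if sad < threshold:
                hits.append((x, y))
    return hits

# Synthetic pair: the right image is the left shifted by 2 pixels,
# so every block matches perfectly at disparity d=2.
left = [[(3 * y + 5 * x) % 17 for x in range(8)] for y in range(5)]
right = [[left[y][(x + 2) % 8] for x in range(8)] for y in range(5)]
hits = detect_at_disparity(left, right, d=2)
```

Searching one disparity instead of a full range is what makes the per-frame cost low enough for a mobile CPU at high frame rates.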

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: "Do robots need SLAM?" and "Is SLAM solved?"

    Sudden Obstacle Appearance Detection by Analyzing Flow Field Vector for Small-Sized UAV

    Achieving a reliable obstacle detection and avoidance system that can provide an effective safe avoidance path for a small unmanned aerial vehicle (UAV) is very challenging due to its physical size and weight constraints. Prior works tend to employ a vision-based sensor as the main detection sensor, resulting in a high dependency on texture appearance and offering no distance-sensing capability. Previous systems focused only on detecting static frontal obstacles, without observing an environment that may contain moving obstacles. On the other hand, most wide-spectrum range sensors are heavy and expensive, and hence unsuitable for a small UAV. In this work, the integration of sensors of different types is proposed for a small UAV to detect suddenly appearing obstacles. Detection is accomplished by analysing the flow field vectors in the image frame sequence. The proposed system was evaluated through experiments in a real environment consisting of different obstacle configurations. The results show that the success rate for detecting unpredictable obstacle appearances is 70% or above. Even though some of the introduced obstacles had poor surface texture, the proposed obstacle detection system was still able to detect their movement by detecting edges.
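A suddenly appearing obstacle grows in the image, producing an expanding (diverging) flow-field pattern. One simple way to score that pattern is the mean radial component of the flow relative to its centroid. This is a simplified stand-in for the paper's flow-field analysis; the data, names, and any threshold applied to the score are illustrative.

```python
import math

def flow_divergence(points, vectors):
    """Mean radial component of a sparse optical-flow field relative to
    its centroid. Strongly positive values indicate expansion, the
    looming pattern produced by an approaching obstacle; pure camera
    translation yields roughly zero on a symmetric point set."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    total = 0.0
    for (x, y), (vx, vy) in zip(points, vectors):
        rx, ry = x - cx, y - cy
        norm = math.hypot(rx, ry)
        if norm > 0:
            total += (vx * rx + vy * ry) / norm  # outward flow component
    return total / len(points)

# Expanding flow (all vectors point away from the centre) scores high;
# uniform flow scores near zero.
pts = [(0, 0), (2, 0), (0, 2), (2, 2)]
expanding = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
uniform = [(1, 0)] * 4
```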

    Obstacle Detection for Unmanned Aerial Vehicle (UAV)

    This study aims to develop an obstacle detection system for unmanned aerial vehicles utilising ORB feature extraction. In the past, small unmanned aerial vehicles (UAVs) were typically equipped with vision-based or range-based sensors, each with different advantages and disadvantages; as a result, a small UAV cannot determine an obstacle's distance or bearing precisely. Due to physical size restrictions and payload capacity, the lightweight Pi Camera and TF Luna LiDAR sensor were selected as the most suitable sensors for integration. In the algorithm development, filtration is used to improve the accuracy of the feature-matching process, which is required for classifying the obstacle region and free region of obstacles of any texture. The experiments were conducted in the OpenCV and Spyder environment. In the real-time experiment, the success rates were 40% for good-texture, 55% for poor-texture, and 45% for texture-less obstacles. The findings indicate that the recommended method works well for detecting texture-less obstacles even though the success rate is only 40%, because out of 10 tests only one failed to detect the free region. Sensor calibration and constructing a convex hull for obstacle detection are recommended in future work to improve the efficiency of the obstacle detection system and to classify the free and obstacle regions for creating a safe avoidance path.
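One common filtration step for feature matches of the kind this abstract describes is the Lowe-style ratio test: a match is kept only when its best descriptor distance is clearly smaller than the second-best, discarding ambiguous correspondences before region classification. The function below is a generic sketch, not the paper's method; the 0.75 ratio and the data are illustrative.

```python
def ratio_test_filter(matches, ratio=0.75):
    """Lowe-style ratio test over candidate feature matches.
    `matches` maps a feature id to its two smallest descriptor
    distances (best, second-best); a match survives only when
    best < ratio * second_best, i.e. it is unambiguous."""
    return [fid for fid, (best, second) in matches.items()
            if best < ratio * second]

# Feature 2 is ambiguous (30 vs 35) and is filtered out.
candidates = {1: (10.0, 50.0), 2: (30.0, 35.0), 3: (5.0, 100.0)}
```

With OpenCV, the same idea is typically applied to the output of `BFMatcher.knnMatch` with k=2 on ORB descriptors.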