Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrate that the
proposed methods are fast and detect thin obstacles robustly and accurately
under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision
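As an illustration of the geometry behind reconstructing detected edges in 3D, a minimal two-view linear (DLT) triangulation of a single edge point might look like the following. This is a generic sketch, not the authors' pipeline; the camera matrices and the point are invented for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel observations of the same point.
    """
    # Each observation contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null-space vector = homogeneous solution
    return X[:3] / X[3]

# Invented setup: identity intrinsics, second camera translated 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])   # e.g. a point on a thin wire ahead of the cameras

x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free observations the recovered point matches the true one; in a monocular setting without a known baseline, the reconstruction is only up to scale, which is why the abstract's monocular variant needs IMU data.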
A Joint 3D-2D based Method for Free Space Detection on Roads
In this paper, we address the problem of road segmentation and free space
detection in the context of autonomous driving. Traditional methods either use
3-dimensional (3D) cues such as point clouds obtained from LIDAR, RADAR or
stereo cameras or 2-dimensional (2D) cues such as lane markings, road
boundaries and object detection. Typical 3D point clouds do not have enough
resolution to detect fine differences in heights such as between road and
pavement. Image-based 2D cues fail on uneven road textures, such as those
caused by shadows, potholes, lane markings or road restoration. We propose a
novel free road space detection technique combining both 2D and 3D cues. In
particular, we use CNN based road segmentation from 2D images and plane/box
fitting on sparse depth data obtained from SLAM as priors to formulate an
energy minimization over a conditional random field (CRF) for road pixel
classification. While the CNN learns the road texture and is unaffected by
depth boundaries, the 3D information helps in overcoming texture based
classification failures. Finally, we use the obtained road segmentation with
the 3D depth data from monocular SLAM to detect free space for navigation.
Our experiments on the KITTI odometry dataset, the CamVid dataset, and videos
captured by us validate the superiority of the proposed approach over the
state of the art.
Comment: Accepted for publication at IEEE WACV 201
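The fusion idea (a 2D CNN road probability combined with a 3D plane-consistency prior through per-pixel energies) can be sketched in miniature. The toy version below keeps only the unary terms of such an energy and omits the CRF's pairwise smoothing; all inputs and weights are invented for illustration:

```python
import numpy as np

# Hypothetical inputs: a CNN road-probability map (2D cue) and a binary mask of
# pixels consistent with a ground plane fitted to sparse SLAM depth (3D cue).
cnn_prob = np.array([[0.9, 0.8, 0.3],
                     [0.7, 0.6, 0.2]])           # P(road) per pixel
plane_ok = np.array([[1, 1, 0],
                     [1, 0, 0]], dtype=float)    # 1 = agrees with the road plane

w2d, w3d = 1.0, 0.5                              # cue weights (arbitrary)
eps = 1e-6                                       # guard against log(0)

# Unary energies: labeling a pixel "road" is cheap when both cues support it.
E_road = -w2d * np.log(cnn_prob + eps) - w3d * plane_ok
E_not  = -w2d * np.log(1.0 - cnn_prob + eps) - w3d * (1.0 - plane_ok)

# Pixel-wise MAP decision (a full CRF would add neighborhood smoothness terms).
road_mask = (E_road < E_not).astype(int)
```

Note how the pixel with CNN probability 0.6 is rejected: the 2D cue alone would call it road, but the 3D plane prior vetoes it, which is exactly the complementary behavior the abstract describes.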
A modified model for the Lobula Giant Movement Detector and its FPGA implementation
The Lobula Giant Movement Detector (LGMD) is a wide-field visual neuron located in the lobula layer of the locust nervous system. The LGMD increases its firing rate in response to both the velocity of an approaching object and its proximity. It has been found to respond to looming stimuli very quickly and to trigger avoidance reactions, and it has been successfully applied in
visual collision avoidance systems for vehicles and robots. This paper introduces a modified neural model for the LGMD that provides additional information about the depth direction of the movement. The proposed model retains the simplicity of the previous model, adding only a few new cells. It has been
simplified and implemented on a Field Programmable Gate Array (FPGA), taking advantage of the inherent parallelism exhibited by the LGMD, and tested on real-time video streams. Experimental results demonstrate its effectiveness as a fast motion detector.
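The looming response the LGMD exploits can be mimicked crudely in software: an approaching object covers more pixels each frame, so the summed per-pixel change between frames grows over time. The sketch below is a toy illustration of that cue only, not the proposed neural model; the synthetic stimulus and threshold are invented:

```python
import numpy as np

def looming_excitation(frames, thresh=0.1):
    """Crude LGMD-style cue: count of pixels that changed between frame pairs.

    A looming object occupies more pixels each frame, so this count grows
    over time; the threshold is invented for illustration.
    """
    excitation = []
    for prev, cur in zip(frames, frames[1:]):
        changed = np.abs(cur - prev) > thresh
        excitation.append(float(changed.sum()))
    return excitation

def frame(size):
    """Synthetic stimulus: a dark square of half-width `size` on a light field."""
    img = np.ones((32, 32))
    c = 16
    img[c - size:c + size, c - size:c + size] = 0.0
    return img

# Square grows each frame, i.e. the object approaches the camera.
frames = [frame(s) for s in (2, 4, 6, 8)]
excitation = looming_excitation(frames)
```

The excitation sequence increases monotonically for an approaching object, which is the signal a collision-avoidance system can threshold to trigger an evasive reaction.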
Near-field Perception for Low-Speed Vehicle Automation using Surround-view Fisheye Cameras
Cameras are the primary sensor in automated driving systems. They provide
high information density and are optimal for detecting road infrastructure cues
laid out for human vision. Surround-view camera systems typically comprise
four fisheye cameras with a 190°+ field of view, covering the entire 360°
around the vehicle and focused on near-field sensing. They are the
principal sensors for low-speed, high-accuracy, close-range sensing
applications, such as automated parking, traffic jam assistance, and low-speed
emergency braking. In this work, we provide a detailed survey of such vision
systems, setting up the survey in the context of an architecture that can be
decomposed into four modular components, namely Recognition, Reconstruction,
Relocalization, and Reorganization. We jointly call this the 4R Architecture.
We discuss how each component accomplishes a specific aspect and provide a
positional argument that they can be synergized to form a complete perception
system for low-speed automation. We support this argument by presenting results
from previous works and by presenting architecture proposals for such a system.
Qualitative results are presented in the video at https://youtu.be/ae8bCOF77uY.
Comment: Accepted for publication at IEEE Transactions on Intelligent Transportation Systems