Pushbroom Stereo for High-Speed Navigation in Cluttered Environments
We present a novel stereo vision algorithm capable of obstacle detection on
a mobile CPU at 120 frames per second. Our system performs a subset of
standard block-matching stereo processing, searching only for obstacles at a
single depth. By using an onboard IMU and state estimator, we can recover the
position of obstacles at all other depths, building and updating a full depth
map at framerate.
Here, we describe both the algorithm and our implementation on a high-speed,
small UAV, flying at over 20 MPH (9 m/s) close to obstacles. The system
requires no external sensing or computation and is, to the best of our
knowledge, the first high-framerate stereo detection system running onboard a
small UAV.
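The single-depth search described above can be sketched in a few lines; this is a toy illustration (hand-made images, SAD block matching, an arbitrary texture check and threshold), not the authors' optimized implementation:

```python
import numpy as np

def single_disparity_matches(left, right, d, block=5, sad_thresh=10.0):
    """Pushbroom-style check: flag pixels whose left/right blocks agree
    at ONE fixed disparity d, i.e. obstacles at one known depth."""
    h, w = left.shape
    half = block // 2
    hits = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half + d, w - half):
            lb = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            if lb.std() < 1.0:          # skip textureless blocks
                continue
            rb = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
            if np.abs(lb - rb).mean() < sad_thresh:
                hits[y, x] = True
    return hits

# Toy scene: a bright square that shifts 4 px between the views (disparity 4).
left = np.zeros((20, 30)); left[8:12, 14:18] = 255.0
right = np.zeros((20, 30)); right[8:12, 10:14] = 255.0
hits = single_disparity_matches(left, right, d=4)
```

Because only one disparity is tested per pixel, the per-frame cost is a small fraction of a full disparity search, which is what makes the 120 fps figure plausible on a mobile CPU.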
Deep Lidar CNN to Understand the Dynamics of Moving Vehicles
Perception technologies in Autonomous Driving are experiencing their golden
age due to the advances in Deep Learning. Yet, most of these systems rely on
the semantically rich information of RGB images. Deep Learning solutions
applied to the data of other sensors typically mounted on autonomous cars
(e.g. lidars or radars) remain comparatively unexplored. In this paper we
propose a novel solution for understanding the dynamics of moving vehicles in
a scene from only
lidar information. The main challenge of this problem stems from the fact that
we need to disambiguate the proprio-motion of the 'observer' vehicle from that
of the external 'observed' vehicles. For this purpose, we devise a CNN
architecture which at testing time is fed with pairs of consecutive lidar
scans. However, in order to properly learn the parameters of this network,
during training we introduce a series of so-called pretext tasks which also
leverage image data. These tasks include semantic information about
vehicleness and a novel lidar-flow feature which combines standard image-based
optical flow with lidar scans. We obtain very promising results and show that
including distilled image information only during training allows us to
improve the inference results of the network at test time, even when image data is no
longer used.
Comment: Presented at IEEE ICRA 2018.
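The core difficulty named above, separating the observer's proprio-motion from the motion of observed vehicles, can be illustrated geometrically. The sketch below (2D, known observer pose, hypothetical numbers) is only the geometric intuition, not the paper's learned CNN:

```python
import numpy as np

def compensate_ego_motion(p_next, yaw, trans):
    """Re-express a point from scan t+1 (sensor frame) in the scan-t frame,
    given the observer's own rotation (yaw, radians) and translation.
    Any residual displacement then belongs to the observed object."""
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s], [s, c]])
    return R @ p_next + trans

# Hypothetical numbers: the observer drove 2 m forward (+x), so a static
# wall point appears 2 m closer in the next scan.
p_t0 = np.array([10.0, 3.0])                 # point in scan t
p_t1 = np.array([8.0, 3.0])                  # same point in scan t+1
p_t1_in_t0 = compensate_ego_motion(p_t1, yaw=0.0, trans=np.array([2.0, 0.0]))
residual = p_t1_in_t0 - p_t0                 # ~0 => the point is static
```

A nonzero residual after this compensation indicates genuine motion of the observed vehicle; the paper's network must learn this disambiguation implicitly from pairs of raw scans.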
Mobile robot visual navigation based on fuzzy logic and optical flow approaches
This paper presents the design of a mobile robot visual navigation system for
indoor environments based on fuzzy logic controllers (FLCs) and an optical
flow (OF) approach. The proposed control system contains two Takagi–Sugeno
fuzzy logic controllers for obstacle avoidance and goal seeking, based on
video acquisition and an image processing algorithm. The first (steering)
controller uses OF values computed with the Horn–Schunck algorithm to detect
and estimate the positions of obstacles; to extract information about the
environment, the image is divided into two parts. The second FLC guides the
robot toward the final destination. The efficiency of the proposed approach
is verified in simulation using the Virtual Reality Toolbox. Simulation
results demonstrate that the vision-based control system allows autonomous
navigation without any collision with obstacles.
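A minimal sketch of the divide-and-compare steering idea follows; it assumes a precomputed flow-magnitude image, and the balance rule and `gain` parameter are illustrative stand-ins for the paper's fuzzy controller:

```python
import numpy as np

def steer_from_flow(flow_mag, gain=1.0):
    """Compare summed optical-flow magnitude in the left and right image
    halves; larger flow suggests nearer obstacles, so steer away from it.
    Positive command = turn right (away from a closer left-side obstacle)."""
    h, w = flow_mag.shape
    left_sum = flow_mag[:, : w // 2].sum()
    right_sum = flow_mag[:, w // 2:].sum()
    return gain * (left_sum - right_sum) / max(left_sum + right_sum, 1e-9)

# Hypothetical flow field: strong flow on the left half, weak on the right.
flow = np.zeros((4, 8))
flow[:, :4] = 2.0
flow[:, 4:] = 0.5
cmd = steer_from_flow(flow)   # positive: turn away from the left obstacle
```

A fuzzy controller replaces the single crisp ratio with membership functions over the two flow sums, which smooths the steering command near the decision boundary.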
Real-time on-board obstacle avoidance for UAVs based on embedded stereo vision
In order to improve usability and safety, modern unmanned aerial vehicles
(UAVs) are equipped with sensors to monitor the environment, such as
laser-scanners and cameras. One important aspect in this monitoring process is
to detect obstacles in the flight path in order to avoid collisions. Since a
large number of consumer UAVs suffer from tight weight and power constraints,
our work focuses on obstacle avoidance based on a lightweight stereo camera
setup. We use disparity maps, which are computed from the camera images, to
locate obstacles and to automatically steer the UAV around them. For disparity
map computation we optimize the well-known semi-global matching (SGM) approach
for the deployment on an embedded FPGA. The disparity maps are then converted
into simpler representations, the so-called U-/V-maps, which are used for
obstacle detection. Obstacle avoidance is based on a reactive approach which
finds the shortest path around the obstacles as soon as they have a critical
distance to the UAV. One of the fundamental goals of our work was the reduction
of development costs by closing the gap between application development and
hardware optimization. Hence, we aimed at using high-level synthesis (HLS) for
porting our algorithms, which are written in C/C++, to the embedded FPGA. We
evaluated our implementation of the disparity estimation on the KITTI Stereo
2015 benchmark. The integrity of the overall real-time reactive obstacle
avoidance algorithm has been evaluated by using Hardware-in-the-Loop testing in
conjunction with two flight simulators.
Comment: Accepted in the International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences.
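The U-/V-map representations mentioned above can be computed directly from a disparity image by histogramming disparities per column and per row. The toy version below (disparity 0 treated as invalid) is illustrative, not the paper's FPGA implementation:

```python
import numpy as np

def uv_maps(disp, max_disp):
    """U-map: per-column disparity histogram (obstacles appear as horizontal
    segments); V-map: per-row histogram (the ground plane appears as a
    slanted line). Disparity 0 is treated as invalid/unknown here."""
    h, w = disp.shape
    u_map = np.zeros((max_disp + 1, w), dtype=np.int32)
    v_map = np.zeros((h, max_disp + 1), dtype=np.int32)
    for y in range(h):
        for x in range(w):
            d = int(disp[y, x])
            if 0 < d <= max_disp:
                u_map[d, x] += 1
                v_map[y, d] += 1
    return u_map, v_map

# A fronto-parallel obstacle: constant disparity 5 in columns 3..6, rows 2..8.
disp = np.zeros((10, 12), dtype=int)
disp[2:9, 3:7] = 5
u_map, v_map = uv_maps(disp, max_disp=8)
```

Because a fronto-parallel obstacle has near-constant disparity, it collapses to a short horizontal segment in the U-map, which is far cheaper to scan for obstacles than the full disparity image.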
Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots
Safety is paramount for mobile robotic platforms such as self-driving cars
and unmanned aerial vehicles. This work is devoted to a task that is
indispensable for safety yet was largely overlooked in the past -- detecting
obstacles that are of very thin structures, such as wires, cables and tree
branches. This is a challenging problem, as thin objects can be problematic for
active sensors such as lidar and sonar and even for stereo cameras. In this
work, we propose to use video sequences for thin obstacle detection. We
represent obstacles with edges in the video frames, and reconstruct them in 3D
using efficient edge-based visual odometry techniques. We provide both a
monocular camera solution and a stereo camera solution. The former incorporates
Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter
enjoys a novel, purely vision-based solution. Experiments demonstrated that the
proposed methods are fast and able to detect thin obstacles robustly and
accurately under various conditions.
Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision.
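For the stereo solution, the depth of each matched edge pixel follows the standard pinhole-stereo relation Z = f·B/d; since thin structures yield very few matched pixels, each one's depth matters. The numbers below are illustrative:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Standard pinhole-stereo relation Z = f * B / d: depth in metres from
    focal length (pixels), baseline (metres) and disparity (pixels)."""
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 8.4 px disparity.
z = depth_from_disparity(f_px=700.0, baseline_m=0.12, disparity_px=8.4)  # ~10 m
```

The same relation also explains the monocular scale ambiguity the paper resolves with IMU data: without a known baseline B, a single camera recovers structure only up to an unknown scale factor.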
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have great potential for robotics
and computer vision in scenarios that are challenging for traditional
cameras, i.e., those demanding low latency, high speed, and high dynamic
range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that
are available, and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
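A common entry point for processing the asynchronous output described above is accumulating events into a signed frame. The sketch below assumes a simple `(t, x, y, polarity)` tuple format, which is illustrative rather than any particular camera SDK's event type:

```python
import numpy as np

def accumulate_events(events, shape):
    """Build a signed event frame: +1 for each ON event, -1 for each OFF
    event. `events` is an iterable of (t, x, y, polarity) tuples with
    polarity in {+1, -1}; timestamps are ignored in this simple view."""
    frame = np.zeros(shape, dtype=np.int32)
    for _t, x, y, p in events:
        frame[y, x] += p
    return frame

# Three hypothetical events: two ON events at one pixel, one OFF at another.
events = [(0.001, 2, 1, +1), (0.002, 2, 1, +1), (0.003, 4, 3, -1)]
frame = accumulate_events(events, shape=(5, 6))
```

Such accumulated frames discard the microsecond timing that makes event cameras attractive, which is precisely why the survey emphasizes methods (e.g. spiking networks and asynchronous filters) that operate on the event stream directly.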