Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have large potential for robotics
and computer vision in scenarios that challenge traditional cameras, such as
those demanding low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
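As the abstract notes, each event encodes a timestamp, a pixel location, and the sign (polarity) of the brightness change. One common way to bridge asynchronous events and frame-based vision algorithms is to accumulate polarities into an "event frame"; the sketch below is illustrative (function name and event tuple layout are assumptions, not from the survey):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into an image-like frame.

    events: iterable of (t, x, y, polarity) tuples, polarity in {-1, +1}.
    The timestamp t is ignored here; finer-grained representations
    (time surfaces, voxel grids) also exploit it.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += p
    return frame

# Example: two positive events at pixel (x=2, y=3), one negative at (x=5, y=1)
evts = [(0.001, 2, 3, +1), (0.002, 2, 3, +1), (0.003, 5, 1, -1)]
frame = events_to_frame(evts, 8, 8)
print(frame[3, 2], frame[1, 5])  # prints: 2 -1
```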
Homography-based ground plane detection using a single on-board camera
This study presents a robust method for ground plane detection in vision-based systems with a non-stationary camera. The proposed method is based on reliable estimation of the homography between ground planes in successive images. This homography is computed using a feature matching approach which, in contrast to classical approaches to on-board motion estimation, does not require explicit ego-motion calculation. Instead, a novel homography calculation method based on a linear estimation framework is presented. This framework provides predictions of the ground plane transformation matrix that are dynamically updated with new measurements. The method is especially suited for challenging environments, in particular traffic scenarios, in which information is scarce and the homography computed from the images is often inaccurate or erroneous. The proposed estimation framework is able to remove erroneous measurements and to correct inaccurate ones, hence producing a reliable homography estimate at each instant. It is based on evaluating the difference between the predicted and observed transformations, measured by the spectral norm of the associated difference matrix. Moreover, an example is provided of how to use the information extracted from ground plane estimation to achieve object detection and tracking. The method has been successfully demonstrated for the detection of moving vehicles in traffic environments.
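The abstract's outlier-rejection test compares predicted and observed homographies via the spectral norm of their difference. A minimal sketch of that gating step (function name and threshold value are illustrative assumptions, not from the paper):

```python
import numpy as np

def accept_homography(H_pred, H_obs, threshold=0.1):
    """Gate an observed ground-plane homography against the prediction.

    Both 3x3 matrices are assumed normalized (e.g., H[2, 2] == 1). The
    measurement is rejected when the spectral norm (largest singular
    value) of the difference exceeds a tuned threshold, mirroring the
    abstract's predicted-vs-observed comparison.
    """
    diff_norm = np.linalg.norm(H_obs - H_pred, ord=2)  # spectral norm
    return diff_norm <= threshold

H_pred = np.eye(3)
H_small = np.eye(3) + 0.01 * np.ones((3, 3))  # mild perturbation, norm 0.03
H_large = np.eye(3) + np.ones((3, 3))         # gross error, norm 3.0
print(accept_homography(H_pred, H_small))  # prints: True
print(accept_homography(H_pred, H_large))  # prints: False
```

Accepted measurements would then update the linear estimator; rejected ones are replaced by the prediction.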
Stochastic Occupancy Grid Map Prediction in Dynamic Scenes
This paper presents two variations of a novel stochastic prediction algorithm
that enables mobile robots to accurately and robustly predict the future state
of complex dynamic scenes. The proposed algorithm uses a variational
autoencoder to predict a range of possible future states of the environment.
The algorithm takes full advantage of the motion of the robot itself, the
motion of dynamic objects, and the geometry of static objects in the scene to
improve prediction accuracy. Three simulated and real-world datasets collected
by different robot models are used to demonstrate that the proposed algorithm
is able to achieve more accurate and robust prediction performance than other
prediction algorithms. Furthermore, a predictive uncertainty-aware planner is
proposed to demonstrate the effectiveness of the proposed predictor in
simulation and real-world navigation experiments. Implementations are open
source at https://github.com/TempleRAIL/SOGMP.
Comment: Accepted by 7th Annual Conference on Robot Learning (CoRL), 202
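The abstract describes using a variational autoencoder to produce a range of possible future environment states. A toy sketch of how sampling several latent codes yields per-cell mean and uncertainty maps that a predictive planner could consume (the decoder and all names here are hypothetical stand-ins, not the SOGMP implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_future_grids(decode, latent_dim=8, n_samples=20):
    """Sample multiple plausible future occupancy grids from a latent prior.

    `decode` stands in for a trained VAE decoder. Drawing several latent
    samples gives a distribution over future grids; the per-cell mean and
    variance form the occupancy and uncertainty maps an uncertainty-aware
    planner can use.
    """
    zs = rng.standard_normal((n_samples, latent_dim))
    grids = np.stack([decode(z) for z in zs])      # (K, H, W) occupancy probs
    return grids.mean(axis=0), grids.var(axis=0)   # mean map, uncertainty map

# Toy "decoder": maps a latent vector to a 4x4 grid of pseudo-probabilities
toy_decode = lambda z: 1.0 / (1.0 + np.exp(-z[:4, None] * np.ones((4, 4))))

mean_map, var_map = predict_future_grids(toy_decode)
print(mean_map.shape, var_map.shape)  # prints: (4, 4) (4, 4)
```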
Research on Real-Time Video Mosaicking and Stabilization Using High-Speed Vision
Hiroshima University, Doctor of Engineering, doctoral thesis
A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics
We survey the current state of millimeterwave (mmWave) radar applications in
robotics with a focus on unique capabilities, and discuss future opportunities
based on the state of the art. Frequency Modulated Continuous Wave (FMCW)
mmWave radars operating in the 76--81 GHz band are an appealing alternative to
lidars, cameras, and other sensors operating in or near the visual spectrum.
Radar is now available in packaging classes more convenient for robotics, and
its longer wavelengths can penetrate visual clutter such as fog, dust, and
smoke. We begin by covering radar principles as
they relate to robotics. We then review the relevant new research across a
broad spectrum of robotics applications beginning with motion estimation,
localization, and mapping. We then cover object detection and classification,
and then close with an analysis of current datasets and calibration techniques
that provide entry points into radar research.
Comment: 19 Pages, 11 Figures, 2 Tables, TRO Submission pending
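An FMCW radar, as surveyed above, estimates range from the beat frequency between the transmitted and received linear chirps: R = c * f_b * T_c / (2B), where f_b is the beat frequency, T_c the chirp duration, and B the sweep bandwidth. A worked example with illustrative chirp parameters (not taken from the survey):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_freq_hz, chirp_time_s, bandwidth_hz):
    """Target range for a linear FMCW chirp: R = c * f_b * T_c / (2 * B)."""
    return C * beat_freq_hz * chirp_time_s / (2.0 * bandwidth_hz)

# Illustrative 77 GHz automotive-style chirp: 4 GHz sweep over 40 us,
# with a measured beat frequency of 2 MHz
r = fmcw_range(beat_freq_hz=2.0e6, chirp_time_s=40e-6, bandwidth_hz=4.0e9)
print(round(r, 3))  # prints: 2.998  (i.e., a target ~3 m away)
```

Doubling the sweep bandwidth halves the range implied by a given beat frequency, which is why wide-band 76--81 GHz chirps give fine range resolution.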