11,464 research outputs found
Development of stereo matching algorithm based on sum of absolute RGB color differences and gradient matching
This paper proposes a new stereo matching algorithm based on a local method. The Sum of Absolute Differences (SAD) algorithm produces an accurate disparity map in textured regions. However, it is sensitive to low-texture areas and to noise in image pairs with large differences in brightness and contrast. To overcome these problems, the proposed algorithm combines SAD over the RGB color channels with gradient matching to improve accuracy on images with high brightness and contrast. Additionally, an edge-preserving filter, the Bilateral Filter (BF), is applied in a second stage. The BF copes with low-texture areas, reduces noise, and sharpens the images; it is also robust against distortions caused by high brightness and contrast. The proposed algorithm produces accurate results and performs considerably better than several established algorithms, as measured by the standard quantitative metrics of the Middlebury stereo benchmark
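A minimal sketch of the kind of combined cost this abstract describes: window SAD over the RGB channels plus a horizontal-gradient difference term, followed by winner-takes-all disparity selection. The window size, weighting, and gradient operator are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def sad_rgb_gradient_cost(left, right, y, x, d, w=1, alpha=0.5):
    """Matching cost for candidate disparity d at pixel (y, x):
    SAD over the R, G, B channels combined with a horizontal-gradient
    SAD, each over a (2w+1)x(2w+1) window. left/right are HxWx3 floats;
    alpha blends the color and gradient terms (illustrative choice)."""
    L = left[y - w:y + w + 1, x - w:x + w + 1]
    R = right[y - w:y + w + 1, x - d - w:x - d + w + 1]
    sad_color = np.abs(L - R).sum()  # summed over the 3 channels
    # Horizontal intensity gradients, computed on the channel mean.
    gL = np.gradient(left.mean(axis=2), axis=1)[y - w:y + w + 1, x - w:x + w + 1]
    gR = np.gradient(right.mean(axis=2), axis=1)[y - w:y + w + 1, x - d - w:x - d + w + 1]
    sad_grad = np.abs(gL - gR).sum()
    return (1 - alpha) * sad_color + alpha * sad_grad

def disparity_at(left, right, y, x, d_max, **kw):
    """Winner-takes-all: pick the d with the lowest combined cost."""
    costs = [sad_rgb_gradient_cost(left, right, y, x, d, **kw) for d in range(d_max + 1)]
    return int(np.argmin(costs))
```

In a full pipeline this per-pixel cost would be aggregated over the whole image and then refined (here, per the abstract, by bilateral filtering of the disparity map).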
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in challenging scenarios for traditional cameras, such as
low-latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
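The output described above is a stream of (timestamp, pixel location, polarity) tuples, and a common first step in event processing is to accumulate a slice of the stream into a signed event frame. A minimal illustration; the tuple layout here is an assumption, not a standardized format:

```python
import numpy as np

def accumulate_events(events, height, width):
    """Accumulate events (t, x, y, polarity) into a signed event frame:
    +1 per ON event and -1 per OFF event at each pixel. The (t, x, y, p)
    ordering is assumed for this sketch; real sensor drivers differ."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        frame[y, x] += 1 if p > 0 else -1
    return frame
```

Such frames discard the fine temporal resolution that makes event cameras attractive, so they are typically built over very short time windows or replaced by time-surface or learning-based representations.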
Disparity Map Algorithm Based on Edge Preserving Filter for Stereo Video Processing
This paper proposes a new local stereo matching algorithm for stereo video processing. Fundamentally, the Sum of Absolute Differences (SAD) algorithm produces accurate results on the textured regions of stereo video. However, it is sensitive to low texture and to radiometric distortions (i.e., contrast or brightness changes). To overcome these problems, the proposed algorithm employs an edge-preserving filter known as the Bilateral Filter (BF). The BF reduces noise and sharpens the images, and it also performs well on low or plain texture areas. The proposed algorithm produces accurate results and performs considerably better than several established algorithms on the standard Middlebury and KITTI benchmarks
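The edge-preserving behavior of the BF can be illustrated with a brute-force implementation for grayscale images. Parameter values are illustrative, and a real pipeline would use an optimized or joint/cross variant:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Brute-force bilateral filter for a 2-D grayscale image in [0, 1].
    Each output pixel is a weighted mean of its neighbors, with weights
    falling off with both spatial distance (sigma_s) and intensity
    difference (sigma_r), so edges are preserved while noise is smoothed."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed spatial kernel
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: neighbors with very different intensity get ~0 weight.
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a clean step edge the filter leaves the image essentially unchanged, because pixels across the edge receive near-zero range weights; that is the property that keeps disparity discontinuities sharp.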
A Robust Quasi-dense Matching Approach for Underwater Images
While techniques for finding dense correspondences in images taken in air have achieved significant success, applying them to underwater imagery still presents a serious challenge, especially in the case of “monocular stereo”, when the images constituting a stereo pair are acquired asynchronously. This is largely because of the poor image quality inherent to aquatic environments (blurriness, range-dependent brightness and color variations, time-varying water-column disturbances, etc.). The goal of this research is to develop a technique that yields the maximal number of successful matches (conjugate points) between two overlapping images. We propose a quasi-dense matching approach that works reliably for underwater imagery. It starts with a sparse set of highly robust matches (seeds) and expands pairwise matches into their neighborhoods. Adaptive Least Squares Matching (ALSM) is used during the search to establish new matches, increasing the robustness of the solution and avoiding mismatches. Experiments on a typical underwater image dataset demonstrate promising results
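The seed-and-expand strategy can be sketched as a best-first propagation. In this sketch a plain ZNCC score stands in for ALSM, a ±1-pixel local search encodes the small-distortion assumption, and grayscale images are assumed; it illustrates the propagation scheme, not the paper's actual matcher:

```python
import heapq
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def expand_matches(left, right, seeds, w=3, tau=0.8):
    """Best-first propagation from sparse seed matches ((xl,yl),(xr,yr)):
    repeatedly pop the most reliable match and try to match its 8 left-image
    neighbors, accepting a new pair only if its ZNCC exceeds tau and both
    pixels are still unmatched (uniqueness)."""
    H, W = left.shape
    matched_l, matched_r, out = set(), set(), []

    def score(xl, yl, xr, yr):
        if not (w <= xl < W - w and w <= yl < H - w
                and w <= xr < W - w and w <= yr < H - w):
            return -1.0
        return zncc(left[yl - w:yl + w + 1, xl - w:xl + w + 1],
                    right[yr - w:yr + w + 1, xr - w:xr + w + 1])

    heap = []
    for (xl, yl), (xr, yr) in seeds:
        heapq.heappush(heap, (-score(xl, yl, xr, yr), xl, yl, xr, yr))
    while heap:
        neg, xl, yl, xr, yr = heapq.heappop(heap)
        if -neg < tau or (xl, yl) in matched_l or (xr, yr) in matched_r:
            continue
        matched_l.add((xl, yl)); matched_r.add((xr, yr))
        out.append(((xl, yl), (xr, yr)))
        # Propagate into the 8-neighborhood with the same relative shift,
        # letting the right-image position vary by +/-1 to absorb distortion.
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nxl, nyl = xl + dx, yl + dy
                if (nxl, nyl) in matched_l:
                    continue
                best = max(((score(nxl, nyl, xr + dx + ex, yr + dy + ey), ex, ey)
                            for ex in (-1, 0, 1) for ey in (-1, 0, 1)),
                           key=lambda t: t[0])
                if best[0] >= tau:
                    heapq.heappush(heap, (-best[0], nxl, nyl,
                                          xr + dx + best[1], yr + dy + best[2]))
    return out
```

The priority queue ensures the most reliable candidates are consumed first, so ambiguous matches near low-quality regions are naturally blocked by the uniqueness constraint.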
Maximum likelihood estimation of cloud height from multi-angle satellite imagery
We develop a new estimation technique for recovering depth-of-field from
multiple stereo images. Depth-of-field is estimated by determining the shift in
image location resulting from different camera viewpoints. When this shift is
not divisible by pixel width, the multiple stereo images can be combined to
form a super-resolution image. By modeling this super-resolution image as a
realization of a random field, one can view the recovery of depth as a
likelihood estimation problem. We apply these modeling techniques to the
recovery of cloud height from multiple viewing angles provided by the MISR
instrument on the Terra Satellite. Our efforts are focused on a two layer cloud
ensemble where both layers are relatively planar, the bottom layer is optically
thick and textured, and the top layer is optically thin. Our results
demonstrate that, with relative ease, we obtain estimates comparable to those of the M2 stereo matcher, the algorithm used in the current MISR standard product (details can be found in [IEEE Transactions on Geoscience and Remote Sensing 40 (2002) 1547--1559]). Moreover, our techniques provide the
possibility of modeling all of the MISR data in a unified way for cloud height
estimation. Research is underway to extend this framework for fast, quality
global estimates of cloud height.
Comment: Published at http://dx.doi.org/10.1214/09-AOAS243 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
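Before any statistical modeling, the geometric core of height-from-disparity with multi-angle views is a simple parallax relation: a cloud at height h viewed at angle theta off nadir appears displaced along track by h*tan(theta). A sketch of that relation, ignoring wind advection (which biases purely geometric retrievals in practice); the angle values in the usage are illustrative:

```python
import math

def cloud_height_from_disparity(disparity_m, theta_fwd_deg, theta_aft_deg):
    """Wind-free geometric relation between cloud-top height h (meters)
    and the along-track disparity d (meters) observed between two view
    angles: d = h * (tan(theta_fwd) - tan(theta_aft)), hence
    h = d / (tan(theta_fwd) - tan(theta_aft))."""
    return disparity_m / (math.tan(math.radians(theta_fwd_deg)) -
                          math.tan(math.radians(theta_aft_deg)))
```

The likelihood approach in the abstract refines exactly this quantity: sub-pixel shifts between the multi-angle images are treated as parameters of a random-field model rather than read off a hard matching step.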
Disparity and Optical Flow Partitioning Using Extended Potts Priors
This paper addresses the problems of disparity and optical flow partitioning
based on the brightness invariance assumption. We investigate new variational
approaches to these problems with Potts priors and possibly box constraints.
For the optical flow partitioning, our model includes vector-valued data and an
adapted Potts regularizer. Using the notion of asymptotically level stable functions, we prove the existence of global minimizers of our functionals. We propose a modified alternating direction method of multipliers. This iterative algorithm requires the computation of global minimizers of classical univariate Potts problems, which can be done efficiently by dynamic programming. We prove that the algorithm converges for both the constrained and unconstrained problems. Numerical examples demonstrate the very good performance of our partitioning method.
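The univariate Potts subproblem mentioned above (fit a piecewise-constant signal, paying a penalty gamma per jump) admits an exact O(n^2) dynamic program. A minimal sketch with an L2 data term; the data term and penalty values are illustrative:

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact minimizer of sum_i (x_i - y_i)^2 + gamma * (#jumps of x)
    for a 1-D signal y, via the classical O(n^2) dynamic program."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Prefix sums give O(1) evaluation of the squared error of the best
    # constant fit (the mean) on any interval y[l..r].
    s1 = np.concatenate(([0.0], np.cumsum(y)))
    s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

    def dev(l, r):  # squared deviation from the mean on y[l..r], inclusive
        m = r - l + 1
        s = s1[r + 1] - s1[l]
        return s2[r + 1] - s2[l] - s * s / m

    B = np.empty(n + 1)          # B[r] = optimal cost for the prefix y[0..r-1]
    B[0] = -gamma                # so the first segment carries no jump penalty
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        costs = [B[l] + gamma + dev(l, r - 1) for l in range(r)]
        l_best = int(np.argmin(costs))
        B[r] = costs[l_best]
        jump[r] = l_best         # start index of the last segment

    # Backtrack and fill each segment with its mean.
    x = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        x[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return x
```

Inside the ADMM-style scheme of the abstract this solver is applied to rows and columns of the disparity or flow field, which is what makes each iteration cheap.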
Geometric-based Line Segment Tracking for HDR Stereo Sequences
In this work, we propose a purely geometric approach for the robust matching of line segments in challenging stereo streams with severe illumination changes or High Dynamic Range (HDR) environments. To that purpose, we exploit the univocal nature of the matching problem, i.e., every observation must correspond to a single feature or to none at all. We state the problem as a sparse, convex ℓ1-minimization of the matching vector regularized by the geometric constraints. This formulation allows for the robust tracking of line segments along sequences where traditional appearance-based matching techniques tend to fail due to dynamic changes in illumination. Moreover, the proposed matching algorithm also yields a considerable speed-up over previous state-of-the-art techniques, making it suitable for real-time applications such as Visual Odometry (VO). This, of course, comes at the expense of a slightly lower number of matches in comparison with appearance-based methods, and it also limits the application to continuous video sequences, as the method is rather constrained to small pose increments between consecutive frames. We validate the claimed advantages by first evaluating the matching performance in challenging video sequences, and then testing the method in a benchmarked point- and line-based VO algorithm. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This work has been supported by the Spanish Government (project DPI2017-84827-R and grant BES-2015-071606) and by the Andalusian Government (project TEP2012-530)
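A toy illustration of the univocal, purely geometric matching idea. Here a linear assignment solver plus a gating threshold stands in for the paper's ℓ1 formulation, and the cost (midpoint distance plus a weighted angle term) and its weights are made up for the example:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_segments(pred, obs, gate=10.0, angle_weight=5.0):
    """One-to-one matching of predicted line segments against observed
    ones using geometry only. Segments are ((x1, y1), (x2, y2)). Pairs
    are assigned jointly (enforcing univocality) and any pair whose cost
    exceeds `gate` is left unmatched."""
    def mid_ang(s):
        (x1, y1), (x2, y2) = s
        return (np.array([(x1 + x2) / 2, (y1 + y2) / 2]),
                np.arctan2(y2 - y1, x2 - x1))

    C = np.zeros((len(pred), len(obs)))
    for i, p in enumerate(pred):
        mp, ap = mid_ang(p)
        for j, o in enumerate(obs):
            mo, ao = mid_ang(o)
            # Undirected angle difference, wrapped into [0, pi/2].
            dang = abs(np.arctan2(np.sin(ap - ao), np.cos(ap - ao)))
            dang = min(dang, np.pi - dang)
            C[i, j] = np.linalg.norm(mp - mo) + angle_weight * dang
    rows, cols = linear_sum_assignment(C)
    return [(i, j) for i, j in zip(rows, cols) if C[i, j] <= gate]
```

The gating step realizes the "or not corresponded at all" side of the univocal constraint; in a VO loop, `pred` would be the previous frame's segments re-projected through the estimated pose increment.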
- …