Online Mutual Foreground Segmentation for Multispectral Stereo Videos
The segmentation of video sequences into foreground and background regions is
a low-level process commonly used in video content analysis and smart
surveillance applications. Using a multispectral camera setup can improve this
process by providing more diverse data to help identify objects despite adverse
imaging conditions. The registration of several data sources is however not
trivial if the appearance of objects produced by each sensor differs
substantially. This problem is further complicated when parallax effects cannot
be ignored when using close-range stereo pairs. In this work, we present a new
method to simultaneously tackle multispectral segmentation and stereo
registration. Using an iterative procedure, we estimate the labeling result for
one problem using the provisional result of the other. Our approach is based on
the alternating minimization of two energy functions that are linked through
the use of dynamic priors. We rely on the integration of shape and appearance
cues to find reliable multispectral correspondences and to properly segment
objects in low-contrast regions. We also formulate our model as a frame
processing pipeline using higher order terms to improve the temporal coherence
of our results. Our method is evaluated under different configurations on
multiple multispectral datasets, and our implementation is available online.
Comment: Preprint accepted for publication in IJCV (December 2018)
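The alternating scheme described above can be illustrated with a toy example: two scalar energies, each regularized toward the other's provisional result (the "dynamic prior"), are minimized in turn until a fixed point is reached. The energies and constants below are made up for illustration; they are not the paper's actual segmentation and registration energies.

```python
# Toy alternating minimization with dynamic priors (illustrative only).
# E1(x; y) = (x - a)^2 + lam*(x - y)^2 couples x to the provisional y,
# E2(y; x) = (y - b)^2 + lam*(y - x)^2 couples y to the provisional x.

def minimize_e1(y, a=1.0, lam=0.5):
    # Closed-form argmin over x of (x - a)^2 + lam*(x - y)^2.
    return (a + lam * y) / (1.0 + lam)

def minimize_e2(x, b=3.0, lam=0.5):
    # Closed-form argmin over y of (y - b)^2 + lam*(y - x)^2.
    return (b + lam * x) / (1.0 + lam)

def alternate(iters=50):
    x, y = 0.0, 0.0          # provisional results for both problems
    for _ in range(iters):
        x = minimize_e1(y)   # solve problem 1 with a prior from problem 2
        y = minimize_e2(x)   # solve problem 2 with a prior from problem 1
    return x, y
```

Each full cycle contracts the error by a constant factor, so the iteration converges quickly to the joint fixed point.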
Multi-Scale 3D Scene Flow from Binocular Stereo Sequences
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in the intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods allow. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with the basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108)
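As context for how stereo and motion combine into 3-D scene flow: a point can be triangulated from its pixel coordinates and disparity at two time instants, and the difference of the two 3-D positions is its scene flow. The camera intrinsics below are made-up values, and this pointwise recipe ignores the probabilistic fusion the paper performs.

```python
import numpy as np

# Triangulate a stereo observation (pixel u, v and disparity d) into 3-D,
# then subtract positions at two times to get scene flow. Focal length f,
# baseline B, and principal point (cx, cy) are illustrative values.
def triangulate(u, v, d, f=500.0, B=0.1, cx=320.0, cy=240.0):
    Z = f * B / d                 # depth from disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])

p0 = triangulate(320.0, 240.0, 10.0)   # point on the optical axis, Z = 5 m
p1 = triangulate(330.0, 240.0, 12.5)   # moved by optical flow, Z = 4 m
scene_flow = p1 - p0                   # 3-D motion of the point
```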
A Large Dataset to Train Convolutional Networks for Disparity, Optical Flow, and Scene Flow Estimation
Recent work has shown that optical flow estimation can be formulated as a
supervised learning task and can be successfully solved with convolutional
networks. Training of the so-called FlowNet was enabled by a large
synthetically generated dataset. The present paper extends the concept of
optical flow estimation via convolutional networks to disparity and scene flow
estimation. To this end, we propose three synthetic stereo video datasets with
sufficient realism, variation, and size to successfully train large networks.
Our datasets are the first large-scale datasets to enable training and
evaluating scene flow methods. Besides the datasets, we present a convolutional
network for real-time disparity estimation that provides state-of-the-art
results. By combining a flow and disparity estimation network and training it
jointly, we demonstrate the first scene flow estimation with a convolutional
network.
Comment: Includes supplementary material
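The standard metric for evaluating flow and disparity networks trained on such datasets is the average endpoint error (EPE) between predicted and ground-truth fields; a minimal sketch:

```python
import numpy as np

# Average endpoint error: mean Euclidean distance between predicted and
# ground-truth flow vectors over all pixels.
def epe(pred, gt):
    # pred, gt: (H, W, 2) flow fields
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

gt = np.zeros((2, 2, 2))
pred = np.zeros((2, 2, 2))
pred[..., 0] = 3.0
pred[..., 1] = 4.0
# Every pixel is off by the vector (3, 4), i.e. an error of 5 everywhere.
```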
Scale-Adaptive Neural Dense Features: Learning via Hierarchical Context Aggregation
How do computers and intelligent agents view the world around them? Feature
extraction and representation constitutes one of the basic building blocks towards
answering this question. Traditionally, this has been done with carefully
engineered hand-crafted techniques such as HOG, SIFT or ORB. However, there is
no "one size fits all" approach that satisfies all requirements. In recent
years, the rising popularity of deep learning has resulted in a myriad of
end-to-end solutions to many computer vision problems. These approaches, while
successful, tend to lack scalability and cannot easily exploit information
learned by other systems. Instead, we propose SAND features, a dedicated deep
learning solution to feature extraction capable of providing hierarchical
context information. This is achieved by employing sparse relative labels
indicating relationships of similarity/dissimilarity between image locations.
The nature of these labels results in an almost infinite set of dissimilar
examples to choose from. We demonstrate how the selection of negative examples
during training can be used to modify the feature space and vary its
properties. To demonstrate the generality of this approach, we apply the
proposed features to a multitude of tasks, each requiring different properties.
This includes disparity estimation, semantic segmentation, self-localisation
and SLAM. In all cases, we show how incorporating SAND features results in
better or comparable results to the baseline, whilst requiring little to no
additional training. Code can be found at:
https://github.com/jspenmar/SAND_features
Comment: CVPR201
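One standard way to learn from similarity/dissimilarity labels between image locations is a contrastive hinge loss over descriptor pairs: similar pairs are pulled together, dissimilar pairs are pushed at least a margin apart. The sketch below is illustrative and not the SAND paper's exact objective; the margin value is a made-up choice.

```python
import numpy as np

# Contrastive loss over a pair of descriptors (illustrative sketch).
def contrastive_loss(f1, f2, similar, margin=1.0):
    d = np.linalg.norm(f1 - f2)
    if similar:
        return d ** 2                    # pull matching locations together
    return max(0.0, margin - d) ** 2     # push non-matches beyond the margin

a = np.zeros(2)
b = np.array([2.0, 0.0])
loss_pull = contrastive_loss(a, b, similar=True)   # squared distance: 4.0
loss_push = contrastive_loss(a, b, similar=False)  # already past margin: 0.0
```

With sparse relative labels, any location not marked similar can serve as a negative, which is what yields the almost infinite pool of dissimilar examples mentioned above.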
Low-level Vision by Consensus in a Spatial Hierarchy of Regions
We introduce a multi-scale framework for low-level vision, where the goal is
estimating physical scene values from image data---such as depth from stereo
image pairs. The framework uses a dense, overlapping set of image regions at
multiple scales and a "local model," such as a slanted-plane model for stereo
disparity, that is expected to be valid piecewise across the visual field.
Estimation is cast as optimization over a dichotomous mixture of variables,
simultaneously determining which regions are inliers with respect to the local
model (binary variables) and the correct co-ordinates in the local model space
for each inlying region (continuous variables). When the regions are organized
into a multi-scale hierarchy, optimization can occur in an efficient and
parallel architecture, where distributed computational units iteratively
perform calculations and share information through sparse connections between
parents and children. The framework performs well on a standard benchmark for
binocular stereo, and it produces a distributional scene representation that is
appropriate for combining with higher-level reasoning and other low-level cues.
Comment: Accepted to CVPR 2015. Project page: http://www.ttic.edu/chakrabarti/consensus
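A slanted-plane local model for stereo disparity, as mentioned above, predicts d(x, y) = a·x + b·y + c within a region; fitting the plane by least squares and thresholding residuals yields the binary inlier variables. The threshold below is a made-up value, and this sketch omits the hierarchical consensus optimization.

```python
import numpy as np

# Fit a slanted-plane disparity model d = a*x + b*y + c by least squares.
def fit_plane(xs, ys, ds):
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, ds, rcond=None)
    return coeffs                      # (a, b, c)

# Binary inlier variables: pixels whose residual is below a threshold.
def inliers(xs, ys, ds, coeffs, tau=0.5):
    a, b, c = coeffs
    resid = np.abs(a * xs + b * ys + c - ds)
    return resid < tau

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])
ds = np.array([1.0, 1.5, 2.5, 3.0])    # exactly d = 0.5*x + 0.5*y + 1
ds_noisy = ds.copy()
ds_noisy[3] += 5.0                     # one gross outlier
```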
Robust pedestrian detection and tracking in crowded scenes
In this paper, a robust computer vision approach to detecting and tracking pedestrians in unconstrained crowded scenes is presented. Pedestrian detection is performed via a 3D clustering process within a region-growing framework. The clustering process avoids using hard thresholds by using biometrically inspired constraints and a number of plan view statistics. Pedestrian tracking is achieved by formulating the track matching process as a weighted bipartite graph and using a Weighted Maximum Cardinality Matching scheme. The approach is evaluated using both indoor and outdoor sequences, captured using a variety of different camera placements and orientations, that feature significant challenges in terms of the number of pedestrians present, their interactions and scene lighting conditions. The evaluation is performed against a manually generated groundtruth for all sequences. Results point to the extremely accurate performance of the proposed approach in all cases.
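Formulating track matching as a weighted bipartite graph means choosing the assignment of existing tracks to new detections that maximizes total similarity. The brute-force search below illustrates the idea on a small instance; the paper uses a Weighted Maximum Cardinality Matching scheme, and practical systems use polynomial-time algorithms such as the Hungarian method.

```python
from itertools import permutations

# Maximum-weight assignment of tracks to detections (brute force,
# illustrative only; fine for tiny instances).
def best_matching(weights):
    # weights[i][j]: similarity of track i to detection j (square matrix)
    n = len(weights)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(weights[i][perm[i]] for i in range(n))
        if score > best:
            best, best_perm = score, perm
    return best_perm, best

# Hypothetical similarity scores between 3 tracks and 3 detections.
weights = [
    [0.9, 0.1, 0.2],
    [0.3, 0.8, 0.1],
    [0.2, 0.4, 0.7],
]
```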
Simultaneous Stereo Video Deblurring and Scene Flow Estimation
Videos of outdoor scenes often show unpleasant blur effects due to the large
relative motion between the camera and the dynamic objects and large depth
variations. Existing works typically focus on monocular video deblurring. In this
paper, we propose a novel approach to deblurring from stereo videos. In
particular, we exploit the piece-wise planar assumption about the scene and
leverage the scene flow information to deblur the image. Unlike the existing
approach [31] which used a pre-computed scene flow, we propose a single
framework to jointly estimate the scene flow and deblur the image, where the
motion cues from scene flow estimation and blur information can reinforce
each other and produce superior results to conventional scene flow
estimation or stereo deblurring methods. We evaluate our method extensively on
two available datasets and achieve significant improvements in flow estimation
and blur removal over the state-of-the-art methods.
Comment: Accepted to IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 201
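A simple way to see how scene flow can drive deblurring: a per-pixel motion vector induces a linear motion-blur kernel along its direction, which can then be deconvolved. The rasterization below is an illustrative sketch with made-up parameters, not the paper's blur model.

```python
import numpy as np

# Build a normalized linear motion-blur kernel from a motion vector
# (dx, dy), rasterized into a small window (illustrative sketch).
def motion_blur_kernel(dx, dy, size=5):
    k = np.zeros((size, size))
    c = size // 2
    steps = max(abs(dx), abs(dy), 1)
    for t in np.linspace(0.0, 1.0, steps + 1):
        x = int(round(c + t * dx))
        y = int(round(c + t * dy))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] += 1.0
    return k / k.sum()       # normalize so the kernel preserves brightness

k = motion_blur_kernel(2, 0)  # horizontal motion of 2 px
```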