Fast Multi-frame Stereo Scene Flow with Motion Segmentation
We propose a new multi-frame method for efficiently computing scene flow
(dense depth and optical flow) and camera ego-motion for a dynamic scene
observed from a moving stereo camera rig. Our technique also segments out
moving objects from the rigid scene. In our method, we first estimate the
disparity map and the 6-DOF camera motion using stereo matching and visual
odometry. We then identify regions inconsistent with the estimated camera
motion and compute per-pixel optical flow only at these regions. This flow
proposal is fused with the camera motion-based flow proposal using fusion moves
to obtain the final optical flow and motion segmentation. This unified
framework benefits all four tasks (stereo, optical flow, visual odometry, and
motion segmentation), leading to overall higher accuracy and efficiency. Our
method is currently ranked third on the KITTI 2015 scene flow benchmark.
Furthermore, our CPU implementation runs in 2-3 seconds per frame, which is 1-3
orders of magnitude faster than the top six methods. We also report a thorough
evaluation on challenging Sintel sequences with fast camera and object motion,
where our method consistently outperforms OSF [Menze and Geiger, 2015], which
is currently ranked second on the KITTI benchmark.
Comment: 15 pages. To appear at IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2017). Our results were submitted to KITTI 2015 Stereo
Scene Flow Benchmark in November 201
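To make the pipeline concrete, below is a minimal NumPy sketch of its first stage: the camera-motion-based flow proposal, obtained by back-projecting each pixel with the estimated disparity and re-projecting it under the visual-odometry ego-motion. The function name and its interface are illustrative, not the paper's code; the actual method goes on to compute per-pixel optical flow only in regions inconsistent with this proposal and fuses the two proposals with fusion moves.

```python
import numpy as np

def rigid_flow_proposal(disparity, K, baseline, R, t):
    """Flow induced purely by camera ego-motion, assuming a rigid scene.

    disparity : (H, W) stereo disparity in pixels
    K         : (3, 3) camera intrinsics
    baseline  : stereo baseline in meters
    R, t      : 6-DOF ego-motion (rotation, translation) from visual odometry
    """
    H, W = disparity.shape
    fx = K[0, 0]
    # Back-project each pixel to 3D: depth = fx * baseline / disparity.
    depth = fx * baseline / np.maximum(disparity, 1e-6)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the estimated camera motion and re-project into the image.
    proj = K @ (R @ pts + t.reshape(3, 1))
    proj = proj[:2] / proj[2:]
    # Rigid-flow proposal: displacement of each pixel under ego-motion alone.
    return (proj - pix[:2]).T.reshape(H, W, 2)
```

Pixels where the frame warped by this flow disagrees with the observed frame mark candidate moving objects, so the expensive optical-flow computation is restricted to those regions.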
Low-level Vision by Consensus in a Spatial Hierarchy of Regions
We introduce a multi-scale framework for low-level vision, where the goal is
estimating physical scene values from image data---such as depth from stereo
image pairs. The framework uses a dense, overlapping set of image regions at
multiple scales and a "local model," such as a slanted-plane model for stereo
disparity, that is expected to be valid piecewise across the visual field.
Estimation is cast as optimization over a dichotomous mixture of variables,
simultaneously determining which regions are inliers with respect to the local
model (binary variables) and the correct coordinates in the local model space
for each inlying region (continuous variables). When the regions are organized
into a multi-scale hierarchy, optimization can occur in an efficient and
parallel architecture, where distributed computational units iteratively
perform calculations and share information through sparse connections between
parents and children. The framework performs well on a standard benchmark for
binocular stereo, and it produces a distributional scene representation that is
appropriate for combining with higher-level reasoning and other low-level cues.
Comment: Accepted to CVPR 2015. Project page:
http://www.ttic.edu/chakrabarti/consensus
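As a hedged illustration of the dichotomous mixture for a single region, the sketch below alternates between the continuous variables (a slanted-plane disparity model d(x, y) = a*x + b*y + c) and the binary per-pixel inlier labels. All names and the helper itself are assumed for illustration; the paper instead optimizes all overlapping regions jointly, with parents and children in the scale hierarchy exchanging information through sparse connections.

```python
import numpy as np

def fit_slanted_plane(xs, ys, d_obs, n_iters=10, tau=1.0):
    """Alternating estimation for one region: continuous plane
    parameters vs. binary per-pixel inlier labels."""
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(float)
    inliers = np.ones(len(xs), dtype=bool)
    theta = np.zeros(3)
    for _ in range(n_iters):
        if not inliers.any():
            break
        # Continuous step: least-squares plane fit over the current inliers.
        theta, *_ = np.linalg.lstsq(A[inliers], d_obs[inliers], rcond=None)
        # Binary step: a pixel is an inlier if the plane explains its
        # observed disparity to within tau pixels.
        inliers = np.abs(A @ theta - d_obs) < tau
    return theta, inliers
```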
Flow-Guided Feature Aggregation for Video Object Detection
Extending state-of-the-art object detectors from image to video is
challenging. The accuracy of detection suffers from degraded object
appearances in videos, e.g., motion blur, video defocus, rare poses, etc.
Existing work attempts to exploit temporal information at the box level, but such
methods are not trained end-to-end. We present flow-guided feature aggregation,
an accurate and end-to-end learning framework for video object detection. It
leverages temporal coherence at the feature level instead. It improves the
per-frame features by aggregation of nearby features along the motion paths,
and thus improves the video recognition accuracy. Our method significantly
improves upon strong single-frame baselines on ImageNet VID, especially for
more challenging fast-moving objects. Our framework is principled, and on par
with the best engineered systems winning the ImageNet VID challenges 2016,
without additional bells-and-whistles. The proposed method, together with Deep
Feature Flow, powered the winning entry of ImageNet VID challenges 2017. The
code is available at
https://github.com/msracver/Flow-Guided-Feature-Aggregation
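Below is a minimal NumPy sketch of the aggregation step, assuming per-frame features have already been warped to the reference frame along the estimated flow. The paper scores per-pixel similarity with a small learned embedding network; in this sketch the raw features stand in for that embedding, and the function name and shapes are assumptions for illustration.

```python
import numpy as np

def aggregate_warped_features(warped, ref_idx):
    """Adaptively average feature maps already warped to the reference
    frame along estimated flow.

    warped  : (T, C, H, W) nearby-frame features after flow warping
    ref_idx : index of the reference frame within the stack
    """
    ref = warped[ref_idx]
    # Per-pixel cosine similarity between each warped feature map and
    # the reference feature map.
    num = (warped * ref[None]).sum(axis=1)                       # (T, H, W)
    den = (np.linalg.norm(warped, axis=1) *
           np.linalg.norm(ref, axis=0)[None] + 1e-6)
    sim = num / den
    # Softmax over time yields adaptive per-pixel aggregation weights.
    w = np.exp(sim - sim.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w[:, None] * warped).sum(axis=0)                     # (C, H, W)
```

The softmax over time implements the adaptive weighting: frames whose warped features resemble the reference at a given pixel contribute more to the aggregated feature there.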