16,560 research outputs found
Video Interpolation using Optical Flow and Laplacian Smoothness
Non-rigid video interpolation is a common computer vision task. In this paper
we present an optical flow approach which adopts a Laplacian Cotangent Mesh
constraint to enhance the local smoothness. Similar to Li et al., our approach
fits a mesh to the image with a resolution of up to one vertex per pixel and
uses angle constraints to ensure sensible local deformations between image
pairs. The Laplacian Mesh constraints are expressed wholly inside the optical
flow optimization, and can be applied in a straightforward manner to a wide
range of image tracking and registration problems. We evaluate our approach by
testing on several benchmark datasets, including the Middlebury and Garg et al.
datasets. In addition, we show the application of our method to constructing 3D
Morphable Facial Models from dynamic 3D data.
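The Laplacian smoothness idea above can be illustrated with a minimal sketch: a uniform-weight discrete Laplacian over a per-pixel grid, penalising flow vectors that deviate from the mean of their neighbours. This is a simplification for illustration; the paper's cotangent weighting and angle constraints are omitted.

```python
import numpy as np

def laplacian_smoothness_energy(flow):
    """Uniform-weight Laplacian smoothness energy of a dense flow field.

    flow: (H, W, 2) array of per-pixel displacements.
    Returns the sum of squared differences between each pixel's flow and
    the mean flow of its 4-neighbours (a uniform, not cotangent,
    Laplacian -- a simplified stand-in for the paper's mesh constraint).
    """
    # Mean of the four axis-aligned neighbours (borders replicated).
    padded = np.pad(flow, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                  padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    delta = flow - neigh_mean           # discrete Laplacian coordinates
    return float(np.sum(delta ** 2))    # energy to be minimised

# A constant (purely translational) flow has zero Laplacian energy.
print(laplacian_smoothness_energy(np.ones((8, 8, 2))))  # 0.0
```

In an optical flow optimisation this term is added to the data cost, so smooth deformations are preferred without forbidding local non-rigidity.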
Combining Stereo Disparity and Optical Flow for Basic Scene Flow
Scene flow is a description of real world motion in 3D that contains more
information than optical flow. Because of its complexity, no variant of
real-time scene flow estimation yet exists that is sufficiently robust and
accurate for an automotive or commercial-vehicle context. Therefore,
many applications estimate the 2D optical flow instead. In this paper, we
examine the combination of top-performing state-of-the-art optical flow and
stereo disparity algorithms in order to achieve a basic scene flow. On the
public KITTI Scene Flow Benchmark we demonstrate the reasonable accuracy of the
combined approach and show its computational speed.
Comment: Commercial Vehicle Technology Symposium (CVTS), 201
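The combination the abstract describes can be sketched as follows: back-project each pixel to 3D using stereo depth (Z = fB/d) at two time steps, chained by the optical flow, and take the difference of the 3D points. The function name, the nearest-neighbour disparity lookup, and the camera parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def scene_flow_from_disparity_and_flow(d0, d1, flow, f, B, cx, cy):
    """Basic scene flow from stereo disparity plus optical flow (sketch).

    Assumes a rectified stereo pair with focal length f (pixels),
    baseline B (metres) and principal point (cx, cy). d0/d1 are dense
    disparity maps at frames t and t+1; flow is the (H, W, 2) optical
    flow from t to t+1. Returns per-pixel 3D motion vectors.
    """
    H, W = d0.shape
    xs, ys = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))

    def backproject(x, y, d):
        Z = f * B / np.maximum(d, 1e-6)   # stereo depth from disparity
        X = (x - cx) * Z / f
        Y = (y - cy) * Z / f
        return np.stack([X, Y, Z], axis=-1)

    P0 = backproject(xs, ys, d0)
    # Follow each pixel along its optical flow and sample the disparity
    # at t+1 (nearest-neighbour lookup, for simplicity).
    x1 = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, W - 1)
    y1 = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, H - 1)
    P1 = backproject(xs + flow[..., 0], ys + flow[..., 1], d1[y1, x1])
    return P1 - P0                        # 3D motion per pixel
```

Because it only chains two off-the-shelf 2D estimators, such a combination inherits their speed, which is the trade-off the paper evaluates on KITTI.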
Multi-Scale 3D Scene Flow from Binocular Stereo Sequences
Scene flow methods estimate the three-dimensional motion field for points in the world, using multi-camera video data. Such methods combine multi-view reconstruction with motion estimation. This paper describes an alternative formulation for dense scene flow estimation that provides reliable results using only two cameras by fusing stereo and optical flow estimation into a single coherent framework. Internally, the proposed algorithm generates probability distributions for optical flow and disparity. Taking into account the uncertainty in these intermediate stages allows for more reliable estimation of the 3D scene flow than previous methods. To handle the aperture problems inherent in the estimation of optical flow and disparity, a multi-scale method along with a novel region-based technique is used within a regularized solution. This combined approach both preserves discontinuities and prevents over-regularization, two problems commonly associated with basic multi-scale approaches. Experiments with synthetic and real test data demonstrate the strength of the proposed approach.
National Science Foundation (CNS-0202067, IIS-0208876); Office of Naval Research (N00014-03-1-0108
SurfelWarp: Efficient Non-Volumetric Single View Dynamic Reconstruction
We contribute a dense SLAM system that takes a live stream of depth images as
input and reconstructs non-rigid deforming scenes in real time, without
templates or prior models. In contrast to existing approaches, we do not
maintain any volumetric data structures, such as truncated signed distance
function (TSDF) fields or deformation fields, which are performance and memory
intensive. Our system works with a flat point (surfel) based representation of
geometry, which can be directly acquired from commodity depth sensors. Standard
graphics pipelines and general purpose GPU (GPGPU) computing are leveraged for
all central operations: i.e., nearest neighbor maintenance, non-rigid
deformation field estimation and fusion of depth measurements. Our pipeline
inherently avoids expensive volumetric operations such as marching cubes,
volumetric fusion and dense deformation field update, leading to significantly
improved performance. Furthermore, the explicit and flexible surfel based
geometry representation enables efficient tackling of topology changes and
tracking failures, which makes our reconstructions consistent with updated
depth observations. Our system allows robots to maintain a scene description
with non-rigidly deformed objects that potentially enables interactions with
dynamic working environments.
Comment: RSS 2018. The video and source code are available on
https://sites.google.com/view/surfelwarp/hom
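A hypothetical sketch of the surfel idea: a flat structure-of-arrays buffer instead of a volumetric grid, with a running weighted average as the depth-fusion step. Field names and the fusion rule are assumptions for illustration; the actual system performs these operations on the GPU.

```python
import numpy as np

# A minimal surfel buffer: each surfel is an oriented disc with a
# position, unit normal, radius and an accumulated confidence weight.
# (Illustrative layout, not SurfelWarp's actual GPU buffers.)
surfel_dtype = np.dtype([
    ("position",   np.float32, 3),   # 3D location in world space
    ("normal",     np.float32, 3),   # unit surface normal
    ("radius",     np.float32),      # disc radius (surface coverage)
    ("confidence", np.float32),      # accumulated observation weight
])

def fuse_observation(surfel, point, normal, weight=1.0):
    """Fuse a new depth observation into a surfel by a confidence-
    weighted running average -- the kind of non-volumetric fusion the
    abstract describes, with no TSDF grid or marching cubes involved."""
    c = surfel["confidence"]
    surfel["position"] = (c * surfel["position"] + weight * point) / (c + weight)
    n = c * surfel["normal"] + weight * normal
    surfel["normal"] = n / np.linalg.norm(n)
    surfel["confidence"] = c + weight
    return surfel
```

Because each surfel is updated independently, topology changes or tracking failures only require adding, removing or re-weighting individual surfels rather than rebuilding a volume.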
Optical Flow in Mostly Rigid Scenes
The optical flow of natural scenes is a combination of the motion of the
observer and the independent motion of objects. Existing algorithms typically
focus on either recovering motion and structure under the assumption of a
purely static world or optical flow for general unconstrained scenes. We
combine these approaches in an optical flow algorithm that estimates an
explicit segmentation of moving objects from appearance and physical
constraints. In static regions we take advantage of strong constraints to
jointly estimate the camera motion and the 3D structure of the scene over
multiple frames. This allows us to also regularize the structure instead of the
motion. Our formulation uses a Plane+Parallax framework, which works even under
small baselines, and reduces the motion estimation to a one-dimensional search
problem, resulting in more accurate estimation. In moving regions the flow is
treated as unconstrained, and computed with an existing optical flow method.
The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art
results on both the MPI-Sintel and KITTI-2015 benchmarks.
Comment: 15 pages, 10 figures; accepted for publication at CVPR 201
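The Plane+Parallax idea mentioned above can be sketched as: motion induced by a reference plane (a homography) plus a residual parallax along the line through the epipole, controlled by a single scalar per pixel, which is why motion estimation reduces to a one-dimensional search. Symbols and sign conventions here follow the standard textbook decomposition, not necessarily the paper's exact parameterisation.

```python
import numpy as np

def plane_plus_parallax_flow(x, H, epipole, gamma):
    """Flow of a point under a Plane+Parallax decomposition (sketch).

    x:       pixel (x, y) in the first image.
    H:       3x3 homography induced by the reference plane.
    epipole: epipole in the second image.
    gamma:   scalar parallax magnitude for this pixel (proportional to
             the point's height above the plane over its depth; its
             sign encodes which side of the plane the point is on).
    With gamma fixed to 0 the flow is purely the plane's homography, so
    estimating per-pixel motion is a 1D search over gamma alone.
    """
    xh = np.array([x[0], x[1], 1.0])
    xw = H @ xh
    xw = xw[:2] / xw[2]                 # point warped by the homography
    residual = gamma * (xw - epipole)   # parallax along the epipolar line
    return xw + residual - np.asarray(x, dtype=float)
```

Constraining the residual to a line through the epipole is what makes the estimation better conditioned in static regions, as the abstract argues.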