EpicFlow: Edge-Preserving Interpolation of Correspondences for Optical Flow
We propose a novel approach for optical flow estimation, targeted at large
displacements with significant occlusions. It consists of two steps: i) dense
matching by edge-preserving interpolation from a sparse set of matches; ii)
variational energy minimization initialized with the dense matches. The
sparse-to-dense interpolation relies on an appropriate choice of the distance,
namely an edge-aware geodesic distance. This distance is tailored to handle
occlusions and motion boundaries -- two common and difficult issues for optical
flow computation. We also propose an approximation scheme for the geodesic
distance to allow fast computation without loss of performance. Subsequent to
the dense interpolation step, standard one-level variational energy
minimization is carried out on the dense matches to obtain the final flow
estimation. The proposed approach, called Edge-Preserving Interpolation of
Correspondences (EpicFlow), is fast and robust to large displacements. It
significantly outperforms the state of the art on MPI-Sintel and performs on
par on KITTI and Middlebury.
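The sparse-to-dense step described above can be illustrated with a minimal sketch. The version below uses plain Euclidean distance with Gaussian weights as a stand-in for the paper's edge-aware geodesic distance; the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def interpolate_flow(matches, flows, grid_points, sigma=5.0):
    """Sparse-to-dense flow interpolation (simplified sketch).

    matches:     (M, 2) locations of sparse matches
    flows:       (M, 2) flow vectors at those locations
    grid_points: (N, 2) pixel coordinates to fill in

    Euclidean distance is used here for brevity; EpicFlow replaces
    it with an edge-aware geodesic distance so that weights do not
    leak across motion boundaries.
    """
    dense = np.empty((len(grid_points), 2))
    for i, p in enumerate(grid_points):
        d = np.linalg.norm(matches - p, axis=1)   # Euclidean stand-in
        w = np.exp(-d / sigma)                    # distance-based weights
        dense[i] = (w[:, None] * flows).sum(axis=0) / w.sum()
    return dense
```

With the geodesic distance swapped in, pixels separated by a strong image edge receive little weight from each other, which is what makes the interpolation respect occlusions and motion boundaries.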
Dynamic Denoising of Tracking Sequences
©2008 IEEE. DOI: 10.1109/TIP.2008.920795

In this paper, we describe an approach to simultaneously enhancing image sequences and tracking the objects of interest they depict. The enhancement part of the algorithm is based on Bayesian wavelet denoising, chosen for its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from reasonable assumptions on the properties of the image to be enhanced and from the images observed before the current scene. Using such priors forms the main contribution of the present paper: the proposal of dynamic denoising as a tool for simultaneously enhancing and tracking image sequences.

Within the proposed framework, previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that fuses information across successive image frames is Bayesian estimation, while the transfer of useful information between images is governed by a Kalman filter, used for both prediction and estimation of the dynamics of the tracked objects. In this methodology, therefore, the processes of target tracking and image enhancement "collaborate" in an interlacing manner, rather than being applied separately. Dynamic denoising is demonstrated on several examples of SAR imagery. The results indicate a number of advantages of the proposed dynamic denoising over "static" approaches, in which the tracked images are enhanced independently of one another.
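The Kalman filter's predict/update cycle mentioned in the abstract can be sketched generically. This is a textbook linear Kalman step, not the paper's specific formulation; the matrices F, H, Q, R are assumptions for a constant-velocity tracking model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x: state estimate, P: state covariance, z: new measurement,
    F: state transition, H: measurement model,
    Q: process noise covariance, R: measurement noise covariance.
    """
    # Predict: propagate the state and its uncertainty forward in time.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement.
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In the paper's setting, the filter governs how information from earlier (already denoised) frames is transferred to the current one; here it is shown on the standard position/velocity tracking case for clarity.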
Better Feature Tracking Through Subspace Constraints
Feature tracking in video is a crucial task in computer vision. Usually, the
tracking problem is handled one feature at a time, using a single-feature
tracker like the Kanade-Lucas-Tomasi algorithm, or one of its derivatives.
While this approach works quite well when dealing with high-quality video and
"strong" features, it often falters when faced with dark and noisy video
containing low-quality features. We present a framework for jointly tracking a
set of features, which enables sharing information between the different
features in the scene. We show that our method can be employed to track
features for both rigid and nonrigid motions (possibly of few moving bodies)
even when some features are occluded. Furthermore, it can be used to
significantly improve tracking results in poorly-lit scenes (where there is a
mix of good and bad features). Our approach does not require direct modeling of
the structure or the motion of the scene, and runs in real time on a single CPU
core.

Comment: 8 pages, 2 figures. CVPR 201
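The key idea behind joint tracking with subspace constraints is that, under an affine camera model, the trajectories of features on a rigidly moving body span a low-dimensional subspace. A minimal sketch of checking that property (not the paper's tracker itself, which enforces the constraint during tracking):

```python
import numpy as np

def subspace_rank(tracks, tol=1e-6):
    """Numerical rank of a trajectory matrix.

    tracks: (2F, P) matrix stacking the x- and y-coordinates of
    P features over F frames. For a single rigid motion under an
    affine projection model, the rank is at most 4, so trajectories
    of "good" features constrain where "bad" ones can be.
    """
    s = np.linalg.svd(tracks, compute_uv=False)
    return int((s > tol * s[0]).sum())
```

A joint tracker can exploit this by projecting noisy per-feature estimates onto the low-rank subspace spanned by the reliably tracked features, which is how information is shared between strong and weak features.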