216 research outputs found

    A multigrid platform for real-time motion computation with discontinuity-preserving variational methods

    Variational methods are among the most accurate techniques for estimating optic flow. They yield dense flow fields and can be designed to preserve discontinuities, handle large displacements, and perform well under noise or varying illumination. However, such adaptations render the minimisation of the underlying energy functional very expensive in terms of computational cost: typically, one or more large linear or nonlinear systems of equations have to be solved in order to obtain the desired solution. Consequently, variational methods are considered too slow for real-time performance. In our paper we address this problem in two ways: (i) we present a numerical framework based on bidirectional multigrid methods for accelerating a broad class of variational optic flow methods with different constancy and smoothness assumptions; discontinuity-preserving regularisation strategies are the particular focus of our work. (ii) We show, by the example of classical as well as more advanced variational techniques, that real-time performance is possible even for very complex optic flow models with high accuracy. Experiments show frame rates of up to 63 dense flow fields per second for real-world image sequences of size 160 × 120 on a standard PC. Compared to classical iterative methods, this constitutes a speedup of two to four orders of magnitude.
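    The bidirectional multigrid idea above can be illustrated on the simplest model problem. The sketch below (an illustration under stated assumptions, not the paper's implementation) applies a recursive V-cycle with a weighted Jacobi smoother to the 1D Poisson equation, the prototype of the linear systems that arise from homogeneous regularisation; function names, grid sizes, and sweep counts are assumptions.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, omega=2.0 / 3.0):
    """Weighted Jacobi smoother for -u'' = f on a uniform grid."""
    for _ in range(sweeps):
        u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f + u'' of the discretised equation (zero on the boundary)."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    """Full weighting: transfer a fine-grid residual to the coarse grid."""
    return np.concatenate(([0.0], 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2], [0.0]))

def prolong(ec, n_fine):
    """Linear interpolation of a coarse-grid correction to the fine grid."""
    ef = np.zeros(n_fine)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return ef

def v_cycle(u, f, h):
    """One bidirectional V-cycle: smooth, restrict, recurse, correct, smooth."""
    if u.size <= 3:                                # coarsest grid: solve directly
        u[1] = 0.5 * h * h * f[1]
        return u
    u = jacobi(u, f, h)                            # pre-smoothing
    rc = restrict(residual(u, f, h))               # restrict the residual
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)   # coarse-grid correction
    u += prolong(ec, u.size)                       # prolongate and correct
    return jacobi(u, f, h)                         # post-smoothing
```

    A handful of such cycles reduces the residual by many orders of magnitude, which is the source of the speedups quoted in the abstract.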

    Variational optic flow on the Sony PlayStation 3 – accurate dense flow fields for real-time applications

    While modern variational methods for optic flow computation offer dense flow fields and highly accurate results, their computational complexity has prevented their use in many real-time applications. With cheap modern parallel hardware such as the Sony PlayStation 3, new possibilities arise. For a linear and a nonlinear variant of the popular combined local-global (CLG) method, we present specific algorithms that are tailored towards real-time performance. They are based on bidirectional full multigrid methods, with a full approximation scheme (FAS) in the nonlinear setting. Their parallelisation on the Cell hardware uses a temporal instead of a spatial decomposition and processes operations in a vector-based manner. Memory latencies are reduced by locality-preserving cache management and optimised access patterns. With images of size 316×252 pixels, we obtain dense flow fields at up to 210 frames per second.

    Variational Disparity Estimation Framework for Plenoptic Image

    This paper presents a computational framework for accurately estimating the disparity map of plenoptic images. The proposed framework is based on the variational principle and provides intrinsic sub-pixel precision. The light-field motion tensor introduced in the framework allows us to combine advanced robust data terms and provides explicit treatment of different color channels. A warping strategy is embedded in our framework to tackle the large-displacement problem. We also show that by applying a simple regularization term and guided median filtering, the accuracy of the displacement field in occluded areas can be greatly enhanced. We demonstrate the excellent performance of the proposed framework through extensive comparisons with the Lytro software and contemporary approaches on both synthetic and real-world datasets.
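    The warping strategy mentioned above is common to coarse-to-fine variational schemes: the second view is resampled backwards through the current displacement estimate, so that only a small residual displacement remains to be solved for. A minimal NumPy sketch, assuming grayscale images and bilinear interpolation with clamped borders (the function name and boundary handling are illustrative, not the paper's implementation):

```python
import numpy as np

def warp_backward(img2, u, v):
    """Bilinearly sample img2 at (x + u, y + v), clamping at the image border."""
    h, w = img2.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    x = np.clip(xx + u, 0, w - 1.0)
    y = np.clip(yy + v, 0, h - 1.0)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = x - x0, y - y0
    # bilinear blend of the four neighbouring pixels
    top = (1 - wx) * img2[y0, x0] + wx * img2[y0, x1]
    bot = (1 - wx) * img2[y1, x0] + wx * img2[y1, x1]
    return (1 - wy) * top + wy * bot
```

    If the displacement field is exact, the warped second view matches the first view up to interpolation and occlusion errors, which is precisely where the occlusion handling described above becomes important.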

    Low-level Vision by Consensus in a Spatial Hierarchy of Regions

    We introduce a multi-scale framework for low-level vision, where the goal is estimating physical scene values from image data---such as depth from stereo image pairs. The framework uses a dense, overlapping set of image regions at multiple scales and a "local model," such as a slanted-plane model for stereo disparity, that is expected to be valid piecewise across the visual field. Estimation is cast as optimization over a dichotomous mixture of variables, simultaneously determining which regions are inliers with respect to the local model (binary variables) and the correct co-ordinates in the local model space for each inlying region (continuous variables). When the regions are organized into a multi-scale hierarchy, optimization can occur in an efficient and parallel architecture, where distributed computational units iteratively perform calculations and share information through sparse connections between parents and children. The framework performs well on a standard benchmark for binocular stereo, and it produces a distributional scene representation that is appropriate for combining with higher-level reasoning and other low-level cues.

    Comment: Accepted to CVPR 2015. Project page: http://www.ttic.edu/chakrabarti/consensus

    Enhancing Compressed Sensing 4D Photoacoustic Tomography by Simultaneous Motion Estimation

    A crucial limitation of current high-resolution 3D photoacoustic tomography (PAT) devices that employ sequential scanning is their long acquisition time. In previous work, we demonstrated how to use compressed sensing techniques to improve upon this: images with good spatial resolution and contrast can be obtained from suitably sub-sampled PAT data acquired by novel acoustic scanning systems if sparsity-constrained image reconstruction techniques such as total variation regularization are used. Now, we show how a further increase in image quality can be achieved when imaging dynamic processes in living tissue (4D PAT). The key idea is to exploit the additional temporal redundancy of the data by coupling the previously used spatial image reconstruction models with sparsity-constrained motion estimation models. While simulated data from a two-dimensional numerical phantom is used to illustrate the main properties of this recently developed joint image reconstruction and motion estimation framework, measured data from a dynamic experimental phantom is also used to demonstrate its potential for challenging, large-scale, real-world, three-dimensional scenarios. The latter only becomes feasible if a carefully designed combination of tailored optimization schemes is employed, which we describe and examine in more detail.
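    The total variation regularization referred to above can be sketched in isolation. The toy example below performs smoothed-TV denoising by explicit gradient descent; it only illustrates the spatial sparsity prior, not the paper's joint reconstruction-and-motion scheme, and all parameter values are assumptions.

```python
import numpy as np

def tv_denoise(f, lam=0.1, tau=0.1, eps=0.1, iters=300):
    """Gradient descent on 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps^2)."""
    u = f.copy()
    for _ in range(iters):
        # forward differences with replicated (Neumann) boundary
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag
        # divergence = negative adjoint of the forward-difference gradient
        div = np.zeros_like(u)
        div[:, 0] = px[:, 0]
        div[:, 1:] = px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]
        div[1:, :] += py[1:, :] - py[:-1, :]
        u -= tau * ((u - f) - lam * div)
    return u
```

    The smoothing parameter eps makes the TV term differentiable; the joint 4D framework couples a model of this kind with a motion term rather than applying it frame by frame.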

    A survey on variational optic flow methods for small displacements

    Optic flow describes the displacement field in an image sequence. Its reliable computation constitutes one of the main challenges in computer vision, and variational methods are among the most successful techniques for achieving this goal. Variational methods recover the optic flow field as a minimiser of a suitable energy functional that involves data and smoothness terms. In this paper we present a survey of different model assumptions for each of these terms and illustrate their impact by experiments. We restrict ourselves to rotationally invariant convex functionals with a linearised data term. Such models are appropriate for small displacements. Regarding the data term, constancy assumptions on the brightness, the gradient, the Hessian, the gradient magnitude, the Laplacian, and the Hessian determinant are investigated. Local integration and nonquadratic penalisation are considered in order to improve robustness under noise. With respect to the smoothness term, we review a recent taxonomy that links regularisers to diffusion processes. It allows us to distinguish five types of regularisation strategies: homogeneous, isotropic image-driven, anisotropic image-driven, isotropic flow-driven, and anisotropic flow-driven. All these regularisations can be performed either in the spatial or the spatiotemporal domain. After discussing well-posedness results for convex optic flow functionals, we sketch some numerical ideas for achieving real-time performance on a standard PC by means of multigrid methods, and we survey a simple and intuitive confidence measure.
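    For the linearised brightness constancy assumption with homogeneous regularisation, the survey's baseline is the classical Horn and Schunck model, which admits a simple Jacobi-type iteration. A minimal NumPy sketch (parameter choices and derivative approximations are illustrative assumptions):

```python
import numpy as np

def neighbour_avg(a):
    """4-neighbour average with replicated (Neumann) boundaries."""
    p = np.pad(a, 1, mode="edge")
    return 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])

def horn_schunck(I1, I2, alpha=1.0, iters=100):
    """Classical Horn-Schunck: linearised brightness constancy Ix*u + Iy*v + It = 0
    plus homogeneous smoothness, solved by the standard Jacobi-type iteration."""
    Ix = 0.5 * (np.gradient(I1, axis=1) + np.gradient(I2, axis=1))
    Iy = 0.5 * (np.gradient(I1, axis=0) + np.gradient(I2, axis=0))
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    for _ in range(iters):
        ub, vb = neighbour_avg(u), neighbour_avg(v)
        common = (Ix * ub + Iy * vb + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ub - Ix * common
        v = vb - Iy * common
    return u, v
```

    The other constancy assumptions surveyed (gradient, Hessian, Laplacian, ...) replace the data term while the iteration structure stays the same; the multigrid methods mentioned at the end accelerate exactly this kind of relaxation.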

    A robust multigrid approach for variational image registration models

    Variational registration models are non-rigid, deformable imaging techniques for the accurate registration of two images. As with other models for inverse problems using Tikhonov regularization, they must have a suitably chosen regularization term as well as a data fitting term. One distinct feature of registration models is that their fitting term is always highly nonlinear, and this nonlinearity restricts the class of numerical methods that are applicable. This paper first reviews the current state-of-the-art numerical methods for such models and observes that the nonlinear fitting term is mostly ‘avoided’ in developing fast multigrid methods. It then proposes a unified approach for designing fixed-point-type smoothers for multigrid methods. The diffusion registration model (second-order equations) and a curvature model (fourth-order equations) are used to illustrate our robust methodology. Analysis of the proposed smoothers and comparisons to other methods are given. As expected of a multigrid method, being many orders of magnitude faster than the unilevel gradient descent approach, the proposed numerical approach delivers fast and accurate results for a range of synthetic and real test images.
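    For orientation, the unilevel gradient descent baseline that the multigrid approach outperforms can be sketched in one dimension: diffusion registration minimises a sum-of-squared-differences fitting term plus a first-order (diffusion) regulariser. Everything below (signal sizes, step size, alpha) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def register_1d(R, T, alpha=0.05, tau=0.2, iters=500):
    """Unilevel gradient descent for
    min_u 0.5*||T(x + u) - R||^2 + 0.5*alpha*||u'||^2
    (1D diffusion registration of template T onto reference R)."""
    n = R.size
    x = np.arange(n, dtype=float)
    dT = np.gradient(T)
    u = np.zeros(n)
    for _ in range(iters):
        Tw = np.interp(x + u, x, T)    # deformed template T(x + u)
        dTw = np.interp(x + u, x, dT)  # its spatial derivative, also deformed
        lap = np.zeros(n)
        lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
        # force term from the nonlinear fitting term, plus the diffusion term
        u -= tau * ((Tw - R) * dTw - alpha * lap)
    return u
```

    The nonlinearity the abstract highlights is visible in the force term (Tw - R) * dTw, which changes with the current deformation; a fixed-point smoother freezes this term while relaxing the linear diffusion part on each multigrid level.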

    On the application of projection methods for computing optical flow fields

    Detecting optical flow means finding the apparent displacement field in a sequence of images. The starting point for many optical flow methods is the so-called optical flow constraint (OFC): the assumption that the gray value of a moving point does not change over time. Variational methods are amongst the most popular tools for computing the optical flow field. They compute the flow field as the minimizer of an energy functional that consists of a data term to comply with the OFC and a smoothness term to obtain uniqueness of this underdetermined problem. In this article we replace the smoothness term by projecting the solution onto a finite-dimensional affine subspace in the spatial variables, which leads to a smoothing and likewise gives a unique solution. We explain the mathematical details for the quadratic and nonquadratic minimization frameworks, and show how alternative model assumptions such as constancy of the brightness gradient can be incorporated. As basis functions we consider tensor products of B-splines. Under certain smoothness assumptions for the global minimizer in Sobolev scales, we prove optimal convergence rates in terms of the energy functional. Experiments are presented that demonstrate the feasibility of our approach.
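    To make the projection idea concrete: instead of penalising roughness, the solution is constrained to the span of a small spline basis, which regularises by construction. The sketch below uses a least-squares projection onto linear B-splines (hat functions) in 1D for brevity; the paper uses tensor products of B-splines, and all names and sizes here are illustrative assumptions.

```python
import numpy as np

def hat_matrix(x, knots):
    """Design matrix of the linear B-spline (hat function) basis on the knots.
    Each column j is the hat function centred at knots[j], sampled at x."""
    eye = np.eye(knots.size)
    return np.column_stack([np.interp(x, knots, eye[j]) for j in range(knots.size)])

def project(x, y, knots):
    """L2 (least-squares) projection of samples y(x) onto the hat-function span."""
    B = hat_matrix(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef
```

    The projection reproduces exactly those fields that already lie in the subspace and smooths everything else, which is why no separate smoothness term is needed.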

    Segmentation based variational model for accurate optical flow estimation.

    Chen, Jianing. Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. Includes bibliographical references (leaves 47-54). Abstract also in Chinese.
    Contents:
    Chapter 1 Introduction: Background; Related Work; Thesis Organization
    Chapter 2 Review on Optical Flow Estimation: Variational Model (Basic Assumptions and Constraints; More General Energy Functional); Discontinuity Preserving Techniques (Data Term Robustification; Diffusion Based Regularization; Segmentation); Chapter Summary
    Chapter 3 Segmentation Based Optical Flow Estimation: Initial Flow; Color-Motion Segmentation; Parametric Flow Estimating Incorporating Segmentation; Confidence Map Construction (Occlusion detection; Pixel-wise motion coherence; Segment-wise model confidence); Final Combined Variational Model; Chapter Summary
    Chapter 4 Experiment Results: Quantitative Evaluation; Warping Results; Chapter Summary
    Chapter 5 Application - Single Image Animation: Introduction; Approach (Pre-Process Stage; Coordinate Transform; Motion Field Transfer; Motion Editing and Apply; Gradient-domain composition); Experiments (Active Motion Transfer; Animate Stationary Temporal Dynamics); Chapter Summary
    Chapter 6 Conclusion
    Bibliography