183 research outputs found

    Playing with Duality: An Overview of Recent Primal-Dual Approaches for Solving Large-Scale Optimization Problems

    Full text link
    Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. It has long been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies that jointly bring the primal and the dual problems into play is, however, a more recent idea that has generated many important contributions in recent years. These novel developments are grounded in recent advances in convex analysis, discrete optimization, parallel processing, and non-smooth optimization with an emphasis on sparsity issues. In this paper, we present the principles of primal-dual approaches while giving an overview of the numerical methods that have been proposed in different contexts. We show the benefits that can be drawn from primal-dual algorithms for solving both large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
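
    The generic primal-dual template the survey refers to can be made concrete with a small example. The sketch below is not taken from the paper; it applies a Chambolle-Pock-style primal-dual iteration to 1D total-variation denoising, min_x 0.5*||x - b||^2 + lam*||Dx||_1, alternating a projected dual ascent step, a proximal primal descent step, and an extrapolation step. Function and parameter names are illustrative.

        import numpy as np

        def tv_denoise_primal_dual(b, lam=0.2, tau=0.3, sigma=0.3, iters=500):
            # Saddle-point form: min_x max_{|y| <= lam} <Dx, y> + 0.5*||x - b||^2
            n = b.size
            x = b.copy()                  # primal variable
            x_bar = x.copy()              # extrapolated primal point
            y = np.zeros(n - 1)           # dual variable (one per finite difference)
            D = lambda v: np.diff(v)      # forward differences
            Dt = lambda w: np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))  # adjoint of D
            for _ in range(iters):
                # dual ascent + projection onto the l_inf ball of radius lam
                y = np.clip(y + sigma * D(x_bar), -lam, lam)
                # primal descent + proximal map of the quadratic data term
                x_new = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
                x_bar = 2.0 * x_new - x   # extrapolation
                x = x_new
            return x

        rng = np.random.default_rng(0)
        clean = np.repeat([0.0, 1.0, 0.5], 50)
        noisy = clean + 0.1 * rng.standard_normal(clean.size)
        print(np.abs(tv_denoise_primal_dual(noisy) - clean).mean())

    The step sizes are chosen so that tau * sigma * ||D||^2 <= 1 (here 0.3 * 0.3 * 4 = 0.36), which is the usual convergence condition for this scheme.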

    Full Flow: Optical Flow Estimation By Global Optimization over Regular Grids

    Full text link
    We present a global optimization approach to optical flow estimation. The approach optimizes a classical optical flow objective over the full space of mappings between discrete grids. No descriptor matching is used. The highly regular structure of the space of mappings enables optimizations that reduce the computational complexity of the algorithm's inner loop from quadratic to linear and support efficient matching of tens of thousands of nodes to tens of thousands of displacements. We show that one-shot global optimization of a classical Horn-Schunck-type objective over regular grids at a single resolution is sufficient to initialize continuous interpolation and achieve state-of-the-art performance on challenging modern benchmarks. Comment: To be presented at CVPR 201
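
    The quadratic-to-linear reduction of the inner loop comes from the regular structure of the displacement grid. The paper's own message-passing implementation is not reproduced here, but a standard illustration of this kind of speed-up is the Felzenszwalb-Huttenlocher lower-envelope distance transform, which evaluates d[p] = min_q (f[q] + (p - q)^2) for all p in O(n) rather than O(n^2). The sketch below, with illustrative names, shows that routine.

        import numpy as np

        def dt1d(f):
            # d[p] = min_q f[q] + (p - q)**2, computed in O(n) via the lower
            # envelope of the parabolas rooted at each q.
            n = len(f)
            d = np.empty(n)
            v = np.zeros(n, dtype=int)    # indices of parabolas in the envelope
            z = np.empty(n + 1)           # boundaries between adjacent parabolas
            k = 0
            z[0], z[1] = -np.inf, np.inf
            for q in range(1, n):
                s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
                while s <= z[k]:
                    k -= 1
                    s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
                k += 1
                v[k] = q
                z[k], z[k + 1] = s, np.inf
            k = 0
            for p in range(n):
                while z[k + 1] < p:
                    k += 1
                d[p] = (p - v[k]) ** 2 + f[v[k]]
            return d

        f = np.array([3.0, 0.0, 4.0, 1.0, 5.0])
        print(dt1d(f))                    # matches the brute-force minimum over q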

    Discrete Visual Perception

    Get PDF
    Computational vision and biomedical image analysis have made tremendous progress over the past decade. This is mostly due to the development of efficient learning and inference algorithms, which allow better, faster, and richer modeling of visual perception tasks. Graph-based representations are among the most prominent tools for addressing such perception, through the casting of perception as a graph optimization problem. In this paper, we briefly introduce the interest of such representations, discuss their strengths and limitations, and present their application to a variety of problems in computer vision and biomedical image analysis.
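
    For readers unfamiliar with the formulation, "casting perception as a graph optimization problem" usually means minimizing a pairwise Markov/conditional random field energy over a graph G = (V, E); the generic form (not a formula specific to this paper) is

        E(x) = \sum_{i \in V} \theta_i(x_i) + \sum_{(i,j) \in E} \theta_{ij}(x_i, x_j)

    where the unary terms \theta_i encode local evidence and the pairwise terms \theta_{ij} encode smoothness or structural constraints; inference then searches for the labeling x minimizing E.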

    Measuring Uncertainty in Graph Cut Solutions

    Get PDF
    In recent years, graph cuts have become a popular tool for performing inference in Markov and conditional random fields. In this context, the question arises as to whether it might be possible to compute a measure of uncertainty associated with graph cut solutions. In this paper, we answer this question by showing how the min-marginals associated with the label assignments of a random field can be computed efficiently using a new algorithm based on dynamic graph cuts. The min-marginal energies obtained by our proposed algorithm are exact, as opposed to those obtained from other inference algorithms such as loopy belief propagation and generalized belief propagation. The paper also shows how min-marginals can be used for parameter learning in conditional random fields.
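
    As a reminder of the quantity being computed: the min-marginal of node i and label a is the lowest energy achievable over all labelings in which x_i is fixed to a, and the gap between a node's min-marginals indicates how confident the MAP label is. The brute-force sketch below (illustrative names, not the paper's dynamic-graph-cut algorithm) simply evaluates this definition on a toy binary Potts model.

        import itertools
        import numpy as np

        # Toy binary energy: E(x) = sum_i unary[i, x_i] + sum_{(i,j)} w * [x_i != x_j]
        unary = np.array([[0.0, 2.0],
                          [1.5, 0.0],
                          [0.0, 1.0]])
        edges = [(0, 1), (1, 2)]
        w = 0.8

        def energy(x):
            e = sum(unary[i, xi] for i, xi in enumerate(x))
            return e + sum(w for (i, j) in edges if x[i] != x[j])

        n = len(unary)
        min_marginals = np.full((n, 2), np.inf)
        for x in itertools.product([0, 1], repeat=n):
            e = energy(x)
            for i, xi in enumerate(x):
                min_marginals[i, xi] = min(min_marginals[i, xi], e)

        print(min_marginals)   # row i: [min E with x_i = 0, min E with x_i = 1]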

    Discrete and Continuous Optimization for Motion Estimation

    Get PDF
    The study of motion estimation reaches back decades and has become one of the central topics of research in computer vision. Even so, there are situations where current approaches fail, such as when there are extreme lighting variations, significant occlusions, or very large motions. In this thesis, we propose several approaches to address these issues. First, we propose a novel continuous optimization framework for estimating optical flow based on a decomposition of the image domain into triangular facets. We show how this allows occlusions to be handled easily and naturally within our optimization framework without any post-processing. We also show that a triangular decomposition reduces memory requirements enough to let us solve the resulting linear systems with a direct Cholesky decomposition. Second, we introduce a simple method for incorporating additional temporal information into optical flow using inertial estimates of the flow, which leads to a significant reduction in error. We evaluate our methods on several datasets and achieve state-of-the-art results on MPI-Sintel. Finally, we introduce a discrete optimization framework for optical flow computation. Discrete approaches have generally been avoided in optical flow because the relatively large label space makes them computationally expensive. In our approach, we use recent advances in image segmentation to build a tree-structured graphical model that conforms to the image content. We show how the optimal solution to these discrete optical flow problems can be computed efficiently by making use of optimization methods from the object recognition literature, even for large images with hundreds of thousands of labels.
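
    The "direct Cholesky decomposition" step mentioned above follows the usual pattern for the symmetric positive-definite systems that arise from least-squares flow energies: factor once, then back-substitute. The snippet below is a generic illustration with made-up matrices, not the thesis code; the thesis itself works with the sparse systems produced by the triangulated image domain.

        import numpy as np
        from scipy.linalg import cho_factor, cho_solve

        rng = np.random.default_rng(0)
        J = rng.standard_normal((20, 5))       # stand-in Jacobian of a flow energy
        A = J.T @ J + 1e-3 * np.eye(5)         # SPD Gauss-Newton-style system matrix
        b = J.T @ rng.standard_normal(20)

        c, low = cho_factor(A)                 # factor once ...
        x = cho_solve((c, low), b)             # ... then solve by back-substitution
        print(np.allclose(A @ x, b))           # True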

    Stochastic Variational Inference with Gradient Linearization

    Full text link
    Variational inference has experienced a recent surge in popularity owing to stochastic approaches, which have yielded practical tools for a wide range of model classes. A key benefit is that stochastic variational inference obviates the tedious process of deriving analytical expressions for closed-form variable updates. Instead, one simply needs to derive the gradient of the log-posterior, which is often much easier. Yet for certain model classes, the log-posterior itself is difficult to optimize using standard gradient techniques. One such example is random field models, where optimization based on gradient linearization has proven popular, since it speeds up convergence significantly and can avoid poor local optima. In this paper, we propose stochastic variational inference with gradient linearization (SVIGL). It is as convenient as standard stochastic variational inference: all that is required is a local linearization of the energy gradient. Its benefit over stochastic variational inference with conventional gradient methods is a clear improvement in convergence speed, while yielding comparable or even better variational approximations in terms of KL divergence. We demonstrate the benefits of SVIGL in three applications: optical flow estimation, Poisson-Gaussian denoising, and 3D surface reconstruction. Comment: To appear at CVPR 201
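
    For context, plain stochastic variational inference with the reparameterization trick only needs the gradient of the log-posterior, as the abstract notes; SVIGL would replace that raw gradient step with one derived from a local linearization of the energy gradient. The sketch below is a toy Gaussian example with illustrative names, showing only the plain variant that SVIGL builds on, not the paper's method.

        import numpy as np

        rng = np.random.default_rng(0)
        mu0, sigma0 = 2.0, 0.5                 # toy target posterior N(mu0, sigma0^2)

        def grad_log_p(theta):                 # gradient of the log-posterior
            return -(theta - mu0) / sigma0**2

        m, log_s = 0.0, 0.0                    # variational q(theta) = N(m, exp(log_s)^2)
        lr = 0.05
        for _ in range(5000):
            eps = rng.standard_normal()
            s = np.exp(log_s)
            theta = m + s * eps                # reparameterized sample from q
            g = grad_log_p(theta)
            # stochastic ELBO gradients; the Gaussian entropy term is handled analytically
            m += lr * g
            log_s += lr * (g * s * eps + 1.0)

        print(m, np.exp(log_s))                # should end up near (2.0, 0.5)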

    Advances in Character Recognition

    Get PDF
    This book presents advances in character recognition. It consists of 12 chapters that cover a wide range of topics on different aspects of character recognition. Hopefully, this book will serve as a reference source for academic research, for professionals working in the character recognition field, and for all those interested in the subject.