
    Dirichlet Densifiers for Improved Commute Times Estimation

    In this paper, we develop a novel Dirichlet densifier that can be used to increase the edge density in undirected graphs. Dirichlet densifiers are implicit minimizers of the spectral gap of the graph Laplacian. One consequence of this property is that they can be used to improve the estimation of meaningful commute distances in mid-size graphs by means of topological modifications of the original graphs, which results in better performance in clustering and ranking. To do this, we identify the strongest edges and from them construct the so-called line graph, whose nodes are the potential q-step reachable edges in the original graph. These strongest edges are assumed to be stable. By simulating random walks on the line graph, we identify potential new edges in the original graph. This approach is fully unsupervised, and it is both more scalable and more robust than recent explicit spectral methods, such as the Semi-Definite Programming (SDP) densifier and the sufficient condition for decreasing the spectral gap. Experiments show that our method is outperformed only by some parameter choices of a related method, the anchor graph, which relies on pre-computing cluster representatives, and that the proposed method is effective on a variety of real-world datasets.
    M. Curado, F. Escolano and M.A. Lozano are funded by the projects TIN2015-69077-P and BES2013-064482 of the Spanish Government.
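    A hypothetical sketch of the pipeline described above (keep the strongest edges, walk on their line graph, promote frequently co-visited endpoint pairs to new edges), assuming networkx. The function densify and all of its parameters (top_frac, walk_len, n_walks, n_new) are illustrative choices, not the authors' settings.

    import random
    from collections import Counter
    import networkx as nx

    def densify(G, top_frac=0.2, walk_len=4, n_walks=500, n_new=10, seed=0):
        """Propose new edges for an undirected weighted graph G via random
        walks on the line graph of its strongest ("stable") edges."""
        rng = random.Random(seed)
        # 1. Keep the strongest edges by weight.
        ranked = sorted(G.edges(data="weight", default=1.0),
                        key=lambda e: e[2], reverse=True)
        keep = max(1, int(top_frac * len(ranked)))
        strong = [(u, v) for u, v, _ in ranked[:keep]]
        # 2. Line graph: one node per strong edge of G; two nodes are
        #    adjacent iff the corresponding edges share an endpoint.
        H = nx.line_graph(nx.Graph(strong))
        hits = Counter()
        nodes = list(H.nodes)
        for _ in range(n_walks):
            start = e = rng.choice(nodes)
            for _ in range(walk_len):      # a short walk ~ q-step reachability
                nbrs = list(H.neighbors(e))
                if not nbrs:
                    break
                e = rng.choice(nbrs)
                for u in start:            # endpoints of the start edge...
                    for v in e:            # ...paired with the current edge's
                        if u != v and not G.has_edge(u, v):
                            hits[frozenset((u, v))] += 1
        # 3. Promote the most frequently suggested pairs to new edges.
        G2 = G.copy()
        for pair, _ in hits.most_common(n_new):
            G2.add_edge(*pair, weight=1.0)
        return G2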

    2D Phase Unwrapping via Graph Cuts

    Phase imaging technologies such as interferometric synthetic aperture radar (InSAR), magnetic resonance imaging (MRI), and optical interferometry are nowadays widespread and increasingly used. The so-called phase unwrapping, which consists in the inference of the absolute phase from the modulo-2π phase, is a critical step in many of their processing chains, yet still one of their most challenging problems. We introduce an energy-minimization-based approach to 2D phase unwrapping. In this approach we address the problem by adopting a Bayesian point of view and a Markov random field (MRF) to model the phase. The maximum a posteriori estimation of the absolute phase gives rise to an integer optimization problem, for which we introduce a family of efficient algorithms based on existing graph cuts techniques. We term our approach and algorithms PUMA, for Phase Unwrapping MAx flow. As long as the prior potential of the MRF is convex, PUMA guarantees an exact global solution. In particular, it solves exactly all the minimum L^p norm (p ≥ 1) phase unwrapping problems, unifying in that sense a set of existing independent algorithms. For non-convex potentials we introduce a version of PUMA that, while yielding only approximate solutions, gives very useful phase unwrapping results. The main characteristic of the introduced solutions is the ability to blindly preserve discontinuities. Extending the previous versions of PUMA, we tackle denoising by exploiting a multi-precision idea, which allows us to use the same rationale both for phase unwrapping and denoising. Finally, the last presented version of PUMA uses a frequency diversity concept to unwrap phase images having large phase rates. A representative set of experiments illustrates the performance of PUMA.
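    As a concrete illustration of the graph-cut machinery, here is a minimal sketch of PUMA-style binary moves for the L^1 (p = 1) case, assuming the PyMaxflow package: each iteration decides per pixel whether to add 2π, via a single max-flow computation over a submodular pairwise energy. The function name and the plain 4-connected grid are illustrative simplifications; the published algorithm is more general (larger jumps, arbitrary convex potentials).

    import numpy as np
    import maxflow  # pip install PyMaxflow

    TWO_PI = 2.0 * np.pi

    def puma_l1(psi, max_iter=50):
        """Unwrap the 2D wrapped phase `psi` by repeated binary graph-cut
        moves: each move decides, per pixel, whether to add 2*pi, so as to
        lower the L1 norm of the first differences of the estimate."""
        phi = psi.astype(np.float64).copy()
        h, w = phi.shape
        for _ in range(max_iter):
            g = maxflow.Graph[float]()
            ids = g.add_grid_nodes((h, w))
            # Pairwise terms |phi_p + 2*pi*k_p - phi_q - 2*pi*k_q| over the
            # 4-connected grid; convexity of |.| makes every term submodular.
            for di, dj in ((0, 1), (1, 0)):
                for i in range(h - di):
                    for j in range(w - dj):
                        p, q = ids[i, j], ids[i + di, j + dj]
                        a = phi[i, j] - phi[i + di, j + dj]
                        A, B = abs(a), abs(a - TWO_PI)   # (k_p,k_q)=(0,0),(0,1)
                        C, D = abs(a + TWO_PI), abs(a)   # (1,0),(1,1)
                        # Kolmogorov-Zabih decomposition into graph edges.
                        g.add_edge(p, q, B + C - A - D, 0.0)
                        g.add_tedge(p, max(C - A, 0.0), max(A - C, 0.0))
                        g.add_tedge(q, max(D - C, 0.0), max(C - D, 0.0))
            g.maxflow()
            move = g.get_grid_segments(ids)  # True where adding 2*pi helps
            if not move.any():
                break                        # no pixel changes: converged
            phi[move] += TWO_PI
        return phi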

    Variational segmentation problems using prior knowledge in imaging and vision

    Efficient probabilistic and geometric anatomical mapping using particle mesh approximation on GPUs

    Deformable image registration in the presence of considerable contrast differences and large size and shape changes presents significant research challenges. First, it requires a robust registration framework that does not depend on intensity measurements and can handle large nonlinear shape variations. Second, it involves the expensive computation of nonlinear deformations with high degrees of freedom, which often takes a significant amount of computation time and thus becomes infeasible for practical purposes. In this paper, we present a solution based on two key ideas: a new registration method that generates a mapping between anatomies represented as a multicompartment model of class posterior images and geometries, and an implementation of the algorithm using particle mesh approximation on Graphics Processing Units (GPUs) to fulfill the computational requirements. We show results on the registration of neonatal to 2-year-old infant MRIs. Quantitative validation demonstrates that our proposed method generates registrations that better maintain the consistency of anatomical structures over time and provides transformations that better preserve structures undergoing large deformations than transformations obtained by standard intensity-only registration. We also achieve a speedup of three orders of magnitude compared to a CPU reference implementation, making it possible to use the technique in time-critical applications.
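    The particle mesh idea itself is classical: scatter per-particle quantities onto a regular grid, apply the expensive smoothing operator on the grid where it is cheap (e.g. via the FFT), and interpolate back. A toy numpy sketch of the scatter and grid-smoothing steps follows; it assumes a periodic 2D grid and a Gaussian kernel, both illustrative simplifications, and does not reproduce the paper's registration model or GPU implementation.

    import numpy as np

    def splat_cic(pos, val, shape):
        """Scatter per-particle values onto a 2D grid with cloud-in-cell
        (bilinear) weights; assumes a periodic grid for simplicity."""
        grid = np.zeros(shape)
        i0 = np.floor(pos).astype(int)
        f = pos - i0                        # fractional offsets in [0, 1)
        for dx in (0, 1):
            for dy in (0, 1):
                w = ((f[:, 0] if dx else 1 - f[:, 0]) *
                     (f[:, 1] if dy else 1 - f[:, 1]))
                np.add.at(grid, ((i0[:, 0] + dx) % shape[0],
                                 (i0[:, 1] + dy) % shape[1]), w * val)
        return grid

    def smooth_on_grid(grid, sigma=2.0):
        """Apply a Gaussian smoothing operator cheaply in Fourier space."""
        kx = np.fft.fftfreq(grid.shape[0])[:, None]
        ky = np.fft.fftfreq(grid.shape[1])[None, :]
        kernel = np.exp(-2.0 * (np.pi * sigma) ** 2 * (kx ** 2 + ky ** 2))
        return np.fft.ifft2(np.fft.fft2(grid) * kernel).real

    The gather step (interpolating the smoothed grid back to the particle positions) mirrors splat_cic with the same bilinear weights; both steps are embarrassingly parallel, which is what makes the GPU mapping attractive.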

    Combinatorial Solutions for Shape Optimization in Computer Vision

    This thesis aims at solving so-called shape optimization problems, i.e. problems where the shape of some real-world entity is sought, by applying combinatorial algorithms. I present several advances in this field, all of them based on energy minimization. The addressed problems become more intricate in the course of the thesis, starting from problems that are solved globally, then turning to problems where so far no global solutions are known.
    The first two chapters treat segmentation problems where the considered grouping criterion is directly derived from the image data; that is, the respective data terms do not involve any parameters to estimate. These problems are solved globally. The first of these chapters treats the problem of unsupervised image segmentation, where apart from the image there is no other user input. Here I focus on a contour-based method and show how to integrate curvature regularity into a ratio-based optimization framework. The arising optimization problem is reduced to optimizing over the cycles in a product graph. This problem can be solved globally in polynomial, effectively linear, time. As a consequence, the method does not depend on initialization, and translational invariance is achieved. This is joint work with Daniel Cremers and Simon Masnou.
    I then proceed to the integration of shape knowledge into the framework, while keeping translational invariance. This problem is again reduced to cycle-finding in a product graph. Being based on the alignment of shape points, the method actually uses a more sophisticated shape measure than most local approaches and still provides global optima. It readily extends to tracking problems and makes it possible to solve some of them in real time. I also present an extension to highly deformable shape models which can be included in the global optimization framework; this method simultaneously decomposes a shape into a set of deformable parts, based only on the input images. This is joint work with Daniel Cremers.
    In the second part, segmentation is combined with so-called correspondence problems, i.e. the underlying grouping criterion is now based on correspondences that have to be inferred simultaneously: in addition to inferring the shapes of objects, one now also tries to put the points in several images into correspondence. The arising problems become more intricate and are no longer optimized globally. This part is divided into two chapters. The first treats the topic of real-time motion segmentation, where objects are identified based on the observation that the respective points in the video move coherently. Rather than pre-estimating motion, a single energy functional is minimized via alternating optimization. The main novelty lies in the real-time capability, which is achieved by exploiting a fast combinatorial segmentation algorithm. The results are further improved by employing a probabilistic data term. This is joint work with Daniel Cremers.
    The final chapter presents a method for high-resolution motion layer decomposition, developed together with Daniel Cremers and Thomas Pock. Layer decomposition methods support the notion of a scene model, which makes it possible to model occlusion and enforce temporal consistency. The contributions are twofold. From a practical point of view, the proposed method recovers fine-detailed layer images by minimizing a single energy; this is achieved by integrating a super-resolution method into the layer decomposition framework. From a theoretical viewpoint, the proposed method introduces layer-based regularity terms as well as a graph-cut-based scheme to solve for the layer domains; the latter is combined with powerful continuous convex optimization techniques into an alternating minimization scheme.
    Lastly, I want to mention that a significant part of this thesis is devoted to the recent trend of exploiting parallel architectures, in particular graphics cards: many combinatorial algorithms are easily parallelized. In Chapter 3 we will see a case where the standard algorithm is hard to parallelize in general, but easy for the respective problem instances.
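    For the globally solvable cycle problems above, the core combinatorial tool is a minimum-ratio-cycle search. A generic sketch in that spirit, using Lawler's binary search over λ with Bellman-Ford negative-cycle detection, is given below; the vision-specific product-graph construction is omitted, and the search bounds and tolerance are illustrative.

    def has_negative_cycle(n, edges, lam):
        """Bellman-Ford on edge costs c - lam*w, started from all nodes at
        once (a virtual source); True iff a negative cycle exists."""
        dist = [0.0] * n
        for _ in range(n):
            changed = False
            for u, v, c, w in edges:
                d = dist[u] + c - lam * w
                if d < dist[v] - 1e-12:
                    dist[v] = d
                    changed = True
            if not changed:
                return False            # converged: no negative cycle
        return True                     # still relaxing after n passes

    def min_ratio_cycle(n, edges, lo=-1e3, hi=1e3, tol=1e-6):
        """Binary search for lambda* = min over cycles of sum(c)/sum(w),
        assuming every cycle has positive total weight w."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if has_negative_cycle(n, edges, mid):
                hi = mid                # some cycle has ratio below mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Example: the triangle 0->1->2->0 with costs (2, 1, 3) and unit
    # weights has optimal ratio (2+1+3)/3 = 2.0.
    edges = [(0, 1, 2.0, 1.0), (1, 2, 1.0, 1.0), (2, 0, 3.0, 1.0)]
    print(min_ratio_cycle(3, edges))    # ~2.0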

    Manifold Learning for Natural Image Sets, Doctoral Dissertation August 2006

    The field of manifold learning provides powerful tools for parameterizing high-dimensional data points with a small number of parameters when the data lies on or near a manifold. Images can be thought of as points in a high-dimensional image space where each coordinate represents the intensity value of a single pixel. Manifold learning techniques have been successfully applied to simple image sets, such as handwriting data and images of a statue captured in a tightly controlled environment. However, they fail on natural image sets, even those that vary due to only a single degree of freedom, such as a person walking or a heart beating. Parameterizing such data sets will allow additional constraints to be placed on traditional computer vision problems such as segmentation and tracking. This dissertation explores the reasons why classical manifold learning algorithms fail on natural image sets and proposes new algorithms for parameterizing this type of data.
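    To make the manifold assumption concrete, the toy sketch below, assuming scikit-learn's Isomap, recovers the single degree of freedom of a synthetic image set (a blob translating horizontally). The failure modes studied in the dissertation arise on natural images, which this clean example deliberately avoids.

    import numpy as np
    from sklearn.manifold import Isomap

    # Build 100 images (16x16) of a Gaussian blob sliding left to right.
    xs = np.linspace(3, 12, 100)
    grid = np.arange(16)
    images = np.stack([
        np.exp(-((grid[None, :] - x) ** 2 + (grid[:, None] - 8.0) ** 2) / 4.0)
        for x in xs
    ])
    X = images.reshape(len(xs), -1)     # each image is a point in R^256

    # Recover the single underlying parameter (horizontal position).
    embedding = Isomap(n_neighbors=8, n_components=1).fit_transform(X)
    print(np.corrcoef(embedding[:, 0], xs)[0, 1])   # close to +/-1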

    Dense Correspondence Estimation for Image Interpolation

    We evaluate the current state of the art in dense correspondence estimation for use in multi-image interpolation algorithms. The evaluation is carried out on three real-world scenes and one synthetic scene, each featuring varying challenges for dense correspondence estimation. The primary focus of our study is the perceptual quality of the interpolation sequences created from the estimated flow fields. Perceptual plausibility is assessed by means of a psychophysical user study. Our results show that the current state of the art in dense correspondence estimation does not produce visually plausible interpolations.
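    The interpolation step being evaluated can be sketched as follows: given a dense flow field from frame A to frame B, estimated by some external method, both frames are warped to an intermediate time t and blended. The function interpolate_frame and its backward-warping approximation are illustrative; practical interpolation methods additionally splat the flow forward and reason about occlusions.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def interpolate_frame(A, B, flow, t=0.5):
        """Blend warps of grayscale frames A and B at time t in [0, 1].
        flow[..., 0] is the x displacement, flow[..., 1] the y displacement
        from A to B."""
        h, w = A.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        # Crude approximation: sample A where each pixel came from and B
        # where it is heading, using the flow at the target location.
        coords_A = [yy - t * flow[..., 1], xx - t * flow[..., 0]]
        coords_B = [yy + (1 - t) * flow[..., 1], xx + (1 - t) * flow[..., 0]]
        warped_A = map_coordinates(A, coords_A, order=1, mode="nearest")
        warped_B = map_coordinates(B, coords_B, order=1, mode="nearest")
        return (1 - t) * warped_A + t * warped_B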