
    Non-Convex and Geometric Methods for Tomography and Label Learning

    Data labeling is a fundamental problem of mathematical data analysis in which each data point is assigned exactly one label (prototype) from a finite, predefined set. In this thesis we study two challenging extensions, in which either the input data cannot be observed directly or prototypes are not available beforehand. The main application of the first setting is discrete tomography. We propose several non-convex variational as well as smooth geometric approaches to joint image label assignment and reconstruction from indirect measurements with known prototypes. In particular, we consider spatial regularization of assignments based on the KL divergence, which takes into account the smooth geometry of discrete probability distributions endowed with the Fisher-Rao (information) metric, i.e. the assignment manifold. The geometric point of view then leads to a smooth flow evolving on a Riemannian submanifold that incorporates the tomographic projection constraints directly into the geometry of assignments. Furthermore, we investigate corresponding implicit numerical schemes, which amount to solving a sequence of convex problems. Likewise, for the second setting, where prototypes are absent, we introduce and study a smooth dynamical system for unsupervised data labeling which evolves by geometric integration on the assignment manifold. Rigorously abstracting from "data-label" to "data-data" decisions leads to interpretable low-rank data representations, which are themselves parameterized by label assignments. The resulting self-assignment flow learns latent prototypes within the very same framework in which they are used for inference. Moreover, a single parameter, the scale of spatial regularization, drives the entire process.
By smooth geodesic interpolation between different normalizations of self-assignment matrices on the manifold of positive definite matrices, a one-parameter family of self-assignment flows is defined. Accordingly, the proposed approach can be characterized from different viewpoints, such as discrete optimal transport, normalized spectral cuts and combinatorial optimization by completely positive factorizations, each with additional built-in spatial regularization.
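As a rough numerical illustration of the geometric mechanism shared by these assignment flows, the following sketch iterates a replicator-type multiplicative update on the manifold of row-stochastic matrices. It is a simplified stand-in, not the thesis's construction: the function names are assumptions, the fitness term is just a negative data-to-prototype distance, and spatial regularization is omitted.

```python
import numpy as np

def lift(W, V):
    # Move on the assignment manifold: entrywise exponentiation relative
    # to the current row-stochastic matrix W, then row renormalization,
    # so every iterate remains a matrix of assignment probabilities.
    U = W * np.exp(V)
    return U / U.sum(axis=1, keepdims=True)

def assignment_flow_step(W, D, step=1.0):
    # One explicit geometric-Euler step of a bare, spatially unregularized
    # flow: negative distances to the prototypes act as the fitness term.
    return lift(W, -step * D)

# toy problem: 3 data points, 2 prototypes, distance matrix D
D = np.array([[0.1, 2.0],
              [1.5, 0.2],
              [0.3, 0.4]])
W = np.full((3, 2), 0.5)            # barycenter (uninformative) initialization
for _ in range(50):
    W = assignment_flow_step(W, D)
labels = W.argmax(axis=1)           # assignments become unambiguous over time
```

The multiplicative form is what keeps every iterate in the interior of the simplex; the flows studied in the thesis additionally average such fitness terms over spatial neighborhoods before lifting.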

    Riemannian Flows for Supervised and Unsupervised Geometric Image Labeling

    In this thesis we focus on the image labeling problem, which serves as a subroutine in many image processing applications. Our work is based on the assignment flow, which was recently introduced as a novel geometric approach to image labeling. This flow evolves over time on the manifold of row-stochastic matrices, whose entries represent label assignments as assignment probabilities. The strict separation of assignment manifold and feature space allows the data to lie in any metric space, while a smoothing operation on the assignment manifold yields an unbiased and spatially regularized labeling. The first part of this work focuses on theoretical statements about the asymptotic behavior of the assignment flow. We show, under weak assumptions on the parameters, that the assignment flow for data in general position converges towards integral probabilities and thus ensures unique assignment decisions. Furthermore, we investigate the stability of possible limit points depending on the input data and parameters. For stable limits, we derive conditions that allow early evidence of convergence towards these limits and thus provide convergence guarantees. In the second part, we extend the assignment flow approach in order to impose global convex constraints on the labeling results, based on linear filter statistics of the assignments. The corresponding filters are learned from examples using an eigendecomposition. The effectiveness of the approach is demonstrated numerically in several academic labeling scenarios. In the last part of this thesis we consider the situation in which no labels are given, so that these prototypical elements have to be determined from the data as well. To this end we introduce an additional flow on the feature manifold, which is coupled to the assignment flow. The resulting flow adapts the prototypes over time to the assignment probabilities.
The simultaneous adaptation and assignment of prototypes not only provides suitable prototypes, but also improves the resulting image segmentation, as demonstrated by experiments. For this approach the data are assumed to lie on a Riemannian manifold. We elaborate the approach for a range of manifolds that occur in applications and evaluate the resulting methods in numerical experiments.
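A minimal sketch of such a coupled evolution, with a Euclidean feature space as the simplest instance of a Riemannian manifold: the function names and the plain assignment-weighted-mean prototype update are illustrative assumptions, not the thesis's general Riemannian formulation.

```python
import numpy as np

def update_prototypes(X, W):
    # Adapt prototypes to the current assignments: each prototype becomes
    # the assignment-weighted (here: Euclidean) mean of the data.
    weights = W / W.sum(axis=0, keepdims=True)   # normalize per label
    return weights.T @ X

def coupled_step(X, P, W, step=1.0):
    # Distances to the current prototypes drive a multiplicative
    # assignment update; the prototypes then follow the assignments.
    D = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    U = W * np.exp(-step * D)
    W = U / U.sum(axis=1, keepdims=True)
    P = update_prototypes(X, W)
    return P, W

# toy data: two well-separated 1-D clusters, two prototypes
X = np.array([[0.0], [0.1], [1.0], [1.1]])
P = np.array([[0.2], [0.9]])
W = np.full((4, 2), 0.5)
for _ in range(30):
    P, W = coupled_step(X, P, W)
```

Updating the assignments before the prototypes matters in this sketch: with uniform assignments, both prototypes would otherwise collapse onto the global mean in the first step.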

    Variational Approaches for Image Labeling on the Assignment Manifold

    The image labeling problem refers to the task of assigning to each pixel a single element from a finite, predefined set of labels. In classical approaches the labeling task is formulated as a minimization problem over specifically structured objective functions. Assignment flows for contextual image labeling are a recently proposed alternative formulation via spatially coupled replicator equations. In this work, the classical and the dynamical viewpoint of image labeling are combined into a variational formulation. This is accomplished by following the induced Riemannian gradient descent flow on an elementary statistical manifold, with respect to the underlying information geometry. The convergence and stability behavior of this approach are investigated using the log-barrier method. A novel parameterization of the assignment flow by its dominant component is derived, revealing a Riemannian gradient flow structure that clearly identifies the two governing processes of the flow: spatial regularization of assignments and gradual enforcement of unambiguous label decisions. Also, a continuous-domain formulation of the corresponding potential is presented, and well-posedness of the related optimization problem is established. Furthermore, an alternative smooth variational approach to maximum a posteriori inference based on discrete graphical models is derived by utilizing local Wasserstein distances. Following the resulting Riemannian gradient flow leads to an inference process that always satisfies the local marginalization constraints and incorporates a smooth rounding mechanism towards unambiguous assignments.
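The Riemannian gradient descent viewpoint can be made concrete on a single probability simplex: with respect to the Fisher-Rao metric, the Riemannian gradient of a potential takes the replicator form w * (g - <w, g>). The sketch below integrates the descent flow for a linear potential with explicit Euler steps; the linear potential, step size, and numerical guard are illustrative assumptions, not the thesis's objective or scheme.

```python
import numpy as np

def fisher_rao_grad(w, euclidean_grad):
    # Riemannian gradient w.r.t. the Fisher-Rao metric on the open
    # probability simplex: the replicator vector field w * (g - <w, g>).
    g = euclidean_grad
    return w * (g - np.dot(w, g))

def gradient_flow(g, w0, step=0.1, iters=200):
    # Explicit Euler integration of the descent flow w' = -grad_FR f(w)
    # for a potential with constant Euclidean gradient g.
    w = w0.copy()
    for _ in range(iters):
        w = w - step * fisher_rao_grad(w, g)
        w = np.clip(w, 1e-12, None)    # numerical guard: stay in the
        w = w / w.sum()                # interior of the simplex
    return w

c = np.array([2.0, 1.0, 3.0])          # per-label costs; minimize <c, w>
w = gradient_flow(c, np.full(3, 1 / 3))
```

The flow drifts towards the vertex of lowest cost, which is the "gradual enforcement of unambiguous label decisions" visible in this one-pixel caricature.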

    Inference and Model Parameter Learning for Image Labeling by Geometric Assignment

    Image labeling is a fundamental problem in the area of low-level image analysis. In this work, we present novel approaches to maximum a posteriori (MAP) inference and model parameter learning for image labeling. Both approaches are formulated in a smooth geometric setting, whose respective solution space is a simple Riemannian manifold. Optimization consists of multiplicative updates that geometrically integrate the resulting Riemannian gradient flow. Our novel approach to MAP inference is based on discrete graphical models. By utilizing local Wasserstein distances for coupling assignment measures across edges of the underlying graph, we smoothly approximate a given discrete objective function and restrict it to the assignment manifold. A corresponding update scheme combines geometric integration of the resulting gradient flow with rounding to integral solutions that represent valid labelings. This formulation constitutes an inner relaxation of the discrete labeling problem, i.e. throughout the process the local marginalization constraints known from the established linear programming relaxation are satisfied. Furthermore, we study the inverse problem of model parameter learning using the linear assignment flow and training data with ground truth. This is accomplished by a Riemannian gradient flow on the manifold of parameters that determine the regularization properties of the assignment flow. This smooth formulation enables us to tackle model parameter learning from the perspective of parameter estimation for dynamical systems. Using symplectic partitioned Runge-Kutta methods for numerical integration, we show that deriving the sensitivity conditions of the parameter learning problem commutes with its discretization. A favorable property of our approach is that learning is based on exact inference.
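Local Wasserstein distances of the kind used above are commonly smoothed via entropy regularization. The following generic Sinkhorn sketch illustrates that smoothing idea on a toy cost; the parameter names and example values are assumptions, and this is not claimed to be the thesis's exact coupling scheme.

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, iters=200):
    # Entropy-regularized optimal transport between marginals mu and nu
    # with ground cost C: alternating scaling of the Gibbs kernel yields
    # a smooth, differentiable proxy for the Wasserstein distance.
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]    # transport plan with the given marginals
    return (P * C).sum(), P

# toy edge coupling: two matching binary marginals, 0/1 mismatch cost
mu = np.array([0.5, 0.5])
nu = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])
cost, P = sinkhorn(mu, nu, C)          # near-diagonal plan, near-zero cost
```

Because the Sinkhorn iterates are smooth in the marginals, such a surrogate fits naturally into a Riemannian gradient flow over assignment measures.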