
    Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition

    This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. These methods open new directions for advanced image filtering. However, for effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response with respect to size and intensity scales is needed but lacking in current approaches. In this context, $L^1$ data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of $L^1$-based multi-scale methods with nonlinear spectral decompositions. We compare $L^1$ with $L^2$ scale-space methods in view of spectral image representation and decomposition. We show that the contrast-invariant multi-scale behavior of $L^1$-TV promotes sparsity in the spectral response, providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images for which the decomposition leads to improved segmentation. Comment: 13 pages, 7 figures, conference SSVM 201
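    For orientation, the $L^2$ spectral framework the abstract builds on can be summarized as follows (a sketch following the spectral TV literature; the notation is assumed here, not quoted from the paper):

```latex
% TV gradient flow started at the image f
\partial_t u(t) = -p(t), \qquad p(t) \in \partial\,\mathrm{TV}\big(u(t)\big), \qquad u(0) = f
% Spectral response and reconstruction (\bar f denotes the mean of f)
\phi(t) = t\, \partial_{tt} u(t), \qquad f = \int_0^\infty \phi(t)\,\mathrm{d}t + \bar f
```

    A sparse spectral response $\phi$ then means that the image content concentrates at a few scales $t$, which is the behaviour the $L^1$ fidelity is reported to promote.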

    Strong laws of large numbers for sub-linear expectations

    We investigate three kinds of strong laws of large numbers for capacities, using a new notion of independently and identically distributed (IID) random variables for sub-linear expectations initiated by Peng. It turns out that these theorems are natural and fairly neat extensions of the classical Kolmogorov strong law of large numbers to the case where probability measures are no longer additive. An important feature of these strong laws of large numbers is that they provide a frequentist perspective on capacities. Comment: 10 pages.
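    As a point of reference, the classical statement being generalized is Kolmogorov's SLLN, $S_n/n \to \mu$ almost surely; under a sub-linear expectation the mean splits into an upper and a lower mean. A hedged sketch of the typical statement in this setting (Peng/Chen-style notation assumed, not quoted from the paper):

```latex
% Upper and lower means under a sub-linear expectation \hat{\mathbb{E}}
\overline{\mu} := \hat{\mathbb{E}}[X_1], \qquad \underline{\mu} := -\hat{\mathbb{E}}[-X_1], \qquad S_n := X_1 + \dots + X_n
% A typical sub-linear SLLN, stated for the lower capacity \nu:
\nu\Big( \underline{\mu} \;\le\; \liminf_{n\to\infty} \tfrac{S_n}{n} \;\le\; \limsup_{n\to\infty} \tfrac{S_n}{n} \;\le\; \overline{\mu} \Big) = 1
```

    When $\hat{\mathbb{E}}$ is linear, $\underline{\mu} = \overline{\mu}$ and the additive (Kolmogorov) case is recovered.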

    Positive approximations of the inverse of fractional powers of SPD M-matrices

    This study is motivated by recent developments in fractional calculus and its applications. Over the last few years, several different techniques have been proposed to localize the nonlocal fractional diffusion operator. They are based on transforming the original problem into a local elliptic or pseudoparabolic problem, or into an integral representation of the solution, thus increasing the dimension of the computational domain. More recently, an alternative approach aimed at reducing the computational complexity was developed. The linear algebraic system $\mathcal{A}^\alpha \mathbf{u} = \mathbf{f}$, $0 < \alpha < 1$, is considered, where $\mathcal{A}$ is a properly normalized (scaled) symmetric positive definite matrix obtained from a finite element or finite difference approximation of a second-order elliptic problem in $\Omega \subset \mathbb{R}^d$, $d = 1, 2, 3$. The method is based on best uniform rational approximations (BURA) of the function $t^{\beta-\alpha}$ for $0 < t \le 1$ and natural $\beta$. Maximum principles are among the major qualitative properties of linear elliptic operators/PDEs. In many studies and applications, it is important that such properties are preserved by the selected numerical method. In this paper we present and analyze the properties of positive approximations of $\mathcal{A}^{-\alpha}$ obtained by the BURA technique. Sufficient conditions for positiveness are proven, complemented by sharp error estimates. The theoretical results are supported by representative numerical tests.
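    The positivity theme can be illustrated with a small numerical sketch. The code below is not the BURA method of the paper; it approximates $\mathcal{A}^{-\alpha}$ by a quadrature of Balakrishnan's integral representation, which likewise yields a positively weighted sum of inverses of M-matrices and hence an entrywise nonnegative approximation (the matrix, scales and quadrature parameters are invented for the demo):

```python
import numpy as np

# Quadrature of Balakrishnan's formula
#   A^{-alpha} = (sin(pi*alpha)/pi) * int_0^inf t^{-alpha} (t I + A)^{-1} dt,
# substituting t = exp(s) and truncating the s-axis. Not the paper's BURA
# method; only an illustration of a positive approximation of A^{-alpha}.

n, alpha = 50, 0.5
# 1-D finite-difference Laplacian: an SPD M-matrix, normalized so lambda_max = 1
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A /= np.linalg.norm(A, 2)

# Reference value via eigendecomposition: A^{-alpha} = V diag(w^{-alpha}) V^T
w, V = np.linalg.eigh(A)
A_frac_exact = (V * w**(-alpha)) @ V.T

s = np.linspace(-40.0, 20.0, 601)
h = s[1] - s[0]
I = np.eye(n)
A_frac = np.zeros_like(A)
for sk in s:
    # each term is the inverse of an M-matrix, hence entrywise nonnegative
    A_frac += np.exp((1.0 - alpha) * sk) * np.linalg.inv(np.exp(sk) * I + A)
A_frac *= h * np.sin(np.pi * alpha) / np.pi

rel_err = np.linalg.norm(A_frac - A_frac_exact) / np.linalg.norm(A_frac_exact)
print(rel_err)            # small (limited by the truncation of the s-axis)
print(A_frac.min() >= 0)  # positivity of the approximation, the paper's theme
```

    Since every quadrature node contributes a nonnegatively weighted inverse of an M-matrix, nonnegativity of the sum is automatic; this mirrors the kind of structural positivity the paper establishes for BURA.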

    Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau Functional Minimization

    We present a graph-based variational algorithm for the classification of high-dimensional data, generalizing the binary diffuse-interface model to the case of multiple classes. Motivated by total variation techniques, the method involves minimizing an energy functional made up of three terms. The first two terms promote a stepwise continuous classification function with sharp transitions between classes, while preserving symmetry among the class labels. The third term is a data fidelity term, allowing us to incorporate prior information into the model in a semi-supervised framework. The performance of the algorithm on synthetic data, as well as on the COIL and MNIST benchmark datasets, is competitive with state-of-the-art graph-based multiclass segmentation methods. Comment: 16 pages, to appear in Springer's Lecture Notes in Computer Science volume "Pattern Recognition Applications and Methods 2013", part of the series Advances in Intelligent and Soft Computing
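    A stripped-down sketch of the graph-based semi-supervised idea: keeping only the graph Dirichlet energy and the fidelity term, and dropping the Ginzburg-Landau multi-well potential, the minimizer solves a linear system per class (the toy graph, weights and parameters below are invented for the demo):

```python
import numpy as np

# Minimize (1/2) tr(U^T L U) + (mu/2) sum_{labelled i} ||U_i - Y_i||^2,
# i.e. solve (L + mu*P) U = mu*P*Y, with P the labelled-node indicator.
# This drops the multi-well potential of the actual Ginzburg-Landau model.

# Two 3-node clusters joined by one weak edge
W = np.zeros((6, 6))
for i, j, w in [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
                (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
                (2, 3, 0.1)]:
    W[i, j] = W[j, i] = w
L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian

mu = 10.0                                 # fidelity weight
P = np.zeros((6, 6)); P[0, 0] = P[3, 3] = 1.0   # one labelled node per class
Y = np.zeros((6, 2)); Y[0, 0] = Y[3, 1] = 1.0   # one-hot labels

U = np.linalg.solve(L + mu * P, mu * (P @ Y))
pred = U.argmax(axis=1)
print(pred)   # [0 0 0 1 1 1] -- cluster membership recovered
```

    The full model adds a multi-well term that drives $U$ towards the one-hot corners, producing the sharp transitions the abstract describes; the linear core above already shows how the graph structure propagates the two known labels.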

    Nonlinear spectral image fusion

    In this paper we demonstrate that the framework of nonlinear spectral decompositions based on total variation (TV) regularization is very well suited for image fusion as well as for more general image manipulation tasks. The well-localized and edge-preserving spectral TV decomposition allows one to select frequencies of a certain image in order to transfer particular features, such as wrinkles in a face, from one image to another. We illustrate the effectiveness of the proposed approach in several numerical experiments, including a comparison with the competing techniques of Poisson image editing, linear osmosis, wavelet fusion and Laplacian pyramid fusion. We conclude that the proposed spectral TV image decomposition framework is a valuable tool for semi- and fully automatic image editing and fusion.
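    The band-transfer mechanics can be sketched with a linear stand-in: a telescoping moving-average decomposition replaces the nonlinear spectral TV transform (the signals and scales below are invented for the demo), but the act of swapping a frequency band between two signals is the same:

```python
import numpy as np

def bands(f, widths=(3, 9, 27)):
    """Telescoping decomposition: f = sum(levels) + coarse residual.
    A linear stand-in for the spectral TV transform of the paper."""
    levels, u = [], f.astype(float)
    for w in widths:
        k = np.ones(w) / w
        smooth = np.convolve(u, k, mode="same")
        levels.append(u - smooth)        # detail removed at this scale
        u = smooth
    return levels, u                      # u is the coarse residual

rng = np.random.default_rng(0)
f = np.cumsum(rng.standard_normal(200))          # base signal ("face")
g = f + 0.5 * np.sin(np.linspace(0, 60, 200))    # same signal + "wrinkles"

bf, rf = bands(f)
bg, rg = bands(g)

# telescoping sums reconstruct exactly, by construction
assert np.allclose(sum(bf) + rf, f)

# fusion: rebuild f, but take the mid-frequency band from g
fused = bf[0] + bg[1] + bf[2] + rf
```

    With the spectral TV transform in place of the moving averages, the swapped band carries edge-preserving features (the "wrinkles") rather than linear frequency content, which is the point of the paper.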

    Learning filter functions in regularisers by minimising quotients

    Learning approaches have recently become very popular in the field of inverse problems. A large variety of methods has been established in recent years, ranging from bi-level learning to high-dimensional machine learning techniques. Most learning approaches, however, only aim at fitting parametrised models to favourable training data whilst ignoring misfit training data completely. In this paper, we follow up on the idea of learning parametrised regularisation functions by quotient minimisation as established in [3]. We extend the model therein to include higher-dimensional filter functions to be learned and allow for fit- and misfit-training data consisting of multiple functions. We first present results resembling the behaviour of well-established derivative-based sparse regularisers such as total variation or higher-order total variation in one dimension. Our second and main contribution is the introduction of novel families of non-derivative-based regularisers. This is accomplished by learning favourable scales and geometric properties while at the same time avoiding unfavourable ones.
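    A toy version of the quotient idea: among a few hand-picked candidate filters, select the one whose response is sparse on favourable ("fit") data but large on unfavourable ("misfit") data. The paper learns filters by continuous optimisation of such a quotient; the brute-force search, signals and candidate set below are invented for the demo:

```python
import numpy as np

# Choose h minimizing ||h * u_fit||_1 / ||h * u_misfit||_1: a filter whose
# response is sparse on data we want to favour but large on data we do not.

u_fit = np.array([0, 1, 2, 3, 4, 5, 4, 3, 2, 1], float)  # piecewise linear
u_misfit = np.array([1, -1] * 5, float)                   # fast oscillation

candidates = {
    "identity":  np.array([1.0]),
    "average":   np.array([0.5, 0.5]),
    "gradient":  np.array([1.0, -1.0]),
    "curvature": np.array([1.0, -2.0, 1.0]),
}

def quotient(h):
    num = np.abs(np.convolve(u_fit, h)).sum()
    den = np.abs(np.convolve(u_misfit, h)).sum()
    return num / den

best = min(candidates, key=lambda k: quotient(candidates[k]))
print(best)   # "curvature"
```

    The second-difference filter wins because it annihilates the piecewise-linear fit data (a sparse response, as second-order TV would give) while responding strongly to the oscillatory misfit data, matching the abstract's reference to higher-order total variation.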

    Hodge Theory on Metric Spaces

    Hodge theory is a beautiful synthesis of geometry, topology, and analysis, which has been developed in the setting of Riemannian manifolds. On the other hand, spaces of images, which are important in the mathematical foundations of vision and pattern recognition, do not fit this framework. This motivates us to develop a version of Hodge theory on metric spaces with a probability measure. We believe that this constitutes a step towards understanding the geometry of vision. The appendix by Anthony Baker provides a separable, compact metric space with infinite-dimensional $\alpha$-scale homology. Comment: appendix by Anthony W. Baker, 48 pages, AMS-LaTeX. v2: final version, to appear in Foundations of Computational Mathematics. Minor changes and additions.
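    For context, the classical statement being transported from closed Riemannian manifolds $M$ to metric measure spaces is the Hodge decomposition of $k$-forms (standard notation, not quoted from the paper):

```latex
\Omega^k(M) \;=\; \mathcal{H}^k(M) \,\oplus\, d\,\Omega^{k-1}(M) \,\oplus\, \delta\,\Omega^{k+1}(M),
\qquad \Delta = d\delta + \delta d, \qquad \mathcal{H}^k := \ker \Delta \;\cong\; H^k_{\mathrm{dR}}(M)
```

    The paper's task is to make sense of the operators $d$, $\delta$ and the harmonic space $\mathcal{H}^k$ when $M$ is only a metric space with a probability measure.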