
    Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition

    This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. These methods open new directions for advanced image filtering. However, for effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response regarding size and intensity scales is needed but lacking in current approaches. In this context, $L^1$ data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of $L^1$-based multi-scale methods with nonlinear spectral decompositions. We compare $L^1$ with $L^2$ scale-space methods in view of spectral image representation and decomposition. We show that the contrast invariant multi-scale behavior of $L^1$-TV promotes sparsity in the spectral response, providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images for which decomposition leads to improved segmentation. Comment: 13 pages, 7 figures, conference SSVM 2017
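    For orientation, the two scale-space models being compared can be written in their standard form (generic notation, not quoted from the paper): for data $f$ and scale parameter $\lambda > 0$,

    $$ \min_u \; \mathrm{TV}(u) + \lambda \|u - f\|_{L^1} \qquad \text{and} \qquad \min_u \; \mathrm{TV}(u) + \tfrac{\lambda}{2} \|u - f\|_{L^2}^2. $$

    The $L^1$ model is contrast invariant in the sense that, as $\lambda$ decreases, features are removed according to their size rather than their intensity, which is the multi-scale property the spectral decomposition exploits.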

    Convergence Rates for Exponentially Ill-Posed Inverse Problems with Impulsive Noise

    This paper is concerned with exponentially ill-posed operator equations with additive impulsive noise on the right-hand side, i.e. the noise is large on a small part of the domain and small or zero outside. It is well known that Tikhonov regularization with an $L^1$ data fidelity term outperforms Tikhonov regularization with an $L^2$ fidelity term in this case. This effect has recently been explained and quantified for the case of finitely smoothing operators. Here we extend this analysis to the case of infinitely smoothing forward operators under standard Sobolev smoothness assumptions on the solution, i.e. exponentially ill-posed inverse problems. It turns out that high-order polynomial rates of convergence in the size of the support of the large noise can be achieved, rather than the poor logarithmic convergence rates typical for exponentially ill-posed problems. The main tools of our analysis are Banach spaces of analytic functions and interpolation-type inequalities for such spaces. We discuss two examples, the (periodic) backwards heat equation and an inverse problem in gradiometry. Comment: to appear in SIAM J. Numer. Anal.
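    For reference, a sketch of the two Tikhonov functionals contrasted above, in generic notation (forward operator $F$, noisy data $g^\delta$, penalty $\mathcal{R}$, parameter $\alpha > 0$; these symbols are assumptions, not the paper's exact setup):

    $$ \min_u \; \|F(u) - g^\delta\|_{L^1} + \alpha \, \mathcal{R}(u) \qquad \text{versus} \qquad \min_u \; \tfrac{1}{2} \|F(u) - g^\delta\|_{L^2}^2 + \alpha \, \mathcal{R}(u). $$

    The $L^1$ fidelity is robust to impulsive noise because the penalty it pays on the small set where the noise is large grows only linearly with the noise amplitude.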

    There Are Thin Minimizers of the $L^1$TV Functional

    We show the surprising result that while the local reach of the boundary of an $L^1$TV minimizer is bounded below by $1/\lambda$, the global reach can be smaller. We do this by demonstrating several example minimizing sets not equal to the union of the $1/\lambda$-balls they contain.
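    For context, the functional in the title is the standard $L^1$TV energy (notation assumed): for a given image $f$ and parameter $\lambda > 0$,

    $$ E(u) = \mathrm{TV}(u) + \lambda \int |u - f| \, dx, $$

    whose minimizers for binary $f$ may be taken to be characteristic functions of sets, so the reach of the minimizing set's boundary is the natural geometric quantity to study.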

    The jump set under geometric regularisation. Part 1: Basic technique and first-order denoising

    Let $u \in \mathrm{BV}(\Omega)$ solve the total variation denoising problem with $L^2$-squared fidelity and data $f$. Caselles et al. [Multiscale Model. Simul. 6 (2008), 879--894] have shown the containment $\mathcal{H}^{m-1}(J_u \setminus J_f) = 0$ of the jump set $J_u$ of $u$ in that of $f$. Their proof unfortunately depends heavily on the co-area formula, as do many results in this area, and as such is not directly extensible to higher-order, curvature-based, and other advanced geometric regularisers, such as total generalised variation (TGV) and Euler's elastica. These have received increased attention in recent times due to their better practical regularisation properties compared to conventional total variation or wavelets. We prove analogous jump set containment properties for a general class of regularisers. We do this with novel Lipschitz transformation techniques, and do not require the co-area formula. In the present Part 1 we demonstrate the general technique on first-order regularisers, while in Part 2 we will extend it to higher-order regularisers. In particular, we concentrate in this part on TV and, as a novelty, Huber-regularised TV. We also demonstrate that the technique would apply to non-convex TV models as well as the Perona-Malik anisotropic diffusion, if these approaches were well-posed to begin with.
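    Concretely, the denoising problem referred to above is the ROF-type model (standard form, notation assumed):

    $$ \min_{u \in \mathrm{BV}(\Omega)} \; \frac{1}{2} \int_\Omega (f - u)^2 \, dx + \alpha \, \mathrm{TV}(u), $$

    and the cited containment result says that, up to an $\mathcal{H}^{m-1}$-null set, the minimizer $u$ can only jump where the data $f$ already jumps.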

    A Computational Framework for Edge-Preserving Regularization in Dynamic Inverse Problems

    We devise efficient methods for dynamic inverse problems, where both the quantities of interest and the forward operator (measurement process) may change in time. Our goal is to solve for all the quantities of interest simultaneously. We consider large-scale ill-posed problems made more challenging by their dynamic nature and, possibly, by the limited amount of available data per measurement step. To alleviate these difficulties, we apply a unified class of regularization methods that enforce simultaneous regularization in space and time (such as edge enhancement at each time instant and proximity at consecutive time instants) and achieve this with low computational cost and enhanced accuracy. More precisely, we develop iterative methods based on a majorization-minimization (MM) strategy with a quadratic tangent majorant, which allows the resulting least-squares problem with a total variation regularization term to be solved with a generalized Krylov subspace (GKS) method; the regularization parameter can be determined automatically and efficiently at each iteration. Numerical examples from a wide range of applications, such as limited-angle computerized tomography (CT), space-time image deblurring, and photoacoustic tomography (PAT), illustrate the effectiveness of the described approaches.
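    A sketch of one MM step of the kind described above, under the common smoothed-TV assumption $\mathrm{TV}_\varepsilon(x) = \sum_i \big( |(\nabla x)_i|^2 + \varepsilon^2 \big)^{1/2}$ (notation assumed, not the authors' exact scheme): a quadratic tangent majorant of $\mathrm{TV}_\varepsilon$ at the current iterate $x^{(k)}$ turns each outer iteration into the reweighted least-squares problem

    $$ x^{(k+1)} = \arg\min_x \; \|A x - b\|_2^2 + \lambda \, \| W^{(k)} \nabla x \|_2^2, \qquad W^{(k)} = \operatorname{diag}\Big( \big( |(\nabla x^{(k)})_i|^2 + \varepsilon^2 \big)^{-1/4} \Big), $$

    which is the form that can be projected onto a generalized Krylov subspace and solved cheaply, with $\lambda$ updated at each iteration.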

    A Semismooth Newton Method for Nonlinear Parameter Identification Problems with Impulsive Noise


    Image reconstruction under non-Gaussian noise


    Generating structured non-smooth priors and associated primal-dual methods

    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and the theoretical background of the dualization framework, as well as a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
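    Schematically, the bilevel scheme described above has the form (generic notation; the upper-level cost $\mathcal{L}$, e.g. a distance to ground-truth training data, is an assumption, not the chapter's exact setup):

    $$ \min_{\lambda(\cdot) \ge 0} \; \mathcal{L}(u_\lambda) \quad \text{subject to} \quad u_\lambda = \arg\min_u \; \frac{1}{2} \int_\Omega (u - f)^2 \, dx + \int_\Omega \lambda(x) \, d|Du|, $$

    where the lower level is a weighted-TV reconstruction and the upper level selects the spatially varying parameter $\lambda(x)$; the dualization framework serves to make the lower-level problem amenable to primal-dual methods.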