177 research outputs found

    Total Generalized Variation for Manifold-valued Data

    In this paper we introduce the notion of second-order total generalized variation (TGV) regularization for manifold-valued data in a discrete setting. We provide an axiomatic approach to formalize reasonable generalizations of TGV to the manifold setting and present two possible concrete instances that fulfill the proposed axioms. We provide well-posedness results and present algorithms for a numerical realization of these generalizations in the manifold setup. Furthermore, we provide experimental results for synthetic and real data to underpin the proposed generalization numerically and to show its potential for applications with manifold-valued data.
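
    For orientation, the discrete second-order TGV in the Euclidean (vector-space) setting, which the paper generalizes to manifolds, is commonly written as follows; the notation here is assumed and not taken from the paper:

        \mathrm{TGV}^2_{\alpha}(u) \;=\; \min_{w}\; \alpha_1 \,\| \nabla u - w \|_1 \;+\; \alpha_0 \,\| \mathcal{E} w \|_1,
        \qquad \mathcal{E} w \;=\; \tfrac{1}{2}\bigl(\nabla w + \nabla w^{\top}\bigr),

    i.e. an infimal splitting of the derivative into a first-order part and a symmetrized second-order part; the generalizations discussed in the abstract concern how to give meaning to these differences when u and w take values in a manifold.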

    A forward-backward splitting algorithm for the minimization of non-smooth convex functionals in Banach space

    We consider the task of computing an approximate minimizer of the sum of a smooth and a non-smooth convex functional in Banach space. Motivated by the classical forward-backward splitting method for subgradients in Hilbert space, we propose a generalization which involves the iterative solution of simpler subproblems. Descent and convergence properties of this new algorithm are studied. Furthermore, the results are applied to the minimization of Tikhonov functionals associated with linear inverse problems and semi-norm penalization in Banach spaces. With the help of Bregman-Taylor-distance estimates, rates of convergence for the forward-backward splitting procedure are obtained. Examples which demonstrate the applicability are given; in particular, a generalization of the iterative soft-thresholding method of Daubechies, Defrise and De Mol to Banach spaces as well as total-variation-based image restoration in higher dimensions are presented.
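
    As a point of reference, in the Hilbert-space special case (here simply R^n) with an l1 penalty, forward-backward splitting reduces to iterative soft-thresholding. The sketch below illustrates that special case only; it is not the Banach-space algorithm of the paper, and all names are illustrative.

        import numpy as np

        def soft_threshold(x, tau):
            """Proximal map of tau * ||.||_1 (componentwise soft-thresholding)."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def forward_backward_l1(A, b, lam, n_iter=500):
            """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by forward-backward splitting."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)                # forward (explicit) gradient step on the smooth part
                x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the l1 part
            return x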

    A sparse optimization approach to infinite infimal convolution regularization

    In this paper we introduce the class of infinite infimal convolution functionals and apply these functionals to the regularization of ill-posed inverse problems. The proposed regularization involves an infimal convolution of a continuously parametrized family of convex, positively one-homogeneous functionals defined on a common Banach space X. We show that, under mild assumptions, this functional admits an equivalent convex lifting in the space of measures with values in X. This reformulation allows us to prove well-posedness of a Tikhonov regularized inverse problem and opens the door to a sparse analysis of the solutions. In the case of finite-dimensional measurements we prove a representer theorem, showing that there exists a solution of the inverse problem that is sparse, in the sense that it can be represented as a linear combination of the extremal points of the ball of the lifted infinite infimal convolution functional. Then, we design a generalized conditional gradient method for computing solutions of the inverse problem without relying on an a priori discretization of the parameter space and of the Banach space X. The iterates are constructed as linear combinations of the extremal points of the lifted infinite infimal convolution functional. We prove a sublinear rate of convergence for our algorithm and apply it to the denoising of signals and images using, as a regularizer, infinite infimal convolutions of fractional-Laplacian-type operators with adaptive orders of smoothness and anisotropies.
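
    The finite-dimensional toy problem below illustrates the conditional-gradient (Frank-Wolfe) principle of building iterates from extremal points, here of an l1-ball whose extremal points are signed, scaled coordinate vectors. It is only a loose analogue of the measure-space algorithm described in the abstract; names and parameters are illustrative.

        import numpy as np

        def conditional_gradient_l1(A, b, radius, n_iter=200):
            """Minimize 0.5*||Ax - b||^2 over the l1-ball of the given radius.

            Every iterate is a convex combination of extremal points of the ball,
            mirroring the sparse structure promised by representer theorems.
            """
            n = A.shape[1]
            x = np.zeros(n)
            for k in range(n_iter):
                grad = A.T @ (A @ x - b)
                i = int(np.argmax(np.abs(grad)))        # linear minimization oracle over the ball
                s = np.zeros(n)
                s[i] = -radius * np.sign(grad[i])       # extremal point most aligned with -grad
                gamma = 2.0 / (k + 2.0)                 # classical step size, sublinear O(1/k) rate
                x = (1.0 - gamma) * x + gamma * s
            return x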

    Streaking artifact suppression of quantitative susceptibility mapping reconstructions via L1-norm data fidelity optimization (L1-QSM)

    Purpose: The presence of dipole-inconsistent data due to substantial noise or artifacts causes streaking artifacts in quantitative susceptibility mapping (QSM) reconstructions. Commonly used Bayesian approaches rely on regularizers, which in turn yield reduced sharpness. To overcome this problem, we present a novel L1-norm data fidelity approach that is robust with respect to outliers and therefore prevents streaking artifacts. Methods: QSM functionals are solved with linear and nonlinear L1-norm data fidelity terms using functional augmentation, and are compared with equivalent L2-norm methods. Algorithms were tested on synthetic data with phase inconsistencies added to mimic lesions, on QSM Challenge 2.0 data, and on in vivo brain images with hemorrhages. Results: The nonlinear L1-norm-based approach achieved the best overall error metric scores and better streaking artifact suppression. Notably, L1-norm methods could reconstruct QSM images without using a brain mask, with similar regularization weights for different data fidelity weighting or masking setups. Conclusion: The proposed L1 approach provides a robust method to prevent streaking artifacts generated by dipole-inconsistent data, renders brain mask calculation unessential, and opens up novel, challenging clinical applications such as assessing brain hemorrhages and cortical layers.
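
    A generic way to see why an L1 data-fidelity term suppresses the influence of outlier (dipole-inconsistent) measurements is iteratively reweighted least squares: large residuals receive small weights. The sketch below is a simplified illustration with a Tikhonov regulariser, not the QSM algorithm of the paper.

        import numpy as np

        def irls_l1_fidelity(A, b, lam, n_iter=30, eps=1e-6):
            """Approximately minimize ||Ax - b||_1 + 0.5*lam*||x||^2 by IRLS."""
            n = A.shape[1]
            x = np.zeros(n)
            for _ in range(n_iter):
                r = A @ x - b
                w = 1.0 / np.maximum(np.abs(r), eps)     # small weights for large (outlier) residuals
                WA = A * w[:, None]                      # rows of A scaled by the weights
                x = np.linalg.solve(A.T @ WA + lam * np.eye(n), WA.T @ b)
            return x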

    Application of Market Models to Network Equilibrium Problems

    We present a general two-sided market model with divisible commodities and price functions of participants. A general existence result on unbounded sets is obtained from its variational inequality reformulation. We describe an extension of the network flow equilibrium problem with elastic demands and a new equilibrium-type model for resource allocation problems in wireless communication networks, both of which appear to be particular cases of the general market model. This enables us to obtain new existence results for these models by adapting the result for the market model. Under certain additional conditions the general market model can be reduced to a decomposable optimization problem whose goal function is the sum of two functions, one of them convex separable, and whose feasible set is the corresponding Cartesian product. We discuss some versions of the partial linearization method, which can be applied to these network equilibrium problems.
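
    For readers unfamiliar with the variational inequality (VI) reformulation used above, the sketch below shows a standard projection-type (extragradient) method for a monotone VI over a box-shaped feasible set. It is a generic illustration, not the partial linearization method discussed in the abstract; all names are illustrative.

        import numpy as np

        def extragradient_vi(F, lower, upper, x0, tau=0.1, n_iter=1000):
            """Find x in C = {lower <= x <= upper} with <F(x), y - x> >= 0 for all y in C,
            using Korpelevich's extragradient method (monotone, Lipschitz F assumed)."""
            project = lambda z: np.clip(z, lower, upper)    # Euclidean projection onto the box
            x = project(np.asarray(x0, dtype=float))
            for _ in range(n_iter):
                y = project(x - tau * F(x))                 # prediction step
                x = project(x - tau * F(y))                 # correction step using F at the prediction
            return x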

    Greedy Solution of Ill-Posed Problems: Error Bounds and Exact Inversion

    The orthogonal matching pursuit (OMP) is an algorithm for solving sparse approximation problems. Sufficient conditions for exact recovery are known with and without noise. In this paper we investigate the applicability of OMP to the solution of ill-posed inverse problems in general, and in particular to two deconvolution examples from mass spectrometry and digital holography. In sparse approximation problems one often has to deal with redundancy of a dictionary, i.e. the atoms are not linearly independent. However, one expects them to be approximately orthogonal, and this is quantified by the so-called incoherence. This idea cannot be transferred to ill-posed inverse problems, since here the atoms are typically far from orthogonal: the ill-posedness of the operator means that the correlation of two distinct atoms can become very large, i.e. that two atoms can look much alike. Therefore one needs conditions which take the structure of the problem into account and work without the concept of coherence. In this paper we develop results for exact recovery of the support of noisy signals. In the two examples from mass spectrometry and digital holography we show that our results lead to practically relevant estimates, so that one may check a priori whether the experimental setup guarantees exact deconvolution with OMP. Especially in the example from digital holography, our analysis may be regarded as a first step towards calculating the resolution power of droplet holography.
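
    The support-recovery results discussed above concern the standard OMP iteration: greedily pick the atom most correlated with the current residual, then refit on the selected support. A minimal generic implementation (not tied to the deconvolution setups of the paper) is sketched below.

        import numpy as np

        def omp(A, b, n_atoms):
            """Orthogonal matching pursuit with a fixed number of selected atoms."""
            residual = b.copy()
            support = []
            x = np.zeros(A.shape[1])
            coeffs = np.zeros(0)
            for _ in range(n_atoms):
                correlations = A.T @ residual               # correlate all atoms with the residual
                support.append(int(np.argmax(np.abs(correlations))))
                coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)   # least-squares refit on the support
                residual = b - A[:, support] @ coeffs
            x[support] = coeffs
            return x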

    A combined first and second order variational approach for image reconstruction

    In this paper we study a variational problem in the space of functions of bounded Hessian. Our model constitutes a straightforward higher-order extension of the well-known ROF functional (total variation minimisation), to which we add a non-smooth second-order regulariser. It combines convex functions of the total variation and the total variation of the first derivatives. In what follows, we prove existence and uniqueness of minimisers of the combined model and present the numerical solution of the corresponding discretised problem by employing the split Bregman method. The paper is furnished with applications of our model to image denoising, deblurring as well as image inpainting. The obtained numerical results are compared with results obtained from total generalised variation (TGV), infimal convolution and Euler's elastica, three other state-of-the-art higher-order models. The numerical discussion confirms that the proposed higher-order model competes with models of its kind in avoiding the creation of undesirable artifacts and blocky structures in the reconstructed images -- a known disadvantage of the ROF model -- while being simple and efficiently solvable numerically.
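
    A prototypical instance of such a combined first- and second-order model, written here for denoising with assumed notation, is

        \min_{u}\; \tfrac{1}{2}\int_\Omega (u - f)^2 \,dx \;+\; \alpha \int_\Omega |\nabla u| \;+\; \beta \int_\Omega |\nabla^2 u|,

    where the second term is the total variation of u and the third is the total variation of its first derivatives, so that minimisers are naturally sought in the space of functions of bounded Hessian.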

    Optimal Convergence Rates for Tikhonov Regularization in Besov Scales

    In this paper we deal with linear inverse problems and convergence rates for Tikhonov regularization. We consider regularization in a scale of Banach spaces, namely the scale of Besov spaces. We show that regularization in Banach scales differs from regularization in Hilbert scales in the sense that stronger source conditions may lead to weaker convergence rates and vice versa. Moreover, we present optimal source conditions for regularization in Besov scales.
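
    Concretely, the regularized solutions studied in such a setting are minimizers of a Tikhonov functional of the form (notation assumed)

        u_\alpha^\delta \in \arg\min_{u}\; \tfrac{1}{2}\,\| K u - f^\delta \|_{L^2}^2 \;+\; \alpha\, \| u \|_{B^s_{p,p}}^p,

    where the Besov penalty is equivalent to a weighted \ell^p norm of wavelet coefficients, and source conditions of this type typically express smoothness of the true solution within the Besov scale.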

    Bilevel Parameter Learning for Higher-Order Total Variation Regularisation Models

    We consider a bilevel optimisation approach for parameter learning in higher-order total variation image reconstruction models. Apart from the least-squares cost functional naturally used in bilevel learning, we propose and analyse an alternative cost based on a Huber-regularised TV seminorm. Differentiability properties of the solution operator are verified and a first-order optimality system is derived. Based on the adjoint information, a combined quasi-Newton/semismooth Newton algorithm is proposed for the numerical solution of the bilevel problems. Numerical experiments are carried out to show the suitability of our approach and the improved performance of the new cost functional. Thanks to the bilevel optimisation framework, a detailed comparison between TGV² and ICTV is also carried out, showing the advantages and shortcomings of both regularisers depending on the structure of the processed images and their noise level.

    Funding: King Abdullah University of Science and Technology (KAUST) (Grant ID: KUKI1-007-43); Engineering and Physical Sciences Research Council (Grant IDs: Nr. EP/J009539/1 “Sparse & Higher-order Image Restoration” and Nr. EP/M00483X/1 “Efficient computational tools for inverse imaging problems”); Escuela Politécnica Nacional de Quito (Grant ID: PIS 12-14, MATHAmSud project SOCDE “Sparse Optimal Control of Differential Equations”); Leverhulme Trust (project on “Breaking the non-convexity barrier”); SENESCYT (Ecuadorian Ministry of Higher Education, Science, Technology and Innovation) (Prometeo Fellowship).
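
    The generic structure of such a bilevel learning problem, shown here with the least-squares cost and assumed notation, is

        \min_{\alpha \ge 0}\; \tfrac{1}{2}\,\| u_\alpha - u^{\mathrm{clean}} \|_2^2
        \quad \text{subject to} \quad
        u_\alpha \in \arg\min_{u}\; \tfrac{1}{2}\,\| u - f \|_2^2 + R_\alpha(u),

    where R_\alpha is a higher-order total variation regulariser such as TGV² or ICTV; the alternative cost analysed in the paper replaces the least-squares distance in the upper level by a Huber-regularised TV seminorm.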

    Total Directional Variation for Video Denoising

    In this paper, we propose a variational approach for video denoising based on the total directional variation (TDV) regulariser proposed in Parisotto et al. (2018) for image denoising and interpolation. In the TDV regulariser, the underlying image structure is encoded by means of weighted derivatives so as to enhance anisotropic structures in images, e.g. stripes or curves with a dominant local directionality. For the extension of TDV to video denoising, the space-time structure is captured by the volumetric structure tensor guiding the smoothing process. We discuss this and present our whole video denoising workflow. Our numerical results are compared with some state-of-the-art video denoising methods.

    Funding: SP acknowledges UK EPSRC grant EP/L016516/1 for the CCA DTC. CBS acknowledges support from the Leverhulme Trust project on Breaking the non-convexity barrier, EPSRC grant Nr. EP/M00483X/1, the EPSRC Centre EP/N014588/1, the RISE projects CHiPS and NoMADS, the CCIMI and the Alan Turing Institute.
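
    The volumetric (space-time) structure tensor mentioned above can be computed, in generic form, as a Gaussian-smoothed outer product of spatio-temporal gradients; its eigenvectors encode the dominant local orientation that steers direction-dependent smoothing. The sketch below illustrates this generic construction, not the exact pipeline of the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def structure_tensor_3d(video, sigma_grad=1.0, sigma_smooth=2.0):
            """Volumetric structure tensor of a video array of shape (T, H, W).

            Returns an array of shape (T, H, W, 3, 3) holding, per voxel, the
            smoothed outer product of the space-time gradient with itself.
            """
            grads = np.gradient(gaussian_filter(video, sigma_grad))   # derivatives along t, y, x
            tensor = np.empty(video.shape + (3, 3))
            for i in range(3):
                for j in range(3):
                    tensor[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_smooth)
            return tensor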