
    Weighted Mean Curvature

    In image processing tasks, spatial priors are essential for robust computations, regularization, algorithmic design, and Bayesian inference. In this paper, we introduce weighted mean curvature (WMC) as a novel image prior and present an efficient computation scheme for its discretization in practical image processing applications. We first demonstrate the favorable properties of WMC, such as sampling invariance, scale invariance, and contrast invariance under a Gaussian noise model, and we show the relation of WMC to area regularization. We further propose an efficient computation scheme for discretized WMC, which is demonstrated herein to process over 33.2 gigapixels/second on GPU. This scheme lends itself to a convolutional neural network representation. Finally, WMC is evaluated on synthetic and real images, showing its quantitative superiority over total variation and mean curvature.
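    As a rough illustration of such a prior, the sketch below computes a weighted mean curvature on a 2-D image by weighting the level-set mean curvature div(∇u/|∇u|) with the gradient magnitude |∇u|, using plain central differences. This is only an assumed finite-difference realization for intuition, not the paper's exact discretization or its GPU/CNN scheme.

```python
import numpy as np

def weighted_mean_curvature(u, eps=1e-8):
    """Illustrative weighted mean curvature: |grad u| * div(grad u / |grad u|).
    The paper's discretization may differ; this is a central-difference sketch."""
    uy, ux = np.gradient(u.astype(float))   # axis 0 = y, axis 1 = x
    mag = np.sqrt(ux**2 + uy**2) + eps      # gradient magnitude (eps avoids /0)
    nx, ny = ux / mag, uy / mag             # unit normal field of the level sets
    dnx_dx = np.gradient(nx, axis=1)        # d(nx)/dx
    dny_dy = np.gradient(ny, axis=0)        # d(ny)/dy
    curvature = dnx_dx + dny_dy             # level-set mean curvature
    return mag * curvature                  # weight by |grad u|
```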

    Regularized Shallow Image Prior for Electrical Impedance Tomography

    Untrained Neural Network Prior (UNNP)-based algorithms have gained increasing popularity in tomographic imaging, as they offer superior performance compared to hand-crafted priors and do not require training. UNNP-based methods usually rely on deep architectures, which are known for their excellent feature extraction ability compared to shallow ones. Contrary to common UNNP-based approaches, we propose a regularized shallow image prior method that combines UNNP with a hand-crafted prior for Electrical Impedance Tomography (EIT). Our approach employs a 3-layer Multi-Layer Perceptron (MLP) as the UNNP in regularizing 2D and 3D EIT inversion. We demonstrate the influence of two typical hand-crafted regularizations when representing the conductivity distribution with shallow MLPs. We show considerably improved EIT image quality compared to conventional regularization algorithms, especially in structure preservation. The results suggest that combining the shallow image prior and the hand-crafted regularization can achieve performance similar to the Deep Image Prior (DIP), but with less architectural dependency and lower neural network complexity.
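    A hedged sketch of the overall idea follows: a 3-layer coordinate MLP acts as an untrained prior for the conductivity image, and a hand-crafted regularizer (total variation here, as one example) is added to the data-fidelity term. The forward operator `A`, measured voltages `v_meas`, grid size, and loss weight are hypothetical placeholders, not the paper's EIT setup.

```python
import torch
import torch.nn as nn

class ShallowPrior(nn.Module):
    """3-layer MLP mapping (x, y) pixel coordinates to a conductivity value."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):              # coords: (N, 2)
        return self.net(coords)

def tv(img):
    """Hand-crafted total-variation regularizer on the 2-D image."""
    return ((img[:, 1:] - img[:, :-1]).abs().mean()
            + (img[1:, :] - img[:-1, :]).abs().mean())

def reconstruct(A, v_meas, n=64, beta=1e-3, steps=2000):
    """Hypothetical loop: A is a (linearized) EIT forward operator -- a placeholder."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, n),
                            torch.linspace(-1, 1, n), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    model = ShallowPrior()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        sigma = model(coords).reshape(n, n)          # conductivity image
        loss = (A(sigma) - v_meas).pow(2).mean() + beta * tv(sigma)
        opt.zero_grad(); loss.backward(); opt.step()
    return sigma.detach()
```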

    Nonlinear Spectral Geometry Processing via the TV Transform

    We introduce a novel computational framework for digital geometry processing, based upon the derivation of a nonlinear operator associated to the total variation functional. This operator admits a generalized notion of spectral decomposition, yielding a sparse multiscale representation akin to Laplacian-based methods, while at the same time avoiding the undesirable over-smoothing effects typical of such techniques. Our approach entails accurate, detail-preserving decomposition and manipulation of 3D shape geometry while taking an especially intuitive form: non-local semantic details are well separated into different bands, which can then be filtered and re-synthesized with a straightforward linear step. Our computational framework is flexible, can be applied to a variety of signals, and is easily adapted to different geometry representations, including triangle meshes and point clouds. We showcase our method throughout multiple applications in graphics, ranging from surface and signal denoising to detail transfer and cubic stylization.
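    For intuition, the toy 1-D sketch below follows the nonlinear spectral TV recipe: evolve the signal by (smoothed) TV gradient flow, take the scaled second time derivative φ(t) = t·u_tt(t) as the spectral response, and re-synthesize a band with a plain linear sum. This is a smoothed, signal-domain assumption for illustration only; the paper defines the operator on 3D geometry (triangle meshes and point clouds) with a dedicated discretization.

```python
import numpy as np

def tv_spectral_bandpass(f, dt=1e-3, n_steps=400, band=(100, 300), eps=1e-6):
    """Toy 1-D nonlinear spectral TV band-pass via smoothed TV flow."""
    u = f.astype(float).copy()
    snaps = [u.copy()]
    for _ in range(n_steps):
        g = np.gradient(u)
        # Gradient descent on TV: u_t = div(u' / |u'|), smoothed by eps
        u = u + dt * np.gradient(g / (np.abs(g) + eps))
        snaps.append(u.copy())
    U = np.stack(snaps)                        # flow snapshots u(t_i)
    utt = (U[2:] - 2 * U[1:-1] + U[:-2]) / dt**2
    t = (np.arange(1, n_steps) * dt)[:, None]
    phi = t * utt                              # spectral response phi(t)
    lo, hi = band
    return phi[lo:hi].sum(axis=0) * dt         # linear band re-synthesis
```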

    Bilevel optimization, deep learning and fractional Laplacian regularization with applications in tomography

    The article of record as published may be located at https://doi.org/10.1088/1361-6420/ab80d7. Funded by the Naval Postgraduate School.
    In this work we consider a generalized bilevel optimization framework for solving inverse problems. We introduce the fractional Laplacian as a regularizer to improve the reconstruction quality, and compare it with total variation regularization. We emphasize that the key advantage of using the fractional Laplacian as a regularizer is that it leads to a linear operator, as opposed to total variation regularization, which results in a nonlinear degenerate operator. Inspired by residual neural networks, to learn the optimal strength of regularization and the exponent of the fractional Laplacian, we develop a dedicated bilevel optimization neural network with variable depth for a general regularized inverse problem. We illustrate how to incorporate various regularizer choices into our proposed network. As an example, we consider tomographic reconstruction as a model problem and show an improvement in reconstruction quality, especially for limited data, via fractional Laplacian regularization. We successfully learn the regularization strength and the fractional exponent via our proposed bilevel optimization neural network. We observe that fractional Laplacian regularization outperforms total variation regularization. This is especially encouraging, and important, in the case of limited and noisy data.
    The first and third authors are partially supported by NSF grants DMS-1818772 and DMS-1913004, the Air Force Office of Scientific Research under Award No. FA9550-19-1-0036, and the Department of the Navy, Naval Postgraduate School, under Award No. N00244-20-1-0005. The third author is also partially supported by a Provost award at George Mason University under the Industrial Immersion Program. The second author is partially supported by the DOE Office of Science under Contract No. DE-AC02-06CH11357.
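    A small sketch of why the fractional Laplacian keeps the problem linear: under a periodic-boundary assumption, (-Δ)^s acts as a Fourier multiplier |k|^{2s}, so a Tikhonov-type denoising step reduces to a pointwise division in frequency space. The FFT/periodic setting is an assumption for illustration; the paper learns both the strength α and the exponent s within a bilevel network, which is not reproduced here.

```python
import numpy as np

def _freq_sq(shape):
    """|k|^2 on the FFT grid of a 2-D image."""
    ky = 2 * np.pi * np.fft.fftfreq(shape[0])
    kx = 2 * np.pi * np.fft.fftfreq(shape[1])
    return ky[:, None]**2 + kx[None, :]**2

def fractional_laplacian(u, s):
    """Spectral (periodic) fractional Laplacian (-Delta)^s u."""
    return np.real(np.fft.ifft2(_freq_sq(u.shape)**s * np.fft.fft2(u)))

def frac_denoise(f, alpha, s):
    """Solve (I + alpha * (-Delta)^s) u = f -- a *linear* system,
    unlike the nonlinear degenerate operator arising from TV."""
    k2s = _freq_sq(f.shape)**s
    return np.real(np.fft.ifft2(np.fft.fft2(f) / (1 + alpha * k2s)))
```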

    Dualization and automatic distributed parameter selection of total generalized variation via bilevel optimization

    Total Generalized Variation (TGV) regularization in image reconstruction relies on an infimal-convolution-type combination of generalized first- and second-order derivatives. This helps to avoid the staircasing effect of Total Variation (TV) regularization, while still preserving sharp contrasts in images. The associated regularization effect crucially hinges on two parameters whose proper adjustment represents a challenging task. In this work, a bilevel optimization framework with a suitable statistics-based upper-level objective is proposed in order to automatically select these parameters. The framework allows for spatially varying parameters, thus enabling better recovery in high-detail image areas. A rigorous dualization framework is established, and for the numerical solution, two Newton-type methods for the lower-level problem, i.e. the image reconstruction problem, and two bilevel TGV algorithms are introduced, respectively. Denoising tests confirm that automatically selected distributed regularization parameters generally lead to improved reconstructions compared to those obtained with scalar parameters.
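    To make the role of the two (possibly distributed) weights concrete, here is a sketch of the discrete second-order TGV objective with per-pixel parameters α1, α0; the forward-difference stencils and the exact way the weights enter are illustrative assumptions, and the chapter's bilevel solvers are not reproduced.

```python
import numpy as np

def fwd_grad(u):
    """Forward-difference gradient with Neumann-style boundary."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = np.diff(u, axis=1)
    gy[:-1, :] = np.diff(u, axis=0)
    return gx, gy

def tgv2_energy(u, w1, w2, alpha1, alpha0):
    """Discrete TGV^2 value for a candidate pair (u, w = (w1, w2)).
    alpha1, alpha0 may be per-pixel arrays (distributed parameters)."""
    gx, gy = fwd_grad(u)
    # First-order term: alpha1 * |grad u - w| (pointwise Euclidean norm)
    e1 = np.sqrt((gx - w1)**2 + (gy - w2)**2)
    # Symmetrized gradient E(w) = (grad w + grad w^T) / 2, Frobenius norm
    w1x, w1y = fwd_grad(w1)
    w2x, w2y = fwd_grad(w2)
    e2 = np.sqrt(w1x**2 + w2y**2 + 0.5 * (w1y + w2x)**2)
    return np.sum(alpha1 * e1) + np.sum(alpha0 * e2)
```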

    Generating structured non-smooth priors and associated primal-dual methods

    The purpose of the present chapter is to bind together and extend some recent developments regarding data-driven non-smooth regularization techniques in image processing by means of a bilevel minimization scheme. The scheme, considered in function space, takes advantage of a dualization framework and is designed to produce spatially varying regularization parameters adapted to the data for well-known regularizers, e.g. Total Variation and Total Generalized Variation, leading to automated (monolithic) image reconstruction workflows. The inclusion of the theory of bilevel optimization and the theoretical background of the dualization framework, together with a brief review of the aforementioned regularizers and their parameterization, makes this chapter self-contained. Aspects of the numerical implementation of the scheme are discussed and numerical examples are provided.
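    To illustrate the "spatially varying parameter" ingredient at the lower level, here is a brief primal-dual (Chambolle-Pock-style) sketch for TV denoising with a per-pixel weight α(x) > 0; the dualization enters through the pointwise constraint |p(x)| ≤ α(x) on the dual variable. Step sizes and stencils are standard textbook choices assumed for illustration, not the chapter's exact scheme.

```python
import numpy as np

def grad(u):
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = np.diff(u, axis=1)
    gy[:-1, :] = np.diff(u, axis=0)
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad above."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = np.diff(px[:, :-1], axis=1)
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = np.diff(py[:-1, :], axis=0)
    dy[-1, :] = -py[-2, :]
    return dx + dy

def weighted_tv_denoise(f, alpha, n_iter=300, tau=0.25, sigma=0.25):
    """min_u 0.5*||u - f||^2 + TV_alpha(u), alpha a positive per-pixel array.
    tau*sigma*||grad||^2 <= 0.5 here, within the usual convergence bound."""
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.sqrt(px**2 + py**2) / alpha)
        px, py = px / scale, py / scale           # pointwise dual projection
        u_old = u
        u = (u + tau * (div(px, py) + f)) / (1 + tau)  # prox of 0.5||u-f||^2
        u_bar = 2 * u - u_old                     # over-relaxation step
    return u
```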