5,302 research outputs found

    Adaptive mesh reconstruction: Total Variation Bound

    We consider 3-point numerical schemes for scalar conservation laws that are oscillatory due to either their dispersive or their anti-diffusive nature. Oscillations are responsible for the increase of the Total Variation (TV), a bound on which is crucial for the stability of the numerical scheme. It has been noticed (\cite{Arvanitis.2001}, \cite{Arvanitis.2004}, \cite{Sfakianakis.2008}) that the use of non-uniform adaptively redefined meshes, which take into account the geometry of the numerical solution itself, is capable of taming oscillations, hence improving the stability properties of the numerical schemes. In this work we provide a model for studying the evolution of the extrema over non-uniform adaptively redefined meshes. Based on this model we prove that proper mesh reconstruction is able to control the oscillations, and we provide bounds for the Total Variation (TV) of the numerical solution. We moreover prove, under stricter assumptions, that the increase of the TV due to the oscillatory behaviour of the numerical schemes decreases with time, hence proving that the overall scheme is TV Increase-Decreasing (TVI-D).
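
    For orientation (the notation here is ours, not necessarily the paper's): for a discrete solution $u^n = (u^n_i)_i$ over mesh points $x_i$, the Total Variation is the seminorm

        TV(u^n) = \sum_i |u^n_{i+1} - u^n_i|.

    A scheme is TVD (Total Variation Diminishing) if $TV(u^{n+1}) \le TV(u^n)$ for all $n$; the TVI-D property described above is weaker, allowing an increase of the TV at each step whose magnitude decreases with time.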

    Dynamic sampling schemes for optimal noise learning under multiple nonsmooth constraints

    We consider the bilevel optimisation approach proposed by De Los Reyes and Sch\"onlieb (2013) for learning the optimal parameters in a Total Variation (TV) denoising model for multiple noise distributions. In applications, the use of databases (dictionaries) allows an accurate estimation of the parameters, but results in high computational costs due to the size of the databases and to the nonsmooth nature of the PDE constraints. To overcome this computational barrier we propose an optimisation algorithm that, by sampling dynamically from the set of constraints and using a quasi-Newton method, solves the problem accurately and efficiently.
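
    A schematic form of such a bilevel problem, in our own generic notation (the paper's precise model may differ), is

        \min_{\lambda \ge 0} \; \frac{1}{2N} \sum_{k=1}^{N} \| u_k(\lambda) - u_k^{\mathrm{clean}} \|^2
        \quad \text{s.t.} \quad u_k(\lambda) \in \arg\min_{u} \; \mathrm{TV}(u) + \sum_{j} \lambda_j \, \phi_j(u, f_k), \qquad k = 1, \dots, N,

    where each noisy training image $f_k$ contributes one nonsmooth lower-level (PDE) constraint and the fidelities $\phi_j$ are matched to the different noise distributions. Dynamic sampling, as described above, replaces the full sum over $k$ by a randomly drawn, adaptively sized subsample at each quasi-Newton iteration, so only a fraction of the constraints has to be solved per step.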

    Image Restoration using Total Variation Regularized Deep Image Prior

    In the past decade, sparsity-driven regularization has led to significant improvements in image reconstruction. Traditional regularizers, such as total variation (TV), rely on analytical models of sparsity. However, the field is increasingly moving towards trainable models inspired by deep learning. Deep image prior (DIP) is a recent regularization framework that uses a convolutional neural network (CNN) architecture without data-driven training. This paper extends the DIP framework by combining it with traditional TV regularization. We show that the inclusion of TV leads to considerable performance gains when tested on several traditional restoration tasks such as image denoising and deblurring.
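
    As a rough illustration only, the combined objective can be read as fitting the CNN output to the corrupted data while keeping its TV small. The sketch below assumes PyTorch and a plain denoising setting; the generic network `net`, the weight `tv_weight` and the simple gradient-based loop are our own illustrative assumptions, not the paper's actual architecture or optimisation scheme.

        # Minimal sketch (assumptions: PyTorch, denoising, generic CNN `net`).
        import torch

        def tv_loss(x):
            # Anisotropic total variation of a (B, C, H, W) image tensor.
            dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
            dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
            return dh + dw

        def dip_tv_denoise(noisy, net, tv_weight=1e-4, steps=2000, lr=1e-3):
            # noisy: (1, C, H, W) observed image; net: any CNN mapping noise -> image.
            z = torch.randn_like(noisy)                  # fixed random input of the prior
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                out = net(z)
                loss = ((out - noisy) ** 2).mean() + tv_weight * tv_loss(out)
                loss.backward()
                opt.step()
            return net(z).detach()

    For deblurring or other restoration tasks, the data term would compare a forward operator applied to `net(z)` with the observation rather than the network output itself.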

    A Second Order TV-type Approach for Inpainting and Denoising Higher Dimensional Combined Cyclic and Vector Space Data

    In this paper we consider denoising and inpainting problems for higher dimensional combined cyclic and linear space valued data. These kinds of data appear when dealing with nonlinear color spaces such as HSV, and they can be obtained by changing the space domain of, e.g., an optical flow field to polar coordinates. For such nonlinear data spaces, we develop algorithms for the solution of the corresponding second order total variation (TV) type problems for denoising, inpainting, as well as the combination of both. We provide a convergence analysis and we apply the algorithms to concrete problems.
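
    To make the "cyclic" part concrete in generic notation (again ours, not necessarily the paper's): for angular, i.e. $\mathbb{S}^1$-valued, components such as hue or a flow direction, differences are measured by the geodesic distance

        d(x, y) = \min_{k \in \mathbb{Z}} |x - y + 2\pi k|,

    rather than by $|x - y|$; first and second order TV-type terms are then built from these cyclic (second) differences, while the linear-space components are handled with the usual absolute differences.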