
    On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs

    Soft wavelet shrinkage, total variation (TV) diffusion, total variation regularization, and a dynamical system called SIDEs are four useful techniques for discontinuity-preserving denoising of signals and images. In this paper we investigate under which circumstances these methods are equivalent in the 1-D case. First we prove that Haar wavelet shrinkage on a single scale is equivalent to a single step of space-discrete TV diffusion or regularization of two-pixel pairs. In the translationally invariant case we show that applying cycle spinning to Haar wavelet shrinkage on a single scale can be regarded as an absolutely stable explicit discretization of TV diffusion. We prove that space-discrete TV diffusion and TV regularization are identical, and that they are also equivalent to the SIDEs system when a specific force function is chosen. Afterwards we show that wavelet shrinkage on multiple scales can be regarded as a single step of diffusion filtering or regularization of the Laplacian pyramid of the signal. We analyse possibilities to avoid Gibbs-like artifacts for multiscale Haar wavelet shrinkage by scaling the thresholds. Finally we present experiments where hybrid methods are designed that combine the advantages of wavelets and PDE/variational approaches. These methods are based on iterated shift-invariant wavelet shrinkage at multiple scales with scaled thresholds.
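    To make the single-scale construction concrete, here is a minimal NumPy sketch of soft Haar shrinkage on disjoint pixel pairs and its two-shift cycle-spun (translation-invariant) variant. The pairing scheme, threshold, and test signal are illustrative assumptions of this sketch; the paper's precise TV-diffusion correspondence and the multiscale Laplacian-pyramid extension are not reproduced here.

```python
import numpy as np

def soft_shrink(d, tau):
    """Soft shrinkage: reduce the magnitude by tau, keep the sign."""
    return np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

def haar_shrink_single_scale(u, tau, shift=0):
    """One level of Haar shrinkage on disjoint pixel pairs.

    shift=0 pairs (0,1),(2,3),...; shift=1 pairs (1,2),(3,4),...
    Periodic boundaries are handled via np.roll.
    """
    v = np.roll(u, -shift)
    a, b = v[0::2], v[1::2]
    c = (a + b) / np.sqrt(2.0)        # approximation (low-pass) coefficients
    d = (a - b) / np.sqrt(2.0)        # detail (high-pass) coefficients
    d = soft_shrink(d, tau)           # shrink the details only
    out = np.empty_like(v)
    out[0::2] = (c + d) / np.sqrt(2.0)
    out[1::2] = (c - d) / np.sqrt(2.0)
    return np.roll(out, shift)

def cycle_spun_haar_shrink(u, tau):
    """Average over both pairings: translation-invariant single-scale shrinkage."""
    return 0.5 * (haar_shrink_single_scale(u, tau, 0) +
                  haar_shrink_single_scale(u, tau, 1))

# toy example: noisy step edge
rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(32), np.ones(32)]) + 0.1 * rng.standard_normal(64)
denoised = cycle_spun_haar_shrink(signal, tau=0.1)
```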

    Integrodifferential equations for multiscale wavelet shrinkage: the discrete case

    We investigate the relations between wavelet shrinkage and integrodifferential equations for image simplification and denoising in the discrete case. Previous investigations in the continuous one-dimensional setting are transferred to the discrete multidimensional case. The key observation is that a wavelet transform can be understood as a derivative operator combined with convolution with a smoothing kernel. In this paper, we extend these ideas to the practically relevant discrete formulation with both orthogonal and biorthogonal wavelets. In the discrete setting, the behaviour of the smoothing kernels for different scales is more complicated than in the continuous setting and of special interest for the understanding of the filters. With the help of tensor product wavelets and special shrinkage rules, the approach is extended to more than one spatial dimension. The results of wavelet shrinkage and related integrodifferential equations are compared in terms of quality by numerical experiments.
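    The "derivative operator combined with a smoothing kernel" observation can be made tangible for the Haar case with a short sketch (my own illustration, not the paper's discrete integrodifferential formulation): an undecimated Haar detail coefficient equals, up to normalization, a forward difference of a box-smoothed signal.

```python
import numpy as np

def box_smooth(u, width):
    """Periodic moving average of the given width (the smoothing kernel)."""
    return np.array([np.roll(u, -i)[:width].mean() for i in range(len(u))])

def haar_detail(u, width):
    """Undecimated Haar-type detail at scale `width`: a forward difference
    (discrete derivative) of the box-smoothed signal, up to normalization."""
    s = box_smooth(u, width)
    return s - np.roll(s, -width)

rng = np.random.default_rng(1)
u = np.cumsum(rng.standard_normal(128))

# cross-check: the same numbers arise by correlating u directly with a
# Haar wavelet of support 2 * width
width = 2
wav = np.concatenate([np.ones(width), -np.ones(width)]) / width
direct = np.array([np.dot(wav, np.roll(u, -i)[:2 * width]) for i in range(len(u))])
assert np.allclose(haar_detail(u, width), direct)
```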

    Speckle Noise Reduction using Local Binary Pattern

    A novel local binary pattern (LBP) based adaptive diffusion for speckle noise reduction is presented. The LBP operator unifies traditionally divergent statistical and structural models of region analysis. We use LBP textons to classify the neighbourhood of a pixel into noisy, homogeneous, corner and edge regions. According to the region type, a variable weight is assigned in the diffusion equation, so that the algorithm adaptively encourages strong diffusion in homogeneous/noisy regions and weaker diffusion in edge/corner regions. The diffusion thus preserves edges and local details while smoothing more strongly in homogeneous regions. The experimental results are evaluated both in terms of objective metrics and visual quality.
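    A rough idea of such a pipeline can be sketched in a few lines of NumPy. The LBP-to-region mapping and the two weight values below are illustrative heuristics of my own, not the classification rule or weighting used in the paper; the diffusion step is a plain explicit weighted Laplacian update.

```python
import numpy as np

# circular 8-neighbour offsets used for the LBP code
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_diffusion_weights(img, w_strong=1.0, w_weak=0.1):
    """Heuristic LBP-based region weights: flat patterns (all bits equal) and
    non-uniform patterns (many 0/1 transitions, read as noise) get a strong
    diffusion weight; uniform edge/corner-like patterns get a weak one."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    bits = np.stack([(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] >= img).astype(int)
                     for dy, dx in OFFSETS])                  # shape (8, h, w)
    ones = bits.sum(axis=0)
    transitions = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    flat = (ones == 0) | (ones == 8)
    noisy = transitions > 2                                    # non-uniform pattern
    return np.where(flat | noisy, w_strong, w_weak)            # edges/corners keep w_weak

def adaptive_diffusion_step(img, weights, tau=0.2):
    """One explicit step of weighted diffusion with a 4-neighbour Laplacian."""
    p = np.pad(img, 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
    return img + tau * weights * lap

rng = np.random.default_rng(2)
image = np.zeros((64, 64)); image[:, 32:] = 1.0
out = image + 0.2 * rng.standard_normal(image.shape)
for _ in range(10):
    out = adaptive_diffusion_step(out, lbp_diffusion_weights(out))
```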

    Connecting mathematical models for image processing and neural networks

    This thesis deals with the connections between mathematical models for image processing and deep learning. While data-driven deep learning models such as neural networks are flexible and perform well, they are often used as a black box. This makes it hard to provide theoretical model guarantees and scientific insights. On the other hand, more traditional, model-driven approaches such as diffusion, wavelet shrinkage, and variational models offer a rich set of mathematical foundations. Our goal is to transfer these foundations to neural networks. To this end, we pursue three strategies. First, we design trainable variants of traditional models and reduce their parameter set after training to obtain transparent and adaptive models. Moreover, we investigate the architectural design of numerical solvers for partial differential equations and translate them into building blocks of popular neural network architectures. This yields criteria for stable networks and inspires novel design concepts. Lastly, we present novel hybrid models for inpainting that rely on our theoretical findings. These strategies provide three ways of combining the best of the two worlds of model- and data-driven approaches. Our work contributes to the overarching goal of closing the gap in performance and understanding that still exists between these worlds.
    ERC Advanced Grant INCOVI
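    One of the structural links mentioned above, between explicit PDE solvers and residual architectures, can be illustrated with a toy sketch. This is an assumption-laden example of my own, not the thesis's trainable models: a single explicit step of 1-D nonlinear diffusion has the form u + tau * f(u), i.e. the same skip-connection pattern as a ResNet block, with the diffusivity playing the role of the nonlinearity.

```python
import numpy as np

def diffusivity(s2, lam=0.1):
    """Perona-Malik-type diffusivity g(s^2) = 1 / (1 + s^2 / lambda^2)."""
    return 1.0 / (1.0 + s2 / lam**2)

def diffusion_residual_block(u, tau=0.2, lam=0.1):
    """One explicit step of 1-D nonlinear diffusion with zero-flux boundaries,
    written as a residual update u_new = u + tau * f(u)."""
    du = np.diff(u)                                   # forward differences D u
    flux = diffusivity(du**2, lam) * du               # g(|Du|^2) * Du
    div = np.diff(flux, prepend=0.0, append=0.0)      # discrete divergence, zero flux at the ends
    return u + tau * div                              # skip connection + learned-like update

rng = np.random.default_rng(3)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
for _ in range(20):
    signal = diffusion_residual_block(signal)
```

    Stacking such steps gives a network-like cascade; the stability criterion of the explicit scheme (here tau <= 0.5 for a diffusivity bounded by 1) translates into a constraint on the corresponding residual blocks.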

    Metric based up-scaling

    We consider divergence form elliptic operators in dimension $n \geq 2$ with $L^\infty$ coefficients. Although solutions of these operators are only Hölder continuous, we show that they are differentiable ($C^{1,\alpha}$) with respect to harmonic coordinates. It follows that numerical homogenization can be extended to situations where the medium has no ergodicity at small scales and is characterized by a continuum of scales: in addition to traditional averaged (homogenized) quantities, a new metric is transferred from subgrid scales into computational scales, and error bounds can be given. This numerical homogenization method can also be used as a compression tool for differential operators. Comment: Final version. Accepted for publication in Communications on Pure and Applied Mathematics. Presented at CIMMS (March 2005), Socams 2005 (April), Oberwolfach, MPI Leipzig (May 2005), CIRM (July 2005). Higher resolution figures are available at http://www.acm.caltech.edu/~owhadi
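    The role of harmonic coordinates can be made tangible in one dimension, a simplified setting compared with the paper's $n \geq 2$ analysis; the discretization and the rough coefficient below are my own illustrative choices. For $-(a(x)u')' = f$ with rough $a$, the solution $u$ is irregular in $x$ but smooth as a function of the harmonic coordinate $F(x) = \int_0^x 1/a(t)\,dt$, since $du/dF = a\,u'$.

```python
import numpy as np

n = 400
h = 1.0 / n
rng = np.random.default_rng(4)
a_mid = np.exp(rng.uniform(-1.0, 1.0, n))      # rough L^inf coefficient on cell midpoints

# standard finite-volume scheme for -(a u')' = 1 with u(0) = u(1) = 0
main = (a_mid[:-1] + a_mid[1:]) / h**2
off = -a_mid[1:-1] / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
u = np.concatenate([[0.0], np.linalg.solve(A, np.ones(n - 1)), [0.0]])

F = np.concatenate([[0.0], np.cumsum(h / a_mid)])   # harmonic coordinate F(x)
dudx = np.diff(u) / h                                # oscillates with the rough coefficient
dudF = np.diff(u) / np.diff(F)                       # = a * u', varies slowly

print("max jump of du/dx:", np.abs(np.diff(dudx)).max())
print("max jump of du/dF:", np.abs(np.diff(dudF)).max())
```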
    • 
