38 research outputs found

    An Iterative Shrinkage Approach to Total-Variation Image Restoration

    Full text link
    The problem of restoring digital images from degraded measurements plays a central role in a multitude of practically important applications. A particularly challenging instance of this problem occurs when the degradation phenomenon is modeled by an ill-conditioned operator. In such a case, the presence of noise makes it impossible to recover a valuable approximation of the image of interest without using some a priori information about its properties. Such a priori information is essential for image restoration, rendering it stable and robust to noise. In particular, if the original image is known to be a piecewise smooth function, one of the standard priors used in this case is the Rudin-Osher-Fatemi model, which results in total variation (TV) based image restoration. The current arsenal of algorithms for TV-based image restoration is vast. In the present paper, a different approach to the solution of the problem is proposed, based on the method of iterative shrinkage (a.k.a. iterated thresholding). In the proposed method, TV-based image restoration is performed through a recursive application of two simple procedures, viz. linear filtering and soft thresholding. The method therefore belongs to the group of first-order algorithms, which are efficient for images of relatively large size. Another valuable feature of the proposed method is that it works directly with the TV functional, rather than with smoothed versions of it. Moreover, the method provides a single solution for both isotropic and anisotropic definitions of the TV functional, thereby establishing a useful connection between the two formulae.
    Comment: The paper was submitted to the IEEE Transactions on Image Processing on October 22nd, 200
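    The two procedures the abstract names are simple to state in code. Below is a minimal, hypothetical sketch of the generic iterated-shrinkage template (a linear filtering/gradient step followed by elementwise soft thresholding) on a 1D deblurring toy problem with an l1 prior on the signal itself; the paper's method applies this same two-step structure to the TV functional, which is not reproduced here. All names and parameter values are illustrative.

```python
import numpy as np

def soft(x, t):
    # Soft thresholding: the proximal map of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(1)
n = 200
u_true = np.zeros(n)
u_true[[40, 85, 130, 170]] = [1.0, -0.7, 0.5, 1.2]   # sparse test signal

h = np.ones(9) / 9.0                                 # ill-conditioned blur
H = lambda u: np.convolve(u, h, mode="same")         # degradation operator
Ht = H                                               # symmetric kernel: self-adjoint

y = H(u_true) + 0.01 * rng.standard_normal(n)        # degraded measurement

# Iterated shrinkage: alternate a linear filtering (gradient) step
# with elementwise soft thresholding.
tau, lam = 1.0, 0.02      # step size (||H|| <= 1 for this kernel) and weight
u = np.zeros(n)
for _ in range(300):
    u = soft(u - tau * Ht(H(u) - y), tau * lam)

print(f"relative error: {np.linalg.norm(u - u_true) / np.linalg.norm(u_true):.2f}")
```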

    Adaptive Wavelet Methods for Inverse Problems: Acceleration Strategies, Adaptive Rothe Method and Generalized Tensor Wavelets

    Get PDF
    In general, inverse problems can be described as the task of inferring conclusions about the cause u from given observations y of its effect. This can be described as the inversion of an operator equation K(u) = y, which is assumed to be ill-posed or ill-conditioned. To arrive at a meaningful solution in this setting, regularization schemes need to be applied. One of the most important regularization methods is the so-called Tikhonov regularization. As an approximation to the unknown truth u, one considers the minimizer v of the sum of the data error K(v) - y (in a certain norm) and a weighted penalty term F(v). The development of efficient schemes for the computation of such minimizers is a field of ongoing research and a central task in this thesis. Most computation schemes for v are based on some generalized gradient descent approach. For problems with weighted lp-norm penalty terms this typically leads to iterated soft shrinkage methods. Without additional assumptions, the convergence of these iterations is only guaranteed for subsequences, and even then only to stationary points. In general, stationary points of the minimization problem do not have any regularization properties. Moreover, the basic iterated soft shrinkage algorithm is known to converge very poorly in practice. This is critical, as each iteration step includes the application of the nonlinear operator K and the adjoint of its derivative, which may in itself already be numerically demanding. This thesis is concerned with the development of strategies for the fast computation of the solution of inverse problems with provable convergence rates. In particular, the application and generalization of efficient numerical schemes for the treatment of the arising nonlinear operator equations is considered. The first result of this thesis is a general acceleration strategy for the iterated soft thresholding iteration, based on a decreasing strategy for the weights of the penalty term. The new method converges with a linear rate to a global minimizer. A very important class of inverse problems are parameter identification problems for partial differential equations. As a prototype for this class of problems, the identification of parameters in a specific parabolic partial differential equation is investigated. The arising operators are analyzed, the applicability of Tikhonov regularization is proven, and the parameters in a simplified test equation are reconstructed. The parabolic differential equations are solved by means of the so-called horizontal method of lines, also known as Rothe's method. Here the parabolic problem is interpreted as an abstract Cauchy problem. It is discretized in time by means of an implicit scheme, combined with a discretization of the resulting system of spatial problems. This thesis investigates the application of adaptive discretization schemes to solve the spatial subproblems. Such methods realize highly nonuniform discretizations and therefore tend to require far fewer degrees of freedom than classical discretization schemes. To ensure the convergence of the resulting inexact Rothe method, a rigorous convergence proof is given. In particular, the application of implementable, asymptotically optimal adaptive methods based on wavelet bases is considered. An upper bound is derived for the degrees of freedom of the overall scheme that are needed to adaptively approximate the solution up to a prescribed tolerance.
As an important case study, the complexity of the approximate solution of the heat equation is investigated. To this end, a regularity result for the spatial equations that arise in the Rothe method is proven. The rate of convergence of asymptotically optimal adaptive methods deteriorates with the spatial dimension of the problem; this is often called the curse of dimensionality. One way to avoid this problem is to consider tensor wavelet discretizations, which lead to dimension-independent convergence rates. However, the classical tensor wavelet construction is limited to domains with simple product geometry. Therefore, in this thesis, a generalized tensor wavelet basis is constructed. It spans a range of Sobolev spaces over a domain with a fairly general geometry. The construction is based on the application of extension operators to appropriate local bases on subdomains that form a non-overlapping domain decomposition. The best m-term approximation of functions with the new generalized tensor product basis converges with a rate that is independent of the spatial dimension of the domain. For two- and three-dimensional polytopes it is shown that the solution of Poisson-type problems satisfies the required regularity condition. Numerical tests show that the dimension-independent rate is indeed realized in practice.
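    The decreasing-weight acceleration can be sketched on a linear toy problem. The snippet below runs the iterated soft shrinkage step under a geometrically decreasing weight schedule alpha -> alpha_target; the schedule parameters q and inner are hypothetical choices, and the thesis's actual scheme (with its linear-rate guarantee, and with nonlinear K) is more refined.

```python
import numpy as np

def soft(x, t):
    # Elementwise soft shrinkage, the proximal map of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def shrinkage_decreasing(K, y, alpha_target, q=0.8, inner=25):
    # Iterated soft shrinkage for min 0.5*||K v - y||^2 + alpha*||v||_1,
    # with a geometrically decreasing weight schedule alpha -> alpha_target.
    tau = 1.0 / np.linalg.norm(K, 2) ** 2        # step size <= 1/||K||^2
    v = np.zeros(K.shape[1])
    alpha = 10.0 * alpha_target                  # start with strong shrinkage
    while alpha >= alpha_target:
        for _ in range(inner):
            v = soft(v - tau * K.T @ (K @ v - y), tau * alpha)
        if alpha == alpha_target:
            break
        alpha = max(q * alpha, alpha_target)
    return v

rng = np.random.default_rng(0)
K = rng.standard_normal((60, 100))               # linear stand-in for K(u)
v_true = np.zeros(100)
v_true[rng.choice(100, 5, replace=False)] = 1.0
y = K @ v_true + 0.01 * rng.standard_normal(60)
v = shrinkage_decreasing(K, y, alpha_target=0.05)
print(np.nonzero(np.abs(v) > 1e-8)[0])           # roughly the true support
```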

    The Sixth Copper Mountain Conference on Multigrid Methods, part 1

    Get PDF
    The Sixth Copper Mountain Conference on Multigrid Methods was held on 4-9 Apr. 1993, at Copper Mountain, CO. This book is a collection of many of the papers presented at the conference and as such represents the conference proceedings. NASA LaRC graciously provided printing of this document so that all of the papers could be presented in a single forum. Each paper was reviewed by a member of the conference organizing committee under the coordination of the editors. The multigrid discipline continues to expand and mature, as is evident from these proceedings. The vibrancy of this field is amply expressed in these important papers, and the collection clearly shows its rapid trend toward further diversity and depth.

    Optimization of Stepped Plates in the Case of Smooth Yield Surfaces (Astmeliste plaatide optimiseerimine siledate voolavuspindade korral)

    Get PDF
    This dissertation addresses questions related to the optimization of elastic-plastic stepped plates made of von Mises, Hill, and Tsai-Wu materials. It is based on seven scientific publications by the author, six of which appeared during the last three years, and consists of four chapters, a bibliography, and the author's curriculum vitae. The first chapter is essentially a review of the application of numerical methods to the optimization of structural elements: it surveys works devoted to the optimization of plates and shells and outlines the historical development of the finite element method and of parallel computing. Within this study, the finite element method and the Haar wavelet method are used to solve ordinary and partial differential equations, and principles of high-performance and parallel computing are applied. The second chapter considers the bending of a symmetric elastic-plastic sandwich-type circular plate under a uniformly distributed load and seeks the minimum-weight design for a prescribed maximum deflection; the plate material is assumed to obey the von Mises yield condition, and the finite element method is used to find the optimal solution. The third chapter studies the problems posed in the previous chapter for symmetric elastic-plastic stepped annular plates; both the finite element method and the Haar wavelet method are used to find the optimal solution, the latter also for solving the ordinary differential equations. The fourth chapter investigates the bending of anisotropic annular plates and derives minimum-weight designs for the Hill and Tsai-Wu yield conditions, with the Haar wavelet method used in the computations. The dissertation develops a parallel-computing methodology that makes it possible to solve optimization problems for elastic-plastic plates numerically. The solutions obtained are compared with the results of Ohashi and Murakami, Turvey, and Upadrasta, and are in good agreement with the work of other authors. The study also showed that for the optimization problems it is preferable to use the wavelet method, whose parallelization saves more computational resources.
The current work is devoted to the theory of analysis and optimization of stepped circular and annular plates subject to smooth yield surfaces. Chapter 1 provides a brief historical review of the problem and of the finite element method; the basic ideas of parallel computation and of the multigrid method are presented here as well. In Chapter 2, a method for the numerical investigation of axisymmetric plates subjected to distributed transverse pressure loading is presented. The material of the plates studied herein is assumed to be an ideal elastic-plastic material obeying the non-linear yield condition of von Mises and the associated flow law; strain hardening as well as geometrical non-linearity are neglected in the present investigation. Calculations showed that the obtained results are in good agreement with those obtained by ABAQUS when solving the direct problem of determining the stress-strain state of the plate. In Chapter 3, an analytical-numerical study of annular plates operating in the range of elastic-plastic deformations is undertaken. The material of the plates is assumed to be an ideal elastic-plastic material obeying the von Mises yield condition. The author succeeded in the analytical derivation of optimality conditions for this highly non-linear problem. The obtained systems of equations were solved by existing computer codes. In Chapter 4, the methods of analysis and optimization of plates with piecewise constant thickness, developed earlier for homogeneous isotropic materials, are extended to plates made of anisotropic materials. Plastic yielding of the material is assumed to take place according to the Tsai-Wu criterion and the associated flow law. The traditional bending theory is used; non-linear effects are neglected in the current study.
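    For concreteness, the two smooth yield conditions named above can be evaluated as follows. This is a minimal sketch with illustrative stress and strength values; the Tsai-Wu interaction coefficient F12 uses a common default choice, not necessarily the one adopted in the thesis.

```python
import numpy as np

def von_mises(sr, st, s0):
    # Plane-stress von Mises yield function for an axisymmetric plate
    # (radial stress sr, circumferential stress st, yield stress s0).
    # Negative values: elastic; zero: plastic yielding.
    return np.sqrt(sr**2 - sr * st + st**2) - s0

def tsai_wu(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    # Plane-stress Tsai-Wu criterion in the principal material axes.
    # Xt/Xc, Yt/Yc: tensile/compressive strengths (positive magnitudes),
    # S: in-plane shear strength.  Values below 1 indicate no yielding.
    F1, F2 = 1.0 / Xt - 1.0 / Xc, 1.0 / Yt - 1.0 / Yc
    F11, F22, F66 = 1.0 / (Xt * Xc), 1.0 / (Yt * Yc), 1.0 / S**2
    F12 = -0.5 * np.sqrt(F11 * F22)   # common default interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1**2 + F22 * s2**2
            + 2.0 * F12 * s1 * s2 + F66 * t12**2)

print(von_mises(100.0, 50.0, 200.0))                  # < 0: elastic
print(tsai_wu(400.0, 20.0, 30.0, 1500.0, 1200.0, 50.0, 200.0, 70.0))
```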

    Fast methods for extraction and sparsification of substrate coupling

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 107-111).
Substrate coupling effects have had an increasing impact on circuit performance in recent years. As a result, there is strong demand for substrate simulation tools. Past work has concentrated on fast substrate solvers that are applied once per contact to get the dense conductance matrix G. We develop a method of using any underlying substrate solver a near-constant number of times to obtain a sparse approximate representation G ≈ Q Gwt Q' in a new basis. This method differs from previous matrix sparsification techniques in that it requires only a "black box" which can apply G quickly; it does not need an analytical representation of the underlying kernel or access to individual entries of G. The change-of-basis matrix Q is also sparse. For our largest example, with 10240 contacts, we obtained a Gwt with 130 times fewer nonzeros than the dense G (and Q more than twice as sparse as Gwt), with 20 times fewer solves than the naive method, and fewer than 4 percent of the Q Gwt Q' entries had relative error of more than 10% compared to the exact G.
by Joseph Daniel Kanapka, Ph.D.
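    The change-of-basis idea can be prototyped directly. The sketch below is a naive, hypothetical version: it builds an orthonormal Haar wavelet basis Q, probes a "black box" that can only multiply by G (here a smooth surrogate kernel standing in for the conductance matrix), forms Gwt = Q'GQ column by column, and drops small entries. This naive variant spends one black-box application per basis vector; the thesis's point is achieving a comparable sparse representation with a near-constant number of solves.

```python
import numpy as np

def haar_basis(n):
    # Orthonormal Haar wavelet matrix (rows are basis vectors); n a power of 2.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.vstack([np.kron(H, [1.0, 1.0]),
                       np.kron(np.eye(H.shape[0]), [1.0, -1.0])])
    return H / np.linalg.norm(H, axis=1, keepdims=True)

n = 256
# Smooth surrogate for the dense conductance matrix (hypothetical stand-in).
t = np.linspace(0.0, 1.0, n)
G = 1.0 / (1.0 + 50.0 * np.abs(t[:, None] - t[None, :]))

def apply_G(v):
    # The "black box": the only access to G is through matrix-vector products.
    return G @ v

Q = haar_basis(n).T                       # columns: orthonormal wavelet basis
Gwt = np.column_stack([Q.T @ apply_G(Q[:, j]) for j in range(n)])
Gwt[np.abs(Gwt) < 1e-3 * np.abs(Gwt).max()] = 0.0        # drop small entries
err = np.linalg.norm(Q @ Gwt @ Q.T - G) / np.linalg.norm(G)
print(f"nnz fraction: {np.count_nonzero(Gwt) / n**2:.3f}, rel. error: {err:.1e}")
```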

    ISCR Annual Report: Fiscal Year 2004

    Full text link

    Inference, Computation, and Games

    Get PDF
    In this thesis, we use statistical inference and competitive games to design algorithms for computational mathematics. In the first part, comprising chapters two through six, we use ideas from Gaussian process statistics to obtain fast solvers for differential and integral equations. We begin by observing the equivalence of conditional (near-)independence of Gaussian processes and the (near-)sparsity of the Cholesky factors of their precision and covariance matrices. This implies the existence of a large class of dense matrices with almost sparse Cholesky factors, thereby greatly increasing the scope of application of sparse Cholesky factorization. Using an elimination ordering and sparsity pattern motivated by the screening effect in spatial statistics, we can compute approximate Cholesky factors of the covariance matrices of Gaussian processes admitting a screening effect in near-linear computational complexity. These include many popular smoothness priors such as the Matérn class of covariance functions. In the special case of Green's matrices of elliptic boundary value problems (with possibly unknown elliptic operators of arbitrarily high order, with possibly rough coefficients), we can use tools from numerical homogenization to prove the exponential accuracy of our method. This result improves the state-of-the-art for solving general elliptic integral equations and provides the first proof of an exponential screening effect. We also derive a fast solver for elliptic partial differential equations, with accuracy-vs-complexity guarantees that improve upon the state-of-the-art. Furthermore, the resulting solver is performant in practice, frequently beating established algebraic multigrid libraries such as AMGCL and Trilinos on a series of challenging problems in two and three dimensions. Finally, for any given covariance matrix, we obtain a closed-form expression for its optimal (in terms of Kullback-Leibler divergence) approximate inverse-Cholesky factorization subject to a sparsity constraint, recovering Vecchia approximation and factorized sparse approximate inverses. Our method is highly robust, embarrassingly parallel, and further improves our asymptotic results on the solution of elliptic integral equations. We also provide a way to apply our techniques to sums of independent Gaussian processes, resolving a major limitation of existing methods based on the screening effect. As a result, we obtain fast algorithms for large-scale Gaussian process regression problems with possibly noisy measurements. In the second part of this thesis, comprising chapters seven through nine, we study continuous optimization through the lens of competitive games. In particular, we consider competitive optimization, where multiple agents attempt to minimize conflicting objectives. In the single-agent case, the updates of gradient descent are minimizers of quadratically regularized linearizations of the loss function. We propose to generalize this idea by using the Nash equilibria of quadratically regularized linearizations of the competitive game as updates (linearize the game). We provide fundamental reasons why the natural notion of linearization for competitive optimization problems is given by the multilinear (as opposed to linear) approximation of the agents' loss functions. The resulting algorithm, which we call competitive gradient descent, thus provides a natural generalization of gradient descent to competitive optimization. 
By using ideas from information geometry, we extend CGD to competitive mirror descent (CMD), which can be applied to a vast range of constrained competitive optimization problems. CGD and CMD resolve the cycling problem of simultaneous gradient descent and show promising results on problems arising in constrained optimization, robust control theory, and generative adversarial networks. Finally, we point out the GAN-dilemma that refutes the common interpretation of GANs as approximate minimizers of a divergence obtained in the limit of a fully trained discriminator. Instead, we argue that GAN performance relies on the implicit competitive regularization (ICR) due to the simultaneous optimization of generator and discriminator and support this hypothesis with results on low-dimensional model problems and GANs on CIFAR10.
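    The contrast between simultaneous gradient descent and CGD is easy to reproduce on the scalar zero-sum bilinear game f(x, y) = xy, g = -f. The sketch below specializes the CGD update, which in general inverts operators built from the mixed Hessian blocks, to this scalar case; the step size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

eta = 0.2   # step size (arbitrary choice)

# Zero-sum bilinear game: player x minimizes f(x, y) = x*y,
# player y minimizes g(x, y) = -x*y.  The unique equilibrium is (0, 0).

def cgd_step(x, y):
    # Competitive gradient descent, specialized to scalars.
    # Mixed second derivatives: D_xy f = 1, D_yx g = -1.
    Dxyf, Dyxg = 1.0, -1.0
    gx, gy = y, -x                                   # grad_x f, grad_y g
    denom = 1.0 - eta**2 * Dxyf * Dyxg               # = 1 + eta^2 here
    dx = -eta * (gx - eta * Dxyf * gy) / denom
    dy = -eta * (gy - eta * Dyxg * gx) / denom
    return x + dx, y + dy

def simgd_step(x, y):
    # Simultaneous gradient descent: each player ignores the other's move.
    return x - eta * y, y + eta * x

xc, yc = 1.0, 1.0
xs, ys = 1.0, 1.0
for _ in range(500):
    xc, yc = cgd_step(xc, yc)
    xs, ys = simgd_step(xs, ys)
print(f"CGD:   |(x, y)| = {np.hypot(xc, yc):.1e}")   # contracts toward 0
print(f"SimGD: |(x, y)| = {np.hypot(xs, ys):.1e}")   # spirals outward
```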

    A robust multigrid approach for variational image registration models

    Get PDF
    Variational registration models are non-rigid and deformable imaging techniques for the accurate registration of two images. As with other models for inverse problems using Tikhonov regularization, they must have a suitably chosen regularization term as well as a data fitting term. One distinct feature of registration models is that their fitting term is always highly nonlinear, and this nonlinearity restricts the class of numerical methods that are applicable. This paper first reviews the current state-of-the-art numerical methods for such models and observes that the nonlinear fitting term is mostly 'avoided' in developing fast multigrid methods. It then proposes a unified approach for designing fixed-point-type smoothers for multigrid methods. The diffusion registration model (second-order equations) and a curvature model (fourth-order equations) are used to illustrate our robust methodology. Analysis of the proposed smoothers and comparisons to other methods are given. As expected of a multigrid method, the proposed numerical approach is many orders of magnitude faster than the unilevel gradient descent approach and delivers fast and accurate results for a range of synthetic and real test images.
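    The registration-specific smoothers are the paper's contribution and are not reproduced here. The sketch below only illustrates the generic multigrid structure they plug into (pre-smoothing, coarse-grid correction by recursion, post-smoothing), on the 1D Poisson model problem -u'' = f, with weighted Jacobi smoothing, full-weighting restriction, and linear-interpolation prolongation.

```python
import numpy as np

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    # Weighted-Jacobi relaxation for -u'' = f with zero boundary values.
    for _ in range(sweeps):
        u[1:-1] = ((1.0 - omega) * u[1:-1]
                   + omega * 0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]))
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def restrict(r):
    # Full-weighting restriction to the coarse grid.
    rc = r[::2].copy()
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def v_cycle(u, f, h, sweeps=3):
    # Pre-smooth, recurse on the coarse-grid residual equation, post-smooth.
    n = len(u) - 1
    u = smooth(u, f, h, sweeps)
    if n >= 4:
        rc = restrict(residual(u, f, h))
        ec = v_cycle(np.zeros(n // 2 + 1), rc, 2.0 * h, sweeps)
        e = np.zeros_like(u)          # linear-interpolation prolongation
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        u += e
    return smooth(u, f, h, sweeps)

n = 128
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)      # exact solution: u(x) = sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
print(f"max error: {np.abs(u - np.sin(np.pi * x)).max():.1e}")
```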