64 research outputs found

    Speckle noise removal convex method using higher-order curvature variation

    Variational Domain Decomposition For Parallel Image Processing

    Many important techniques in image processing rely on partial differential equation (PDE) problems, which exhibit spatial couplings between the unknowns throughout the whole image plane. A straightforward spatial splitting into independent subproblems, solved in parallel to reduce the total computation time, therefore does not yield the solution of the original problem; typically, significant errors occur at the local boundaries between the subproblems. For that reason, most PDE-based image processing algorithms are not directly amenable to coarse-grained parallel computing, but only to fine-grained parallelism, e.g. on the level of the particular arithmetic operations involved in the specific solver. In contrast, Domain Decomposition (DD) methods provide several approaches to decompose PDE problems spatially so that the merged local solutions converge to the original, global one. Such methods fall into two main classes, overlapping and non-overlapping, referring to the overlap between the adjacent subdomains on which the local problems are defined. Furthermore, the classical DD methods, studied intensively over the past thirty years, are primarily applied to linear PDE problems, whereas some important current image processing approaches involve solving nonlinear problems, e.g. Total Variation (TV)-based approaches.

    Among the linear DD methods, non-overlapping methods are favored, since they generally require significantly fewer data exchanges between the processing nodes during the parallel computation and therefore reach a higher scalability. The theoretical and empirical focus of this work therefore lies primarily on non-overlapping methods, whereas for the overlapping methods we mainly confine ourselves to presenting the most important algorithms. For the linear non-overlapping DD methods, we first concentrate on the theoretical foundation, which then serves as the basis for gradually deriving the different algorithms. Although we make a connection between the very early methods on two subdomains and the current two-level methods on arbitrary numbers of subdomains, the experimental studies focus on two prototypical methods applied to the model problem of estimating the optic flow, where different numerical aspects, such as the influence of the number of subdomains on the convergence rate, are explored. In particular, we present results of experiments conducted on a PC-cluster (a distributed-memory parallel computer based on low-cost PC hardware with up to 144 processing nodes) which show a very good scalability of non-overlapping DD methods.

    With respect to nonlinear non-overlapping DD methods, we pursue two distinct approaches, both applied to nonlinear, PDE-based image denoising. The first approach draws upon the theory of optimal control and has been successfully employed for the domain decomposition of the Navier-Stokes equations. The second nonlinear DD approach relies on convex programming and on the decomposition of the corresponding minimization problems. Besides the main subject of parallelization by DD methods, we also investigate the linear model problem of motion estimation itself, proposing and empirically studying a new variational approach for the estimation of turbulent flows in the area of fluid mechanics.
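
    The following is a minimal sketch, not taken from the thesis, illustrating the core domain-decomposition idea with the classical overlapping alternating Schwarz iteration on a 1-D linear diffusion-denoising model problem (I - lam * Laplacian) u = f; the non-overlapping methods studied in the work replace the overlap by interface conditions and coarse-level corrections. The grid size, overlap width, and parameter lam below are arbitrary illustrative choices.

```python
import numpy as np

def solve_diffusion(f, lam, left_bc, right_bc):
    """Solve the tridiagonal system (I - lam * Laplacian) u = f on a 1-D
    segment with Dirichlet boundary values left_bc and right_bc."""
    n = f.size
    A = np.zeros((n, n))
    rhs = f.astype(float).copy()
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * lam
        if i > 0:
            A[i, i - 1] = -lam
        else:
            rhs[i] += lam * left_bc          # left neighbour is a boundary value
        if i < n - 1:
            A[i, i + 1] = -lam
        else:
            rhs[i] += lam * right_bc         # right neighbour is a boundary value
    return np.linalg.solve(A, rhs)

# Noisy 1-D signal; global problem: (I - lam * Laplacian) u = f in the interior,
# with u fixed to f at the two outer endpoints (Dirichlet boundary).
rng = np.random.default_rng(0)
n, lam = 200, 5.0
f = np.sin(np.linspace(0.0, 3.0 * np.pi, n)) + 0.2 * rng.standard_normal(n)

m1, m2 = 90, 110       # two overlapping subdomains: indices 1..m2-1 and m1..n-2
u = f.copy()           # initial global iterate

for sweep in range(20):  # alternating (multiplicative) Schwarz sweeps
    # Subdomain 1: its right boundary value is read from the current iterate.
    u[1:m2] = solve_diffusion(f[1:m2], lam, left_bc=u[0], right_bc=u[m2])
    # Subdomain 2: its left boundary value is read from the freshly updated u.
    u[m1:n - 1] = solve_diffusion(f[m1:n - 1], lam, left_bc=u[m1 - 1], right_bc=u[n - 1])
# u now approximates the solution of the coupled global system.
```

    Exchanging only the boundary values between the two local solves is what makes the scheme parallelizable; the non-overlapping methods emphasized in the thesis reduce this exchange further, which is where their higher scalability comes from.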

    An enhanced high-order variational model based on speckle noise removal with G⁰ distribution

    First order algorithms in variational image processing

    Variational methods in imaging are nowadays developing into a quite universal and flexible tool, allowing for highly successful approaches to tasks like denoising, deblurring, inpainting, segmentation, super-resolution, disparity, and optical flow estimation. The overall structure of such approaches is of the form $\mathcal{D}(Ku) + \alpha \mathcal{R}(u) \rightarrow \min_u$, where the functional $\mathcal{D}$ is a data fidelity term depending on some input data $f$ and measuring the deviation of $Ku$ from it, and $\mathcal{R}$ is a regularization functional. Moreover, $K$ is an (often linear) forward operator modeling the dependence of the data on an underlying image, and $\alpha$ is a positive regularization parameter. While $\mathcal{D}$ is often smooth and (strictly) convex, current practice almost exclusively uses nonsmooth regularization functionals. The majority of successful techniques use nonsmooth, convex functionals like the total variation and generalizations thereof, or $\ell_1$-norms of coefficients arising from scalar products with some frame system. The efficient solution of such variational problems in imaging demands appropriate algorithms. Taking into account the specific structure as a sum of two very different terms to be minimized, splitting algorithms are a quite canonical choice. Consequently, this field has revived the interest in techniques like operator splittings or augmented Lagrangians. Here we provide an overview of currently developed methods and recent results, as well as some computational studies comparing different methods and illustrating their success in applications. Comment: 60 pages, 33 figures
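
    As a concrete instance of such a splitting method, the following is a minimal sketch (not taken from the paper) of forward-backward splitting (ISTA) for the special case $\mathcal{D}(Ku) = \tfrac{1}{2}\|Ku - f\|^2$ and $\mathcal{R}(u) = \|u\|_1$: a gradient step on the smooth data term followed by the proximal map of the $\ell_1$ regularizer (soft-thresholding). The operator K, the data f, and the parameter values below are illustrative assumptions.

```python
import numpy as np

def ista(K, f, alpha, n_iter=200):
    """Forward-backward splitting (ISTA) for
    min_u  0.5 * ||K u - f||^2 + alpha * ||u||_1."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1 / Lipschitz constant of the gradient
    u = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ u - f)             # gradient of the data-fidelity term
        v = u - step * grad                  # forward (explicit gradient) step
        u = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0)  # backward (prox) step
    return u

# Tiny illustrative example: recover a sparse signal from noisy random measurements.
rng = np.random.default_rng(0)
K = rng.standard_normal((50, 100))
u_true = np.zeros(100)
u_true[[5, 40, 77]] = [1.0, -2.0, 0.5]       # sparse ground truth
f = K @ u_true + 0.01 * rng.standard_normal(50)
u_hat = ista(K, f, alpha=0.1)
```

    The same forward-backward structure underlies accelerated variants such as FISTA, and augmented-Lagrangian methods like ADMM handle the nonsmooth term through an analogous proximal step.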

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international - Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building within walking distance of both hotels and town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low dimensional subspaces; Beyond linear and convex inverse problem; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1