176 research outputs found

    Solving a variational image restoration model which involves L∞ constraints

    In this paper, we seek a solution to linear inverse problems arising in image restoration in terms of a recently proposed optimization problem that combines total variation minimization and wavelet-thresholding ideas. The resulting nonlinear programming task is solved via a dual Uzawa method in its general form, leading to an efficient and general algorithm that allows for very good structure-preserving reconstructions. Along with a theoretical study of the algorithm, the paper details some aspects of the implementation, discusses numerical convergence, and presents a few images obtained for some difficult restoration tasks.
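
    As a rough illustration of the dual Uzawa idea underlying the paper's algorithm, the sketch below applies the classical Uzawa iteration to a generic equality-constrained quadratic program; the matrices, the step size omega, and the tolerance are illustrative assumptions, and the paper's actual TV/wavelet model with L∞ constraints is not reproduced here.

```python
import numpy as np

def uzawa(A, B, b, c, omega=0.5, tol=1e-10, max_iter=1000):
    """Classical Uzawa iteration for the saddle-point system
        [A  B^T] [x  ]   [b]
        [B   0 ] [lam] = [c]
    arising from  min_x 0.5*x^T A x - b^T x  s.t.  B x = c
    (a generic sketch, not the paper's L-infinity-constrained model)."""
    lam = np.zeros(B.shape[0])
    for _ in range(max_iter):
        # Primal step: minimize the Lagrangian in x for the current multiplier.
        x = np.linalg.solve(A, b - B.T @ lam)
        # Dual ascent step on the constraint residual B x - c.
        r = B @ x - c
        lam = lam + omega * r
        if np.linalg.norm(r) < tol:
            break
    return x, lam

# Illustrative data (assumed): a random SPD block and one linear constraint.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)
B = rng.standard_normal((1, 4))
x, lam = uzawa(A, B, rng.standard_normal(4), np.array([1.0]))
print(B @ x)   # should be close to [1.0]
```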

    Limiting accuracy of segregated solution methods for nonsymmetric saddle point problems

    Nonsymmetric saddle point problems arise in a wide variety of applications in computational science and engineering. The aim of this paper is to discuss the numerical behavior of several nonsymmetric iterative methods applied to saddle point systems via the Schur complement reduction or the null-space projection approach. Krylov subspace methods often produce iterates that fluctuate rather strongly. Here we address the question of whether large intermediate approximate solutions reduce the final accuracy of these two-level (inner–outer) iteration algorithms. We extend our previous analysis, obtained for symmetric saddle point problems, and distinguish between three mathematically equivalent back-substitution schemes which lead to different numerical behavior when applied in finite precision arithmetic. The theoretical results are then illustrated on a simple model example.
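
    For readers unfamiliar with the Schur complement reduction the abstract refers to, here is a minimal dense sketch of the segregated approach; direct solves stand in for the inner–outer Krylov iterations the paper analyzes, and the three back-substitution variants and their finite-precision behavior are not modeled. All problem data below are assumed for illustration.

```python
import numpy as np

def schur_complement_solve(A, B, f, g):
    """Segregated solution of the saddle-point system
        [A  B^T] [x]   [f]
        [B   0 ] [y] = [g]
    via Schur complement reduction (dense direct solves used as a
    stand-in for the paper's inner-outer Krylov iterations)."""
    Ainv_f = np.linalg.solve(A, f)
    Ainv_Bt = np.linalg.solve(A, B.T)
    S = B @ Ainv_Bt                        # Schur complement B A^{-1} B^T
    y = np.linalg.solve(S, B @ Ainv_f - g)
    # One of several mathematically equivalent back-substitution schemes:
    # recover x from the first block row, x = A^{-1} (f - B^T y).
    x = np.linalg.solve(A, f - B.T @ y)
    return x, y

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)   # nonsymmetric, well conditioned
B = rng.standard_normal((2, 6))
x, y = schur_complement_solve(A, B, rng.standard_normal(6), rng.standard_normal(2))
```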

    An alternating positive semidefinite splitting preconditioner for the three-by-three block saddle point problems

    Using the idea of the dimensional splitting method, we present an iteration method for solving three-by-three block saddle point problems, which arise in linear programming and in finite element discretizations of the Maxwell equations. We prove that the method is unconditionally convergent. The induced preconditioner is then used to accelerate the convergence of the GMRES method for solving the system. Numerical results are presented to compare the performance of the method with some existing ones.
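
    To make the problem class concrete, the sketch below assembles a small three-by-three block saddle-point system and accelerates GMRES with a simple shift-splitting-style preconditioner P = K + alpha*I. This is a generic stand-in under assumed data, not the alternating positive semidefinite splitting preconditioner proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Assemble a small three-by-three block saddle-point system
#     [A  B^T  0  ] [x]   [f1]
#     [B  0    C^T] [y] = [f2]
#     [0  C    0  ] [z]   [f3]
# with A symmetric positive definite and B, C of full row rank.
rng = np.random.default_rng(1)
n, m, p = 30, 15, 8
A = sp.diags(rng.uniform(1.0, 2.0, n))
B = sp.csr_matrix(rng.standard_normal((m, n)))
C = sp.csr_matrix(rng.standard_normal((p, m)))
K = sp.bmat([[A, B.T, None],
             [B, None, C.T],
             [None, C, None]], format="csc")
rhs = rng.standard_normal(n + m + p)

# Shift-splitting-style preconditioner P = K + alpha*I (a generic stand-in,
# NOT the paper's alternating positive semidefinite splitting preconditioner).
alpha = 1.0
P = spla.splu(K + alpha * sp.identity(n + m + p, format="csc"))
M = spla.LinearOperator(K.shape, P.solve)

x, info = spla.gmres(K, rhs, M=M, restart=K.shape[0])
print(info, np.linalg.norm(K @ x - rhs))   # info == 0 on success
```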

    An Augmented Lagrangian Method for TVg + L1-norm Minimization

    In this paper, the minimization of a weighted total variation regularization term with the L1 norm as the data fidelity term is addressed using Uzawa block relaxation methods. The unconstrained minimization problem is transformed into a saddle-point problem by introducing a suitable auxiliary unknown. Applying a Uzawa block relaxation method to the corresponding augmented Lagrangian functional, we obtain a new numerical algorithm in which the main unknown is computed using Chambolle's projection algorithm, while the auxiliary unknown is computed explicitly. Numerical experiments demonstrate the applicability of our algorithm to salt-and-pepper noise removal and shape retrieval, as well as its robustness with respect to the choice of the penalty parameter. This last property allows us to attain convergence in a reduced number of iterations, leading to efficient numerical schemes. Moreover, we highlight the fact that an appropriate weighted total variation term, chosen according to the properties of the initial image, may provide not only a significant improvement of the results but also a geometric filtering of the image components.
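
    As a rough sketch of the kind of scheme the abstract describes, the code below applies one possible Uzawa block relaxation to the augmented Lagrangian of min_u TV(u) + lam*||u - f||_1 with the auxiliary unknown v = u - f: the u-step is an ROF problem solved by Chambolle's projection algorithm, and the v-step is explicit soft-thresholding. This is a hedged reconstruction of the general approach with assumed parameter values, not the authors' exact algorithm.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    d = np.zeros_like(px)
    d[0, :] = px[0, :]; d[1:-1, :] = px[1:-1, :] - px[:-2, :]; d[-1, :] = -px[-2, :]
    d[:, 0] += py[:, 0]; d[:, 1:-1] += py[:, 1:-1] - py[:, :-2]; d[:, -1] -= py[:, -2]
    return d

def chambolle_rof(g, theta, tau=0.125, n_iter=50):
    # Chambolle's projection algorithm for min_u TV(u) + ||u - g||^2 / (2*theta).
    px, py = np.zeros_like(g), np.zeros_like(g)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - g / theta)
        denom = 1.0 + tau * np.sqrt(gx**2 + gy**2)
        px, py = (px + tau * gx) / denom, (py + tau * gy) / denom
    return g - theta * div(px, py)

def tv_l1_ubr(f, lam=1.0, r=1.0, n_outer=30):
    # Uzawa block relaxation sketch for min_u TV(u) + lam*||u - f||_1
    # with auxiliary unknown v = u - f and multiplier mu (assumed scheme).
    u, v, mu = f.copy(), np.zeros_like(f), np.zeros_like(f)
    for _ in range(n_outer):
        # u-step: an ROF problem, solved by Chambolle's projection algorithm.
        u = chambolle_rof(f + v - mu / r, theta=1.0 / r)
        # v-step: explicit soft-thresholding with threshold lam / r.
        w = u - f + mu / r
        v = np.sign(w) * np.maximum(np.abs(w) - lam / r, 0.0)
        # Multiplier (dual ascent) update on the constraint u - f - v = 0.
        mu = mu + r * (u - f - v)
    return u

# Tiny demo: impulsive noise on a synthetic square image.
f = np.zeros((32, 32)); f[8:24, 8:24] = 1.0
f[::7, ::5] = 1.0 - f[::7, ::5]
u = tv_l1_ubr(f, lam=1.5, r=1.0)
```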

    Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization

    In this work, we revisit a classical incremental implementation of the primal-descent dual-ascent gradient method used for the solution of equality-constrained optimization problems. We provide a short proof that establishes the linear (exponential) convergence of the algorithm for smooth, strongly convex cost functions and study its relation to the non-incremental implementation. We also study the effect of the augmented Lagrangian penalty term on the performance of distributed optimization algorithms for the minimization of aggregate cost functions over multi-agent networks.
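
    A minimal sketch of the (non-incremental) primal-descent dual-ascent iteration on the augmented Lagrangian of an equality-constrained, strongly convex quadratic follows. The problem data, penalty rho, and step sizes are illustrative assumptions, untuned for the convergence rates studied in the paper.

```python
import numpy as np

# Primal-descent dual-ascent gradient method on the augmented Lagrangian
#   L_rho(x, y) = f(x) + y^T (A x - b) + (rho/2) ||A x - b||^2
# for  min_x f(x) = 0.5*||x - c||^2  s.t.  A x = b  (assumed toy problem).
rng = np.random.default_rng(0)
n, m = 10, 4
A = rng.standard_normal((m, n)) / np.sqrt(n)
b = rng.standard_normal(m)
c = rng.standard_normal(n)
rho, eta_x, eta_y = 1.0, 0.1, 0.1          # untuned, illustrative values

x, y = np.zeros(n), np.zeros(m)
for _ in range(5000):
    r = A @ x - b
    # Primal descent step on L_rho with respect to x.
    x -= eta_x * ((x - c) + A.T @ (y + rho * r))
    # Dual ascent step on the multiplier y.
    y += eta_y * r
print(np.linalg.norm(A @ x - b))   # constraint residual, should be near zero
```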