    Optimization with Sparsity-Inducing Penalties

    Sparse estimation methods are aimed at using or obtaining parsimonious representations of data or models. They were first dedicated to linear variable selection, but numerous extensions have now emerged, such as structured sparsity or kernel selection. It turns out that many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted ℓ₂-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments to compare various algorithms from a computational point of view.
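
    To make the proximal approach concrete, here is a minimal sketch (ours, not the paper's) of the iterative soft-thresholding algorithm (ISTA) applied to ℓ₁-regularized least squares, the simplest sparsity-inducing penalty the survey covers; the helper names `soft_threshold` and `ista` are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1: entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_w 0.5*||Xw - y||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)       # gradient of the smooth part
        w = soft_threshold(w - grad / L, lam / L)  # proximal step
    return w

# Usage: recover a 5-sparse vector from 100 noisy linear measurements.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = ista(X, y, lam=0.1)
```

    Each iteration takes a gradient step on the smooth least-squares term and then applies the proximal operator of the ℓ₁ norm, which is exactly the soft-thresholding map that sets small coefficients to zero.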

    On the Singular Neumann Problem in Linear Elasticity

    The Neumann problem of linear elasticity is singular with a kernel formed by the rigid motions of the body. There are several tricks that are commonly used to obtain a non-singular linear system. However, they often cause reduced accuracy or lead to poor convergence of the iterative solvers. In this paper, different well-posed formulations of the problem are studied through discretization by the finite element method, and preconditioning strategies based on operator preconditioning are discussed. For each formulation we derive preconditioners that are independent of the discretization parameter. Preconditioners that are robust with respect to the first Lamé constant are constructed for the pure displacement formulations, while a preconditioner that is robust in both Lamé constants is constructed for the mixed formulation. It is shown that, for convergence in the first Sobolev norm, it is crucial to respect the orthogonality constraint derived from the continuous problem. Based on this observation a modification to the conjugate gradient method is proposed that achieves optimal error convergence of the computed solution.
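
    The modified conjugate gradient method is the key algorithmic point. As a hedged illustration of the general idea (a standard projection device, not necessarily the paper's exact scheme), the sketch below runs CG on a singular symmetric positive semidefinite system while projecting every residual onto the orthogonal complement of a known kernel basis `Z`, e.g. the rigid motions.

```python
import numpy as np

def projected_cg(A, b, Z, tol=1e-10, maxiter=1000):
    """Conjugate gradients for a singular SPSD system A x = b.

    Z has orthonormal columns spanning ker(A) (here, the rigid
    motions).  Projecting every residual onto ker(A)^perp enforces
    the orthogonality constraint from the continuous problem and
    keeps the iterates in range(A).
    """
    def proj(v):                       # orthogonal projector I - Z Z^T
        return v - Z @ (Z.T @ v)

    x = np.zeros_like(b)
    r = proj(b)                        # make the right-hand side compatible
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) < tol:
            break
        Ap = proj(A @ p)               # projection controls round-off drift
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    In exact arithmetic the iterates already stay in range(A) once the right-hand side is projected; reapplying the projection each iteration merely guards against floating-point drift back into the kernel.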

    Some Preconditioning Techniques for Saddle Point Problems

    Saddle point problems arise frequently in many applications in science and engineering, including constrained optimization, mixed finite element formulations of partial differential equations, circuit analysis, and so forth. Indeed the formulation of most problems with constraints gives rise to saddle point systems. This paper provides a concise overview of iterative approaches for the solution of such systems which are of particular importance in the context of large scale computation. In particular we describe some of the most useful preconditioning techniques for Krylov subspace solvers applied to saddle point problems, including block and constrained preconditioners. The work of Michele Benzi was supported in part by National Science Foundation grant DMS-0511336.
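
    As a concrete instance of the block-diagonal strategy, the following SciPy sketch (ours, with an artificial test matrix) preconditions MINRES with diag(A, S), where S = B A⁻¹ Bᵀ is the Schur complement. With the exact S the preconditioned matrix has at most three distinct eigenvalues, so MINRES converges in at most three iterations; practical preconditioners replace both blocks with cheap spectrally equivalent approximations.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Build a small artificial saddle point system K = [[A, B^T], [B, 0]].
n, m = 80, 20
rng = np.random.default_rng(1)
R = sp.random(n, n, density=0.1, random_state=1)
A = (R @ R.T + sp.identity(n)).tocsc()            # SPD (1,1) block
B = sp.hstack([sp.identity(m),                    # identity block ensures
               sp.random(m, n - m, density=0.2,   # full row rank
                         random_state=2)]).tocsc()
K = sp.bmat([[A, B.T], [B, None]]).tocsc()
b = rng.standard_normal(n + m)

# "Ideal" block-diagonal preconditioner diag(A, S) with the exact
# Schur complement S = B A^{-1} B^T (SPD since B has full row rank).
A_lu = spla.splu(A)
S = (B @ spla.spsolve(A, B.T.tocsc())).tocsc()
S_lu = spla.splu(S)

def apply_prec(v):
    """Apply diag(A, S)^{-1} blockwise."""
    return np.concatenate([A_lu.solve(v[:n]), S_lu.solve(v[n:])])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)
x, info = spla.minres(K, b, M=M)                  # info == 0 on convergence
```

    Note that MINRES requires a symmetric positive definite preconditioner, which diag(A, S) satisfies; constrained preconditioners, by contrast, keep the saddle point structure and are typically paired with a projected or nonsymmetric Krylov method.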