
    Constraint interface preconditioning for topology optimization problems

    The discretization of constrained nonlinear optimization problems arising in the field of topology optimization yields algebraic systems which are challenging to solve in practice, due to pathological ill-conditioning, strong nonlinearity, and size. In this work we propose a methodology which brings together existing fast algorithms, namely, interior-point methods for the optimization problem and a novel substructuring domain decomposition method for the ensuing large-scale linear systems. The main contribution is the choice of interface preconditioner, which allows for the acceleration of the domain decomposition method, leading to performance independent of problem size.

    Comment: To be published in SIAM J. Sci. Comput.
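The abstract's central claim is that a well-chosen preconditioner makes the iterative solver's performance independent of problem size. A minimal sketch of preconditioned conjugate gradients illustrates the mechanism; this uses a simple Jacobi (diagonal) preconditioner on a tiny SPD system, not the paper's interface preconditioner or substructuring method:

```python
# Minimal preconditioned conjugate gradient (PCG) sketch.
# Illustrative only: a Jacobi (diagonal) preconditioner on a tiny SPD
# system, not the substructuring interface preconditioner from the paper.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(A, v):
    return [dot(row, v) for row in A]

def pcg(A, b, M_inv, tol=1e-12, maxit=100):
    n = len(b)
    x = [0.0] * n
    r = list(b)                                  # residual r = b - A x, with x = 0
    z = [mi * ri for mi, ri in zip(M_inv, r)]    # preconditioned residual
    p = list(z)
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
M_inv = [1.0 / A[i][i] for i in range(2)]   # Jacobi preconditioner
x = pcg(A, b, M_inv)                        # exact solution is (1/11, 7/11)
```

The better the preconditioner approximates the inverse of the system (here, of the interface Schur complement in the paper's setting), the fewer Krylov iterations are needed, which is what drives the size-independence result.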

    A function space framework for structural total variation regularization with applications in inverse problems

    In this work, we introduce a function space setting for a wide class of structural/weighted total variation (TV) regularization methods motivated by their applications in inverse problems. In particular, we consider a regularizer that is the appropriate lower semi-continuous envelope (relaxation) of a suitable total variation type functional initially defined for sufficiently smooth functions. We study examples where this relaxation can be expressed explicitly, and we also provide refinements for weighted total variation for a wide range of weights. Since an integral characterization of the relaxation in function space is not, in general, available, we show that, for a rather general linear inverse problem setting, instead of the classical Tikhonov regularization problem, one can equivalently solve a saddle-point problem where no a priori knowledge of an explicit formulation of the structural TV functional is needed. In particular, motivated by concrete applications, we deduce corresponding results for linear inverse problems with norm and Poisson log-likelihood data discrepancy terms. Finally, we provide proof-of-concept numerical examples where we solve the saddle-point problem for weighted TV denoising as well as for MR-guided PET image reconstruction.
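The saddle-point approach the abstract describes can be illustrated on the simplest case it mentions, weighted TV denoising. The sketch below is a hypothetical first-order primal-dual iteration (Chambolle-Pock style) for a 1D discrete problem with forward differences; it is a toy stand-in, not the paper's function-space scheme:

```python
# Hypothetical 1D weighted-TV denoising via a primal-dual (Chambolle-Pock
# style) iteration for  min_u 0.5*||u - f||^2 + sum_i w_i |u[i+1] - u[i]|.
# A sketch of the saddle-point idea, not the paper's exact formulation.

def tv_denoise(f, w, tau=0.25, sigma=0.25, iters=500):
    n = len(f)
    u = list(f)
    ubar = list(f)
    p = [0.0] * (n - 1)              # dual variable, one per difference
    for _ in range(iters):
        # dual ascent, then projection onto the box [-w_i, w_i]
        for i in range(n - 1):
            p[i] += sigma * (ubar[i + 1] - ubar[i])
            p[i] = max(-w[i], min(w[i], p[i]))
        u_old = list(u)
        for j in range(n):
            # (D^T p)_j = p[j-1] - p[j], with zero boundary terms
            dtp = (p[j - 1] if j > 0 else 0.0) - (p[j] if j < n - 1 else 0.0)
            v = u[j] - tau * dtp
            u[j] = (v + tau * f[j]) / (1.0 + tau)   # prox of the data term
        ubar = [2.0 * uj - uo for uj, uo in zip(u, u_old)]
    return u

def weighted_tv(u, w):
    return sum(wi * abs(u[i + 1] - u[i]) for i, wi in enumerate(w))

f = [0.1, -0.05, 0.02, 1.1, 0.95, 1.0]   # noisy two-plateau signal
w = [0.1] * 5                             # illustrative constant weight
u = tv_denoise(f, w)                      # flattens each plateau
```

The dual update only needs a pointwise box projection determined by the weights, which mirrors the abstract's point that no explicit formula for the relaxed structural TV functional is required to run the saddle-point iteration.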

    Novel min-max reformulations of Linear Inverse Problems

    In this article, we delve into the class of so-called ill-posed Linear Inverse Problems (LIP), which refers to the task of recovering the entire signal from relatively few random linear measurements. Such problems arise in a variety of settings, with applications ranging from medical image processing to recommender systems. We propose a slightly generalized version of the error-constrained linear inverse problem and obtain a novel and equivalent convex-concave min-max reformulation by providing an exposition of its convex geometry. Saddle points of the min-max problem are completely characterized in terms of a solution to the LIP, and vice versa. Applying simple saddle-point-seeking ascent-descent type algorithms to solve the min-max problem provides novel and simple algorithms to find a solution to the LIP. Moreover, the reformulation of an LIP as the min-max problem provided in this article is crucial in developing methods to solve the dictionary learning problem with almost sure recovery constraints.
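The ascent-descent schemes the abstract refers to can be sketched on a toy problem. The example below runs simultaneous gradient descent-ascent on a small strongly convex-concave saddle function; the function and step size are illustrative choices, not the paper's LIP reformulation:

```python
# Toy gradient descent-ascent on the strongly convex-concave saddle
# L(x, y) = 0.5*x**2 + x*y - 0.5*y**2, whose unique saddle point is (0, 0).
# A hypothetical stand-in for the ascent-descent algorithms the abstract
# applies to its min-max reformulation of the LIP.

def descent_ascent(x, y, eta=0.1, iters=300):
    for _ in range(iters):
        gx = x + y                           # dL/dx
        gy = x - y                           # dL/dy
        x, y = x - eta * gx, y + eta * gy    # descend in x, ascend in y
    return x, y

x, y = descent_ascent(1.0, 1.0)   # converges to the saddle point (0, 0)
```

Because saddle points of the reformulation are in one-to-one correspondence with LIP solutions, iterates of such a scheme converging to the saddle point directly yield a solution of the original inverse problem.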

    Stochastic mirror descent dynamics and their convergence in monotone variational inequalities

    We examine a class of stochastic mirror descent dynamics in the context of monotone variational inequalities (including Nash equilibrium and saddle-point problems). The dynamics under study are formulated as a stochastic differential equation driven by a (single-valued) monotone operator and perturbed by a Brownian motion. The system's controllable parameters are two variable weight sequences that respectively pre- and post-multiply the driver of the process. By carefully tuning these parameters, we obtain global convergence in the ergodic sense, and we estimate the average rate of convergence of the process. We also establish a large deviations principle showing that individual trajectories exhibit exponential concentration around this average.

    Comment: 23 pages; updated proofs in Section 3 and Section
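The kind of dynamics the abstract studies can be pictured with a one-dimensional toy simulation: an SDE driven by a monotone operator and perturbed by Brownian motion, whose ergodic (time-averaged) trajectory concentrates near the solution of the variational inequality. The operator F(x) = x, step size, and noise level below are illustrative assumptions, not the paper's setup:

```python
import random

# Euler-Maruyama simulation of  dX = -F(X) dt + sigma dW  with the monotone
# operator F(x) = x, a hypothetical one-dimensional stand-in for the mirror
# descent dynamics in the abstract.  F(x) = 0 at x = 0, so the ergodic
# (time) average of the trajectory concentrates near 0.

def simulate(x0=2.0, dt=0.01, sigma=0.5, steps=20000, seed=0):
    rng = random.Random(seed)
    x = x0
    total = 0.0
    for _ in range(steps):
        x += -x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        total += x
    return total / steps             # ergodic average of the trajectory

avg = simulate()                     # close to 0 despite the noise
```

Individual trajectories keep fluctuating because of the Brownian term; it is the time average that settles down, which is the "convergence in the ergodic sense" the abstract establishes (with exponential concentration via a large deviations principle).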

    Non-local control in the conduction coefficients: well posedness and convergence to the local limit

    We consider a problem of optimal distribution of conductivities in a system governed by a non-local diffusion law. The problem stems from applications in optimal design and more specifically topology optimization. We propose a novel parametrization of non-local material properties. With this parametrization, the non-local diffusion law in the limit of vanishing non-local interaction horizons converges to the famous and ubiquitously used generalized Laplacian with SIMP (Solid Isotropic Material with Penalization) material model. The optimal control problem for the limiting local model is typically ill-posed and does not attain its infimum without additional regularization. Surprisingly, its non-local counterpart attains a global minimum in many practical situations, as we demonstrate in this work. In spite of this qualitatively different behaviour, we are able to partially characterize the relationship between the non-local and the local optimal control problems. We also complement our theoretical findings with numerical examples, which illustrate the viability of our approach to optimal design practitioners.
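The SIMP material model that appears in the local limit interpolates stiffness between void and solid as E(ρ) = E_min + ρ^p (E_0 − E_min), with the exponent p > 1 penalizing intermediate densities. A minimal sketch of this standard formula (the numerical values are illustrative, not from the paper):

```python
# Standard SIMP (Solid Isotropic Material with Penalization) interpolation:
# E(rho) = E_min + rho**p * (E0 - E_min), where rho in [0, 1] is the design
# density and p > 1 penalizes intermediate densities.  The default values
# below are common illustrative choices, not taken from the paper.

def simp(rho, E0=1.0, E_min=1e-9, p=3.0):
    return E_min + (rho ** p) * (E0 - E_min)

# Penalization makes intermediate densities structurally inefficient:
# simp(0.5) is well below half of simp(1.0) when p = 3, which pushes
# optimal designs toward crisp 0/1 (void/solid) layouts.
```

The small positive E_min keeps the governing equations well-posed where the density vanishes; it is this penalized local model that the non-local formulation recovers as the interaction horizon shrinks.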