
    Peaceman-Rachford splitting for a class of nonconvex optimization problems

    We study the applicability of the Peaceman-Rachford (PR) splitting method for solving nonconvex optimization problems. When the method is applied to minimizing the sum of a strongly convex Lipschitz differentiable function and a proper closed function, we show that if the strongly convex function has a sufficiently large strong convexity modulus and the step-size parameter is chosen below a computable threshold, then any cluster point of the generated sequence, if one exists, is a stationary point of the optimization problem. We also give sufficient conditions guaranteeing boundedness of the generated sequence. We then discuss one way to split the objective so that the proposed method can be applied to optimization problems with a coercive objective that is the sum of a (not necessarily strongly) convex Lipschitz differentiable function and a proper closed function; this setting covers a large class of nonconvex feasibility problems and constrained least squares problems. Finally, we illustrate the proposed algorithm numerically.
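
    The paper itself gives no code; the following is only a minimal sketch of the PR iteration z <- R_g(R_f(z)), with reflections R_h = 2*prox_h - I, in the setting the abstract describes. The concrete choices (a strongly convex least-squares f, a cardinality-constrained g whose prox is hard thresholding, the step size gamma, and the iteration count) are illustrative assumptions, not the paper's experiments or its computable step-size threshold.

```python
# Hedged sketch of Peaceman-Rachford splitting for min f(x) + g(x),
# assuming f is strongly convex and Lipschitz differentiable and g is
# proper and closed with an easy proximal map.
import numpy as np

def prox_f(z, gamma, A, b):
    # prox of f(x) = 0.5*||Ax - b||^2 (A with full column rank, so f is
    # strongly convex): argmin_x f(x) + ||x - z||^2 / (2*gamma)
    n = A.shape[1]
    return np.linalg.solve(gamma * A.T @ A + np.eye(n), gamma * A.T @ b + z)

def prox_g(z, s):
    # prox (= projection) for g = indicator of {x : ||x||_0 <= s}:
    # keep the s largest entries in magnitude (hard thresholding)
    x = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    x[idx] = z[idx]
    return x

def peaceman_rachford(A, b, s, gamma, z0, iters=500):
    # PR iteration: z <- R_g(R_f(z)); y is the candidate (sparse) point
    z = z0
    for _ in range(iters):
        x = prox_f(z, gamma, A, b)
        y = prox_g(2 * x - z, s)
        z = z + 2 * (y - x)        # equals R_g(R_f(z))
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
x_hat = peaceman_rachford(A, b, s=5, gamma=0.1, z0=np.zeros(20))
print(np.count_nonzero(x_hat), 0.5 * np.linalg.norm(A @ x_hat - b) ** 2)
```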

    Projection Methods: Swiss Army Knives for Solving Feasibility and Best Approximation Problems with Halfspaces

    We model a problem motivated by road design as a feasibility problem. Projections onto the constraint sets are obtained, and projection methods for solving the feasibility problem are studied. We present results of numerical experiments that demonstrate the efficacy of projection methods even for challenging nonconvex problems.
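
    As an illustration of the projection machinery for halfspace constraints that the title refers to, here is a minimal sketch of a cyclic projection method. The halfspace data and iteration count are made up for the example; the paper's road-design constraints and its specific projection methods are not reproduced.

```python
# Hedged sketch of cyclic projections for a feasibility problem whose
# constraint sets are halfspaces {x : a_i . x <= b_i}.
import numpy as np

def project_halfspace(x, a, b):
    # Projection onto {y : a.y <= b}: move along a only if violated
    viol = a @ x - b
    if viol <= 0:
        return x
    return x - (viol / (a @ a)) * a

def cyclic_projections(x0, halfspaces, iters=200):
    # Project onto each halfspace in turn; for consistent convex problems
    # the iterates approach a point in the intersection.
    x = x0
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, a, b)
    return x

halfspaces = [(np.array([1.0, 1.0]), 1.0),
              (np.array([-1.0, 2.0]), 2.0),
              (np.array([0.0, -1.0]), 0.0)]
x = cyclic_projections(np.array([5.0, 5.0]), halfspaces)
print(x, all(a @ x - b <= 1e-9 for a, b in halfspaces))
```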

    A convergent relaxation of the Douglas-Rachford algorithm

    This paper proposes an algorithm for solving structured optimization problems that covers both the backward-backward and the Douglas-Rachford (DR) algorithms as special cases, and analyzes its convergence. The set of fixed points of the algorithm is characterized in several cases, and convergence criteria are established in terms of general fixed point operators. When applied to nonconvex feasibility problems, including the inconsistent case, we prove local linear convergence results under mild assumptions on the regularity of the individual sets and of the collection of sets, which need not intersect. In this special case, we refine known linear convergence criteria for the DR algorithm. As a consequence, for feasibility problems in which one of the sets is affine, we establish criteria for linear and sublinear convergence of convex combinations of the alternating projection and DR methods. These results seem to be new. We also demonstrate the seemingly improved numerical performance of this algorithm compared to the RAAR algorithm for both consistent and inconsistent sparse feasibility problems.
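
    The abstract mentions convex combinations of the alternating-projection and DR operators for feasibility with one affine set; the sketch below illustrates one such combination on a toy sparse affine feasibility problem. The weight lam, the sparsity set, the problem data, and the shadow-point readout are illustrative assumptions, not the exact relaxation or the RAAR comparison analyzed in the paper.

```python
# Hedged sketch: z <- lam*T_DR(z) + (1-lam)*T_AP(z) for two-set feasibility,
# with one affine set {x : Ax = b} and one nonconvex sparsity set.
import numpy as np

def proj_affine(z, A, b):
    # Projection onto {x : A x = b} (A with full row rank)
    return z - A.T @ np.linalg.solve(A @ A.T, A @ z - b)

def proj_sparse(z, s):
    # Projection onto {x : ||x||_0 <= s}: keep the s largest magnitudes
    x = np.zeros_like(z)
    idx = np.argsort(np.abs(z))[-s:]
    x[idx] = z[idx]
    return x

def relaxed_dr(z0, A, b, s, lam=0.7, iters=500):
    # T_DR(z) = z + P_affine(2*P_sparse(z) - z) - P_sparse(z)   (Douglas-Rachford)
    # T_AP(z) = P_affine(P_sparse(z))                           (alternating projections)
    z = z0
    for _ in range(iters):
        ps = proj_sparse(z, s)
        t_dr = z + proj_affine(2 * ps - z, A, b) - ps
        t_ap = proj_affine(ps, A, b)
        z = lam * t_dr + (1 - lam) * t_ap
    return proj_sparse(z, s)   # sparse "shadow" of the final iterate

rng = np.random.default_rng(1)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[rng.choice(40, size=4, replace=False)] = rng.standard_normal(4)
b = A @ x_true
x_hat = relaxed_dr(np.zeros(40), A, b, s=4)
print(np.linalg.norm(A @ x_hat - b), np.count_nonzero(x_hat))
```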