
    Consistent Dynamic Mode Decomposition

    We propose a new method for computing Dynamic Mode Decomposition (DMD) evolution matrices, which we use to analyze dynamical systems. Unlike the majority of existing methods, our approach is based on a variational formulation consisting of data alignment penalty terms and constitutive orthogonality constraints. Our method makes no assumptions on the structure or size of the data, and it is therefore applicable to a wide range of problems, including non-linear scenarios and extremely small observation sets. In addition, our technique is robust to noise that is independent of the dynamics, and it does not require the input data to be sequential. Our key idea is to introduce a regularization term for the forward and backward dynamics. The resulting minimization problem is solved efficiently using the Alternating Direction Method of Multipliers (ADMM), which requires two Sylvester equation solves per iteration. Our numerical scheme converges empirically and is similar to a provably convergent ADMM scheme. We compare our approach to various state-of-the-art methods on several benchmark dynamical systems.
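    As background for readers unfamiliar with DMD, the sketch below shows the standard exact-DMD estimate of the evolution matrix from snapshot pairs. It is only a baseline, not the paper's consistent variational formulation, which additionally couples the forward and backward operators and is solved with ADMM; the function name and toy data here are illustrative.

```python
# Minimal sketch of standard exact DMD, shown as background only; it is NOT the
# paper's consistent/variational method.
import numpy as np

def exact_dmd(X, Y, rank):
    """Estimate a linear operator A with Y ~ A X from paired snapshot matrices.

    X, Y : (n, m) arrays whose columns are paired snapshots x_k, x_{k+1}.
    rank : truncation rank for the SVD of X.
    Returns the DMD eigenvalues and (exact) DMD modes.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Reduced operator A_tilde = U^H Y V S^{-1}
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes Phi = Y V S^{-1} W
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy usage: snapshots of a known linear system x_{k+1} = A x_k
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])
X = rng.standard_normal((2, 50))
Y = A_true @ X
eigvals, modes = exact_dmd(X, Y, rank=2)
print(np.sort_complex(eigvals), np.sort_complex(np.linalg.eigvals(A_true)))
```

    As a practical note, Sylvester-structured subproblems of the kind mentioned in the abstract can be handled with an off-the-shelf routine such as scipy.linalg.solve_sylvester, which solves AX + XB = Q.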

    Exterior-point Optimization for Nonconvex Learning

    In this paper we present the nonconvex exterior-point optimization solver (NExOS) -- a novel first-order algorithm tailored to constrained nonconvex learning problems. We consider the problem of minimizing a convex function over nonconvex constraints, where the projection onto the constraint set is single-valued around local minima. A wide range of nonconvex learning problems have this structure, including (but not limited to) sparse and low-rank optimization problems. By exploiting the underlying geometry of the constraint set, NExOS finds a locally optimal point by solving a sequence of penalized problems with strictly decreasing penalty parameters. NExOS solves each penalized problem by applying a first-order algorithm, which converges linearly to a local minimum of the corresponding penalized formulation under regularity conditions. Furthermore, the local minima of the penalized problems converge to a local minimum of the original problem as the penalty parameter goes to zero. We implement NExOS in the open-source Julia package NExOS.jl, which has been extensively tested on many instances from a wide variety of learning problems. We demonstrate that our algorithm, despite being general purpose, outperforms specialized methods on several examples of well-known nonconvex learning problems involving sparse and low-rank optimization. For sparse regression problems, NExOS finds locally optimal solutions that dominate glmnet in terms of support recovery while achieving a training loss that is an order of magnitude smaller. For low-rank optimization with real-world data, NExOS recovers solutions with a 3-fold reduction in training loss and a proportion of explained variance that is 2 times better than the nuclear norm heuristic.
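    To make the penalty-continuation structure concrete, here is a simplified sketch for a sparsity-constrained least-squares problem. It illustrates the general idea only and is not the NExOS.jl implementation: the squared-distance exterior penalty, the plain gradient inner solver, and all names and parameter values below are assumptions, and the paper's penalized formulation and regularity conditions differ in the details.

```python
# Schematic penalty-continuation sketch for
#   minimize 0.5*||A x - b||^2  subject to  ||x||_0 <= k.
# Illustration only; NOT the NExOS algorithm or its exact penalized formulation.
import numpy as np

def project_sparse(x, k):
    """Projection onto {x : ||x||_0 <= k}: keep the k largest-magnitude entries."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def penalty_continuation(A, b, k, mus=(1.0, 0.1, 0.01, 0.001), inner_iters=500):
    """Solve a sequence of exterior-penalized problems with decreasing penalty mu.

    Assumed penalized objective: 0.5*||A x - b||^2 + dist(x, C)^2 / (2*mu),
    where C = {x : ||x||_0 <= k}. Each subproblem is attacked with plain
    gradient steps, warm-started from the previous solution.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
    for mu in mus:
        step = 1.0 / (L + 1.0 / mu)
        for _ in range(inner_iters):
            # Gradient of 0.5*dist(x,C)^2 is x - P_C(x) where the projection is single-valued.
            grad = A.T @ (A @ x - b) + (x - project_sparse(x, k)) / mu
            x = x - step * grad
    return project_sparse(x, k)  # return a feasible (k-sparse) point

# Toy usage: recover a 3-sparse vector from noisy measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 17, 42]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = penalty_continuation(A, b, k=3)
print("recovered support:", np.nonzero(x_hat)[0])
```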

    On the Iteration Complexity of Smoothed Proximal ALM for Nonconvex Optimization Problem with Convex Constraints

    It is well known that the iteration complexity lower bound for solving nonconvex unconstrained optimization problems is $\Omega(1/\epsilon^2)$, which can be achieved by the standard gradient descent algorithm when the objective function is smooth. This lower bound still holds for nonconvex constrained problems, but it has remained unknown whether a first-order method can achieve it. In this paper, we show that a simple single-loop first-order algorithm, the smoothed proximal augmented Lagrangian method (ALM), can achieve this iteration complexity lower bound. The key technical contribution is a strong local error bound for a general convex constrained problem, which is of independent interest.
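    For orientation, the sketch below shows a generic proximal augmented Lagrangian loop for a linearly constrained problem min f(x) subject to Ax = b with x in a convex set X. It is a schematic stand-in rather than the paper's smoothed proximal ALM, whose auxiliary smoothing sequence and parameter schedule are what yield the stated complexity guarantee; the problem data and step sizes below are illustrative.

```python
# Schematic proximal augmented Lagrangian method (ALM) for
#   minimize f(x)  subject to  A x = b,  x in X (a convex set, here a box).
# Generic template for illustration; the paper's smoothed proximal ALM differs.
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 5
Q = rng.standard_normal((n, n))
Q = 0.5 * (Q + Q.T)                       # indefinite quadratic -> nonconvex f
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
x_feas = rng.uniform(0.2, 0.8, n)         # guarantees a feasible point exists
b = A @ x_feas

f_grad = lambda x: Q @ x + c              # gradient of f(x) = 0.5 x'Qx + c'x
proj_X = lambda x: np.clip(x, 0.0, 1.0)   # projection onto X = [0, 1]^n

rho, prox_wt = 10.0, 1.0                  # penalty and proximal weights (illustrative)
x, y = proj_X(x_feas.copy()), np.zeros(m)
L_smooth = np.linalg.norm(Q, 2) + rho * np.linalg.norm(A, 2) ** 2 + prox_wt

for k in range(200):
    x_anchor = x.copy()
    # Inner loop: approximately minimize the proximal augmented Lagrangian over X:
    #   f(x) + y'(Ax - b) + (rho/2)||Ax - b||^2 + (prox_wt/2)||x - x_anchor||^2
    for _ in range(50):
        grad = f_grad(x) + A.T @ (y + rho * (A @ x - b)) + prox_wt * (x - x_anchor)
        x = proj_X(x - grad / L_smooth)
    # Dual ascent step on the multipliers
    y = y + rho * (A @ x - b)

print("constraint violation:", np.linalg.norm(A @ x - b))
```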