
    A Unified Distributed Method for Constrained Networked Optimization via Saddle-Point Dynamics

    This paper develops a unified distributed method for solving two classes of constrained networked optimization problems, namely the optimal consensus problem and the resource allocation problem with non-identical set constraints. We first transform these two constrained networked optimization problems into a unified saddle-point problem framework with set constraints. Subsequently, two projection-based primal-dual algorithms, based on the Optimistic Gradient Descent Ascent (OGDA) method and the Extra-gradient (EG) method, are developed for solving constrained saddle-point problems. It is shown that the developed algorithms achieve exact convergence to a saddle point with an ergodic convergence rate of O(1/k) for general convex-concave functions. Based on the proposed primal-dual algorithms via saddle-point dynamics, we develop a unified distributed algorithm design and convergence analysis for these two networked optimization problems. Finally, two numerical examples are presented to demonstrate the theoretical results.
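
    To make the projection-based update concrete, the following is a minimal, centralized sketch of the projected Extra-gradient (EG) step on a bilinear saddle-point problem with box constraints; it is not the paper's distributed algorithm, and the problem data, step size, and iteration count below are illustrative assumptions.

    import numpy as np

    # Projected Extra-gradient (EG) on  min_{x in X} max_{y in Y} x^T A y
    # with X = Y = [-1, 1]^d. Ergodic averaging of the iterates is the
    # quantity for which an O(1/k) rate is typically stated.

    rng = np.random.default_rng(0)
    d = 5
    A = rng.standard_normal((d, d))

    def proj_box(z, lo=-1.0, hi=1.0):
        return np.clip(z, lo, hi)

    x, y = np.zeros(d), np.zeros(d)
    x_avg, y_avg = np.zeros(d), np.zeros(d)
    eta = 0.1  # step size; should stay below 1 / ||A|| for the bilinear coupling

    for k in range(1, 2001):
        # Extrapolation (leading) step at the current iterate
        x_half = proj_box(x - eta * (A @ y))
        y_half = proj_box(y + eta * (A.T @ x))
        # Update step using gradients evaluated at the extrapolated point
        x = proj_box(x - eta * (A @ y_half))
        y = proj_box(y + eta * (A.T @ x_half))
        # Ergodic (running) averages
        x_avg += (x - x_avg) / k
        y_avg += (y - y_avg) / k

    # The saddle point here is (0, 0), so small gradient norms at the
    # averaged iterates indicate convergence.
    print(np.linalg.norm(A @ y_avg), np.linalg.norm(A.T @ x_avg))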

    Quantum Algorithm for Finding the Negative Curvature Direction

    Non-convex optimization is an essential problem in the field of machine learning. Optimization methods for non-convex problems can be roughly divided into first-order methods and second-order methods, depending on the order of the derivatives of the objective function they use. Generally, to find local minima, second-order methods are applied to find an effective direction for escaping a saddle point. Specifically, finding the negative curvature direction is the subroutine used to analyze the character of the saddle point. However, computing the negative curvature direction is expensive, which prevents the practical use of second-order algorithms. In this thesis, we present an efficient quantum algorithm for finding the negative curvature direction used to escape a saddle point, a critical subroutine for many second-order non-convex optimization algorithms. We prove that our algorithm produces the target state corresponding to the negative curvature direction with query complexity Õ(polylog(d) ε^(-1)), where d is the dimension of the optimization problem. The quantum negative curvature finding algorithm is exponentially faster than any known classical method, which takes time at least O(d ε^(-1/2)). Moreover, we propose an efficient quantum algorithm for the classical read-out of the target state. Our classical read-out algorithm runs exponentially faster in d than existing counterparts.
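
    For context, a classical version of the subroutine that the quantum algorithm accelerates can be sketched as follows: given Hessian-vector product access at a candidate saddle point, a Lanczos-type eigensolver extracts the most negative curvature direction. This is a minimal sketch with an assumed synthetic Hessian, not the thesis's quantum procedure.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, eigsh

    # Classical negative-curvature finding: approximate the eigenvector of
    # the most negative eigenvalue of the Hessian H using only H @ v
    # products. In practice the product would come from automatic
    # differentiation; here H is a synthetic indefinite matrix.

    d = 200
    rng = np.random.default_rng(1)
    Q = np.linalg.qr(rng.standard_normal((d, d)))[0]
    spectrum = np.linspace(-1.0, 2.0, d)   # indefinite spectrum: a saddle point
    H = (Q * spectrum) @ Q.T

    def hvp(v):
        """Hessian-vector product oracle H @ v."""
        return H @ v

    H_op = LinearOperator((d, d), matvec=hvp)

    # Lanczos (via eigsh) for the smallest algebraic eigenvalue; a negative
    # value certifies a direction of negative curvature along which the
    # saddle point can be escaped.
    lam, v = eigsh(H_op, k=1, which='SA')
    print("most negative eigenvalue:", lam[0])
    print("curvature along the direction:", v[:, 0] @ H @ v[:, 0])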

    Deep Learning without Poor Local Minima

    In this paper, we prove a conjecture published in 1989 and also partially address an open problem announced at the Conference on Learning Theory (COLT) 2015. With no unrealistic assumptions, we first prove the following statements for the squared loss function of deep linear neural networks with any depth and any widths: 1) the function is non-convex and non-concave, 2) every local minimum is a global minimum, 3) every critical point that is not a global minimum is a saddle point, and 4) there exist "bad" saddle points (where the Hessian has no negative eigenvalue) for the deeper networks (with more than three layers), whereas there is no bad saddle point for the shallow networks (with three layers). Moreover, for deep nonlinear neural networks, we prove the same four statements via a reduction to a deep linear model under the independence assumption adopted from recent work. As a result, we present an instance for which we can answer the following question: how difficult is it to directly train a deep model in theory? It is more difficult than the classical machine learning models (because of the non-convexity), but not too difficult (because of the nonexistence of poor local minima). Furthermore, the mathematically proven existence of bad saddle points for deeper models suggests a possible open problem. We note that even though we have advanced the theoretical foundations of deep learning and non-convex optimization, there is still a gap between theory and practice. Comment: In NIPS 2016. Selected for NIPS oral presentation (top 2% of submissions). The final NIPS 2016 version: the results remain the same.
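
    A toy scalar example (not the paper's construction) illustrates the flavor of such "bad" saddle points: for the product loss (w_1 · ... · w_H − 1)^2, the origin is a critical point whose Hessian has a negative eigenvalue when H = 2 but is identically zero when H ≥ 3, so second-order information alone offers no escape direction.

    import numpy as np

    # Check the Hessian of the scalar product loss at the origin for
    # several depths using a central-difference approximation.

    def loss(w):
        return (np.prod(w) - 1.0) ** 2

    def numerical_hessian(f, w, eps=1e-3):
        d = len(w)
        H = np.zeros((d, d))
        for i in range(d):
            for j in range(d):
                ei = np.zeros(d); ei[i] = eps
                ej = np.zeros(d); ej[j] = eps
                H[i, j] = (f(w + ei + ej) - f(w + ei - ej)
                           - f(w - ei + ej) + f(w - ei - ej)) / (4 * eps ** 2)
        return H

    for depth in (2, 3, 4):
        w0 = np.zeros(depth)  # the origin is a critical point for every depth
        eigs = np.linalg.eigvalsh(numerical_hessian(loss, w0))
        print(f"depth {depth}: Hessian eigenvalues at the origin =", np.round(eigs, 4))
    # depth 2 shows a negative eigenvalue; depths >= 3 show none.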

    Inexact Model: A Framework for Optimization and Variational Inequalities

    In this paper we propose a general algorithmic framework for first-order methods in optimization in a broad sense, including minimization problems, saddle-point problems and variational inequalities. This framework allows us to obtain many known methods as special cases, including the accelerated gradient method, composite optimization methods, level-set methods, and proximal methods. The idea of the framework is to construct an inexact model of the main problem component, i.e. the objective function in optimization or the operator in variational inequalities. Besides reproducing known results, our framework allows us to construct new methods, which we illustrate by constructing a universal method for variational inequalities with composite structure. This method works for smooth and non-smooth problems with optimal complexity without a priori knowledge of the problem's smoothness. We also generalize our framework to strongly convex objectives and strongly monotone variational inequalities. Comment: 41 pages.
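
    The core step of such inexact-model methods can be sketched for the simplest model choice ψ(x; y) = ⟨∇f(y), x − y⟩, where backtracking on the model constant L adapts the step to the unknown smoothness; this is a minimal sketch under that assumption, not the paper's general framework, and the function names and test problem are illustrative.

    import numpy as np

    # Each iteration minimizes  psi(x; y) + (L/2) * ||x - y||^2  and doubles L
    # until the model upper-bounds f up to a tolerance delta (a (delta, L)-model
    # inequality), so no Lipschitz constant needs to be known in advance.

    def adaptive_model_gradient(f, grad, x0, iters=100, L0=1.0, delta=1e-9):
        x = np.asarray(x0, dtype=float)
        L = L0
        for _ in range(iters):
            g = grad(x)
            while True:
                x_new = x - g / L                      # argmin of the quadratic model
                model = f(x) + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2)
                if f(x_new) <= model + delta:          # model inequality holds
                    break
                L *= 2.0                               # otherwise make the model more conservative
            L = max(L / 2.0, 1e-12)                    # try a more optimistic L next time
            x = x_new
        return x

    # Illustrative use on a smooth convex quadratic with unknown curvature
    A = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ A @ x
    grad = lambda x: A @ x
    print(adaptive_model_gradient(f, grad, x0=[5.0, -3.0]))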