    Neurodynamic Optimization: towards Nonconvexity

    Applications of Recurrent Neural Networks to Optimization Problems

    Recurrent neural networks with fixed time convergence for linear and quadratic programming

    In this paper, a new class of recurrent neural networks that solve linear and quadratic programs is presented. Their design is cast as a sliding mode control problem: the network structure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, with the KKT multipliers treated as control inputs implemented through fixed-time stabilizing terms instead of the commonly used activation functions. The main feature of the proposed networks is therefore a fixed convergence time to the solution; that is, the time in which the network converges to the optimizer is independent of the initial conditions. Simulations show the feasibility of the approach.
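
    The abstract does not give the network equations, but the mechanism can be illustrated. The sketch below is an assumption-laden toy, not the paper's network: it drives the KKT residual of an equality-constrained quadratic program to zero using a pair of power-law terms with exponents 0 < a < 1 < p, the standard construction for fixed-time stability. The residual norm then obeys d||r||/dt = -k1·||r||^a - k2·||r||^p, whose settling time is bounded by 1/(k1(1-a)) + 1/(k2(p-1)) regardless of the initial state. All problem data and gains here are invented for illustration.

```python
import numpy as np

# Hypothetical toy QP: minimize 0.5 x'Qx + c'x  subject to  A x = b.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# For an equality-constrained QP the KKT residual is linear in z = [x; lam]:
# r(z) = M z + q, with M the KKT matrix.
n, m = Q.shape[0], A.shape[0]
M = np.block([[Q, A.T], [A, np.zeros((m, m))]])
q = np.concatenate([c, -b])
M_inv = np.linalg.inv(M)  # exists when Q > 0 and A has full row rank

def fixed_time_flow(z, k1=2.0, k2=2.0, a=0.5, p=1.5):
    """Vector field that enforces d||r||/dt = -k1||r||^a - k2||r||^p,
    so the KKT residual r reaches zero within a fixed time bounded
    independently of z(0)."""
    r = M @ z + q
    nr = np.linalg.norm(r)
    if nr == 0.0:
        return np.zeros_like(z)
    return -M_inv @ (k1 * r * nr**(a - 1) + k2 * r * nr**(p - 1))

# Crude forward-Euler integration; the iterates settle into a small
# neighborhood of the optimizer rather than hitting it exactly.
z = np.array([5.0, -5.0, 0.0])
dt = 1e-3
for _ in range(20000):
    z = z + dt * fixed_time_flow(z)

print("x* ~", z[:n], " lambda* ~", z[n:])  # expect x* ~ [0.2, 0.8]
```

    With k1 = k2 = 2, a = 0.5, p = 1.5, the continuous-time settling bound is 2 time units for every initial condition, which is the property the paper's title refers to; the sliding-mode design in the paper uses discontinuous control terms rather than this smooth power-law variant.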

    Fixed-Time Stable Proximal Dynamical System for Solving MVIPs

    In this paper, a novel modified proximal dynamical system is proposed to compute the solution of a mixed variational inequality problem (MVIP) within a fixed time, where the time of convergence is finite and is uniformly bounded for all initial conditions. Under the assumptions of strong monotonicity and Lipschitz continuity, it is shown that a solution of the modified proximal dynamical system exists, is uniquely determined, and converges to the unique solution of the associated MVIP within a fixed time. As a special case for solving variational inequality problems, the modified proximal dynamical system reduces to a fixed-time stable projected dynamical system. Furthermore, the fixed-time stability of the modified projected dynamical system continues to hold even if the assumption of strong monotonicity is relaxed to that of strong pseudomonotonicity. Connections to convex optimization problems are discussed, and commonly studied dynamical systems in the continuous-time optimization literature follow as special limiting cases of the modified proximal dynamical system proposed in this paper. Finally, it is shown that the solution obtained using the forward-Euler discretization of the proposed modified proximal dynamical system converges to an arbitrarily small neighborhood of the solution of the associated MVIP within a fixed number of time steps, independent of the initial conditions. Two numerical examples are presented to substantiate the theoretical convergence guarantees.
    Comment: 12 pages, 5 figures
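
    The projected-dynamical-system special case mentioned in the abstract lends itself to a short illustration. The sketch below forward-Euler discretizes a fixed-time variant of the projected dynamical system dx/dt = proj_C(x - eta·F(x)) - x for a strongly monotone variational inequality on a box; the problem data, gains, and the particular power-law rescaling of the residual are assumptions chosen for illustration, not the paper's exact system.

```python
import numpy as np

# Hypothetical strongly monotone VI: find x* in C with F(x*)'(y - x*) >= 0
# for all y in C, where F(x) = P x + q and C = [0,1]^2 (projection is a clip).
P = np.array([[3.0, 0.5], [0.5, 2.0]])
q = np.array([-4.0, 1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def F(x):
    return P @ x + q

def proj(x):
    return np.clip(x, lo, hi)

def fixed_time_rhs(x, eta=0.2, k1=2.0, k2=2.0, a=0.5, p=1.5):
    """Fixed-time rescaling of the projected dynamical system's residual
    e(x) = proj(x - eta*F(x)) - x, with exponents 0 < a < 1 < p as in
    standard fixed-time stabilization; the paper's scaling may differ."""
    e = proj(x - eta * F(x)) - x
    ne = np.linalg.norm(e)
    if ne == 0.0:
        return np.zeros_like(x)
    return k1 * e * ne**(a - 1) + k2 * e * ne**(p - 1)

# Forward-Euler discretization: per the abstract, the iterates reach an
# arbitrarily small neighborhood of x* in a number of steps that does not
# depend on the initial point x0.
x = np.array([10.0, -10.0])
h = 1e-2
for _ in range(5000):
    x = x + h * fixed_time_rhs(x)

print("approximate VI solution:", x)  # expect x* ~ [1.0, 0.0]
```

    For this toy problem the solution sits on the boundary of the box (x* = (1, 0), where F1(x*) <= 0 at the active upper bound and F2(x*) >= 0 at the active lower bound), so the run also exercises the projection step rather than reducing to an unconstrained flow.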
