3 research outputs found

    Recurrent neural networks with fixed time convergence for linear and quadratic programming

    In this paper, a new class of recurrent neural networks that solve linear and quadratic programs is presented. Their design is cast as a sliding mode control problem, in which the network structure is based on the Karush-Kuhn-Tucker (KKT) optimality conditions, with the KKT multipliers treated as control inputs implemented with fixed-time stabilizing terms instead of the commonly used activation functions. Thus, the main feature of the proposed network is its fixed convergence time to the solution: there is a time, independent of the initial conditions, within which the network converges to the optimal solution. Simulations show the feasibility of the proposed approach.
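    To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of a fixed-time dynamical system that drives the KKT residual of a toy equality-constrained QP to zero; the specific QP, the gains k1 and k2, and the exponents alpha and beta are illustrative assumptions:

        import numpy as np

        # Toy QP (assumed for illustration): min 0.5*||x||^2  s.t.  x1 + x2 = 1.
        Q = np.eye(2); c = np.zeros(2)
        A = np.array([[1.0, 1.0]]); b = np.array([1.0])

        # The KKT conditions form the linear system  K z = rhs  with  z = (x, lambda).
        K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
        rhs = np.concatenate([-c, b])
        Kinv = np.linalg.inv(K)

        def sig(r, p):
            # Component-wise |r|^p * sign(r): the fixed-time stabilizing term.
            return np.abs(r) ** p * np.sign(r)

        k1, k2, alpha, beta = 1.0, 1.0, 0.5, 1.5   # gains, exponents 0 < alpha < 1 < beta
        z = np.array([5.0, -3.0, 2.0])             # arbitrary initial state
        h = 1e-3                                   # Euler step

        for step in range(10000):
            r = K @ z - rhs                        # KKT residual
            # z-dot is chosen so that r-dot = -(k1*sig(r,alpha) + k2*sig(r,beta)),
            # which reaches r = 0 in a time bounded independently of z(0).
            z = z - h * Kinv @ (k1 * sig(r, alpha) + k2 * sig(r, beta))
            if np.linalg.norm(r) < 1e-5:
                break

        print(z[:2], step * h)                     # -> approx. [0.5, 0.5], t < 4

    Under these dynamics the settling time is bounded by roughly 1/(k1*(1-alpha)) + 1/(k2*(beta-1)) regardless of the initial state, which is the fixed-time property the abstract refers to.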

    Model Building and Optimization Analysis of MDF Continuous Hot-Pressing Process by Neural Network

    We propose a one-layer neural network for solving a class of constrained optimization problems arising from the MDF continuous hot-pressing process. The objective function of the optimization problem is the sum of a nonsmooth convex function and a smooth nonconvex pseudoconvex function, and the feasible set consists of two parts: a closed convex subset of R^n and a set defined by a class of smooth convex functions. Using smoothing techniques, projection, a penalty function, and a regularization term, the proposed network is modeled by a differential equation that can be implemented easily. Without any additional conditions, we prove the global existence of the solutions of the proposed neural network for any initial point in the closed convex subset. We show that any accumulation point of the solutions of the proposed neural network is not only a feasible point but also an optimal solution of the considered optimization problem, even though the objective function is not convex. Numerical experiments on the MDF hot-pressing process, including model building and parameter optimization, are carried out on a real data set and indicate the good performance of the proposed neural network in applications.
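    As an illustration of the smoothing-projection-penalty construction (with toy objective and constraint functions assumed for the example, not taken from the paper), one can integrate a projected gradient flow in which the nonsmooth term |x1 - 1| is smoothed and the convex inequality constraint is handled by a quadratic penalty:

        import numpy as np

        mu, sigma = 1e-3, 50.0   # smoothing parameter and penalty weight (assumed)

        def grad_F(x):
            # Smoothed gradient of |x1 - 1|, i.e. of sqrt((x1 - 1)^2 + mu^2).
            g1 = (x[0] - 1.0) / np.sqrt((x[0] - 1.0) ** 2 + mu ** 2)
            g2 = 2.0 * (x[1] - 2.0)          # gradient of the smooth term (x2 - 2)^2
            # Quadratic penalty for the smooth convex constraint x1 + x2 - 1.5 <= 0.
            v = max(0.0, x[0] + x[1] - 1.5)
            return np.array([g1, g2]) + 2.0 * sigma * v * np.array([1.0, 1.0])

        def project(x):
            # Projection onto the closed convex subset Omega = [0, 1]^2.
            return np.clip(x, 0.0, 1.0)

        x = np.array([0.9, 0.1])             # any initial point in Omega
        h = 1e-3
        for _ in range(20000):
            x = x + h * (project(x - grad_F(x)) - x)   # x' = P_Omega(x - grad F(x)) - x

        print(x)   # -> approx. [0.51, 1.0]; the true solution (0.5, 1) up to penalty error

    The projection keeps the state inside the closed convex subset for all time, while the penalty term steers it toward the set defined by the convex inequality; a finite penalty weight leaves a small, controllable constraint violation.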

    A One-Layer Recurrent Neural Network for Constrained Pseudoconvex Optimization and its Application for Dynamic Portfolio Optimization

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with existing neural networks for optimization (e.g., projection neural networks), the proposed neural network can solve more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it can solve constrained fractional programming problems as a special case. Convergence of the state variables of the proposed neural network to an optimal solution is guaranteed as long as the design parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application to dynamic portfolio optimization is discussed.
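    A rough sketch of this kind of network on an assumed linear-fractional program (pseudoconvex on the box) is given below: the bound constraints are enforced by projection, the equality constraint by an exact penalty, and the penalty gain sigma plays the role of a design parameter that must exceed a problem-dependent lower bound, mirroring the abstract's condition:

        import numpy as np

        # Assumed linear-fractional program (pseudoconvex on the feasible set):
        #   minimize  f(x) = (x1 + 1) / (x1 + x2 + 1)
        #   subject to  x1 + x2 = 1,  0 <= x <= 1      (solution: x* = (0, 1))

        def grad_f(x):
            D = x[0] + x[1] + 1.0
            return np.array([x[1], -(x[0] + 1.0)]) / D ** 2

        sigma = 5.0    # penalty gain; must exceed a bound on ||grad f|| over the box
        eps = 1e-2     # boundary layer replacing sign(.) to avoid chattering in Euler steps
        h = 1e-4

        x = np.array([0.8, 0.5])     # initial point inside the box
        for _ in range(100000):
            r = x[0] + x[1] - 1.0                            # equality-constraint residual
            pen = sigma * np.clip(r / eps, -1.0, 1.0) * np.ones(2)
            inner = np.clip(x - grad_f(x) - pen, 0.0, 1.0)   # projection onto the bounds
            x = x + h * (inner - x)                          # x' = -x + P(x - grad f - penalty)

        print(x)  # -> approx. [0.0, 1.0]

    With sigma above the lower bound, the state slides onto the equality constraint in finite time and then descends the pseudoconvex objective along it; smaller gains can fail to reach feasibility, which is the role of the derived parameter bounds.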