11 research outputs found

    A Nonlinear Projection Neural Network for Solving Interval Quadratic Programming Problems and Its Stability Analysis

    This paper presents a nonlinear projection neural network for solving interval quadratic programs subject to box-set constraints in engineering applications. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be equivalent to the optimal solution of the interval quadratic optimization problem. By employing the Lyapunov function approach, the global exponential stability of the proposed neural network is analyzed. Two illustrative examples are provided to show the feasibility and efficiency of the proposed method.
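    A minimal sketch of the kind of projection dynamics such papers study (not the paper's exact model): for a box-constrained quadratic program min ½xᵀQx + cᵀx with l ≤ x ≤ u, the network dx/dt = -x + P(x - α(Qx + c)) has the problem's optimum as its equilibrium, where P is the componentwise box projection. The step sizes and the example problem below are illustrative assumptions.

```python
def box_project(x, l, u):
    """Project x onto the box [l, u] componentwise."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def projection_network(Q, c, l, u, alpha=0.1, dt=0.05, steps=5000):
    """Integrate dx/dt = -x + P(x - alpha*(Qx + c)) with forward Euler."""
    n = len(c)
    x = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        target = box_project([x[i] - alpha * grad[i] for i in range(n)], l, u)
        x = [x[i] + dt * (target[i] - x[i]) for i in range(n)]
    return x

# Example: min 0.5*(x1^2 + x2^2) - x1 - 4*x2 over [0, 2]^2.
# The unconstrained minimum (1, 4) is clipped to (1, 2) by the box.
Q = [[1.0, 0.0], [0.0, 1.0]]
c = [-1.0, -4.0]
x_star = projection_network(Q, c, l=[0.0, 0.0], u=[2.0, 2.0])
```

The interval-QP setting additionally lets Q and c range over intervals; the fixed-matrix version above only illustrates the projection-dynamics idea.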

    Applications of Recurrent Neural Networks to Optimization Problems


    Robust Linear Neural Network for Constrained Quadratic Optimization

    Based on a feature of the projection operator under box constraints, and using convex analysis methods, this paper proposes three robust linear systems for solving a class of quadratic optimization problems. Utilizing the linear matrix inequality (LMI) technique, eigenvalue perturbation theory, the Lyapunov-Razumikhin method, and LaSalle's invariance principle, stability criteria for the related models are established. Compared with previous criteria in the cited literature, the criteria established in this paper are less conservative and more practicable. Finally, a numerical simulation example and an application to a compressed sensing problem illustrate the validity of the criteria.
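    One projection-operator property such stability analyses typically lean on is nonexpansiveness: ‖P(x) − P(y)‖ ≤ ‖x − y‖ for the box projection P. A small numerical sanity check (the bounds and sample counts below are arbitrary choices, not from the paper):

```python
import random

def box_project(x, l, u):
    """Project x onto the box [l, u] componentwise."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def norm(v):
    return sum(vi * vi for vi in v) ** 0.5

random.seed(0)
l, u = [-1.0] * 3, [1.0] * 3
nonexpansive = True
for _ in range(1000):
    x = [random.uniform(-5, 5) for _ in range(3)]
    y = [random.uniform(-5, 5) for _ in range(3)]
    proj_gap = [a - b for a, b in zip(box_project(x, l, u), box_project(y, l, u))]
    gap = [a - b for a, b in zip(x, y)]
    if norm(proj_gap) > norm(gap) + 1e-12:
        nonexpansive = False
```

Nonexpansiveness is what allows Lyapunov functions built from the projection residual to decrease along trajectories of projection-based networks.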

    Riverine Microplastic Quantification: A Novel Approach Integrating Satellite Images, Neural Network, and Suspended Sediment Data as a Proxy

    Rivers transport terrestrial microplastics (MP) to the marine system, demanding cost-effective and frequent monitoring, which is attainable through remote sensing. This study aims to develop and test microplastic concentration (MPC) models directly from satellite images and indirectly through suspended sediment concentration (SSC) as a proxy, employing a neural network algorithm. These models relied upon high spatial (26 sites) and temporal (198 samples) SSC and MPC data in the Tisza River, along with optical and active sensor reflectance/backscattering. A feedforward MLP neural network was used to calibrate and validate the direct models, employing k-fold cross-validation (five data folds) and the Optuna library for hyperparameter optimization. The spatiotemporal generalization capability of the developed models was assessed under various hydrological scenarios. The findings revealed that hydrology fundamentally influences the SSC and MPC. The indirect estimation method of MPC using SSC as a proxy demonstrated higher accuracy (R2 = 0.17–0.88) than the direct method (R2 = 0–0.2), due to the limitations of satellite sensors in directly estimating the very low MPCs in rivers. However, the accuracy of the indirect method varied: it was lower (R2 = 0.17, RMSE = 12.9 items/m3, MAE = 9.4 items/m3) during low stages and very high (R2 = 0.88, RMSE = 7.8 items/m3, MAE = 10.8 items/m3) during floods. The worst estimates were based on Sentinel-1. Although the accuracy of the MPC models is moderate, they still have practical applicability, especially during floods and when employing proxy models. This study is one of the first attempts at MPC quantification, so more studies incorporating denser spatiotemporal data, additional water quality parameters, and surface roughness data are warranted to improve the estimation accuracy.
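    The five-fold cross-validation scheme described above can be sketched as follows. This is a hedged illustration only: the actual MLP, satellite features, and Optuna hyperparameter search are not reproduced, and a trivial mean predictor with synthetic target values stands in for the model.

```python
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k near-equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Synthetic targets only, to exercise the CV loop.
y = [float(i % 10) for i in range(100)]
folds = k_fold_indices(len(y), k=5)
scores = []
for test_idx in folds:
    train_idx = [j for f in folds if f is not test_idx for j in f]
    mean_pred = sum(y[j] for j in train_idx) / len(train_idx)  # stand-in "model"
    y_test = [y[j] for j in test_idx]
    scores.append(r_squared(y_test, [mean_pred] * len(y_test)))
```

Each fold serves once as the validation set while the model is fit on the remaining four; averaging the per-fold scores gives the cross-validated estimate the study reports.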

    A neurodynamic optimization approach to constrained pseudoconvex optimization.

    Guo, Zhishan. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 71-82). Abstracts in English and Chinese. Contents:
    1. Introduction (1.1 Constrained Pseudoconvex Optimization; 1.2 Recurrent Neural Networks; 1.3 Thesis Organization)
    2. Literature Review (2.1 Pseudoconvex Optimization; 2.2 Recurrent Neural Networks)
    3. Model Description and Convergence Analysis (3.1 Model Descriptions; 3.2 Global Convergence)
    4. Numerical Examples (4.1 Gaussian Optimization; 4.2 Quadratic Fractional Programming; 4.3 Nonlinear Convex Programming)
    5. Real-time Data Reconciliation (5.1 Introduction; 5.2 Theoretical Analysis and Performance Measurement; 5.3 Examples)
    6. Real-time Portfolio Optimization (6.1 Introduction; 6.2 Model Description; 6.3 Theoretical Analysis; 6.4 Illustrative Examples)
    7. Conclusions and Future Works (7.1 Concluding Remarks; 7.2 Future Works)
    Appendix A: Publication List; Bibliography

    Solving Linear Optimization Problems with Neural Networks Combined with Interior-Point Methods

    Advisors: Christiano Lyra Filho, Aurelio Ribeiro Leite de Oliveira. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação.

    Recurrent neural networks for force optimization of multi-fingered robotic hands.

    Fok Lo Ming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 133-135). Abstracts in English and Chinese. Contents:
    1. Introduction (1.1 Multi-fingered Robotic Hands; 1.2 Grasping Force Optimization; 1.3 Neural Networks; 1.4 Previous Work for Grasping Force Optimization; 1.5 Contributions of this Work; 1.6 Organization of this Thesis)
    2. Problem Formulations (2.1 Grasping Force Optimization without Joint Torque Limits: 2.1.1 Linearized Friction Cone Approach, with linear and quadratic formulations, 2.1.2 Nonlinear Friction Cone as Positive Semidefinite Matrix, 2.1.3 Constrained Optimization with Nonlinear Inequality Constraint; 2.2 Grasping Force Optimization with Joint Torque Limits: 2.2.1 Linearized Friction Cone Approach, 2.2.2 Constrained Optimization with Nonlinear Inequality Constraint; 2.3 Grasping Force Optimization with Time-varying External Wrench: 2.3.1 Linearized Friction Cone Approach, 2.3.2 Nonlinear Friction Cone as Positive Semidefinite Matrix, 2.3.3 Constrained Optimization with Nonlinear Inequality Constraint)
    3. Recurrent Neural Network Models (3.1 Networks for Grasping Force Optimization without Joint Torque Limits: the Primal-dual Network for Linear Programming, the Deterministic Annealing Network for Linear Programming, the Primal-dual Network for Quadratic Programming, the Dual Network, the Deterministic Annealing Network, and the Novel Network; 3.2 Networks for Grasping Force Optimization with Joint Torque Limits: the Dual Network and the Novel Network; 3.3 Networks for Grasping Force Optimization with Time-varying External Wrench: the Primal-dual Network for Quadratic Programming, the Deterministic Annealing Network, and the Novel Network)
    4. Simulation Results (4.1 Three-finger grasping example without joint torque limits; 4.2 Four-finger grasping example without joint torque limits; 4.3 Three-finger grasping example with joint torque limits; 4.4 Three-finger grasping example with time-varying external wrench; 4.5 Four-finger grasping example with time-varying external wrench; 4.6 Four-finger grasping example with nonlinear velocity variation; each example compares the applicable networks and includes a network complexity analysis)
    5. Conclusions and Future Work
    Publications; Bibliography; Appendix