
    Continuous-time recurrent neural networks for quadratic programming: theory and engineering applications.

    Liu Shubao. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 90-98). Abstracts in English and Chinese. Contents:
    Abstract -- Abstract (Chinese) -- Acknowledgement
    Chapter 1: Introduction -- 1.1 Time-Varying Quadratic Optimization -- 1.2 Recurrent Neural Networks (1.2.1 From Feedforward to Recurrent Networks; 1.2.2 Computational Power and Complexity; 1.2.3 Implementation Issues) -- 1.3 Thesis Organization
    Part I: Theory and Models
    Chapter 2: Linearly Constrained QP -- 2.1 Model Description -- 2.2 Convergence Analysis
    Chapter 3: Quadratically Constrained QP -- 3.1 Problem Formulation -- 3.2 Model Description (3.2.1 Model 1 (Dual Model); 3.2.2 Model 2 (Improved Dual Model))
    Part II: Engineering Applications
    Chapter 4: KWTA Network Circuit Design -- 4.1 Introduction -- 4.2 Equivalent Reformulation -- 4.3 KWTA Network Model -- 4.4 Simulation Results -- 4.5 Conclusions
    Chapter 5: Dynamic Control of Manipulators -- 5.1 Introduction -- 5.2 Problem Formulation -- 5.3 Simplified Dual Neural Network -- 5.4 Simulation Results -- 5.5 Concluding Remarks
    Chapter 6: Robot Arm Obstacle Avoidance -- 6.1 Introduction -- 6.2 Obstacle Avoidance Scheme (6.2.1 Equality Constrained Formulation; 6.2.2 Inequality Constrained Formulation) -- 6.3 Simplified Dual Neural Network Model (6.3.1 Existing Approaches; 6.3.2 Model Derivation; 6.3.3 Convergence Analysis; 6.3.4 Model Comparison) -- 6.4 Simulation Results -- 6.5 Concluding Remarks
    Chapter 7: Multiuser Detection -- 7.1 Introduction -- 7.2 Problem Formulation -- 7.3 Neural Network Architecture -- 7.4 Simulation Results
    Chapter 8: Conclusions and Future Works -- 8.1 Concluding Remarks -- 8.2 Future Prospects
    Bibliography
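
    The thesis studies continuous-time RNN models whose equilibria coincide with the KKT points of the QP being solved. As a rough illustration of that idea only (a generic primal-dual gradient flow on made-up data, not the specific dual network derived in the thesis), consider the sketch below.

```python
import numpy as np

# A minimal sketch (made-up data; not the thesis's specific dual network) of a
# continuous-time recurrent network for the equality-constrained QP
#     minimize 0.5*x'Qx + c'x   subject to   Ax = b.
# The primal and dual states follow a Lagrangian (saddle-point) gradient flow
#     eps * dx/dt      = -(Qx + c + A'lambda)
#     eps * dlambda/dt =  Ax - b
# whose unique equilibrium is the KKT point when Q is positive definite.

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # positive definite Hessian
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x = np.zeros(2)          # primal state of the network
lam = np.zeros(1)        # dual state (Lagrange multiplier)
eps, dt = 1e-2, 1e-4     # network time constant and Euler step

for _ in range(50000):
    x_dot = -(Q @ x + c + A.T @ lam) / eps
    lam_dot = (A @ x - b) / eps
    x += dt * x_dot
    lam += dt * lam_dot

print("network equilibrium x:", x)       # approximate KKT solution of the QP
print("constraint residual:", A @ x - b)
```

    The models analyzed in the thesis additionally handle inequality constraints (via dual and projection formulations) and come with formal convergence proofs; the Euler loop above only shows the circuit-like dynamics settling at the KKT point.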

    Design and analysis of recurrent neural network models with non‐linear activation functions for solving time‐varying quadratic programming problems

    A special recurrent neural network (RNN), the zeroing neural network (ZNN), is adopted to find solutions to time-varying quadratic programming (TVQP) problems with equality and inequality constraints. However, the activation functions of traditional ZNN models have weaknesses, including the restriction to convex functions and redundant formulation. With the aid of different activation functions, modified ZNN models are obtained that overcome these drawbacks when solving TVQP problems. Theoretical and experimental results indicate that the proposed models solve such TVQP problems more effectively than the traditional ones.
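
    As a rough sketch of the ZNN design principle described above, the error e(t) = W(t)y(t) - u(t) of the time-varying KKT system is forced to obey de/dt = -gamma*phi(e) for a monotone activation phi. The toy example below uses illustrative time-varying data and the assumed activation phi(e) = e^3 + e, not the specific models or activation functions proposed in the paper.

```python
import numpy as np

# Hedged ZNN sketch (illustrative data and activation, not the paper's models).
# For the time-varying equality-constrained QP
#     minimize 0.5*x'Q(t)x + c(t)'x   subject to   A(t)x = b(t),
# the KKT system is W(t) y(t) = u(t) with y = [x; lambda]. The ZNN imposes
# de/dt = -gamma*phi(e) on e = W y - u, which gives the implicit dynamics
#     W y_dot = u_dot - W_dot y - gamma*phi(e).

def Q(t):
    return np.array([[3.0 + np.sin(t), 0.5], [0.5, 3.0 + np.cos(t)]])

def c(t):
    return np.array([np.sin(t), np.cos(t)])

def A(t):
    return np.array([[1.0, np.cos(t)]])

def b(t):
    return np.array([np.sin(2.0 * t)])

def W(t):
    return np.block([[Q(t), A(t).T], [A(t), np.zeros((1, 1))]])

def u(t):
    return np.concatenate([-c(t), b(t)])

gamma, dt, h = 10.0, 1e-3, 1e-6      # ZNN gain, Euler step, finite-difference step
y = np.zeros(3)                      # network state [x1, x2, lambda]

for k in range(10000):
    t = k * dt
    e = W(t) @ y - u(t)
    phi = e**3 + e                                   # assumed nonlinear activation
    W_dot = (W(t + h) - W(t)) / h                    # numerical dW/dt
    u_dot = (u(t + h) - u(t)) / h                    # numerical du/dt
    y_dot = np.linalg.solve(W(t), u_dot - W_dot @ y - gamma * phi)
    y += dt * y_dot

t_end = 10000 * dt
print("residual ||W(t)y - u(t)||:", np.linalg.norm(W(t_end) @ y - u(t_end)))
```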

    A recurrent neural network applied to optimal motion control of mobile robots with physical constraints

    Conventional solutions for the motion control of mobile robots within the unified recurrent neural network (RNN) framework, such as the conventional recurrent neural network (CRNN) and the gradient recurrent neural network (GRNN), have difficulty accounting for criteria optimization and physical constraints at the same time. This limitation may damage mobile robots when physical constraints are exceeded during task execution. To overcome it, this paper proposes a novel inequality and equality constrained optimization RNN (IECORNN) for the motion control of mobile robots. First, the real-time motion control problem with both criteria optimization and physical constraints is converted to a real-time equality system by leveraging the Lagrange multiplier rule. Then, the detailed design process of the proposed IECORNN is presented together with the developed neural network architecture. Afterward, theoretical analyses of the equivalence of the problem conversion, global stability, and exponential convergence are rigorously provided. Finally, two numerical simulation verifications and extensive comparisons with existing RNNs, e.g., the CRNN and the GRNN, on a mobile robot for two different path-tracking applications demonstrate the effectiveness and superiority of the proposed IECORNN for the real-time motion control of mobile robots with both criteria optimization and physical constraints. This work makes progress in both theory and practice, and fills a gap in the unified RNN framework for the motion control of mobile robots.
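
    A minimal sketch of the general idea only (not the paper's IECORNN): a projection-type RNN commands an omnidirectional mobile robot to track a reference path while enforcing velocity bounds. All data, gains, and limits below are assumed for illustration.

```python
import numpy as np

# Hedged sketch: at every instant the network state u converges to the minimizer of
#     0.5*||v||^2 - v'(dxd/dt + k*(xd - x))   subject to   v_min <= v <= v_max
# through the projection dynamics  eps*du/dt = -u + P(u - grad f(u)),
# where P is the projection onto the velocity box (the physical constraint).

def x_ref(t):                              # illustrative circular reference path
    return np.array([np.cos(0.5 * t), np.sin(0.5 * t)])

def x_ref_dot(t):
    return 0.5 * np.array([-np.sin(0.5 * t), np.cos(0.5 * t)])

v_min, v_max = -0.8, 0.8                   # assumed physical velocity limits
k, eps, dt = 2.0, 5e-3, 1e-3               # feedback gain, time constant, Euler step

x = np.array([1.5, 0.0])                   # robot position
u = np.zeros(2)                            # RNN state = commanded velocity

for step in range(20000):
    t = step * dt
    desired = x_ref_dot(t) + k * (x_ref(t) - x)       # unconstrained optimizer
    grad = u - desired                                 # gradient of the QP cost at u
    u += dt / eps * (-u + np.clip(u - grad, v_min, v_max))   # projection RNN
    x += dt * u                                        # robot kinematics

print("final tracking error:", np.linalg.norm(x_ref(20000 * dt) - x))
```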

    An L₁-Norm Based Optimization Method for Sparse Redundancy Resolution of Robotic Manipulators

    For targeted motion control tasks of manipulators, full levels of joint actuation are frequently required to guarantee successful motion planning and path tracking. Such motion planning and control may keep the joint actuation non-sparse throughout the motion control process. To improve the sparsity of joint actuation for manipulator systems, this paper proposes a novel motion planning scheme that adopts joint actuation optimally and sparsely. The proposed motion planning strategy is formulated as a constrained L1-norm optimization problem, and an equivalent enhanced optimization formulation that handles bounded joint velocities is proposed as well. A new primal-dual neural network with a new solution-set division is further proposed and applied to solve this bounded optimization, so that joint actuation is adopted sparsely during motion control. Simulation and experiment results demonstrate the efficiency, accuracy, and superiority of the proposed method for optimally and sparsely adopting joint actuation. The average sparsity (i.e., -||θ̇||_p, where θ denotes the joint angle vector) of the joint motion of the manipulator can be increased by 39.22% and 51.30% for path-tracking tasks in the X-Y and X-Z planes, respectively, indicating that the sparsity of joint actuation can be enhanced.
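
    The core L1-norm reformulation can be illustrated with the standard positive/negative split of the joint velocities (a hedged sketch only; the paper solves the problem online with its primal-dual neural network, while the code below merely checks the idea with scipy.optimize.linprog on made-up data).

```python
import numpy as np
from scipy.optimize import linprog

# Hedged sketch of the L1-norm idea: resolve redundancy sparsely by minimizing
# ||qdot||_1 subject to the kinematic constraint J*qdot = xdot_d and joint-velocity
# bounds. Splitting qdot = p - n with p, n >= 0 turns this into a linear program.

J = np.array([[0.3, 0.7, 0.5, 0.2],        # 2x4 Jacobian of a redundant arm (assumed)
              [0.6, 0.1, 0.4, 0.8]])
xdot_d = np.array([0.10, -0.05])           # desired end-effector velocity (assumed)
qdot_max = 0.5                             # symmetric joint-velocity limit (assumed)

m, n = J.shape
cost = np.ones(2 * n)                      # sum(p) + sum(n) = ||qdot||_1
A_eq = np.hstack([J, -J])                  # J*(p - n) = xdot_d
bounds = [(0.0, qdot_max)] * (2 * n)       # keeps qdot within [-qdot_max, qdot_max]

res = linprog(cost, A_eq=A_eq, b_eq=xdot_d, bounds=bounds)
qdot = res.x[:n] - res.x[n:]

print("sparse joint velocities:", np.round(qdot, 4))
print("task-space residual:", np.linalg.norm(J @ qdot - xdot_d))
```

    A basic solution of such a linear program activates at most as many joints as there are task dimensions (here two of four), which is the sparsity effect the scheme exploits; the paper's contribution lies in solving this family of bounded problems in real time with a recurrent network rather than a batch LP solver.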

    Model Predictive Control of Nonholonomic Chained Systems Using General Projection Neural Networks Optimization


    Distributed Optimization with Application to Power Systems and Control

    In many engineering domains, systems are composed of partially independent subsystems—power systems are composed of distribution and transmission systems, teams of robots are composed of individual robots, and chemical process systems are composed of vessels, heat exchangers, and reactors. Often, these subsystems should reach a common goal such as satisfying a power demand with minimum cost, flying in a formation, or reaching an optimal set-point. At the same time, limited information exchange is desirable—for confidentiality reasons but also due to communication constraints. Moreover, a fast and reliable decision process is key, as applications might be safety-critical. Mathematical optimization techniques are among the most successful tools for controlling systems optimally with feasibility guarantees. Yet, they are often centralized—all data has to be collected in one central and computationally powerful entity. Methods from distributed optimization control the subsystems in a distributed or decentralized fashion, reducing or avoiding central coordination. These methods have a long and successful history. Classical distributed optimization algorithms, however, are typically designed for convex problems. Hence, they are only partially applicable in the above domains, since many of them lead to optimization problems with non-convex constraints. This thesis develops one of the first frameworks for distributed and decentralized optimization with non-convex constraints. Based on the Augmented Lagrangian Alternating Direction Inexact Newton (ALADIN) algorithm, a bi-level distributed ALADIN framework is presented, solving the coordination step of ALADIN in a decentralized fashion. This framework can handle various decentralized inner algorithms, two of which we develop here: a decentralized variant of the Alternating Direction Method of Multipliers (ADMM) and a novel decentralized conjugate gradient algorithm. Decentralized conjugate gradient is, to the best of our knowledge, the first decentralized algorithm with a guarantee of convergence to the exact solution in a finite number of iterates. Sufficient conditions for fast local convergence of bi-level ALADIN are derived. Bi-level ALADIN strongly reduces the communication and coordination effort of ALADIN and preserves its fast convergence guarantees. We illustrate these properties on challenging problems from power systems and control, and compare performance to the widely used ADMM. The developed methods are implemented in the open-source MATLAB toolbox ALADIN-α, one of the first toolboxes for decentralized non-convex optimization. ALADIN-α comes with a rich set of application examples from different domains, showing its broad applicability. As an additional contribution, this thesis provides new insights into why state-of-the-art distributed algorithms might encounter issues for constrained problems.
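
    For readers unfamiliar with the distributed-optimization building blocks the thesis compares against, a textbook consensus-ADMM sketch (not ALADIN, and with made-up quadratic costs) shows how two subsystems agree on a shared set-point while keeping their local cost data private.

```python
import numpy as np

# Textbook consensus-ADMM sketch (illustrative, not ALADIN): two subsystems, each
# with a private quadratic cost f_i(x) = 0.5*a_i*(x - b_i)^2, agree on a shared
# decision variable (e.g., a tie-line power set-point). Only local estimates and
# dual variables are exchanged with the coordinator, never the cost data.

a = np.array([2.0, 1.0])        # local curvatures (private to each subsystem)
b = np.array([1.0, 4.0])        # local targets    (private to each subsystem)
rho = 1.0                       # ADMM penalty parameter

x = np.zeros(2)                 # local copies of the shared variable
u = np.zeros(2)                 # scaled dual variables
z = 0.0                         # consensus variable held by the coordinator

for k in range(100):
    # local step: argmin_x 0.5*a_i*(x - b_i)^2 + rho/2*(x - z + u_i)^2
    x = (a * b + rho * (z - u)) / (a + rho)
    z = np.mean(x + u)          # consensus (averaging) step
    u = u + x - z               # dual update

print("consensus value:", z)    # approaches (a1*b1 + a2*b2)/(a1 + a2) = 2.0
```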

    (Global) Optimization: Historical notes and recent developments

    Recent developments in (Global) Optimization are surveyed in this paper. We collected and commented on quite a large number of recent references which, in our opinion, well represent the vivacity, depth, and breadth of scope of current computational approaches and theoretical results on nonconvex optimization problems. Before presenting the recent developments, which are subdivided into two parts devoted to heuristic and exact approaches, respectively, we briefly sketch the origin of the discipline and observe what survived from the initial attempts, what was not considered at all, and a few approaches that have recently been rediscovered, mostly in connection with machine learning.