
    Discrete-time Contraction Analysis and Controller Design for Nonlinear Processes

    Shifting away from the traditional mass-production approach, the process industry is moving towards more agile, cost-effective and dynamic process operation (next-generation smart plants). This warrants the development of control systems for nonlinear chemical processes that can track time-varying setpoints, so as to produce products with different specifications as market demand requires and to deal with variations in raw materials and utilities (e.g., energy). This thesis aims to develop controllers that achieve time-varying setpoint tracking using contraction theory. Through the differential dynamic system framework, contraction conditions for discrete-time systems are derived, which ensure exponential convergence between system responses and feasible time-varying references. A discrete-time differential dissipativity condition is further developed, which can be used for disturbance-rejection control designs. Computationally tractable equivalent conditions are then derived and transformed into a Sum of Squares programming problem, such that a discrete-time control contraction metric and a stabilising feedback controller can be obtained jointly. Synthesis and implementation details of the resulting contraction-based controller are provided, which achieves offset-free tracking of feasible time-varying references. For systems with uncertainties, where contraction analysis and control design are often complex and difficult, neural networks are used: a neural-network-embedded contraction-based controller is trained and constructed, and learning algorithms for uncertain system model parameters are developed. The resulting control scheme achieves efficient offset-free tracking of time-varying references over the full range of model uncertainties, without redesigning the controller structure when the reference or the uncertain parameters change. This neural-network-based approach also ensures process stability during simultaneous online control and learning of the uncertain parameters. To further improve the economics of the contraction-based controller, a nonlinear model predictive control approach is developed, in which the contraction condition is imposed as a constraint on the optimisation problem with an economic cost function, utilising Riemannian weighted graphs and shortest-path techniques. The result is a reference-flexible, fast optimal controller that can trade off the rate of convergence to the target trajectory against economic benefit (away from the desired process objective).
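    The core condition above admits a simple numerical illustration. The sketch below is a minimal simplification under assumed dynamics, metric and contraction rate, and it replaces the thesis' Sum of Squares synthesis with a pointwise sampling check: it verifies that A(x)^T M A(x) - (1 - beta) M is negative semidefinite along the differential (variational) dynamics of a toy process model.

```python
# Minimal sketch (not the thesis' SOS synthesis): check the discrete-time
# contraction condition  A(x)^T M A(x) - (1 - beta) M <= 0  pointwise for a
# candidate constant metric M. The model, metric and rate are assumptions.
import numpy as np

beta = 0.05                      # assumed contraction rate
M = np.diag([1.0, 2.0])          # assumed candidate metric (constant here)

def f(x, u):
    # toy discrete-time nonlinear process model (illustrative only)
    return np.array([0.8 * x[0] + 0.1 * np.sin(x[1]),
                     0.1 * x[0] ** 2 + 0.7 * x[1] + u])

def jacobian_x(x):
    # differential (variational) dynamics A(x) = df/dx of the model above
    return np.array([[0.8, 0.1 * np.cos(x[1])],
                     [0.2 * x[0], 0.7]])

def contracts_at(x):
    A = jacobian_x(x)
    lmi = A.T @ M @ A - (1.0 - beta) * M
    return np.all(np.linalg.eigvalsh(lmi) <= 1e-9)

# sample the operating region instead of solving an SOS programme
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(1000, 2))
print(all(contracts_at(x) for x in states))   # expected True for this metric on this region
```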

    Feedback control by online learning an inverse model

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. The framework uses two learning modules, one for creating an inverse model and the other for actually controlling the plant; except for their inputs, they are identical. The inverse model learns from the exploration performed by the not-yet-fully-trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller that can be applied to a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework to several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
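    A minimal sketch of the two-module idea is given below. It is an assumption-laden simplification rather than the paper's architecture: the inverse model here is linear in hand-picked features and is updated online with a normalized least-mean-squares rule, while the controller queries the current model to track a reference on a toy plant standing in for the heating-tank task.

```python
# Two-module sketch (simplified, not the paper's learning modules): an inverse
# model g(x_k, x_{k+1}) -> u_k is learned online from observed transitions,
# while the controller queries the same model with (x_k, reference).
import numpy as np

rng = np.random.default_rng(1)

def plant(x, u):
    # toy nonlinear plant standing in for the heating tank (assumption)
    return 0.9 * x + 0.5 * np.tanh(u)

def phi(x, x_next):
    # hand-picked features for a linear-in-parameters inverse model
    return np.array([1.0, x, x_next, x * x_next])

w = np.zeros(4)                 # inverse-model parameters, learned online
lr = 0.5                        # assumed learning rate
x, ref = 0.0, 1.0               # constant reference for illustration

for k in range(300):
    # controller: query the current inverse model, plus exploration noise
    u = w @ phi(x, ref) + 0.1 * rng.standard_normal()
    x_next = plant(x, u)
    # inverse model: learn u from the transition actually observed (NLMS update)
    feats = phi(x, x_next)
    w += lr * (u - w @ feats) * feats / (1.0 + feats @ feats)
    x = x_next

print(f"final state {x:.2f}, reference {ref:.2f}")
```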

    Output feedback NN control for two classes of discrete-time systems with unknown control directions in a unified approach

    IEEE Transactions on Neural Networks, 19(11), 1873-1886. doi:10.1109/TNN.2008.2003290

    Comparative evaluation of approaches in T.4.1-4.3 and working definition of adaptive module

    The goal of this deliverable is two-fold: (1) to present and compare the different approaches towards learning and encoding movements using dynamical systems that have been developed by the AMARSi partners (in the past, during the first 6 months of the project), and (2) to analyze their suitability to be used as adaptive modules, i.e. as building blocks for the complete architecture that will be developed in the project. The document presents a total of eight approaches, in two groups: modules for discrete movements (i.e. with a clear goal where the movement stops) and for rhythmic movements (i.e. which exhibit periodicity). The basic formulation of each approach is presented together with some illustrative simulation results. Key characteristics, such as the type of dynamical behavior, the learning algorithm, generalization properties and stability analysis, are then discussed for each approach. We then make a comparative analysis of the different approaches by comparing these characteristics and discussing their suitability for the AMARSi project.
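    As an illustration of the "discrete movement" class of modules, the sketch below implements a generic point-attractor system with a phase-driven forcing term (a DMP-style formulation). It is not any specific partner's model; the gains, basis functions and the zero forcing term (which would normally be learned from a demonstration) are assumptions.

```python
# Generic "discrete module" sketch: a critically damped point attractor driven
# towards a goal, modulated by a phase-dependent forcing term (DMP-style).
# Parameters and basis functions are illustrative assumptions.
import numpy as np

dt, tau = 0.001, 1.0
alpha, beta, alpha_x = 25.0, 25.0 / 4.0, 8.0   # critically damped gains
goal = 1.0
y, z, x = 0.0, 0.0, 1.0                        # position, scaled velocity, phase

centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, 10))   # basis centres in phase space
widths = np.full(10, 50.0)
weights = np.zeros(10)          # in practice learned from a demonstration (e.g. LWR)

def forcing(x):
    psi = np.exp(-widths * (x - centers) ** 2)
    return (psi @ weights) * x / (psi.sum() + 1e-12)

for _ in range(int(tau / dt)):
    z += (alpha * (beta * (goal - y) - z) + forcing(x)) * dt / tau
    y += z * dt / tau
    x += -alpha_x * x * dt / tau               # canonical (phase) system decays to 0

print(f"y after {tau:.0f} s: {y:.3f} (goal {goal})")
```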

    Nanophotonic reservoir computing with photonic crystal cavities to generate periodic patterns

    Reservoir computing (RC) is a technique in machine learning inspired by neural systems. RC has been used successfully to solve complex problems such as signal classification and signal generation. These systems are mainly implemented in software, which limits their speed and power efficiency. Several optical and optoelectronic implementations have been demonstrated, in which the system carries signals with both an amplitude and a phase; these have been shown to enrich the dynamics of the system, which benefits performance. In this paper, we introduce a novel optical architecture based on nanophotonic crystal cavities. This allows us to integrate many neurons on one chip, which, compared with other photonic solutions, most closely resembles a classical neural network. Furthermore, the components are passive, which simplifies the design and reduces the power consumption. To assess the performance of this network, we train a photonic network to generate periodic patterns, using an alternative online learning rule, first-order reduced and controlled error (FORCE). For this, we first train a classical hyperbolic tangent reservoir, and then vary some of its properties to incorporate typical aspects of a photonic reservoir, such as continuous-time versus discrete-time signals and complex-valued versus real-valued signals. The nanophotonic reservoir is then simulated, and we explore the role of relevant parameters such as the topology, the phases between the resonators, the number of nodes that are biased, and the delay between the resonators; these parameters must be chosen such that no strong self-oscillations occur. Finally, our results show that for a signal generation task a complex-valued, continuous-time nanophotonic reservoir outperforms a classical (i.e., discrete-time, real-valued) leaky hyperbolic tangent reservoir (normalized root-mean-square error of 0.030 versus 0.127).
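    The classical baseline mentioned in the abstract, a leaky hyperbolic tangent reservoir whose readout is trained online with FORCE, can be sketched as follows. The reservoir size, leak rate, feedback scaling and periodic target are assumptions chosen for illustration; the nanophotonic reservoir itself requires a dedicated circuit simulation and is not reproduced here.

```python
# Sketch of the classical baseline: a leaky-tanh reservoir with output feedback,
# readout trained online with FORCE (recursive least squares) to generate a
# periodic pattern. All sizes and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, leak = 300, 0.1
W = rng.standard_normal((N, N)) * 1.5 / np.sqrt(N)   # recurrent weights
W_fb = rng.uniform(-1.0, 1.0, N)                     # output-feedback weights
w_out = np.zeros(N)
P = np.eye(N)                                        # RLS inverse correlation matrix

steps, train_steps = 4000, 3000
t = np.arange(steps)
target = np.sin(0.02 * t) + 0.5 * np.sin(0.04 * t)   # simple periodic target

x = 0.1 * rng.standard_normal(N)
r, z, outputs = np.tanh(x), 0.0, []
for k in range(steps):
    x = (1.0 - leak) * x + leak * (W @ r + W_fb * z)  # leaky-integrator update
    r = np.tanh(x)
    z = w_out @ r
    if k < train_steps:                               # FORCE (RLS) readout update
        Pr = P @ r
        g = Pr / (1.0 + r @ Pr)
        P -= np.outer(g, Pr)
        w_out -= (z - target[k]) * g
    outputs.append(z)                                 # runs free after training

free = np.array(outputs[train_steps:])
nrmse = np.sqrt(np.mean((free - target[train_steps:]) ** 2)) / np.std(target)
print(f"free-running NRMSE: {nrmse:.3f}")
```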