Nonlinear modelling and optimal control via Takagi-Sugeno fuzzy techniques: A quadrotor stabilization
Using the principles of Takagi-Sugeno (T-S) fuzzy modelling allows the integration of flexible fuzzy approaches and the rigorous mathematical tools of linear system theory into one common framework. The rule-based T-S fuzzy model splits a nonlinear system into several linear subsystems. Parallel Distributed Compensation (PDC) controller synthesis uses these T-S fuzzy model rules. The resulting fuzzy controller is nonlinear, based on a fuzzy aggregation of the state controllers of the individual linear subsystems. The system is optimized by the linear quadratic control (LQC) method, and its stability is analysed using the Lyapunov method. Stability conditions are guaranteed by a system of linear matrix inequalities (LMIs) formulated and solved for the closed-loop system with the proposed PDC controller. An additional genetic algorithm (GA) optimization procedure is introduced, and a new type of fitness function is proposed to improve the closed-loop system performance.
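The PDC idea described in this abstract can be sketched numerically: each fuzzy rule contributes a local LQR gain, and the applied control is the membership-weighted blend of the local state feedbacks. The two local subsystems and the membership function below are hypothetical illustrations, not the quadrotor model of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Two hypothetical local linear subsystems (one per T-S rule);
# a real quadrotor model would supply its own A_i, B_i matrices.
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]),
     np.array([[0.0, 1.0], [-2.0, -0.3]])]
B = [np.array([[0.0], [1.0]])] * 2
Q, R = np.eye(2), np.array([[1.0]])

# One LQR state-feedback gain per local subsystem
K = []
for Ai, Bi in zip(A, B):
    P = solve_continuous_are(Ai, Bi, Q, R)
    K.append(np.linalg.inv(R) @ Bi.T @ P)

def membership(z):
    """Illustrative normalized membership grades h_i(z) summing to 1."""
    h1 = 1.0 / (1.0 + np.exp(-z))
    return np.array([h1, 1.0 - h1])

def pdc_control(x, z):
    """PDC law: fuzzy blend of the local state-feedback controllers."""
    h = membership(z)
    return -sum(hi * (Ki @ x) for hi, Ki in zip(h, K))

u = pdc_control(np.array([0.1, -0.2]), z=0.0)
```

The LMI stability certificate of the paper would then be checked for this blended closed loop, e.g. with a semidefinite-programming solver.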
The application of neural networks in active suspension
This thesis considers the application of neural networks to automotive suspension systems, in particular their ability to learn non-linear feedback control relationships. The speed of processing, once trained, means that neural networks open up new opportunities and allow increased complexity in the control strategies employed.
The suitability of neural networks for this task is demonstrated here using multi-layer perceptron (MLP) feed-forward neural networks applied to a quarter-vehicle simulation model. Initially, neural networks are trained from a training data set created using a non-linear optimal control strategy, the complexity of which prohibits its direct use. They are shown to be successful in learning the relationship between the current system states and the optimal control. [Continues.]
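A minimal sketch of the approach in this abstract: a small MLP is fitted, by plain gradient descent, to input-output pairs produced by a controller too expensive to run online. The four-state "quarter-car" data and the stand-in linear "optimal" gains below are hypothetical; the thesis uses a genuine non-linear optimal control strategy.

```python
import numpy as np

# Hypothetical stand-in for data generated by an expensive optimal
# controller: here the "optimal" law is simply linear, u = K_opt @ x.
rng = np.random.default_rng(0)
K_opt = np.array([2.0, 0.5, -1.0, 0.2])
X = rng.normal(size=(500, 4))          # quarter-vehicle state samples
y = X @ K_opt                          # target control actions

# One hidden tanh layer trained by batch gradient descent on MSE
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # forward pass
    u = (H @ W2 + b2).ravel()
    g = (u - y)[:, None] / len(X)      # gradient of the squared error
    gW2, gb2 = H.T @ g, g.sum(0)
    gH = (g @ W2.T) * (1 - H**2)       # backpropagate through tanh
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2).ravel() - y) ** 2)
```

Once trained, evaluating the network is a handful of matrix products, which is the speed advantage the thesis exploits.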
Identification and Optimal Linear Tracking Control of ODU Autonomous Surface Vehicle
Autonomous surface vehicles (ASVs) are being used for diverse applications of civilian and military importance, such as military reconnaissance, sea patrol, bathymetry, environmental monitoring, and oceanographic research. Currently, these unmanned tasks can be accomplished accurately by ASVs due to recent advancements in computing, sensing, and actuating systems. For this reason, researchers around the world have taken an interest in ASVs over the last decade. Due to the ever-changing surface of the water and stochastic disturbances such as wind and tidal currents that greatly affect the path-following ability of ASVs, identifying an accurate model of the inherently nonlinear and stochastic ASV system, and then designing a viable controller for its planar motion using that model, is a challenging task. For planar motion control of ASVs, existing work is mainly based on theoretical modeling in which the nonlinear hydrodynamic terms are determined, while some work has proposed nonlinear control techniques and presented only simulation results. Also, the majority of the work concerns mono- or twin-hull ASVs with a single rudder. The ODU-ASV used in the present research is a twin-hull design with two DC trolling motors for path-following motion.
A novel approach of time-domain open-loop observer/Kalman filter identification (OKID) and state-feedback optimal linear tracking control of the ODU-ASV is presented, in which a linear state-space model of the ODU-ASV is obtained from measured input and output data. The accuracy of the identified model is confirmed by validation results of model output-data reconstruction and benchmark residual analysis. Then, the OKID-identified model of the ODU-ASV is utilized to design the proposed controller for its planar motion such that a predefined cost function is minimized using state and control weighting matrices, which are determined by a multi-objective genetic algorithm optimization technique. Validation results for the proposed controller using step inputs as well as sinusoidal and arc-like trajectories are presented to confirm the controller's performance. Moreover, real-time water trials were performed, and their results confirm the validity of the proposed controller for path-following motion of the ODU-ASV.
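The tracking-control half of this approach can be sketched as a discrete-time LQ regulator acting on the tracking error: the identified OKID model would supply (A, B), and the genetic algorithm would tune the weights Q and R. All numbers below are hypothetical placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical identified discrete state-space model (stand-in for OKID output)
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.diag([10.0, 1.0])   # state weights (GA-tunable in the thesis)
R = np.array([[0.5]])      # control weight

# Discrete LQ gain from the algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)

# Drive the state toward a constant reference via error feedback
x, x_ref = np.array([0.0, 0.0]), np.array([1.0, 0.0])
for _ in range(200):
    u = -K @ (x - x_ref)   # state-feedback on the tracking error
    x = A @ x + B @ u
```

Because the reference here is an equilibrium of the model, the error dynamics are governed by the stable matrix A - BK and the state converges to the reference.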
Controlling Chaotic Maps using Next-Generation Reservoir Computing
In this work, we combine nonlinear system control techniques with next-generation reservoir computing, a best-in-class machine learning approach for predicting the behavior of dynamical systems. We demonstrate the performance of the controller in a series of control tasks for the chaotic Hénon map, including controlling the system between unstable fixed points, stabilizing the system to higher-order periodic orbits, and driving it to an arbitrary desired state. We show that our controller succeeds in these tasks, requires only 10 data points for training, can control the system to a desired trajectory in a single iteration, and is robust to noise and modeling error.
Intelligent failure-tolerant control
An overview of failure-tolerant control is presented, beginning with robust control, progressing through parallel and analytical redundancy, and ending with rule-based systems and artificial neural networks. By design or implementation, failure-tolerant control systems are 'intelligent' systems. All failure-tolerant systems require some degree of robustness to protect against catastrophic failure; failure tolerance often can be improved by adaptivity in decision-making and control, as well as by redundancy in measurement and actuation. Reliability, maintainability, and survivability can be enhanced by failure tolerance, although each objective poses different goals for control system design. Artificial intelligence concepts are helpful for integrating and codifying failure-tolerant control systems, not as alternatives but as adjuncts to conventional design methods.
Computation Approaches for Continuous Reinforcement Learning Problems
Optimisation theory is at the heart of any control process, where we seek to control the behaviour of a system through a set of actions. Linear control problems have been extensively studied, and optimal control laws have been identified. But the world around us is highly non-linear and unpredictable. For these dynamic systems, which do not possess the nice mathematical properties of their linear counterparts, classic control theory breaks down and other methods have to be employed. But nature thrives by optimising non-linear and highly complicated systems. Evolutionary Computing (EC) methods exploit nature's way by imitating the evolution process, avoiding the need to solve the control problem analytically.
Reinforcement Learning (RL), on the other hand, regards the optimal control problem as a sequential one. In every discrete time step an action is applied. The transition of the system to a new state is accompanied by a single numerical value, the "reward", which designates the quality of the control action. Even though the amount of feedback information is limited to a single real number, the introduction of the Temporal Difference method made it possible to obtain accurate predictions of the value functions. This paved the way to optimising complex structures, such as neural networks, which are used to approximate the value functions.
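The temporal-difference idea mentioned above fits in a few lines: each observed transition updates the value estimate toward the one-step bootstrapped target r + γV(s'). The 5-state chain environment below is a hypothetical toy, not a problem from the thesis.

```python
import numpy as np

# TD(0) sketch: learn state values V(s) online from single-step rewards
# on a toy 5-state chain where the agent always moves right.
n, gamma, alpha = 5, 0.9, 0.1
V = np.zeros(n)
rng = np.random.default_rng(1)

for _ in range(5000):
    s = int(rng.integers(n))
    s_next = min(s + 1, n - 1)            # deterministic "move right"
    r = 1.0 if s_next == n - 1 else 0.0   # reward on reaching the goal
    # Temporal-difference update: bootstrap from the next state's value
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
```

Replacing the table V with a parameterised function approximator, e.g. a neural network, is exactly the step the paragraph above describes.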
In this thesis we investigate the solution of continuous Reinforcement Learning control problems by EC methodologies. The reward accumulated over an episode suffices as information to formulate the required fitness measure for optimising a population of candidate solutions. In particular, we explore the limits of applicability of a specific branch of EC, Genetic Programming (GP). The evolving population in the GP case comprises individuals that translate directly into mathematical functions, which can serve as control laws.
The major contribution of this thesis is the proposed unification of these disparate Artificial Intelligence paradigms. The information provided by the systems is exploited on a step-by-step basis by the RL part of the proposed scheme and on an episodic basis by the GP part. This makes it possible to augment the function set of the GP scheme with adaptable neural networks. In the quest to achieve stable behaviour of the RL part of the system, a modification of the Actor-Critic algorithm has been implemented.
Finally, we successfully apply the GP method to multi-action control problems, extending the spectrum of problems that this method has been proven to solve. We also investigate the capability of GP in relation to problems from the food industry. These types of problems also exhibit non-linearity, and there is no definite model describing their behaviour.
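The episodic fitness formulation described in this abstract, scoring each candidate control law by the reward it accumulates over a whole episode, can be sketched with plain Python functions standing in for evolved GP expression trees. The toy 1-D plant and the candidate laws below are hypothetical.

```python
import numpy as np

def episode_reward(policy, steps=50):
    """Score a candidate control law by its accumulated episode reward
    on a toy 1-D plant x_{k+1} = x_k + 0.1 * u_k."""
    x, total = 1.0, 0.0
    for _ in range(steps):
        u = float(np.clip(policy(x), -1.0, 1.0))
        x = x + 0.1 * u
        total += -(x ** 2)          # reward: stay near the origin
    return total

candidates = {                       # stand-ins for GP individuals
    "u = -x": lambda x: -x,
    "u = -2x": lambda x: -2.0 * x,
    "u = x": lambda x: x,            # destabilizing
}
fitness = {name: episode_reward(f) for name, f in candidates.items()}
best = max(fitness, key=fitness.get)
```

A real GP run would generate, mutate, and recombine expression trees, but the selection pressure comes entirely from this episodic fitness, which is the point the thesis exploits.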
A delay-dependent dual-rate PID controller over an ethernet network
In this paper, a methodology to design controllers able to cope with different load conditions on an Ethernet network is introduced. Load conditions induce time-varying delays between measurements and control. To face these variations, an interpolated, delay-dependent gain-scheduling law is used. The lack of synchronization is solved by adopting an event-based control approach. The dual-rate control-action computation is carried out at a remote controller, whereas control actions and measurements are taken locally at the controlled-process site. Stability is proved in terms of probabilistic linear matrix inequalities. TrueTime simulations in an Ethernet case show the benefit of the proposal, which is later validated on an experimental test-bed Ethernet environment.
Cuenca Lacruz, Á. M.; Salt Llobregat, J. J.; Sala Piqueras, A.; Pizá Fernández, R. (2011). A delay-dependent dual-rate PID controller over an Ethernet network. IEEE Transactions on Industrial Informatics, 7(1), 18-29. doi:10.1109/TII.2010.2085007
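The interpolated, delay-dependent gain-scheduling idea can be sketched as a lookup: PID gains tuned offline for a few nominal network delays are linearly interpolated at run time from the currently measured delay. The delay grid and gain values below are illustrative placeholders, not the tuned values of the paper.

```python
import numpy as np

# PID gains tuned offline for three nominal network delays (illustrative)
delays = np.array([0.00, 0.05, 0.10])   # nominal round-trip delays, seconds
kp_tab = np.array([2.0, 1.5, 1.0])      # proportional gains per delay
ki_tab = np.array([1.0, 0.8, 0.5])      # integral gains per delay
kd_tab = np.array([0.2, 0.15, 0.1])     # derivative gains per delay

def scheduled_gains(tau):
    """Interpolate PID gains for the currently measured delay tau."""
    tau = float(np.clip(tau, delays[0], delays[-1]))
    return (float(np.interp(tau, delays, kp_tab)),
            float(np.interp(tau, delays, ki_tab)),
            float(np.interp(tau, delays, kd_tab)))

kp, ki, kd = scheduled_gains(0.075)      # measured delay between grid points
```

In the paper, the measured delay would come from time-stamped Ethernet packets, and the stability of the resulting switched closed loop is what the probabilistic LMI conditions certify.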
- …