A Novel Intelligent Control System Design for Water Bath Temperature Control
Abstract: In this paper, a neuro-fuzzy controller (NFC) for temperature control of a water bath system is proposed. A five-layer neural network is used to adjust the input and output membership-function parameters of a fuzzy logic controller, and a hybrid learning algorithm is used to train this network. Simulation results show that the proposed controller has good set-point tracking and disturbance rejection properties and is robust against changes in the system parameters. It also outperforms a conventional PID controller.
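The core mechanism described above, tunable membership functions combined into a control action, can be illustrated with a minimal, hypothetical sketch. The centres, widths, and consequents below are invented for illustration; the actual NFC adjusts them through its five-layer network and hybrid learning algorithm.

```python
import numpy as np

def gaussian_mf(x, center, width):
    """Gaussian membership function; center and width are the tunable parameters."""
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Hypothetical fuzzy partition of the temperature error into three sets.
centers = np.array([-5.0, 0.0, 5.0])   # "negative", "zero", "positive" error
widths = np.array([2.0, 2.0, 2.0])

error = 1.5  # example temperature error in degrees C
memberships = gaussian_mf(error, centers, widths)

# Zero-order defuzzification: weighted average of hypothetical rule consequents.
consequents = np.array([-1.0, 0.0, 1.0])
u = np.dot(memberships, consequents) / memberships.sum()
```

Training would nudge `centers` and `widths` by gradient steps so that `u` drives the bath temperature toward the set point.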
An Interval Type-2 Fuzzy System with a Species-Based Hybrid Algorithm for Nonlinear System Control Design
We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation algorithms (SEMBP) for the design of an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS). The interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and a TSK-type consequent part are adopted to implement the network structure of the AIT2FNS. In addition, the type-reduction procedure is integrated into the adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can effectively enhance approximation accuracy using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which comprises the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of the EM and back-propagation (BP) algorithms to attain faster convergence and lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm's ability to find the global optimum. Finally, two illustrative examples of nonlinear system control are presented to demonstrate the performance and effectiveness of the proposed AIT2FNS with the SEMBP algorithm.
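The EM-style steps listed in the abstract (uniform initialization, total force calculation, movement, evaluation) can be sketched on a toy quadratic objective standing in for the AIT2FNS training error. This is a generic electromagnetism-like mechanism, not the authors' SEMBP: the species-determination, local-search, and BP steps are omitted, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    """Toy objective to minimize (stands in for the network training error)."""
    return np.sum(x ** 2, axis=-1)

# Uniform initialization: scatter agents over the feasible region [-5, 5]^2.
n_agents, dim = 20, 2
agents = rng.uniform(-5.0, 5.0, size=(n_agents, dim))

for _ in range(50):
    f = fitness(agents)
    best, worst = f.min(), f.max()
    # Charges: better agents receive larger charge (EM-like mechanism).
    q = np.exp(-dim * (f - best) / max(worst - best, 1e-12))
    # Total force calculation: attraction toward better agents,
    # repulsion from worse ones, scaled by charge and distance.
    force = np.zeros_like(agents)
    for i in range(n_agents):
        for j in range(n_agents):
            if i == j:
                continue
            diff = agents[j] - agents[i]
            dist2 = np.dot(diff, diff) + 1e-12
            sign = 1.0 if f[j] < f[i] else -1.0
            force[i] += sign * diff * q[i] * q[j] / dist2
    # Movement: step each agent along its normalized total force,
    # then clip back into the feasible region.
    step = rng.uniform(0.0, 1.0, size=(n_agents, 1))
    norm = np.linalg.norm(force, axis=1, keepdims=True) + 1e-12
    agents = np.clip(agents + 0.5 * step * force / norm, -5.0, 5.0)

best_agent = agents[np.argmin(fitness(agents))]
```

In SEMBP, the movement step would be interleaved with species-wise local search and BP refinement of the membership-function parameters.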
Analysis and implementation of radial basis function neural networks for controlling non-linear dynamical systems
PhD Thesis. Modelling and control of non-linear systems are difficult problems that are now being addressed with neural networks. Neural networks are well suited to these problems because they are described by adjustable parameters that are readily adaptable on-line. Many types of neural network have been used; the most common is trained with the backpropagation algorithm, which has disadvantages such as slow convergence and construction complexity.
An alternative neural network that overcomes the limitations of the backpropagation algorithm is the Radial Basis Function network, which has been widely used for solving many complex problems. The Radial Basis Function network is considered in this thesis, along with a new adaptive algorithm developed to overcome the problem of optimum parameter selection. The new algorithm reduces the trial and error involved in selecting the minimum required number of centres and guarantees optimum values for the centres, the widths between the centres, and the network weights.
Computer simulation using the Simulink/Matlab packages demonstrated the results of modelling and control of non-linear systems. Moreover, the algorithm was used to select the optimum parameters of a real non-linear system, a brushless DC motor. Satisfactory results were achieved in the laboratory implementation, showing that the Radial Basis Function network may be used for modelling and on-line control of such real non-linear systems. Funded by the Libyan Ministry of Higher Education.
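The basic structure of such a network, Gaussian basis functions around fixed centres with linear output weights, can be sketched as follows. This is a generic RBF fit with an illustrative grid of centres and a common width; the thesis algorithm instead adapts the number of centres, the widths, and the weights on-line.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_design(x, centers, width):
    """Gaussian radial basis activations of inputs x against fixed centres."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# Hypothetical non-linear plant response to be modelled.
x = np.linspace(-3.0, 3.0, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

# Fixed grid of 10 centres and a common width (illustrative choices).
centers = np.linspace(-3.0, 3.0, 10)
Phi = rbf_design(x, centers, width=0.8)

# Output-layer weights by linear least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w

rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because the output layer is linear in the weights, fitting reduces to least squares once the centres and widths are chosen, which is what makes the centre/width selection problem the hard part the thesis addresses.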
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions. Learning is a very important aspect. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already widely used in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
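The claim that learning controllers acquire behaviour from goals rather than hand-designed rules can be illustrated with tabular Q-learning, the standard introductory algorithm, on a toy task. The corridor environment, rewards, and hyperparameters below are invented for illustration and are not drawn from the book.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy control task: a 5-state corridor, start at state 0, goal at state 4.
# Actions: 0 = left, 1 = right. Small step penalty, reward +1 at the goal.
n_states, n_actions, goal = 5, 2, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.95, 0.1

def step(s, a):
    """Deterministic corridor dynamics."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == goal else -0.01
    return s2, r, s2 == goal

for _ in range(300):
    s, done = 0, False
    for _ in range(100):  # cap episode length
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        target = r + (0.0 if done else gamma * np.max(Q[s2]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2
        if done:
            break

policy = np.argmax(Q, axis=1)  # learned greedy policy
```

No controller for the corridor was hand-designed here: only the goal (the reward signal) was specified, and the rightward policy emerges from the updates, which is the division of labour the passage above describes.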