Evolutionary Learning of Fuzzy Rules for Regression
The objective of this PhD thesis is to design Genetic Fuzzy Systems (GFSs) that learn fuzzy rule-based systems to solve regression problems in a general manner. In particular, the aim is to obtain models of low complexity while maintaining high precision, without using expert knowledge about the problem to be solved. This means that the GFSs have to work with raw data, that is, without any preprocessing that helps the learning process on a particular problem. This is of particular interest when no knowledge about the input data is available, or as a first approximation to the problem. Moreover, within this objective, the GFSs have to cope with large-scale problems, so the algorithms have to scale with the data.
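As a concrete illustration of the kind of model such a GFS would learn, here is a minimal zero-order Takagi-Sugeno sketch of fuzzy rule-based regression. The rules below are written by hand purely for illustration; in the thesis they would be learned evolutionarily from raw data.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b and feet at a and c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def tsk_predict(x, rules):
    """Zero-order Takagi-Sugeno inference: weighted average of rule outputs.
    Each rule is ((a, b, c) membership parameters, constant consequent)."""
    w = np.array([tri(x, a, b, c) for (a, b, c), _ in rules])
    y = np.array([out for _, out in rules])
    if w.sum() == 0:
        return 0.0  # no rule fires
    return float((w * y).sum() / w.sum())

# Three hand-written rules roughly approximating y = x**2 on [0, 2]
rules = [((-1.0, 0.0, 1.0), 0.0),
         (( 0.0, 1.0, 2.0), 1.0),
         (( 1.0, 2.0, 3.0), 4.0)]
print(tsk_predict(1.0, rules))  # -> 1.0
```

With only three rules the fit is coarse; learning the rule base is precisely where the evolutionary algorithm comes in.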
Automatic synthesis of fuzzy systems: An evolutionary overview with a genetic programming perspective
Studies in Evolutionary Fuzzy Systems (EFSs) began in the 1990s and have developed rapidly since then, with applications to areas such as pattern recognition, curve fitting and regression, forecasting, and control. An EFS results from the combination of a Fuzzy Inference System (FIS) with an Evolutionary Algorithm (EA). This relationship can be established for multiple purposes: fine-tuning of the FIS's parameters, selection of fuzzy rules, learning a rule base or membership functions from scratch, and so forth. Each facet of this relationship creates a strand in the literature, such as membership function fine-tuning or fuzzy rule-based learning, and the purpose here is to outline some of what has been done in each aspect. Special focus is given to Genetic Programming-based EFSs by providing a taxonomy of the main architectures available, as well as by pointing out the gaps that still prevail in the literature. The concluding remarks address further topics of current research and trends, such as interpretability analysis, multiobjective optimization, and synthesis of a FIS through evolving methods.
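The fine-tuning facet can be sketched minimally as follows, assuming a toy two-rule zero-order fuzzy model and a simple (mu + lambda) evolution strategy standing in for the EA; all parameters here are illustrative and not taken from any of the surveyed systems.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def predict(x, params):
    """Two-rule zero-order fuzzy model; params = [c1, c2, y1, y2]."""
    c1, c2, y1, y2 = params
    w1, w2 = gauss(x, c1, 0.5), gauss(x, c2, 0.5)
    return (w1 * y1 + w2 * y2) / (w1 + w2 + 1e-12)

def fitness(params, x, y):
    return -np.mean((predict(x, params) - y) ** 2)  # higher is better

# Target function to approximate: y = sin(x) on [0, pi]
x = np.linspace(0, np.pi, 50)
y = np.sin(x)

# (mu + lambda) evolution strategy: mutate, pool, keep the best 20
pop = rng.normal(0, 1, size=(20, 4))
for gen in range(200):
    children = pop + rng.normal(0, 0.1, size=pop.shape)
    both = np.vstack([pop, children])
    scores = np.array([fitness(p, x, y) for p in both])
    pop = both[np.argsort(scores)[-20:]]

best = pop[-1]  # membership centers and consequents tuned by evolution
```

The same loop applies unchanged whether the evolved genome encodes membership parameters, rule selections, or a whole rule base; only the decoding in `predict` differs between the strands of the literature.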
Context dependent fuzzy modelling and its applications
Fuzzy rule-based systems (FRBSs) use the principles of fuzzy sets and fuzzy logic to describe vague and imprecise statements and provide a facility to express the behaviour of a system in a human-understandable language. Fuzzy information, once defined by a fuzzy system, is fixed regardless of the circumstances, which makes it very difficult to capture the effect of context on the meaning of fuzzy terms. While efforts have been made to integrate contextual information into the representation of fuzzy sets, the context model is often very restrictive and/or problem specific. The work reported in this thesis is our attempt to create a practical framework for integrating contextual information into the representation of fuzzy sets so as to improve both the interpretability and the accuracy of the fuzzy system. Throughout this thesis, we examine the capability of the proposed context-dependent fuzzy sets, both standalone and in combination with other methods, in application scenarios ranging from time series forecasting to complex car racing control systems. In all of these applications, the highly competitive performance of our approach has proven its effectiveness and efficiency compared with existing techniques in the literature.
Type-2 Takagi-Sugeno-Kang Fuzzy Logic System and Uncertainty in Machining
ABSTRACT: Several methods can help us to analyse the behavior of flows that govern the operation
of fluid flow systems encountered in industry (aerospace, marine and terrestrial transportation, power generation, etc.). For transient or turbulent flows, experimental methods are used in conjunction with numerical simulations (direct simulation or model-based) to extract as much information as possible. In both cases, these methods generate massive amounts of data which must then be processed and analyzed. This research project aims to improve the post-processing algorithms to facilitate the study of numerically simulated flows and those obtained using measurement techniques (e.g. particle image velocimetry, PIV).
The absence, to date, of an objective definition of a vortex has led to the use of several Eulerian methods (vorticity, the Q and Lambda-2 criteria, etc.), often unsuitable for extracting the coherent structures of the flow. The Lyapunov exponent, computed over a finite time (the so-called FTLE), is an effective Lagrangian alternative to these standard methods. However, the computational methodology currently used to obtain the FTLE requires the numerical evaluation of a large number of fluid particle trajectories on a Cartesian grid superimposed on the simulated or measured velocity fields. The number of nodes required to visualize the FTLE field of an unsteady 3D flow can easily reach several million, which requires significant computing resources for an adequate analysis.
In this project, we aim to improve the computational efficiency of the FTLE field
by providing an alternative to the conventional calculation of the components of the
Cauchy-Green deformation tensor. A set of ordinary differential equations (ODEs) is
used to calculate the particle trajectories and simultaneously the first and the second
derivatives of the displacement field, resulting in a highly improved accuracy of nodal
tensor components. The first derivatives are used to calculate the Lyapunov exponent
and the second derivatives to estimate the interpolation error. Hessian matrices of the
displacement field (two matrices in 2D and three matrices in 3D) allow us to build a
multi-scale optimal metric and generate an unstructured anisotropic mesh to efficiently
distribute nodes and minimize the interpolation error. The flexibility of anisotropic meshes makes it possible to add and align nodes near the structures of the flow and to remove those in areas of low interest. The mesh adaptation is based on the intersection of the Hessian matrices of the displacement field, not on the FTLE field itself.
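For reference, the classical FTLE computation that the project improves upon can be sketched as follows. The velocity field here is an illustrative steady saddle flow u = (x, -y) standing in for simulated or PIV data, and the flow-map Jacobian is approximated by finite differences rather than by the proposed ODE-based derivatives.

```python
import numpy as np

def velocity(t, p):
    """Illustrative steady saddle flow u = (x, -y); a stand-in for real data."""
    x, y = p
    return np.array([x, -y])

def flow_map(p0, t0=0.0, T=1.0, steps=50):
    """Advect a particle with RK4 and return its final position."""
    p, dt, t = np.array(p0, float), T / steps, t0
    for _ in range(steps):
        k1 = velocity(t, p)
        k2 = velocity(t + dt / 2, p + dt / 2 * k1)
        k3 = velocity(t + dt / 2, p + dt / 2 * k2)
        k4 = velocity(t + dt, p + dt * k3)
        p = p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return p

def ftle(p0, T=1.0, h=1e-4):
    """FTLE from the largest eigenvalue of the Cauchy-Green tensor, with the
    flow-map gradient obtained by central finite differences (the classical
    approach the text contrasts with ODE-based derivatives)."""
    x, y = p0
    dFdx = (flow_map((x + h, y), T=T) - flow_map((x - h, y), T=T)) / (2 * h)
    dFdy = (flow_map((x, y + h), T=T) - flow_map((x, y - h), T=T)) / (2 * h)
    F = np.column_stack([dFdx, dFdy])      # flow-map Jacobian
    C = F.T @ F                            # Cauchy-Green deformation tensor
    lam_max = np.linalg.eigvalsh(C)[-1]
    return np.log(lam_max) / (2 * abs(T))  # = ln(sqrt(lam_max)) / |T|

print(ftle((0.3, 0.2)))  # analytically 1.0 everywhere for this flow
```

Evaluating this at every node of a fine Cartesian grid is exactly the cost the abstract describes; the proposed method instead integrates the displacement derivatives alongside the trajectories and adapts the mesh to spend nodes only where needed.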
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly putting forward and performing actions, and learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning; the remaining 11 chapters show that it is already in wide use in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers can deal with technical complexities, the task of human operators remains to specify goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
Information fusion schemes for real time risk assessment in adaptive control systems
The Intelligent Flight Control System (IFCS) deploys a neural network for in-flight aircraft failure accommodation. Verification and validation (V&V) of adaptive systems is a challenging research problem. Our approach to V&V relies on real-time monitoring of neural network learning: monitors detect learning anomalies and react to different failure conditions. We investigated data fusion techniques suitable for the analysis of neural network monitors. Monitor outputs are fused into a measure of confidence, indicating the belief in the correctness of the failure accommodation mechanism provided by the neural network. We investigated two data fusion techniques, one based on Dempster-Shafer theory and the other on fuzzy logic. Our techniques were applied to nine flight simulation datasets, including some with failures. The monitor fusion algorithms provide a unique, meaningful and novel technique for V&V of adaptive flight control systems. Being theoretically sound, they can be applied to a broad range of other data fusion applications.
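A minimal sketch of the Dempster-Shafer side of such a fusion is shown below, with hypothetical monitor mass functions over a two-element frame {ok, fail}; the actual monitor outputs and frame used in the study are not specified here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal elements
    are frozensets; mass on empty intersections (conflict) is normalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

OK, FAIL = frozenset({"ok"}), frozenset({"fail"})
BOTH = OK | FAIL  # the full frame: the monitor is undecided

# Hypothetical monitor outputs: belief that failure accommodation is correct
monitor_a = {OK: 0.6, FAIL: 0.1, BOTH: 0.3}
monitor_b = {OK: 0.7, FAIL: 0.2, BOTH: 0.1}
fused = dempster_combine(monitor_a, monitor_b)
print(fused[OK])  # combined belief in OK, about 0.852
```

Note how the mass assigned to the full frame lets an undecided monitor abstain rather than vote, which is one reason Dempster-Shafer theory suits monitor fusion.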
A fuzzy probabilistic inference methodology for constrained 3D human motion classification
Enormous uncertainties in unconstrained human motion pose a fundamental challenge that many recognition algorithms face in practice: efficient and correct motion recognition is a demanding task, especially when human kinematic motions are subject to variations of execution in the spatial and temporal domains, heavily overlap with each other, and are occluded. In the absence of a good solution to these problems, many existing methods tend to be either effective but computationally intensive, or efficient but vulnerable to misclassification. This thesis presents a novel inference engine for recognising occluded 3D human motion assisted by the recognition context. First, uncertainties are wrapped into a fuzzy membership function via a novel method called Fuzzy Quantile Generation, which employs metrics derived from the probabilistic quantile function. Then, time-dependent and context-aware rules are produced via genetic programming to smooth the qualitative outputs represented by fuzzy membership functions. Finally, occlusion in motion recognition is handled by introducing new procedures for feature selection and feature reconstruction. Experimental results demonstrate the effectiveness of the proposed framework on motion capture data from real boxers in terms of fuzzy membership generation, context-aware rule generation, and motion occlusion. Future work might involve the extension of Fuzzy Quantile Generation to automate the choice of a probability distribution, the enhancement of temporal pattern recognition with probabilistic paradigms, the optimisation of the occlusion module, and the adaptation of the present framework to different application domains.
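One simple way to derive a fuzzy membership function from a distribution's quantile structure can be sketched as follows; this is an illustrative probability-to-possibility style transform under a normal assumption, not the thesis's exact Fuzzy Quantile Generation method.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def membership(x, mu, sigma):
    """Map a fitted distribution to a fuzzy membership degree: 1 at the
    median and falling towards the tails (illustrative transform only)."""
    F = normal_cdf(x, mu, sigma)
    return 2.0 * min(F, 1.0 - F)

print(membership(0.0, 0.0, 1.0))  # peak at the median -> 1.0
```

The appeal of a quantile-based construction is that the membership degrees inherit the shape of the data's distribution instead of being chosen by hand.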
Advances in Reinforcement Learning
Reinforcement Learning (RL) is a very dynamic area in terms of both theory and application. This book brings together many different aspects of current research in the several fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a very broad variety of topics in RL and their application in autonomous systems. A set of chapters provides a general overview of RL, while the other chapters focus mostly on applications of RL paradigms: game theory, multi-agent theory, robotics, networking technologies, vehicular navigation, medicine, and industrial logistics.
- …