Distilling Deep RL Models Into Interpretable Neuro-Fuzzy Systems
Deep Reinforcement Learning uses a deep neural network to encode a policy,
which achieves very good performance in a wide range of applications but is
widely regarded as a black box model. A more interpretable alternative to deep
networks is given by neuro-fuzzy controllers. Unfortunately, neuro-fuzzy
controllers often need a large number of rules to solve relatively simple
tasks, making them difficult to interpret. In this work, we present an
algorithm to distill the policy from a deep Q-network into a compact
neuro-fuzzy controller. This allows us to train compact neuro-fuzzy controllers
through distillation to solve tasks that they are unable to solve directly,
combining the flexibility of deep reinforcement learning and the
interpretability of compact rule bases. We demonstrate the algorithm on three
well-known environments from OpenAI Gym, where we nearly match the performance
of a DQN agent using only 2 to 6 fuzzy rules.
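The distillation step described above can be sketched as a supervised regression of the teacher's Q-values onto the consequents of a small Takagi-Sugeno fuzzy system. The one-dimensional `teacher_q` below is a hypothetical stand-in for a trained deep Q-network, and the fixed Gaussian premises are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained DQN on a 1-D state with 2 actions;
# the paper's teacher is a real deep Q-network.
def teacher_q(s):
    return np.stack([-s, s], axis=-1)   # greedy action: 1 if s > 0, else 0

# Premises of a 3-rule zero-order Takagi-Sugeno system (fixed Gaussians).
centers = np.array([-1.0, 0.0, 1.0])
sigma = 0.5

def firing(s):
    w = np.exp(-((s[:, None] - centers) ** 2) / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)   # normalized rule strengths

# Distillation: regress the teacher's Q-values onto the rule consequents.
states = rng.uniform(-2.0, 2.0, size=500)
conseq, *_ = np.linalg.lstsq(firing(states), teacher_q(states), rcond=None)

def fuzzy_policy(s):
    return (firing(s) @ conseq).argmax(axis=1)  # distilled 3-rule policy

# Agreement with the teacher's greedy policy on held-out states.
test_states = rng.uniform(-2.0, 2.0, size=200)
agree = float(np.mean(fuzzy_policy(test_states)
                      == teacher_q(test_states).argmax(axis=1)))
```

Because the distilled controller only needs to reproduce the teacher's action ranking, a handful of rules can suffice even where direct fuzzy training fails.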
PAC: A Novel Self-Adaptive Neuro-Fuzzy Controller for Micro Aerial Vehicles
There exists an increasing demand for a flexible and computationally
efficient controller for micro aerial vehicles (MAVs) due to a high degree of
environmental perturbations. In this work, an evolving neuro-fuzzy controller,
namely the Parsimonious Controller (PAC), is proposed. It features fewer network
parameters than conventional approaches due to the absence of rule premise
parameters. PAC is built upon a recently developed evolving neuro-fuzzy system
known as parsimonious learning machine (PALM) and adopts new rule growing and
pruning modules derived from the approximation of bias and variance. These rule
adaptation methods have no reliance on user-defined thresholds, thereby
increasing the PAC's autonomy for real-time deployment. PAC adapts the
consequent parameters using sliding mode control (SMC) theory in a
single-pass fashion. The boundedness and convergence of the closed-loop control
system's tracking error and the controller's consequent parameters are
confirmed by utilizing the LaSalle-Yoshizawa theorem. Lastly, the controller's
efficacy is evaluated by observing trajectory tracking performance on
a bio-inspired flapping-wing micro aerial vehicle (BI-FWMAV) and a rotary wing
micro aerial vehicle called a hexacopter. Furthermore, it is compared against three
distinctive controllers. Our PAC outperforms the linear PID controller and
feed-forward neural network (FFNN) based nonlinear adaptive controller.
Compared to its predecessor, G-controller, the tracking accuracy is comparable,
but the PAC incurs significantly fewer parameters to attain similar or better
performance than the G-controller.
Comment: This paper has been accepted for publication in Information Science Journal 201
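The threshold-free rule growing described above can be loosely sketched as follows: a rule is added when a running bias estimate leaves its own self-calibrated mean-plus-deviation band, so no user-defined threshold appears. The statistics below are illustrative assumptions, not PALM's actual bias/variance approximation, and the pruning side is omitted.

```python
import math

class RuleManager:
    """Sketch of threshold-free rule growth in the spirit of PAC/PALM."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0     # running mean of the bias estimate
        self.m2 = 0.0       # running sum of squared deviations (Welford)
        self.rules = 1

    def update(self, bias_estimate):
        self.n += 1
        delta = bias_estimate - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (bias_estimate - self.mean)
        std = math.sqrt(self.m2 / self.n) if self.n > 1 else 0.0
        # Grow a rule when the current bias leaves the self-calibrated band;
        # the band adapts online, so no user threshold is needed.
        if self.n > 1 and bias_estimate > self.mean + std:
            self.rules += 1
        return self.rules

# A sudden bias spike after a stable stretch triggers one rule addition.
mgr = RuleManager()
for b in [0.1, 0.1, 0.1, 0.1, 5.0]:
    rules = mgr.update(b)
```

Because the decision statistic is recomputed from the data stream itself, the same code runs unchanged across plants with very different error scales.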
Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming
Autonomously training interpretable control strategies, called policies,
using pre-existing plant trajectory data is of great interest in industrial
applications. Fuzzy controllers have been used in industry for decades as
interpretable and efficient system controllers. In this study, we introduce a
fuzzy genetic programming (GP) approach called fuzzy GP reinforcement learning
(FGPRL) that can select the relevant state features, determine the size of the
required fuzzy rule set, and automatically adjust all the controller parameters
simultaneously. Each GP individual's fitness is computed using model-based
batch reinforcement learning (RL), which first trains a model using available
system samples and subsequently performs Monte Carlo rollouts to predict each
policy candidate's performance. We compare FGPRL to an extended version of a
related method called fuzzy particle swarm reinforcement learning (FPSRL),
which uses swarm intelligence to tune the fuzzy policy parameters. Experiments
using an industrial benchmark show that FGPRL is able to autonomously learn
interpretable fuzzy policies with high control performance.
Comment: Accepted at Genetic and Evolutionary Computation Conference 2018 (GECCO '18).
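The fitness computation described above can be sketched in miniature: fit a dynamics model from logged transitions, then score each candidate policy by Monte Carlo rollouts through that model. The linear toy system, quadratic cost, and lambda policies below are illustrative assumptions, not the industrial benchmark or GP individuals of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Logged system samples from an unknown plant: s' = 0.9*s + 0.5*a.
s = rng.uniform(-1.0, 1.0, 200)
a = rng.uniform(-1.0, 1.0, 200)
s_next = 0.9 * s + 0.5 * a

# Step 1: fit a (here linear) dynamics model from the batch.
X = np.stack([s, a], axis=1)
theta, *_ = np.linalg.lstsq(X, s_next, rcond=None)

# Step 2: fitness = mean return of Monte Carlo rollouts through the model.
def fitness(policy, n_rollouts=20, horizon=30):
    returns = []
    for _ in range(n_rollouts):
        state = rng.uniform(-1.0, 1.0)
        total = 0.0
        for _ in range(horizon):
            act = policy(state)
            state = theta[0] * state + theta[1] * act  # model rollout
            total += -state ** 2                       # reward: stay near 0
        returns.append(total)
    return float(np.mean(returns))

# A stabilizing candidate should score higher than a destabilizing one.
good = fitness(lambda x: -x)
bad = fitness(lambda x: x)
```

In FGPRL each GP individual would play the role of `policy`, so candidates are compared without any further interaction with the real plant.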
Learning and tuning fuzzy logic controllers through reinforcements
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous cart-pole balancing schemes in speed of learning and robustness to changes in the dynamic system's parameters.
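The rule-combination step described above can be sketched as follows: each rule's firing strength comes from a conjunction of its antecedent memberships, and the final real-valued action is a strength-weighted average of per-rule recommendations. The soft-min conjunction and the averaging below are simplified stand-ins chosen for illustration; they are not GARIC's actual conjunction operator or its LMOM method.

```python
import math

def soft_min(values, k=8.0):
    # Differentiable approximation of min, so rule strengths stay tunable
    # by gradient descent (a stand-in for GARIC's conjunction operator).
    return -math.log(sum(math.exp(-k * v) for v in values)) / k

def act(memberships_per_rule, rule_actions):
    # Combine the conclusions of all firing rules into one crisp action
    # (a strength-weighted average, standing in for the LMOM method).
    strengths = [soft_min(ms) for ms in memberships_per_rule]
    total = sum(strengths)
    return sum(w * a for w, a in zip(strengths, rule_actions)) / total

# Two rules: the first fires strongly, so the action leans toward -1.0.
action = act([[0.9, 0.8], [0.2, 0.1]], [-1.0, 1.0])
```

Because every step is differentiable, the whole pipeline can sit inside a feedforward network and be tuned by gradient descent from a scalar reinforcement signal, as the abstract describes.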
Theoretical Interpretations and Applications of Radial Basis Function Networks
Medical applications usually use Radial Basis Function Networks (RBFNs) simply as Artificial Neural Networks. However, RBFNs are knowledge-based networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. RBFNs' interpretations can suggest applications that are particularly interesting in medical domains.
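Two of the interpretations listed above can be seen in one minimal forward pass: with normalized Gaussian units, an RBFN's output is simultaneously a Nadaraya-Watson kernel estimator and a zero-order Takagi-Sugeno fuzzy system with one rule per hidden unit. The centers, weights, and width below are arbitrary illustrative values.

```python
import numpy as np

centers = np.array([0.0, 1.0, 2.0])   # hidden units / rule premises / kernels
weights = np.array([1.0, 3.0, 2.0])   # output weights / rule consequents
sigma = 0.6

def rbfn(x):
    phi = np.exp(-((x - centers) ** 2) / (2 * sigma ** 2))
    phi = phi / phi.sum()   # normalization gives the kernel / fuzzy reading
    return float(phi @ weights)

# Output is a convex combination of the weights, i.e. a local average.
y = rbfn(1.0)
```

The normalization step is what makes the three readings coincide: the output is always a convex combination of the consequents, dominated by the unit nearest the input.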
Using Building Blocks to Design Analog Neuro-Fuzzy Controllers
We present a parallel architecture for fuzzy controllers and a methodology for their realization as analog CMOS chips for low- and medium-precision applications. These chips can be made to learn through the adaptation of electrically controllable parameters guided by a dedicated hardware-compatible learning algorithm. Our designs emphasize simplicity at the circuit level, a prerequisite for increasing processor complexity and operation speed. Examples include a three-input, four-rule controller chip in 1.5-μm CMOS, single-poly, double-metal technology.
Intelligent control based on fuzzy logic and neural net theory
In the conception and design of intelligent systems, one promising direction involves the use of fuzzy logic and neural network theory to enhance such systems' capability to learn from experience and adapt to changes in an environment of uncertainty and imprecision. Here, an intelligent control scheme is explored by integrating these multidisciplinary techniques. A self-learning system is proposed as an intelligent controller for dynamical processes, employing a control policy which evolves and improves automatically. One key component of the intelligent system is a fuzzy logic-based system which emulates human decision-making behavior. It is shown that the system can solve a fairly difficult control learning problem. Simulation results demonstrate that improved learning performance can be achieved in relation to previously described systems employing bang-bang control. The proposed system is relatively insensitive to variations in the parameters of the system environment.