A self-learning rule base for command following in dynamical systems
In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base is used to generate a feedback control that improves the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging: the controlled systems exhibit more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored, and they can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.
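The learning scheme described above can be illustrated with a minimal sketch. This is not the paper's SAM memory: the rule base below is a plain discretized table, updated with a Q-learning-style rule (a sampled form of the dynamic-programming recursion) for command following on a deliberately unstable scalar plant. The plant, gains, and discretization are all illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for the paper's rule base: a discretized table over
# the tracking error, learned by a Q-learning-style update on the unstable
# plant x' = 1.1*x + 0.3*u.
rng = np.random.default_rng(0)

N_BINS = 21
ACTIONS = np.array([-1.0, 0.0, 1.0])
q = np.zeros((N_BINS, len(ACTIONS)))      # rule base: error bin -> action values

def to_bin(err):
    # clip the tracking error into [-1, 1] and discretize it
    e = np.clip(err, -1.0, 1.0)
    return int(round((e + 1.0) / 2.0 * (N_BINS - 1)))

alpha, gamma, eps, ref = 0.2, 0.9, 0.1, 0.5   # ref is the command to follow
for episode in range(300):
    x = 0.0
    for t in range(30):
        s = to_bin(ref - x)
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
        x = 1.1 * x + 0.3 * ACTIONS[a]    # unstable plant plus learned feedback
        r = -abs(ref - x)                 # reward: negative tracking error
        q[s, a] += alpha * (r + gamma * q[to_bin(ref - x)].max() - q[s, a])

# greedy rollout with the learned rule base; the uncontrolled system (u = 0)
# would never leave x = 0 and keep a tracking error of 0.5
x = 0.0
for t in range(30):
    x = 1.1 * x + 0.3 * ACTIONS[int(np.argmax(q[to_bin(ref - x)]))]
final_err = abs(ref - x)
print(final_err)
```

Each row of `q` reads as a rule ("IF the error is in this bin THEN prefer this action"), which is what makes the stored result inspectable and editable after training.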
Adaptive Critic Design Based Neuro-Fuzzy Controller for a Static Compensator in a Multimachine Power System
This paper presents a novel nonlinear optimal controller for a static compensator (STATCOM) connected to a power system, using artificial neural networks and fuzzy logic. Action-dependent heuristic dynamic programming, a member of the adaptive critic designs family, is used for the design of the STATCOM neuro-fuzzy controller. This neuro-fuzzy controller provides optimal control based on reinforcement learning and approximate dynamic programming. Using a proportional-integrator approach, the proposed controller is capable of dealing with actual rather than deviation signals. The STATCOM is connected to a multimachine power system. Two multimachine systems are considered in this study: a 10-bus system and a 45-bus network (a section of the Brazilian power system). Simulation results are provided to show that the proposed controller outperforms a conventional PI controller for large-scale faults as well as small disturbances.
Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system
A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to neural networks. This paper presents results from an investigation into using discrete and fuzzy dynamical system representations within the XCSF learning classifier system. In particular, asynchronous random Boolean networks are used to represent the traditional condition-action production system rules in the discrete case, and asynchronous fuzzy logic networks in the continuous-valued case. It is shown that self-adaptive, open-ended evolution can be used to design an ensemble of such dynamical systems within XCSF to solve a number of well-known test problems.
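The discrete representation named above can be sketched in a few lines. This is a generic asynchronous random Boolean network (RBN), not the paper's XCSF-embedded version; the node count, connectivity K = 2, and random-order update discipline are illustrative choices.

```python
import random

# Asynchronous random Boolean network: N nodes, each reading K wired inputs
# through a random Boolean lookup table. Asynchrony means nodes fire one at
# a time and see the partially updated state, unlike the classic
# synchronous RBN where all nodes update from the previous state at once.
random.seed(42)

N, K = 8, 2
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]       # wiring
tables = [[random.randrange(2) for _ in range(2 ** K)] for _ in range(N)]  # node rules
state = [random.randrange(2) for _ in range(N)]

def step(state):
    # update nodes one at a time, in a fresh random order each step
    for node in random.sample(range(N), N):
        idx = sum(state[src] << b for b, src in enumerate(inputs[node]))
        state[node] = tables[node][idx]
    return state

for _ in range(20):
    step(state)
print(state)
```

In the classifier-system setting, evolution would act on the wiring and lookup tables so that the network's attractor behavior plays the role of a condition-action rule.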
A Review on Energy Consumption Optimization Techniques in IoT Based Smart Building Environments
In recent years, due to the unnecessary wastage of electrical energy in
residential buildings, the requirement of energy optimization and user comfort
has gained vital importance. In the literature, various techniques have been
proposed addressing the energy optimization problem. The goal of each technique
was to maintain a balance between user comfort and energy requirements such
that the user can achieve the desired comfort level with the minimum amount of
energy consumption. Researchers have addressed the issue with the help of
different optimization algorithms and variations in the parameters to reduce
energy consumption. To the best of our knowledge, this problem is not solved
yet due to its challenging nature. The gap in the literature is due to the
advancements in the technology and drawbacks of the optimization algorithms and
the introduction of different new optimization algorithms. Further, many newly
proposed optimization algorithms which have produced better accuracy on the
benchmark instances but have not been applied yet for the optimization of
energy consumption in smart homes. In this paper, we have carried out a
detailed literature review of the techniques used for the optimization of
energy consumption and scheduling in smart homes. The detailed discussion has
been carried out on different factors contributing towards thermal comfort,
visual comfort, and air quality comfort. We have also reviewed the fog and edge
computing techniques used in smart homes
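The comfort/energy balance the review centers on is usually posed as a single weighted objective. A minimal sketch of that formulation: the quadratic comfort penalty around 22 °C and the linear heating-effort model below are illustrative assumptions, not taken from any surveyed paper.

```python
# Weighted comfort/energy objective over candidate temperature set points.
# weight -> 1 favors comfort; weight -> 0 favors low energy use.

def objective(setpoint_c, weight=0.5, preferred_c=22.0, outdoor_c=5.0):
    comfort_penalty = (setpoint_c - preferred_c) ** 2   # discomfort grows quadratically
    energy_cost = max(0.0, setpoint_c - outdoor_c)      # heating effort vs. outdoors
    return weight * comfort_penalty + (1 - weight) * energy_cost

# sweep candidate set points and pick the best trade-off
candidates = [18 + 0.5 * i for i in range(13)]          # 18.0 .. 24.0 degrees C
best = min(candidates, key=objective)
print(best)
```

The surveyed algorithms differ mainly in how they search this kind of objective (and far richer versions of it) over schedules and multiple comfort dimensions, rather than in the weighted-sum idea itself.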
Learning Agent for a Heat-Pump Thermostat With a Set-Back Strategy Using Model-Free Reinforcement Learning
The conventional control paradigm for a heat pump with a less efficient
auxiliary heating element is to keep its temperature set point constant during
the day. This constant temperature set point ensures that the heat pump
operates in its more efficient heat-pump mode and minimizes the risk of
activating the less efficient auxiliary heating element. As an alternative to a
constant set-point strategy, this paper proposes a learning agent for a
thermostat with a set-back strategy. This set-back strategy relaxes the
set-point temperature during convenient moments, e.g. when the occupants are
not at home. Finding an optimal set-back strategy requires solving a sequential
decision-making process under uncertainty, which presents two challenges. A
first challenge is that for most residential buildings a description of the
thermal characteristics of the building is unavailable and challenging to
obtain. A second challenge is that the relevant information on the state, i.e.
the building envelope, cannot be measured by the learning agent. In order to
overcome these two challenges, our paper proposes an auto-encoder coupled with
a batch reinforcement learning technique. The proposed approach is validated
for two building types with different thermal characteristics for heating in
the winter and cooling in the summer. The simulation results indicate that the
proposed learning agent can reduce the energy consumption by 4-9% during 100
winter days and by 9-11% during 80 summer days compared to the conventional
constant set-point strategyComment: Submitted to Energies - MDPI.co
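The batch reinforcement learning half of this approach can be sketched with fitted Q-iteration over a fixed set of transitions. In the sketch below the paper's auto-encoded hidden state is replaced by the directly observable temperature, the building is a toy first-order thermal model, and every number is an illustrative assumption.

```python
import numpy as np

# Fitted Q-iteration on a fixed batch of (state, action, cost, next-state)
# tuples: no further interaction with the system is needed once the batch
# is collected, which is the defining property of batch RL.
rng = np.random.default_rng(3)

COMFORT = 20.0
ACTIONS = (0.0, 1.0)                           # heater off / heater on
def sim(temp, heat):                           # leaky building + heater, one time step
    return temp + 0.1 * (0.3 * (10.0 - temp) + 5.0 * heat)

# collect a random-exploration batch, as batch RL assumes is given
batch = []
for _ in range(200):
    temp = float(rng.uniform(8.0, 32.0))       # episodes from random initial temps
    for _ in range(20):
        a = float(ACTIONS[rng.integers(2)])
        nxt = sim(temp, a)
        cost = (nxt - COMFORT) ** 2 + 0.5 * a  # discomfort plus energy use
        batch.append((temp, a, cost, nxt))
        temp = nxt

# fitted Q-iteration with a coarse temperature grid as the regressor
GRID = np.linspace(5.0, 35.0, 31)
bidx = lambda t: int(np.abs(GRID - t).argmin())
q = np.zeros((len(GRID), len(ACTIONS)))
for _ in range(50):
    tot = np.zeros_like(q)
    cnt = np.zeros_like(q)
    for s, a, c, s2 in batch:
        tot[bidx(s), int(a)] += c + 0.9 * q[bidx(s2)].min()
        cnt[bidx(s), int(a)] += 1
    q = np.where(cnt > 0, tot / np.maximum(cnt, 1.0), q)

# greedy policy: heat when below the comfort temperature, idle when above it
act_cold = int(q[bidx(14.0)].argmin())
act_warm = int(q[bidx(24.0)].argmin())
print(act_cold, act_warm)
```

The paper's auto-encoder addresses the part this sketch skips: compressing a history of observations into a usable state when the building envelope itself cannot be measured.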
Generating Interpretable Fuzzy Controllers using Particle Swarm Optimization and Genetic Programming
Autonomously training interpretable control strategies, called policies,
using pre-existing plant trajectory data is of great interest in industrial
applications. Fuzzy controllers have been used in industry for decades as
interpretable and efficient system controllers. In this study, we introduce a
fuzzy genetic programming (GP) approach called fuzzy GP reinforcement learning
(FGPRL) that can select the relevant state features, determine the size of the
required fuzzy rule set, and automatically adjust all the controller parameters
simultaneously. Each GP individual's fitness is computed using model-based
batch reinforcement learning (RL), which first trains a model using available
system samples and subsequently performs Monte Carlo rollouts to predict each
policy candidate's performance. We compare FGPRL to an extended version of a
related method called fuzzy particle swarm reinforcement learning (FPSRL),
which uses swarm intelligence to tune the fuzzy policy parameters. Experiments
using an industrial benchmark show that FGPRL is able to autonomously learn
interpretable fuzzy policies with high control performance.Comment: Accepted at Genetic and Evolutionary Computation Conference 2018
(GECCO '18
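The fitness evaluation described in this abstract can be sketched compactly: a candidate fuzzy policy is scored by Monte Carlo rollouts on a system model (assumed known here rather than learned from samples, as the paper does). The two-rule Takagi-Sugeno policy and the scalar plant are illustrative stand-ins for the paper's industrial benchmark.

```python
import numpy as np

# Score a fuzzy policy by averaging its return over random-start rollouts
# of a model; this scalar is what GP (or PSO) would maximize.
rng = np.random.default_rng(7)

def fuzzy_policy(x, params):
    # two rules: IF x is NEG THEN u = a0;  IF x is POS THEN u = a1
    c0, c1, a0, a1 = params
    w0 = np.exp(-(x - c0) ** 2)              # Gaussian membership of "NEG"
    w1 = np.exp(-(x - c1) ** 2)              # Gaussian membership of "POS"
    return (w0 * a0 + w1 * a1) / (w0 + w1 + 1e-9)

def fitness(params, rollouts=20, horizon=30):
    # average return over Monte Carlo rollouts; higher (less negative) is better
    total = 0.0
    for _ in range(rollouts):
        x = rng.uniform(-2.0, 2.0)           # random initial state
        for _ in range(horizon):
            x = 0.95 * x + 0.2 * fuzzy_policy(x, params)   # assumed model
            total -= x * x                                  # regulation cost
    return total / rollouts

good = (-1.0, 1.0, 1.0, -1.0)                # rules push the state toward zero
bad = (-1.0, 1.0, -1.0, 1.0)                 # rules push the state away from zero
better = fitness(good) > fitness(bad)
print(better)
```

In FGPRL the GP search additionally varies which features and how many rules each candidate uses, so interpretability (few rules, few features) and the rollout score are traded off by the evolutionary process itself.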