Discrete and fuzzy dynamical genetic programming in the XCSF learning classifier system
A number of representation schemes have been presented for use within
learning classifier systems, ranging from binary encodings to neural networks.
This paper presents results from an investigation into using discrete and fuzzy
dynamical system representations within the XCSF learning classifier system. In
particular, asynchronous random Boolean networks are used to represent the
traditional condition-action production system rules in the discrete case, and
asynchronous fuzzy logic networks are used in the continuous-valued case. It is
shown that self-adaptive, open-ended evolution can design an ensemble of such
dynamical systems within XCSF to solve a number of well-known test problems.
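The discrete representation described above can be illustrated with a minimal sketch of an asynchronous random Boolean network: each node has K randomly chosen inputs and a random Boolean truth table, and updates are applied to one randomly selected node at a time. All names, sizes, and parameters here are illustrative assumptions, not the paper's actual design.

```python
import random

class AsyncRandomBooleanNetwork:
    """Minimal asynchronous random Boolean network (RBN) sketch.

    Each node has k randomly chosen input nodes and a random Boolean
    truth table of size 2**k; the asynchronous scheme updates a single
    randomly selected node per step. Parameters are illustrative only.
    """

    def __init__(self, n_nodes=8, k=2, seed=0):
        rng = random.Random(seed)
        self.rng = rng
        self.state = [rng.randint(0, 1) for _ in range(n_nodes)]
        # For each node: k input indices and a random truth table.
        self.inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
        self.tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
                       for _ in range(n_nodes)]

    def step(self):
        # Asynchronous update: pick one node, recompute it from its inputs.
        i = self.rng.randrange(len(self.state))
        idx = 0
        for src in self.inputs[i]:
            idx = (idx << 1) | self.state[src]
        self.state[i] = self.tables[i][idx]

net = AsyncRandomBooleanNetwork()
for _ in range(50):
    net.step()
print(net.state)
```

In the paper's setting, evolution would adapt such networks' connectivity and tables so that their attractor dynamics implement condition-action rules; the fuzzy case replaces the Boolean tables with fuzzy logic functions over continuous node values.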
Real-time support for high performance aircraft operation
The feasibility of real-time processing schemes using artificial neural networks (ANNs) is investigated. A rationale for digital neural nets is presented, and a general processor architecture for control applications is illustrated. Research results on ANN structures for real-time applications and on ANN algorithms for real-time control are presented.
Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown
distinct advantages, e.g., solving memory-dependent tasks and meta-learning.
However, little effort has been spent on improving RNN architectures and on
understanding the underlying neural mechanisms for performance gain. In this
paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical
results show that the network can autonomously learn to abstract sub-goals and
can self-develop an action hierarchy using internal dynamics in a challenging
continuous control task. Furthermore, we show that the self-developed
compositionality of the network enables faster re-learning when adapting to a
new task that is a re-composition of previously learned sub-goals than when
starting from scratch. We also found that improved performance can be achieved
when neural activities are subject to stochastic rather than deterministic
dynamics.
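The multiple-timescale, stochastic RNN idea can be sketched as two leaky recurrent populations with different time constants and Gaussian noise injected into the neural activities. The layer sizes, time constants, and noise level below are assumptions for illustration, not the paper's architecture or training method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch: a fast and a slow recurrent population coupled
# bidirectionally, with leaky (multiple-timescale) dynamics and Gaussian
# noise on the activities. All sizes/constants are assumed, not the paper's.
n_fast, n_slow, n_in = 16, 8, 4
tau_fast, tau_slow = 2.0, 10.0   # larger tau -> slower dynamics
sigma = 0.1                      # std of injected activity noise

W_in = rng.normal(0, 0.5, (n_fast, n_in))
W_ff = rng.normal(0, 0.5, (n_fast, n_fast))
W_fs = rng.normal(0, 0.5, (n_fast, n_slow))
W_sf = rng.normal(0, 0.5, (n_slow, n_fast))
W_ss = rng.normal(0, 0.5, (n_slow, n_slow))

h_fast = np.zeros(n_fast)
h_slow = np.zeros(n_slow)

def step(x):
    """One Euler step of the leaky, noisy two-timescale dynamics."""
    global h_fast, h_slow
    u_fast = W_ff @ np.tanh(h_fast) + W_fs @ np.tanh(h_slow) + W_in @ x
    u_slow = W_ss @ np.tanh(h_slow) + W_sf @ np.tanh(h_fast)
    h_fast = h_fast + (-h_fast + u_fast) / tau_fast \
        + sigma * rng.normal(size=n_fast)
    h_slow = h_slow + (-h_slow + u_slow) / tau_slow \
        + sigma * rng.normal(size=n_slow)
    return np.tanh(h_fast)

out = step(np.ones(n_in))
print(out.shape)
```

The intuition from the abstract is that the slow population can come to encode abstract sub-goals while the fast population handles moment-to-moment control, and the injected stochasticity is what the authors found to improve performance over deterministic dynamics.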
Emergent Computations in Trained Artificial Neural Networks and Real Brains
Synaptic plasticity allows cortical circuits to learn new tasks and to adapt
to changing environments. How do cortical circuits use plasticity to acquire
functions such as decision-making or working memory? Neurons are connected in
complex ways, forming recurrent neural networks, and learning modifies the
strength of their connections. Moreover, neurons communicate by emitting brief,
discrete electric signals. Here we describe how to train recurrent neural
networks on tasks like those used to train animals in neuroscience
laboratories, and how computations emerge in the trained networks.
Surprisingly, artificial networks and real brains can use similar computational
strategies.
Comment: International Summer School on Intelligent Signal Processing for
Frontier Research and Industry, INFIERI 2021. Universidad Autónoma de
Madrid, Madrid, Spain, 23 August - 4 September 2021