Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown
distinct advantages, e.g., solving memory-dependent tasks and meta-learning.
However, little effort has been spent on improving RNN architectures and on
understanding the underlying neural mechanisms for performance gain. In this
paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical
results show that the network can autonomously learn to abstract sub-goals and
can self-develop an action hierarchy using internal dynamics in a challenging
continuous control task. Furthermore, we show that the self-developed
compositionality of the network enables faster re-learning when adapting to a
new task that re-composes previously learned sub-goals than when starting from
scratch. We also found that improved performance can be achieved when neural
activities are subject to stochastic rather than deterministic dynamics.
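The multiple-timescale, stochastic dynamics described above can be sketched as two coupled leaky-integrator layers with different time constants and Gaussian noise injected into the internal states. This is an illustrative sketch, not the paper's implementation; all class, parameter, and weight names are assumptions.

```python
import numpy as np

class MultiTimescaleStochasticRNN:
    """Two leaky-integrator RNN layers with different time constants.

    The slow layer can carry abstract, sub-goal-like state while the
    fast layer handles low-level control; Gaussian noise on the
    internal states makes the dynamics stochastic.
    """

    def __init__(self, n_in, n_fast, n_slow, tau_fast=2.0, tau_slow=20.0,
                 noise_std=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.tau_fast, self.tau_slow = tau_fast, tau_slow
        self.noise_std, self.rng = noise_std, rng
        # Input-to-fast, recurrent, and fast<->slow coupling weights.
        self.W_in = rng.normal(0, 0.1, (n_fast, n_in))
        self.W_ff = rng.normal(0, 0.1, (n_fast, n_fast))
        self.W_ss = rng.normal(0, 0.1, (n_slow, n_slow))
        self.W_fs = rng.normal(0, 0.1, (n_fast, n_slow))
        self.W_sf = rng.normal(0, 0.1, (n_slow, n_fast))
        self.u_fast = np.zeros(n_fast)
        self.u_slow = np.zeros(n_slow)

    def step(self, x):
        h_fast, h_slow = np.tanh(self.u_fast), np.tanh(self.u_slow)
        # Leaky integration: each layer decays toward its input drive
        # at a rate set by its own time constant.
        du_fast = (-self.u_fast + self.W_in @ x + self.W_ff @ h_fast
                   + self.W_fs @ h_slow) / self.tau_fast
        du_slow = (-self.u_slow + self.W_ss @ h_slow
                   + self.W_sf @ h_fast) / self.tau_slow
        self.u_fast += du_fast + self.rng.normal(0, self.noise_std, self.u_fast.shape)
        self.u_slow += du_slow + self.rng.normal(0, self.noise_std, self.u_slow.shape)
        return np.tanh(self.u_fast)

rnn = MultiTimescaleStochasticRNN(n_in=3, n_fast=8, n_slow=4)
out = rnn.step(np.ones(3))
print(out.shape)  # (8,)
```

The key design point is that `tau_slow >> tau_fast`, so the slow layer changes over many fast-layer steps, which is what allows a temporal hierarchy to self-organize.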
Minimum Energy Information Fusion in Sensor Networks
In this paper we consider how to organize the sharing of information in a
distributed network of sensors and data processors so as to provide
explanations for sensor readings with minimal expenditure of energy. We point
out that the Minimum Description Length principle provides an approach to
information fusion that is more naturally suited to energy minimization than
traditional Bayesian approaches. In addition we show that for networks
consisting of a large number of identical sensors Kohonen self-organization
provides an exact solution to the problem of combining the sensor outputs into
minimal description length explanations.
Comment: PostScript, 8 pages. Paper 65 in Proceedings of The 2nd International
Conference on Information Fusion
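The Kohonen self-organization step mentioned above can be sketched as a 1-D self-organizing map that quantizes scalar sensor readings into a small codebook; under the MDL view, a reading is then "explained" by its nearest codebook index at a cost of only log2(n_units) bits. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def train_som(readings, n_units=8, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """1-D Kohonen self-organizing map over scalar sensor readings."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(readings.min(), readings.max(), n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighborhood
        for x in rng.permutation(readings):
            bmu = np.argmin(np.abs(w - x))          # best-matching unit
            d = np.abs(np.arange(n_units) - bmu)    # distance on the 1-D grid
            h = np.exp(-d**2 / (2 * sigma**2))      # neighborhood kernel
            w += lr * h * (x - w)                   # pull units toward x
    return np.sort(w)

# Two clusters of identical-sensor readings; each reading is summarized
# by its nearest codebook entry instead of a full-precision value.
readings = np.concatenate([np.random.default_rng(1).normal(0, 1, 100),
                           np.random.default_rng(2).normal(10, 1, 100)])
codebook = train_som(readings)
print(len(codebook))  # 8
```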
Learning a world model and planning with a self-organizing, dynamic neural system
We present a connectionist architecture that can learn a model of the
relations between perceptions and actions and use this model for behavior
planning. State representations are learned with a growing self-organizing
layer which is directly coupled to a perception and a motor layer. Knowledge
about possible state transitions is encoded in the lateral connectivity. Motor
signals modulate this lateral connectivity and a dynamic field on the layer
organizes a planning process. All mechanisms are local and adaptation is based
on Hebbian ideas. The model is continuous in the action, perception, and time
domain.
Comment: 9 pages, see http://www.marc-toussaint.net
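The core mechanism — Hebbian encoding of state transitions in lateral connectivity, plus a dynamic field that organizes planning — can be sketched on a discrete toy state space. This is a simplified stand-in for the continuous architecture; the state set, transition list, and relaxation constant are assumptions.

```python
import numpy as np

# Hebbian encoding of observed state transitions in a lateral weight
# matrix, followed by goal-directed spreading activation (a crude
# discrete analogue of the paper's dynamic field).
n_states = 5
W = np.zeros((n_states, n_states))          # lateral connectivity

# Observed (state_t, state_t+1) pairs; Hebbian rule: strengthen the
# link between successively active state units.
for s, s_next in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
    W[s, s_next] += 1.0
W /= W.max()

def plan(W, start, goal, steps=10):
    """Relax a goal-anchored field over the lateral connections, then
    greedily follow the field's gradient from the start state."""
    field = np.zeros(n_states)
    field[goal] = 1.0
    for _ in range(steps):
        # Each state takes the best discounted value of its successors.
        field = np.maximum(field, 0.9 * (W * field[None, :]).max(axis=1))
    path, s = [start], start
    while s != goal and len(path) < n_states:
        succ = np.nonzero(W[s])[0]          # reachable successors of s
        s = int(succ[np.argmax(field[succ])])
        path.append(s)
    return path

print(plan(W, start=0, goal=4))  # [0, 1, 3, 4]
```

Note how the shortcut transition (1, 3) learned from experience lets the planner skip state 2, mirroring how the lateral connectivity, not a separate model, carries the transition knowledge.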
Evolutionary Neural Gas (ENG): A Model of Self Organizing Network from Input Categorization
Despite their claimed biological plausibility, most self-organizing networks
have strict topological constraints and consequently cannot accommodate a wide
range of external stimuli. Furthermore, their evolution is governed by
deterministic laws that are often uncorrelated with the structural parameters
and the global status of the network, as would be the case in a real biological
system. In nature, environmental inputs are noisy and fuzzy, which raises the
question of whether emergent behaviour is possible in a loosely constrained
network subjected to varied inputs. We present a new model, the Evolutionary
Neural Gas (ENG), free of topological constraints and trained by probabilistic
laws that depend on the local distortion errors and the network dimension. The
network is treated as a population of nodes coexisting in an ecosystem and
sharing local and global resources. These features allow the network to quickly
adapt to the environment, according to its dimensions. Analysis of the ENG
model shows that the network evolves as a scale-free graph, and justifies, in a
deeply physical sense, the term "gas" used here.
Comment: 16 pages, 8 figures
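The probabilistic training idea — adaptation driven by local distortion errors rather than a fixed deterministic schedule — can be sketched with a rank-based neural-gas step in which each unit's update is accepted with a probability tied to its accumulated error. This is a hypothetical stand-in for the ENG rule, not the paper's exact law; all names and constants are assumptions.

```python
import numpy as np

def neural_gas_step(w, x, errors, lr=0.1, lam=2.0, rng=None):
    """One stochastic neural-gas adaptation step.

    Units are ranked by distance to the input x (no fixed topology);
    each unit's move is applied with a probability that grows with its
    accumulated local distortion error.
    """
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(w - x, axis=1)
    rank = np.argsort(np.argsort(d))            # rank 0 = closest unit
    h = np.exp(-rank / lam)                     # rank-based neighborhood
    bmu = int(np.argmin(d))
    errors[bmu] += d[bmu]                       # accumulate local distortion
    p = errors / (errors.sum() + 1e-12)         # error-driven probability
    accept = rng.random(len(w)) < np.maximum(p, 0.05)
    w[accept] += lr * h[accept, None] * (x - w[accept])
    return w, errors

rng = np.random.default_rng(0)
w = rng.normal(size=(6, 2))                     # 6 units in a 2-D input space
errors = np.zeros(6)
for x in rng.normal(size=(200, 2)):
    w, errors = neural_gas_step(w, x, errors, rng=rng)
print(w.shape)  # (6, 2)
```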
Cognition-Based Networks: A New Perspective on Network Optimization Using Learning and Distributed Intelligence
IEEE Access, Volume 3, 2015, Article number 7217798, Pages 1512-1530 (Open Access)
Zorzi, M.a, Zanella, A.a, Testolin, A.b, De Filippo De Grazia, M.b, Zorzi, M.b,c
a Department of Information Engineering, University of Padua, Padua, Italy
b Department of General Psychology, University of Padua, Padua, Italy
c IRCCS San Camillo Foundation, Venice-Lido, Italy
Abstract
In response to the new challenges in the design and operation of communication networks, and taking inspiration from how living beings deal with complexity and scalability, in this paper we introduce an innovative system concept called COgnition-BAsed NETworkS (COBANETS). The proposed approach develops around the systematic application of advanced machine learning techniques and, in particular, unsupervised deep learning and probabilistic generative models for system-wide learning, modeling, optimization, and data representation. Moreover, in COBANETS, we propose to combine this learning architecture with the emerging network virtualization paradigms, which make it possible to actuate automatic optimization and reconfiguration strategies at the system level, thus fully unleashing the potential of the learning approach. Compared with the past and current research efforts in this area, the technical approach outlined in this paper is deeply interdisciplinary and more comprehensive, calling for the synergistic combination of expertise of computer scientists, communications and networking engineers, and cognitive scientists, with the ultimate aim of breaking new ground through a profound rethinking of how the modern understanding of cognition can be used in the management and optimization of telecommunication networks.
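One way to picture the COBANETS loop — learn a model of normal network state, then let low-likelihood states trigger a virtualized reconfiguration — is the hypothetical illustration below. A multivariate Gaussian stands in for the deep generative models the paper proposes; the feature layout, threshold, and function names are all assumptions.

```python
import numpy as np

# Learn a generative model of "normal" per-link load from observations.
rng = np.random.default_rng(0)
normal_load = rng.normal(loc=[0.4, 0.5, 0.3], scale=0.05, size=(500, 3))

mu = normal_load.mean(axis=0)
cov = np.cov(normal_load, rowvar=False)
inv_cov = np.linalg.inv(cov)

def mahalanobis(x):
    """Distance of a network state from the learned normal regime."""
    d = x - mu
    return float(np.sqrt(d @ inv_cov @ d))

def needs_reconfiguration(x, threshold=4.0):
    """Flag states the model finds unlikely, which would trigger an
    automatic reconfiguration via the virtualization layer."""
    return mahalanobis(x) > threshold

typical = needs_reconfiguration(np.array([0.4, 0.5, 0.3]))   # False
congested = needs_reconfiguration(np.array([0.95, 0.9, 0.9]))  # True
print(typical, congested)
```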
A new self-organizing neural gas model based on Bregman divergences
In this paper, a new self-organizing neural gas model that we call Growing Hierarchical Bregman Neural Gas (GHBNG) is proposed. Our proposal builds on the Growing Hierarchical Neural Gas (GHNG), into which Bregman divergences are incorporated in order to compute the winning neuron. This model has been applied to anomaly detection in video sequences together with a Faster R-CNN object detector module. Experimental results confirm not only the effectiveness of the GHBNG for the detection of anomalous objects in video sequences but also its self-organization capabilities.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
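The winning-neuron computation with a Bregman divergence can be sketched as follows: for a strictly convex generator phi, D(x, w) = phi(x) - phi(w) - ⟨∇phi(w), x - w⟩, so phi(x) = ||x||² recovers squared Euclidean distance while phi(x) = Σ x log x gives the generalized I-divergence. The function and variable names below are illustrative, not GHBNG's API.

```python
import numpy as np

def bregman_sq_euclidean(x, w):
    # phi(x) = ||x||^2  ->  D(x, w) = ||x - w||^2
    return np.sum((x - w) ** 2, axis=-1)

def bregman_gen_i_divergence(x, w, eps=1e-12):
    # phi(x) = sum x log x (x > 0)  ->  generalized I-divergence
    return np.sum(x * np.log((x + eps) / (w + eps)) - x + w, axis=-1)

def winner(prototypes, x, divergence):
    """Index of the prototype minimizing the chosen divergence to x."""
    return int(np.argmin(divergence(x, prototypes)))

protos = np.array([[0.1, 0.9], [0.5, 0.5], [0.9, 0.1]])
x = np.array([0.2, 0.8])
print(winner(protos, x, bregman_sq_euclidean))       # 0
print(winner(protos, x, bregman_gen_i_divergence))   # 0
```

Swapping the divergence changes which prototype "wins" for skewed data distributions, which is precisely the flexibility the GHBNG gains over a purely Euclidean neural gas.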