6,992 research outputs found
Emergence of Sensory Representations Using Prediction in Partially Observable Environments
In order to explore and act autonomously in an environment, an agent can learn from the sensorimotor information that is captured while acting. By extracting the regularities in this sensorimotor stream, it can build a model of the world, which in turn can be used as a basis for action and exploration. This requires the acquisition of compact representations from possibly high-dimensional raw observations. In this paper, we propose a model which integrates sensorimotor information over time and projects it into a sensory representation. It is trained by performing sensorimotor prediction. We illustrate, on a simple example, the role of motor commands and memory in learning sensory representations.
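To make the idea concrete, here is a minimal sketch, not taken from the paper and with every detail assumed: a hypothetical agent on a ring of cells observes a one-hot sensory vector and issues motor commands m ∈ {-1, +1}. Concatenating the observation with its motor-modulated copy [s, m·s] makes the next observation a linear function of the input, so a single least-squares fit recovers the world's transition structure from the raw sensorimotor stream.

```python
import numpy as np

# Toy sensorimotor world (all parameters hypothetical):
# an agent on a ring of N cells, one-hot observations, motor m in {-1, +1}.
N, T = 12, 3000
rng = np.random.default_rng(0)
pos = 0
X, Y = [], []
for _ in range(T):
    s = np.zeros(N); s[pos] = 1.0
    m = rng.choice([-1, 1])
    pos = (pos + m) % N            # world dynamics, unknown to the agent
    s_next = np.zeros(N); s_next[pos] = 1.0
    X.append(np.concatenate([s, m * s]))   # motor-modulated sensory features
    Y.append(s_next)
X, Y = np.array(X), np.array(Y)

# Sensorimotor predictor: least-squares fit of next observation
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
err = np.abs(X @ W - Y).max()
print(err < 1e-6)  # the ring's transition structure is recovered exactly
```

The point of the sketch is that the motor signal is indispensable: from the sensory stream alone the next observation is ambiguous, but conditioning on the motor command makes it predictable.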
Flatland: a Lightweight First-Person 2-D Environment for Reinforcement Learning
Flatland is a simple, lightweight environment for fast prototyping and
testing of reinforcement learning agents. It is of lower complexity compared to
similar 3D platforms (e.g. DeepMind Lab or VizDoom), but emulates physical
properties of the real world, such as continuity, multi-modal
partially-observable states with first-person view and coherent physics. We
propose to use it as an intermediary benchmark for problems related to Lifelong
Learning. Flatland is highly customizable and offers a wide range of task
difficulty to extensively evaluate the properties of artificial agents. We
experiment with three reinforcement learning baseline agents and show that they
can rapidly solve a navigation task in Flatland. A video of an agent acting in
Flatland is available here: https://youtu.be/I5y6Y2ZypdA
Comment: Accepted to the Workshop on Continual Unsupervised Sensorimotor Learning (ICDL-EpiRob 2018)
Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks
Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown
distinct advantages, e.g., solving memory-dependent tasks and meta-learning.
However, little effort has been spent on improving RNN architectures and on
understanding the underlying neural mechanisms for performance gain. In this
paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical
results show that the network can autonomously learn to abstract sub-goals and
can self-develop an action hierarchy using internal dynamics in a challenging
continuous control task. Furthermore, we show that the self-developed
compositionality of the network enables faster re-learning when adapting to a
new task that is a re-composition of previously learned sub-goals than when
starting from scratch. We also found that improved performance can be achieved
when neural activities are subject to stochastic rather than deterministic
dynamics.
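The abstract does not give the network's equations; the following is only an illustrative sketch of the multiple-timescale idea (all parameters and names assumed): each unit leaks toward its input drive at rate 1/tau, so large-tau units integrate slowly and can hold abstract context while small-tau units react quickly.

```python
import numpy as np

# Hypothetical multiple-timescale leaky RNN: fast units (small tau) and
# slow units (large tau) share one recurrent weight matrix.
rng = np.random.default_rng(0)
n_fast, n_slow = 8, 4
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 20.0)])
W = rng.standard_normal((n, n)) / np.sqrt(n)

def step(h, x_in):
    """One leaky-integrator update: h moves toward tanh drive at rate 1/tau."""
    drive = np.tanh(W @ h + x_in)
    return (1 - 1 / tau) * h + (1 / tau) * drive

h = np.zeros(n)
x = np.zeros(n); x[:n_fast] = 1.0   # drive only the fast subnetwork
for _ in range(5):
    h = step(h, x)

# Fast units move much further from rest than slow units over the same input
print(np.abs(h[:n_fast]).mean() > np.abs(h[n_fast:]).mean())
```

This timescale separation is the mechanism the abstract appeals to for self-organizing an action hierarchy: slow dynamics can represent sub-goals while fast dynamics execute them.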
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
smoothly communicate with human users over the long term both require an
understanding of the dynamics of symbol systems, which is therefore crucially important. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, that enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
Integrating Symbolic and Neural Processing in a Self-Organizing Architecture for Pattern Recognition and Prediction
British Petroleum (89A-1204); Defense Advanced Research Projects Agency (N00014-92-J-4015); National Science Foundation (IRI-90-00530); Office of Naval Research (N00014-91-J-4100); Air Force Office of Scientific Research (F49620-92-J-0225)
Causal connectivity of evolved neural networks during behavior
To show how causal interactions in neural dynamics are modulated by behavior, it is valuable to analyze these interactions without perturbing or lesioning the neural mechanism. This paper proposes a method, based on a graph-theoretic extension of vector autoregressive modeling and 'Granger causality,' for characterizing causal interactions generated within intact neural mechanisms. This method, called 'causal connectivity analysis,' is illustrated via model neural networks optimized for controlling target fixation in a simulated head-eye system, in which the structure of the environment can be experimentally varied. Causal connectivity analysis of this model yields novel insights into neural mechanisms underlying sensorimotor coordination. In contrast to networks supporting comparatively simple behavior, networks supporting rich adaptive behavior show a higher density of causal interactions, as well as a stronger causal flow from sensory inputs to motor outputs. They also show different arrangements of 'causal sources' and 'causal sinks': nodes that differentially affect, or are affected by, the remainder of the network. Finally, analysis of causal connectivity can predict the functional consequences of network lesions. These results suggest that causal connectivity analysis may have useful applications in the analysis of neural dynamics.
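The paper's full graph-theoretic method is not reproduced here, but its statistical core, pairwise Granger causality from vector autoregression, can be sketched on synthetic data (lag order, coefficients, and sample size are all assumed): x Granger-causes y if past values of x reduce the error of predicting y beyond y's own past.

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for whether past x improves prediction of y
    beyond y's own past (pairwise Granger causality)."""
    n = len(y)
    Y = y[lag:]
    # Restricted model: y's own lags only; full model adds lags of x
    own = np.column_stack([y[lag - k:n - k] for k in range(1, lag + 1)])
    full = np.column_stack([own] + [x[lag - k:n - k] for k in range(1, lag + 1)])

    def rss(X):
        X = np.column_stack([np.ones(len(Y)), X])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    rss_r, rss_f = rss(own), rss(full)
    df1, df2 = lag, len(Y) - full.shape[1] - 1
    return ((rss_r - rss_f) / df1) / (rss_f / df2)

# Synthetic system where x drives y with a one-step delay
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.6 * x[t - 1] + 0.2 * y[t - 1] + 0.1 * rng.standard_normal()

# Causal flow is asymmetric: x -> y is much stronger than y -> x
print(granger_f(x, y) > granger_f(y, x))
```

The asymmetry of the two F-statistics is what a causal connectivity graph encodes at each edge; the paper's analysis extends this pairwise picture to whole-network sources and sinks.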
World model learning and inference
Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.