6 research outputs found
Towards Real-World Neurorobotics: Integrated Neuromorphic Visual Attention
Neural Information Processing: 21st International Conference, ICONIP 2014, Kuching, Malaysia, November 3-6, 2014. Proceedings, Part III
Neuromorphic hardware and cognitive robots seem like an obvious fit,
yet progress to date has been frustrated by a lack of tangible results in achieving
useful real-world behaviour. System limitations, in particular the simple and usually
proprietary nature of neuromorphic and robotic platforms, have often been the fundamental
barrier. Here we present an integration of a mature “neuromimetic” chip,
SpiNNaker, with the humanoid iCub robot using a direct AER (address-event
representation) interface that overcomes the need for complex proprietary protocols
by sending information as UDP-encoded spikes over an Ethernet link. Using
an existing neural model devised for visual object selection, we enable the robot
to perform a real-world task: fixating attention upon a selected stimulus. Results
demonstrate the effectiveness of the interface and model in controlling the
robot towards stimulus-specific object selection. Using SpiNNaker as an embeddable
neuromorphic device illustrates the importance of two design features in a
prospective neurorobot: universal configurability that allows the chip to be conformed
to the requirements of the robot rather than the other way around, and standard
interfaces that eliminate difficult low-level issues of connectors, cabling,
signal voltages, and protocols. While this study is only a building block towards
that goal, the iCub-SpiNNaker system demonstrates a path towards meaningful
behaviour in robots controlled by neural network chips.
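The abstract's key interface idea, sending spikes as UDP-encoded AER events over Ethernet, can be illustrated with a minimal sketch. The 32-bit (timestamp, neuron key) event layout, port number, and function name below are assumptions for illustration, not the actual SpiNNaker EIEIO packet format:

```python
import socket
import struct

def send_aer_spikes(sock, addr, neuron_ids, timestamp_us=0):
    """Pack neuron IDs as fixed-size AER events into one UDP datagram.

    Each event is two little-endian 32-bit words: a timestamp and a neuron
    key. This layout is illustrative only; a real neuromorphic board defines
    its own event format.
    """
    payload = b"".join(struct.pack("<II", timestamp_us, nid) for nid in neuron_ids)
    sock.sendto(payload, addr)

# Fire-and-forget: UDP needs no handshake, which is what makes the
# robot-to-chip link simple compared with a proprietary protocol stack.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_aer_spikes(sock, ("127.0.0.1", 17893), [3, 17, 42])
```

Because each spike is just an address plus a timestamp, the same datagram format works in both directions: sensor events from the robot in, motor-neuron spikes from the chip out.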
Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network
© 2018 The Authors. Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks, provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a ‘preferred’ orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.
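The association mechanism this abstract describes, a label bound to a visual stimulus only when the two are presented concurrently, then recalled from vision alone, can be sketched with a toy rate-based Hebbian associator. This is an illustrative stand-in, not the paper's spiking architecture; the feature vectors and names below are invented for the example:

```python
import numpy as np

# Hand-crafted, orthogonal stand-in feature vectors for two oriented bars
# (a real system would use activity from orientation-tuned visual neurons).
v45 = np.zeros(16); v45[:8] = 1.0   # "45-degree bar" features
v90 = np.zeros(16); v90[8:] = 1.0   # "90-degree bar" features
visual = {"forty_five": v45, "ninety": v90}
word_index = {"forty_five": 0, "ninety": 1}

W = np.zeros((2, 16))  # word-unit x visual-feature association weights
for name, v in visual.items():
    w = np.zeros(2)
    w[word_index[name]] = 1.0       # word stimulus presented concurrently
    W += np.outer(w, v)             # Hebbian co-activation update

def recall(v):
    """Name an object from visual input alone: the most active word unit wins."""
    return int(np.argmax(W @ v))
```

After learning, `recall` needs only the visual vector, mirroring the abstract's result that the name is retrieved when just the visual input is present.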
Developmental learning of internal models for robotics
Abstract: Robots that operate in human environments can learn motor skills asocially, from self-exploration, or socially, from imitating their peers. A robot capable of doing both can be more adaptive and autonomous. Learning by imitation, however, requires the ability to understand the actions of others in terms of your own motor system: this information can come from a robot's own exploration. This thesis investigates the minimal requirements for a robotic system that learns from both self-exploration and imitation of others. Through self-exploration and computer vision techniques, a robot can develop forward models: internal models of its own motor system that enable it to predict the consequences of its actions. Multiple forward models are learnt that give the robot a distributed, causal representation of its motor system. It is demonstrated how a controlled increase in the complexity of these forward models speeds up the robot's learning. The robot can determine the uncertainty of its forward models, enabling it to explore so as to improve the accuracy of its predictions. Paying attention to the forward models according to how their uncertainty is changing leads to a development in the robot's exploration: its interventions focus on increasingly difficult situations, adapting to the complexity of its motor system. A robot can invert forward models, creating inverse models, in order to estimate the actions that will achieve a desired goal. Switching to social learning, the robot uses these inverse models to imitate both a demonstrator's gestures and the underlying goals of their movement.
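The core loop the abstract describes, a forward model that predicts the consequences of actions and tracks its own uncertainty to guide exploration, can be sketched minimally. The linear model, learning rate, and uncertainty estimate below are illustrative assumptions, not the thesis's actual architecture:

```python
import numpy as np

class ForwardModel:
    """Toy forward model: predicts next state from (state, action) by online
    least squares, tracking uncertainty as a running average of squared
    prediction error. An illustrative sketch, not the thesis's design."""

    def __init__(self, dim, lr=0.1):
        self.W = np.zeros((dim, 2 * dim))  # linear map [state; action] -> next state
        self.lr = lr
        self.uncertainty = 1.0             # high until predictions prove reliable

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def update(self, state, action, next_state):
        x = np.concatenate([state, action])
        err = next_state - self.W @ x
        self.W += self.lr * np.outer(err, x)          # gradient step on squared error
        # Exponential moving average of squared error: a crude uncertainty
        # signal a robot could use to decide where to explore next.
        self.uncertainty = 0.9 * self.uncertainty + 0.1 * float(err @ err)
        return self.uncertainty

# Self-exploration on a trivially simple plant: next_state = state + action.
fm = ForwardModel(dim=2)
rng = np.random.default_rng(0)
for _ in range(500):
    s = rng.standard_normal(2)
    a = rng.standard_normal(2)
    fm.update(s, a, s + a)
```

Falling uncertainty marks a model as mastered; in the thesis's scheme, attention would then shift to forward models whose uncertainty is still changing, focusing exploration on harder situations.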
Neural Learning of Embodied Interaction Dynamics
This paper presents our approach towards realizing a robot which can bootstrap itself towards higher complexity through embodied interaction dynamics with the environment, including other agents. First, the elements of interaction dynamics are extracted from conceptual analysis of embodied interaction and its emergence, especially of behavioral imitation. Then three case studies are made, presenting our neural architecture and the robotic experiments on some of the important elements discussed above: self-exploration and entrainment, emergent coordination, and categorizing self behavior. Finally we propose that integrating all these elements will be an important step towards realizing the bootstrapping agent envisaged above. Keywords: Embodiment, Dynamical Systems, Imitation, Development, Spiking Neurons, Emergence, Exploration, Categorization, Coordination, Active Vision. 1 Introduction Behavior is attributed by an observer to the dynamics of a coupled agent-environment system.