The implications of embodiment for behavior and cognition: animal and robotic case studies
In this paper, we will argue that if we want to understand the function of
the brain (or the control in the case of robots), we must understand how the
brain is embedded into the physical system, and how the organism interacts with
the real world. While embodiment has often been used in its trivial meaning,
i.e. 'intelligence requires a body', the concept has deeper and more important
implications, concerned with the relation between physical and information
(neural, control) processes. A number of case studies are presented to
illustrate the concept. These involve animals and robots and are centered
on locomotion, grasping, and visual perception. A theoretical scheme that
can be used to embed the diverse case studies will be presented. Finally, we
will establish a link between the low-level sensory-motor processes and
cognition. We will present an embodied view on categorization, and propose the
concepts of 'body schema' and 'forward models' as a natural extension of the
embodied approach toward first representations.
Comment: Book chapter in W. Tschacher & C. Bergomi, eds., 'The Implications of
Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5
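The 'forward model' invoked above can be made concrete with a small sketch. The following is a minimal illustration of the idea only (the class, update rule, and dimensions are our own assumptions, not the chapter's): a forward model predicts the sensory consequence of a motor command, and the prediction error drives adaptation of the model itself.

```python
import numpy as np

class LinearForwardModel:
    """Illustrative forward model: predicts the next sensory state
    from the current state and the motor command (a simple delta rule)."""

    def __init__(self, state_dim, motor_dim, lr=0.1):
        self.W = np.zeros((state_dim, state_dim + motor_dim))
        self.lr = lr

    def predict(self, state, command):
        # Predicted sensory consequence of issuing `command` in `state`.
        return self.W @ np.concatenate([state, command])

    def update(self, state, command, observed_next):
        # Adapt the model from the prediction error.
        x = np.concatenate([state, command])
        error = observed_next - self.W @ x
        self.W += self.lr * np.outer(error, x)
        return np.linalg.norm(error)

# Learn the toy dynamics next_state = state + command in 1D.
model = LinearForwardModel(state_dim=1, motor_dim=1)
rng = np.random.default_rng(0)
for _ in range(2000):
    s = rng.standard_normal(1)
    u = rng.standard_normal(1)
    model.update(s, u, s + u)

print(model.predict(np.array([1.0]), np.array([0.5])))  # ≈ [1.5]
```

The point of the sketch is the loop structure, not the linear model: once an agent can predict the sensory outcome of its own actions, it has a first, action-grounded representation of its body and surroundings.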
Value Iteration Networks on Multiple Levels of Abstraction
Learning-based methods are a promising way to plan robot motion without the
extensive search that many non-learning approaches require. Recently, Value
Iteration Networks (VINs) have received much interest since---in contrast to
standard CNN-based architectures---they learn goal-directed behaviors that
generalize well to unseen domains. However, VINs are restricted to small,
low-dimensional domains, which limits their applicability to real-world
planning problems.
To address this issue, we propose to extend VINs to representations with
multiple levels of abstraction. While the vicinity of the robot is represented
in sufficient detail, the representation gets spatially coarser with increasing
distance from the robot. The information loss caused by the decreasing
resolution is compensated by increasing the number of features representing a
cell. We show that our approach is capable of solving significantly larger 2D
grid world planning tasks than the original VIN implementation. In contrast to
a multiresolution coarse-to-fine VIN implementation which does not employ
additional descriptive features, our approach is capable of solving challenging
environments, which demonstrates that the proposed method learns to encode
useful information in the additional features. As an application for solving
real-world planning tasks, we successfully employ our method to plan
omnidirectional driving for a search-and-rescue robot in cluttered terrain.
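For intuition about the computation a VIN embeds in a convolutional network, the following is a plain, non-learned value iteration on a small 2D grid world. This is our own illustrative sketch, not the authors' implementation; the obstacle handling and reward values are arbitrary choices.

```python
import numpy as np

def value_iteration(obstacles, goal, gamma=0.95, iters=100):
    """Plain value iteration on a 2D grid world.
    obstacles: boolean HxW array; goal: (row, col) cell."""
    H, W = obstacles.shape
    V = np.zeros((H, W))
    reward = np.full((H, W), -0.01)  # small step cost everywhere
    reward[goal] = 1.0               # reaching the goal is rewarded
    for _ in range(iters):
        # Back up the best neighboring value for every cell at once
        # (4-connected moves); the border is padded with -inf.
        padded = np.pad(V, 1, constant_values=-np.inf)
        neighbors = np.stack([
            padded[:-2, 1:-1],   # value of the cell above
            padded[2:, 1:-1],    # below
            padded[1:-1, :-2],   # left
            padded[1:-1, 2:],    # right
        ])
        V = reward + gamma * neighbors.max(axis=0)
        V[obstacles] = -1.0  # clamp obstacles to a low value so plans avoid them
    return V

obs = np.zeros((8, 8), dtype=bool)
obs[3, 1:7] = True                   # a wall with gaps at the outer columns
V = value_iteration(obs, goal=(7, 7))
# Greedy ascent on V from (0, 0) yields a path around the wall.
```

A VIN implements exactly this kind of iterated local backup as a recurrent convolution and learns the transition and reward maps from data; the paper's contribution is to run it over a multi-resolution pyramid of such grids instead of one uniform grid.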
Neuroethology, Computational
Over the past decade, a number of neural network researchers have used the term computational neuroethology to describe a specific approach to neuroethology. Neuroethology is the study of the neural mechanisms underlying the generation of behavior in animals, and hence it lies at the intersection of neuroscience (the study of nervous systems) and ethology (the study of animal behavior); for an introduction to neuroethology, see Simmons and Young (1999). The definition of computational neuroethology is very similar, but is not quite so dependent on studying animals: animals just happen to be biological autonomous agents. But there are also non-biological autonomous agents, such as some types of robots and some types of simulated embodied agents operating in virtual worlds. In this context, autonomous agents are self-governing entities capable of operating (i.e., coordinating perception and action) for extended periods of time in environments that are complex, uncertain, and dynamic. Thus, computational neuroethology can be characterised as the attempt to analyze the computational principles underlying the generation of behavior in animals and in artificial autonomous agents.
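A classic minimal example of such an artificial autonomous agent is a Braitenberg-style vehicle. The sketch below is our own illustration (the sensor model, gains, and geometry are assumptions, not from the article): two light sensors drive two wheels through crossed excitatory connections, and the bare coordination of perception and action, with no planning at all, produces light-seeking behavior.

```python
import math

LIGHT = (0.0, 0.0)  # light source at the origin
WHEELBASE = 0.3     # distance between the two wheels

def sense(x, y, heading):
    """Return (left, right) sensor readings for the light at LIGHT.
    Each sensor is angled 45 degrees outward and responds in proportion
    to light intensity and to how well it faces the source."""
    bearing = math.atan2(LIGHT[1] - y, LIGHT[0] - x)
    intensity = 1.0 / (1.0 + (x - LIGHT[0]) ** 2 + (y - LIGHT[1]) ** 2)
    readings = []
    for side in (+1, -1):  # left sensor, then right sensor
        alignment = max(0.0, math.cos(bearing - (heading + side * math.pi / 4)))
        readings.append(intensity * alignment)
    return readings

def step(x, y, heading, dt=0.1):
    left, right = sense(x, y, heading)
    # Crossed excitatory wiring: each sensor excites the OPPOSITE wheel,
    # so the vehicle turns toward the stronger stimulus.
    v_left, v_right = right, left
    speed = (v_left + v_right) / 2.0
    omega = (v_right - v_left) / WHEELBASE
    heading += omega * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

x, y, heading = 0.0, -2.0, 0.0  # light starts ahead and to the agent's left
for _ in range(1000):
    x, y, heading = step(x, y, heading)
print(math.hypot(x, y))         # the agent has closed in on the light
```

Agents of this kind, biological or artificial, are exactly the objects of study the passage describes: the "computational principle" here is nothing more than the sensorimotor wiring, which is what a computational-neuroethological analysis would aim to uncover.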