Place cognition and active perception: a study with evolved robots
A study of place cognition and 'place units' in robots produced via artificial evolution is described. Previous studies have investigated the possible role of place cells as building blocks for 'cognitive maps' representing place, distance and direction. Studies also show, however, that when animals are restrained, the spatial selectivity of place cells is partially or completely lost. This suggests that the role of place cells in spatial cognition depends not only on the place cells themselves, but also on representations of the animal's physical interactions with its environment. This hypothesis is tested in a population of evolved robots. The results suggest that successful place cognition requires not only the ability to process spatial information, but also the ability to select the environmental stimuli to which the agent is exposed. If this is so, theories of active perception can make a useful contribution to explaining the role of place cells in spatial cognition.
Using robots to understand animal cognition
In recent years, robotic animals and humans have been used to answer a variety of questions related to behavior. In the case of animal behavior, these efforts have largely been in the field of behavioral ecology. They have proved to be a useful tool for this enterprise, as they allow the presentation of naturalistic social stimuli whilst providing the experimenter with full control of the stimulus. In interactive experiments, the behavior of robots can be controlled in a manner that is impossible with real animals, making them ideal instruments for the study of social stimuli in animals. This paper provides an overview of the current state of the field and considers the impact that the use of robots could have on fundamental questions related to comparative psychology: namely, perception, spatial cognition, social cognition, and early cognitive development. We make the case that the use of robots to investigate these key areas could have an important impact on the field of animal cognition.
The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling
Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.
Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction
This paper presents a computational model of the processing of dynamic spatial relations occurring in an embodied robotic interaction setup. A complete system is introduced that allows autonomous robots to produce and interpret dynamic spatial phrases (in English) given an environment of moving objects. The model unites two separate research strands: computational cognitive semantics, and commonsense spatial representation and reasoning. The model for the first time demonstrates an integration of these different strands.
Comment: in Pham, D.-N. and Park, S.-B., editors, PRICAI 2014: Trends in Artificial Intelligence, volume 8862 of Lecture Notes in Computer Science, pages 958-971. Springer.
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
A corpus-based analysis of route instructions in human-robot interaction
This paper investigates how users employ spatial descriptions to navigate a speech-enabled robot. We created a simulated environment in which users gave route instructions in a dialogic real-time interaction with a robot, which was operated by naïve participants. The robot's monitoring ability was also manipulated in two experimental conditions. The results provide evidence that the content of the instructions and the strategies of the users vary depending on the conditions and demands of the interaction. As expected, the route instructions were frequently underspecified and arbitrary. The findings of this study elucidate the complexity of interpreting spatial language in HRI. However, they also point to the need for endowing mobile robots with richer dialogue resources to compensate for the uncertainties arising from language as well as from the environment.
Effects of spatial ability on multi-robot control tasks
Working with large teams of robots is a very complex and demanding task for any operator, and individual differences in spatial ability could significantly affect performance. In the present study, we examine data from two earlier experiments to investigate the effects of perspective-taking ability on performance at an urban search and rescue (USAR) task using a realistic simulation and alternate displays. We evaluated the participants' spatial ability using a standard measure of spatial orientation and examined the divergence of performance in accuracy and speed in locating victims, and perceived workload. Our findings show that operators with higher spatial ability experienced less workload and marked victims more precisely. An interaction was found for the experimental image queue display, for which participants with low spatial ability improved significantly in their accuracy in marking victims over the traditional streaming video display. Copyright 2011 by Human Factors and Ergonomics Society, Inc. All rights reserved.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both for recent and for future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking conclusion.