Learning cognitive maps: Finding useful structure in an uncertain world
In this chapter we describe the central mechanisms that influence how people learn about large-scale space. We focus particularly on how these mechanisms enable people to cope effectively both with the uncertainty inherent in a constantly changing world and with the high information content of natural environments. The major lessons are that humans get by with a less-is-more approach to building structure, and that they can adapt quickly to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles rather than concrete implementation details, we show that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg
Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots
Building a humanlike integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals of artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development would be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. The paper describes the process of building the WB-PGM and of learning from the human brain to build cognitive architectures.
Comment: 55 pages, 8 figures, submitted to Neural Networks
Adaptive and intelligent navigation of autonomous planetary rovers - A survey
The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone showcasing the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human tele-operators to drive them. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge because of the harsh and complex nature of Martian terrain. Developing a truly autonomous rover that can navigate effectively in such environments requires intelligent and adaptive methods suited to a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the perspectives of the authors.
Exploiting semantic information in a spiking neural SLAM system
To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues and sensory and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM, a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and that their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells emerge in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate the feasibility of a full neuromorphic implementation for energy-efficient SLAM.
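The idea of binding continuous spatial maps to discrete landmark features can be illustrated in a few lines. The sketch below is a minimal NumPy illustration, not the authors' code: it assumes spatial semantic pointers built by fractional binding (FFT-domain powers of random unitary base vectors), circular convolution as the binding operator, and an environment map formed as a superposition of landmark-position bindings. The dimensionality, landmark names, and positions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 256  # vector dimensionality (assumed)

def make_unitary(d, rng):
    """Random unitary vector: every Fourier coefficient has magnitude 1,
    so circular convolution with it is exactly invertible."""
    phases = rng.uniform(-np.pi, np.pi, d // 2 + 1)
    phases[0] = 0.0          # real DC component
    if d % 2 == 0:
        phases[-1] = 0.0     # real Nyquist component
    return np.fft.irfft(np.exp(1j * phases), n=d)

def fpow(v, p):
    """Fractional binding: raise the Fourier coefficients to a real power."""
    return np.fft.irfft(np.fft.rfft(v) ** p, n=len(v))

def bind(a, b):
    """Circular convolution, the binding operator."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(v):
    """Exact inverse of a unitary vector under circular convolution."""
    return fpow(v, -1.0)

def sim(a, b):
    """Cosine similarity."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Base axis vectors; a continuous 2-D position is encoded by fractional
# binding along each axis.
X, Y = make_unitary(D, rng), make_unitary(D, rng)
def ssp(x, y):
    return bind(fpow(X, x), fpow(Y, y))

# Environment map: superposition of landmark (x) position bindings.
door, lamp = make_unitary(D, rng), make_unitary(D, rng)
memory = bind(door, ssp(1.3, 2.1)) + bind(lamp, ssp(-4.2, 0.5))

# Query the map: unbinding `door` recovers its position, plus crosstalk
# from the other stored pair.
recovered = bind(memory, inv(door))
```

Because the base vectors are unitary, unbinding is exact up to the crosstalk term, so the recovered vector is far more similar to the door's true position encoding than to the lamp's.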
Appearance-based indoor localization: a comparison of patch descriptor performance
Vision is one of the most important senses, and humans use it extensively during navigation. We evaluated different types of image and video-frame descriptors that could be used to determine distinctive visual landmarks for localizing a person based on what is seen by a camera that they carry. To do this, we created a database containing over 3 km of video sequences with ground truth in the form of distance travelled along different corridors. Using this database, the accuracy of localization can be evaluated both in terms of knowing which route a user is on and in terms of position along a certain route. For each type of descriptor, we also tested different techniques to encode visual structure and to search between journeys to estimate a user's position. The techniques include single-frame descriptors, descriptors computed over sequences of frames, and both colour and achromatic descriptors. We found that single-frame indexing worked better within this particular dataset, perhaps because the motion of the person holding the camera makes the video too dependent on the individual steps and motions of one particular journey. Our results suggest that appearance-based information could be an additional source of navigational data indoors, augmenting that provided by, say, radio signal strength indicators (RSSIs). Such visual information could be collected by crowdsourcing low-resolution video feeds, allowing journeys made by different users to be associated with each other, and location to be inferred without requiring explicit mapping. This offers a complementary approach to methods based on simultaneous localization and mapping (SLAM) algorithms.
Comment: Accepted for publication in Pattern Recognition Letters
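Single-frame indexing as described above can be sketched abstractly. The following is a hypothetical NumPy sketch, not the paper's code: random vectors stand in for real patch descriptors, and a query frame is localized by nearest-neighbour search over a reference journey whose frames are tagged with distance travelled. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for real frame descriptors: one 64-D vector per frame of a
# reference journey, each tagged with distance travelled along the route.
ref_descriptors = rng.normal(size=(200, 64))
ref_distances = np.linspace(0.0, 100.0, 200)  # metres along the corridor

def localize(query, ref_desc, ref_dist):
    """Single-frame indexing: return the route position of the reference
    frame whose descriptor is nearest (L2) to the query descriptor."""
    errors = np.linalg.norm(ref_desc - query, axis=1)
    i = int(np.argmin(errors))
    return ref_dist[i], errors[i]

# A query frame from a later journey, simulated as a noisy copy of
# reference frame 57.
query = ref_descriptors[57] + 0.1 * rng.normal(size=64)
position, match_error = localize(query, ref_descriptors, ref_distances)
```

Because the descriptor noise is small relative to the spacing between random descriptors, the nearest neighbour is the correct reference frame, and the estimated position matches its ground-truth distance.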
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
Deep Causal Learning for Robotic Intelligence
This invited review discusses causal learning in the context of robotic intelligence. The paper introduces the psychological findings on causal learning in human cognition, then presents the traditional statistical approaches to causal discovery and causal inference. It reviews recent deep causal learning algorithms, focusing on their architectures and the benefits of using deep networks, and discusses the gap between deep causal learning and the needs of robotic intelligence.
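The distinction between statistical association and causal effect that such reviews build on can be shown in a few lines. The sketch below is illustrative only, a toy linear structural causal model that is not from the review: a confounder Z drives both X and Y, so naively regressing Y on X overestimates the direct effect of 2.0, while simulating the intervention do(X) recovers it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy linear structural causal model (hypothetical):
# confounder Z -> X and Z -> Y, plus a direct effect X -> Y of 2.0.
z = rng.normal(size=n)
x = 1.5 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)

# Observational estimate: regressing Y on X alone absorbs the
# confounding path through Z into the slope.
naive_slope = np.polyfit(x, y, 1)[0]

# Interventional estimate: do(X = x0) cuts the Z -> X edge, so X is
# set independently of Z and the regression recovers the direct effect.
x_do = rng.normal(size=n)
y_do = 2.0 * x_do + 3.0 * z + rng.normal(size=n)
causal_slope = np.polyfit(x_do, y_do, 1)[0]
```

With these coefficients the observational slope converges to about 3.4 (the direct effect plus the confounded contribution), while the interventional slope converges to the true direct effect of 2.0.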