Multimodal interactions in insect navigation
Animals travelling through the world receive input from multiple sensory modalities that could be important for the guidance of their journeys. Given the availability of a rich array of cues, from idiothetic information to input from sky compasses and visual information through to olfactory and other cues (e.g. gustatory, magnetic, anemotactic or thermal), it is no surprise to see multimodality in most aspects of navigation. In this review, we present the current knowledge of multimodal cue use during orientation and navigation in insects. Multimodal cue use is adapted to a species' sensory ecology and shapes navigation behaviour both during the learning of environmental cues and when performing complex foraging journeys. The simultaneous use of multiple cues is beneficial because it provides redundant navigational information, and in general, multimodality increases robustness, accuracy and overall foraging success. We use examples from sensorimotor behaviours in mosquitoes and flies as well as from large-scale navigation in ants, bees and insects that migrate seasonally over large distances, asking at each stage how multiple cues are combined behaviourally and what insects gain from using different modalities.
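The behavioural question of how multiple directional cues are combined is often modelled as reliability-weighted vector summation. The sketch below is a minimal illustration of that idea, not code from the review; the function name, the cue directions and the reliability weights are all hypothetical.

```python
import numpy as np

def combine_cues(headings, reliabilities):
    """Combine directional cues by reliability-weighted vector summation.

    headings: cue directions in radians (e.g. path integration,
              visual memory, wind direction).
    reliabilities: non-negative weights reflecting how certain each cue is.
    Returns the combined heading and the normalized length of the summed
    vector (a proxy for overall confidence).
    """
    headings = np.asarray(headings, dtype=float)
    weights = np.asarray(reliabilities, dtype=float)
    # Each cue contributes a unit vector scaled by its reliability.
    x = np.sum(weights * np.cos(headings))
    y = np.sum(weights * np.sin(headings))
    combined_heading = np.arctan2(y, x)
    confidence = np.hypot(x, y) / np.sum(weights)
    return combined_heading, confidence

# Example: path integration points at 0 rad with high reliability,
# a visual memory points at 0.5 rad with lower reliability.
heading, conf = combine_cues([0.0, 0.5], [2.0, 1.0])
print(f"combined heading: {heading:.3f} rad, confidence: {conf:.2f}")
```

Under this model, the combined heading falls between the cues, pulled toward the more reliable one, and conflicting cues lower the confidence of the summed vector.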
Crossmodal Attentive Skill Learner
This paper presents the Crossmodal Attentive Skill Learner (CASL), integrated with the recently introduced Asynchronous Advantage Option-Critic (A2OC) architecture [Harb et al., 2017] to enable hierarchical reinforcement learning across multiple sensory inputs. We provide concrete examples where the approach not only improves performance in a single task, but accelerates transfer to new tasks. We demonstrate that the attention mechanism anticipates and identifies useful latent features, while filtering irrelevant sensor modalities during execution. We modify the Arcade Learning Environment [Bellemare et al., 2013] to support audio queries, and conduct evaluations of crossmodal learning in the Atari 2600 game Amidar. Finally, building on the recent work of Babaeizadeh et al. [2017], we open-source a fast hybrid CPU-GPU implementation of CASL.
Comment: International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2018; NIPS 2017 Deep Reinforcement Learning Symposium
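The abstract does not spell out the attention mechanism, but attention over per-modality embeddings is easy to sketch. The following is a hypothetical illustration under simplified assumptions, not the CASL implementation: each modality's feature vector is scored against a context vector, softmax weights are computed, and the fused feature is their weighted sum.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def crossmodal_attention(features, context):
    """Fuse per-modality features with scalar attention weights.

    features: dict mapping modality name -> 1-D feature vector
              (e.g. {"video": v, "audio": a}), all the same length.
    context:  context vector used to score each modality (learned in
              practice; random here for illustration).
    Returns the fused feature and the attention weight per modality.
    """
    names = list(features)
    stacked = np.stack([features[n] for n in names])  # (M, D)
    scores = stacked @ context                        # relevance per modality
    weights = softmax(scores)
    fused = weights @ stacked                         # (D,) weighted sum
    return fused, dict(zip(names, weights))

rng = np.random.default_rng(0)
feats = {"video": rng.normal(size=16), "audio": rng.normal(size=16)}
fused, attn = crossmodal_attention(feats, rng.normal(size=16))
print(attn)  # low weight on a modality means it is filtered out of the fusion
```

A low softmax weight effectively filters an irrelevant modality out of the fused representation, which is the behaviour the abstract describes during execution.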
Machine Understanding of Human Behavior
A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.
SERKET: An Architecture for Connecting Stochastic Models to Realize a Large-Scale Cognitive Model
To realize human-like robot intelligence, a large-scale cognitive architecture is required for robots to understand their environment through the variety of sensors with which they are equipped. In this paper, we propose a novel framework named Serket that enables a large-scale generative model to be constructed, and its inference performed, easily by connecting sub-modules, allowing robots to acquire various capabilities through interaction with their environments and with others. We consider that large-scale cognitive models can be constructed by connecting smaller fundamental models hierarchically while maintaining their programmatic independence. However, the connected modules depend on each other, and their parameters must be optimized as a whole. Conventionally, the equations for parameter estimation have to be derived and implemented for each specific model, which becomes harder as the model grows. To solve these problems, we propose a method for parameter estimation that communicates only minimal parameters between modules while maintaining their programmatic independence. Serket thus makes it easy to construct large-scale models and estimate their parameters via the connection of modules. Experimental results demonstrated that a model can be constructed by connecting modules, that its parameters can be optimized as a whole, and that its performance is comparable with that of the original models we previously proposed.
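Serket's core idea, connected modules that exchange only minimal parameters (such as latent assignment probabilities) while each optimizes its own parameters independently, can be sketched with a toy example. This is a hypothetical illustration, not the Serket implementation; CategoryModule and its message format are invented for the example.

```python
import numpy as np

class CategoryModule:
    """Toy sub-module: infers soft category assignments for its own
    modality, then shares only those probabilities with its neighbour."""

    def __init__(self, data, n_categories, seed=0):
        rng = np.random.default_rng(seed)
        self.data = data                              # (N, D) observations
        self.means = rng.normal(size=(n_categories, data.shape[1]))

    def infer(self, prior):
        # prior: (N, K) message from the connected module (or uniform).
        dists = ((self.data[:, None, :] - self.means[None]) ** 2).sum(-1)
        logp = -0.5 * dists + np.log(prior + 1e-9)
        p = np.exp(logp - logp.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        # Update this module's own parameters only.
        self.means = (p.T @ self.data) / (p.sum(0)[:, None] + 1e-9)
        return p                                      # the minimal message

# Two modalities of the same N items, modelled independently.
rng = np.random.default_rng(1)
N, K = 100, 3
vision = CategoryModule(rng.normal(size=(N, 5)), K, seed=2)
audio = CategoryModule(rng.normal(size=(N, 4)), K, seed=3)

msg = np.full((N, K), 1.0 / K)   # start from a uniform message
for _ in range(10):              # alternate updates, passing only msg
    msg = vision.infer(msg)
    msg = audio.infer(msg)
```

Neither module ever sees the other's data or parameters; the (N, K) probability message is the entire interface, which is what keeps the modules programmatically independent while still letting the whole model be optimized jointly.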
Self-Supervised Vision-Based Detection of the Active Speaker as Support for Socially-Aware Language Acquisition
This paper presents a self-supervised method for visual detection of the active speaker in a multi-person spoken interaction scenario. Active speaker detection is a fundamental prerequisite for any artificial cognitive system attempting to acquire language in social settings. The proposed method is intended to complement the acoustic detection of the active speaker, thus improving the system's robustness in noisy conditions. The method can detect an arbitrary number of possibly overlapping active speakers based exclusively on visual information about their faces. Furthermore, the method does not rely on external annotations, and is thus compatible with cognitive development; instead, it uses information from the auditory modality to support learning in the visual domain. This paper reports an extensive evaluation of the proposed method using a large multi-person face-to-face interaction dataset. The results show good performance in a speaker-dependent setting; in a speaker-independent setting, however, the proposed method yields significantly lower performance. We believe that the proposed method represents an essential component of any artificial cognitive system or robotic platform engaging in social interactions.
Comment: 10 pages, IEEE Transactions on Cognitive and Developmental Systems
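The key design choice here, letting the auditory modality supervise the visual one, can be sketched in a few lines. The following is a hypothetical illustration under strongly simplified assumptions (a single speaker, energy-thresholded voice activity as pseudo-labels, and logistic regression in place of the paper's actual model):

```python
import numpy as np

def audio_pseudo_labels(audio_energy, threshold):
    """Derive weak speech/non-speech labels from per-frame audio energy."""
    return (audio_energy > threshold).astype(float)

def train_visual_classifier(face_feats, labels, lr=0.1, epochs=200):
    """Logistic regression on visual face features, supervised only by
    the audio-derived pseudo-labels (the self-supervision signal)."""
    w = np.zeros(face_feats.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(face_feats @ w + b)))  # sigmoid
        grad = p - labels                                # cross-entropy grad
        w -= lr * face_feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Synthetic single-speaker frames: mouth-motion features plus audio energy.
rng = np.random.default_rng(0)
speaking = rng.integers(0, 2, size=500).astype(float)
face_feats = rng.normal(size=(500, 8)) + speaking[:, None]  # motion cue
audio_energy = speaking + 0.1 * rng.normal(size=500)        # noisy energy

labels = audio_pseudo_labels(audio_energy, threshold=0.5)
w, b = train_visual_classifier(face_feats, labels)
```

Once trained, the visual classifier no longer needs the audio stream at test time, which is exactly why such a detector can back up acoustic detection when the audio is noisy.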
An Efficient Threshold-Driven Aggregate-Label Learning Algorithm for Multimodal Information Processing
The aggregate-label learning paradigm tackles the long-standing temporal credit assignment (TCA) problem in neuroscience and machine learning, enabling spiking neural networks to learn multimodal sensory cues with delayed feedback signals. However, existing aggregate-label learning algorithms only work for single spiking neurons and have low learning efficiency, which limits their real-world applicability. To address these limitations, we first propose an efficient threshold-driven plasticity algorithm for spiking neurons, namely ETDP. It enables spiking neurons to generate the desired number of spikes matching the magnitude of delayed feedback signals, and to learn useful multimodal sensory cues embedded within spontaneous spiking activity. Furthermore, we extend the ETDP algorithm to support multi-layer spiking neural networks (SNNs), which significantly improves the applicability of aggregate-label learning algorithms. We also validate the multi-layer ETDP learning algorithm in a multimodal computation framework for audio-visual pattern recognition. Experimental results on both synthetic and realistic datasets show significant improvements in learning efficiency and model capacity over existing aggregate-label learning algorithms. ETDP therefore provides many opportunities for solving real-world multimodal pattern recognition tasks with spiking neural networks.
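The central mechanism in aggregate-label learning, driving a neuron to emit a spike count that matches a delayed aggregate feedback signal, can be sketched with a simple sign-based weight rule. The following is a hypothetical toy illustration of that idea, not the ETDP update rule from the paper:

```python
import numpy as np

def lif_spike_count(weights, spike_trains, threshold=1.0, tau=20.0):
    """Count output spikes of a leaky integrate-and-fire neuron driven by
    binary input spike trains of shape (n_inputs, T)."""
    v, count = 0.0, 0
    for t in range(spike_trains.shape[1]):
        v = v * np.exp(-1.0 / tau) + weights @ spike_trains[:, t]
        if v >= threshold:
            count += 1
            v = 0.0          # reset after a spike
    return count

def aggregate_label_train(spike_trains, target_count, lr=0.01, steps=500):
    """Toy aggregate-label rule: nudge weights up or down until the total
    spike count matches the delayed feedback (target_count)."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=spike_trains.shape[0])
    for _ in range(steps):
        err = target_count - lif_spike_count(w, spike_trains)
        if err == 0:
            break
        # Potentiate active inputs if too few spikes, depress if too many.
        w += lr * np.sign(err) * spike_trains.mean(axis=1)
    return w

inputs = (np.random.default_rng(1).random((30, 200)) < 0.05).astype(float)
w = aggregate_label_train(inputs, target_count=5)
print(lif_spike_count(w, inputs))  # should approach the target of 5
```

Note that the feedback here is a single number per trial (the desired spike count), not a per-spike error signal; that is what makes credit assignment across time the hard part of this paradigm.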