
    The Head Turning Modulation System: An Active Multimodal Paradigm for Intrinsically Motivated Exploration of Unknown Environments

    Over the last 20 years, a significant part of the research in exploratory robotics has shifted from finding the most efficient way to explore an unknown environment toward understanding what could motivate a robot to explore it autonomously. Moreover, a growing literature focuses not only on the topological description of a space (dimensions, obstacles, usable paths, etc.) but also on more semantic components, such as the multimodal objects present in it. In designing robots that behave autonomously through life-long learning abilities, the inclusion of attention mechanisms is important. Indeed, whether endogenous or exogenous, attention constitutes a form of intrinsic motivation, for it can trigger motor commands toward specific stimuli and thus lead to an exploration of the space. The Head Turning Modulation model presented in this paper is composed of two modules that provide a robot with two different forms of intrinsic motivation, each triggering head movements toward audiovisual sources appearing in unknown environments. First, the Dynamic Weighting module implements a motivation based on the concept of Congruence, defined as an adaptive form of semantic saliency specific to each explored environment. Second, the Multimodal Fusion and Inference module implements a motivation based on the reduction of Uncertainty, through a self-supervised online learning algorithm that can autonomously determine local consistencies. One novelty of the proposed model is that it relies solely on semantic inputs (namely, the audio and visual labels the sources belong to), in contrast to the traditional analysis of the low-level characteristics of the perceived data. Another contribution lies in the way exploration is exploited to actively learn the relationship between the visual and auditory modalities. Importantly, the robot, endowed with binocular vision, binaural audition, and a rotating head, has no access to prior information about the different environments it will explore. Consequently, it has to learn in real time which audiovisual objects are of “importance” in order to rotate its head toward them. The results presented in this paper were obtained in simulated environments as well as with a real robot in realistic experimental conditions.
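    To make the Dynamic Weighting idea concrete, here is a minimal sketch in which Congruence is approximated by how often an audiovisual category has already been observed in the current environment, so that rare categories appear incongruent and attract a head turn. The class name, the frequency-based saliency estimate, and the fixed threshold are illustrative assumptions, not the paper's actual algorithm.

from collections import Counter

class DynamicWeighting:
    """Toy congruence-based saliency (assumption: congruence ~ observed
    category frequency in the current environment; the paper's actual
    Dynamic Weighting module is more elaborate)."""

    def __init__(self):
        self.counts = Counter()  # per-environment counts of semantic labels

    def saliency(self, label: str) -> float:
        """Return saliency in [0, 1] for `label`, then record the observation."""
        total = sum(self.counts.values())
        seen = self.counts[label]
        # Never-seen environments or labels are maximally salient.
        s = 1.0 if total == 0 else 1.0 - seen / total
        self.counts[label] += 1
        return s

    def should_turn_head(self, label: str, threshold: float = 0.5) -> bool:
        """Trigger a head movement when the label is incongruent enough."""
        return self.saliency(label) > threshold

dw = DynamicWeighting()
# A stream of semantic labels for perceived audiovisual sources.
for label in ["dog_bark"] * 8 + ["siren"] + ["dog_bark"] * 3:
    if dw.should_turn_head(label):
        print("head turn toward:", label)

    Run on this stream, the sketch turns the head toward the first source it ever perceives and toward the rare "siren", while the frequent (congruent) "dog_bark" is quickly ignored, which is the qualitative behavior the abstract describes.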

    Multimodal feedforward self-organizing maps

    We introduce a novel system of interconnected Self-Organizing Maps that can be used to build feedforward and recurrent networks of maps. A prime application of interconnected maps is in modelling systems that operate on multimodal data, as, for example, in the visual and auditory cortices and the multimodal association areas of the cortex. A detailed example of animal categorization employing the feedforward network of self-organizing maps is presented. In the example we operate on 18-dimensional data projected onto a 19-dimensional hypersphere so that the “dot-product” learning law can be used. One potential benefit of the multimodal map is that it allows a rich structure of parallel unimodal processing involving many maps, followed by convergence into multimodal maps. More complex stimuli can therefore be processed without a growing map size.
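    The two mechanical ingredients the abstract names, the hypersphere projection and the dot-product learning law, can be sketched as follows. The extra 19th coordinate gives every input the same norm, so the best-matching unit can be found by a dot product alone, and the update adds the input to the weights and renormalizes. The map size, neighborhood schedule, and random toy data are illustrative assumptions; the authors' interconnected network of unimodal and multimodal maps is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def project_to_hypersphere(X: np.ndarray, radius: float) -> np.ndarray:
    """Append one coordinate so every row has norm `radius`, then rescale
    to the unit sphere; 18-D inputs become 19-D points as in the abstract."""
    norms = np.linalg.norm(X, axis=1)
    extra = np.sqrt(radius**2 - norms**2)  # requires radius >= all norms
    return np.hstack([X, extra[:, None]]) / radius

def train_dot_product_som(X, n_units=25, epochs=20, lr=0.3):
    """Minimal 1-D SOM with the dot-product learning law (sizes and
    neighborhood schedule are arbitrary choices for the sketch)."""
    W = rng.normal(size=(n_units, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # weights start on the sphere
    for epoch in range(epochs):
        # Shrink the neighborhood radius linearly over training.
        sigma = max(1.0, (n_units / 2) * (1 - epoch / epochs))
        for x in X:
            winner = int(np.argmax(W @ x))          # max dot product wins
            dist = np.abs(np.arange(n_units) - winner)
            h = np.exp(-dist**2 / (2 * sigma**2))   # neighborhood function
            W += lr * h[:, None] * x                # additive update...
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # ...then renormalize
    return W

X = rng.uniform(-1.0, 1.0, size=(200, 18))          # toy 18-D data
Xs = project_to_hypersphere(X, radius=np.sqrt(18.0))
W = train_dot_product_som(Xs)
print("best-matching unit for sample 0:", int(np.argmax(W @ Xs[0])))

    Because both weights and inputs lie on the unit sphere, maximizing the dot product is equivalent to minimizing Euclidean distance, which is why the projection makes the simpler learning law usable.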