33 research outputs found

    Dynamic Self-Organising Map

    We present in this paper a variation of the self-organising map algorithm in which the original time-dependent (learning rate and neighbourhood) learning function is replaced by a time-invariant one. This allows for online and continuous learning on both static and dynamic data distributions. One property of the newly proposed algorithm is that it does not follow the magnification law: the achieved vector density is not directly proportional to the density of the distribution, as it is in most vector quantisation algorithms. From a biological point of view, this algorithm sheds light on cortical plasticity seen as a dynamic and tight coupling between the environment and the model.
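
    A minimal sketch of such a time-invariant update step in Python, following the elasticity formulation commonly associated with DSOM; the parameter names (epsilon, elasticity) and the plain NumPy implementation are illustrative, not the authors' code:

    import numpy as np

    def dsom_update(weights, positions, x, epsilon=0.1, elasticity=2.0):
        """One DSOM step: no decaying learning rate or neighbourhood radius.

        weights   : (n, d) prototype vectors
        positions : (n, 2) fixed unit coordinates on the map
        x         : (d,)  input sample
        """
        dists = np.linalg.norm(weights - x, axis=1)   # input-to-prototype distances
        s = np.argmin(dists)                          # best-matching unit
        if dists[s] == 0.0:                           # exact match: nothing to learn
            return weights
        grid = np.linalg.norm(positions - positions[s], axis=1)
        # The neighbourhood width scales with how badly the winner matches the
        # input, so the coupling to the data stays constant over time.
        h = np.exp(-grid**2 / (elasticity**2 * dists[s]**2))
        weights += epsilon * dists[:, None] * h[:, None] * (x - weights)
        return weights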

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
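
    As a rough illustration of the dual-memory replay idea, a toy sketch in Python (not the authors' growing recurrent networks: temporal context, task-relevant signals, and the thresholds used here are all simplifications):

    import numpy as np

    class GrowingMemory:
        """Toy growing prototype store: insert a node when the input is too novel."""
        def __init__(self, novelty_threshold):
            self.prototypes = []
            self.threshold = novelty_threshold

        def learn(self, x, lr=0.1):
            if not self.prototypes:
                self.prototypes.append(x.copy())
                return
            d = [np.linalg.norm(p - x) for p in self.prototypes]
            b = int(np.argmin(d))
            if d[b] > self.threshold:                 # novel input -> grow the network
                self.prototypes.append(x.copy())
            else:                                     # familiar input -> adapt the winner
                self.prototypes[b] += lr * (x - self.prototypes[b])

    episodic = GrowingMemory(novelty_threshold=0.5)   # fine-grained instance memory
    semantic = GrowingMemory(novelty_threshold=1.5)   # more compact category memory

    def observe(x):
        episodic.learn(x)
        semantic.learn(x)

    def consolidate():
        """Replay episodic traces to the semantic memory without sensory input."""
        for p in episodic.prototypes:
            semantic.learn(p)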

    Image inpainting based on self-organizing maps by using multi-agent implementation

    Image inpainting is a well-known task in visual editing; however, its efficiency strongly depends on the size and textural neighborhood of the "missing" area. Various image inpainting methods exist, among which the Kohonen Self-Organizing Map (SOM), as a means of unsupervised learning, is widely used. Weaknesses of the Kohonen SOM, such as the need to tune algorithm parameters and its low computational speed, motivated the application of a multi-agent system with a multi-mapping capability and parallel processing by identical agents. Experiments showed that preliminary image segmentation, with a separate SOM created for each type of homogeneous texture, provides better results than the classical SOM application. The optimal number of inpainting agents was also determined. Inpainting quality was estimated with several metrics, and good results were obtained on complex images.
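
    A minimal sketch of the patch-matching step, assuming the prototypes come from a SOM trained beforehand on intact patches of one homogeneous texture segment; in the multi-agent setting, each agent could own the map of one segment and process patches in parallel:

    import numpy as np

    def masked_bmu(weights, patch, mask):
        """Find the SOM prototype closest to the patch on the known pixels only."""
        diff = (weights - patch) * mask          # zero out missing pixels
        return np.argmin(np.linalg.norm(diff, axis=1))

    def inpaint_patch(weights, patch, mask):
        """Fill the missing pixels with the corresponding pixels of the BMU."""
        b = masked_bmu(weights, patch, mask)
        return np.where(mask > 0, patch, weights[b])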

    Cortex Inspired Learning to Recover Damaged Signal Modality with ReD-SOM Model

    Recent progress in AI and the cognitive sciences opens up new challenges that were previously inaccessible to study. One such task is recovering the lost data of one modality by using the data from another. A similar effect, known as the McGurk effect, has been found in the functioning of the human brain: one modality of information interferes with another, changing its perception. In this paper, we propose a way to simulate such an effect and use it to reconstruct lost data modalities by combining Variational Auto-Encoders, Self-Organizing Maps, and Hebbian connections in a unified ReD-SOM (Reentering Deep Self-Organizing Map) model. We are inspired by the human capability to recruit different zones of the brain across modalities when information in one of the modalities is lacking. This approach not only improves the analysis of ambiguous data but also restores the intended signal. Results obtained on a multimodal dataset demonstrate an increase in the quality of signal reconstruction; the effect is remarkable both visually and quantitatively, specifically in the presence of a significant degree of signal distortion.
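
    A schematic sketch of the Hebbian cross-modal link between the latent spaces of two maps; the VAE encoders and decoders of the full ReD-SOM model are omitted, and all sizes and names are illustrative:

    import numpy as np

    n_a, n_b = 64, 64                    # units in the two modality maps
    hebb = np.zeros((n_a, n_b))          # cross-modal Hebbian connections

    def bmu(weights, z):
        return int(np.argmin(np.linalg.norm(weights - z, axis=1)))

    def co_activate(wa, wb, za, zb, lr=0.05):
        """Strengthen the link between simultaneously active units (Hebbian rule)."""
        hebb[bmu(wa, za), bmu(wb, zb)] += lr

    def recover_b(wa, wb, za):
        """Reconstruct the missing modality-B code from modality A alone."""
        i = bmu(wa, za)
        j = int(np.argmax(hebb[i]))      # most strongly associated unit in map B
        return wb[j]                     # its prototype approximates the lost code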

    The ubiquitous self-organizing map for non-stationary data streams


    Self-Organizing Dynamic Neural Fields

    This paper presents a one-dimensional dynamic neural field that can continuously and dynamically self-organize.
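
    For context, one Euler step of an Amari-type one-dimensional neural field; the self-organization of the field that the paper contributes is omitted, and the kernel and parameters are illustrative:

    import numpy as np

    n = 100                                            # field discretization
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    # Difference-of-Gaussians kernel: local excitation, broader inhibition.
    w = 1.0 * np.exp(-d**2 / (2 * 5.0**2)) - 0.5 * np.exp(-d**2 / (2 * 15.0**2))

    def dnf_step(u, stim, dt=0.1, tau=1.0, h=-0.2):
        """tau * du/dt = -u + w * f(u) + stim + h (Amari field equation)."""
        f = (u > 0).astype(float)                      # Heaviside firing rate
        lateral = w @ f / n                            # lateral interaction term
        return u + dt / tau * (-u + lateral + stim + h)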

    Dynamic reservoir for developmental reinforcement learning

    We present in this paper an original neural architecture based on a Dynamic Self-Organizing Map (DSOM). Within a reservoir computing paradigm, this architecture is used as a function approximator in a reinforcement learning setting where the state-action space is difficult to handle. The lifelong online learning property of the DSOM allows us to take a developmental approach to learning a robotic task: the perception and motor skills of the robot can grow in richness and complexity during learning. As this work is largely in progress, valid and sound results are not yet available.
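
    Since the paper describes work in progress, only the general idea can be sketched: map-unit activations serve as reservoir features for a linear readout trained by temporal-difference learning. Everything below (feature shape, parameters) is an assumption, not the authors' implementation; the DSOM adaptation step sketched earlier in this listing would update the map online alongside the readout:

    import numpy as np

    def features(weights, s, beta=4.0):
        """Reservoir features: Gaussian activation of each map unit for state s."""
        d = np.linalg.norm(weights - s, axis=1)
        phi = np.exp(-beta * d**2)
        return phi / phi.sum()

    def td_update(theta, weights, s, a, r, s_next, gamma=0.95, alpha=0.1):
        """Q-learning step with a linear readout over the map activations.

        theta : (n_actions, n_units) readout weights
        """
        phi, phi_next = features(weights, s), features(weights, s_next)
        target = r + gamma * np.max(theta @ phi_next)
        theta[a] += alpha * (target - theta[a] @ phi) * phi
        return theta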