    An Architecture for Emotional and Context-Aware Associative Learning for Robot Companions

    This work proposes a theoretical architectural model based on the brain's fear learning system, with the purpose of generating artificial fear conditioning at both the stimulus and context abstraction levels in robot companions. The proposed architecture is inspired by the different brain regions involved in fear learning, here divided into four modules that work in an integrated and parallel manner: the sensory system, the amygdala system, the hippocampal system, and working memory. Each module is based on a different approach and performs a different task in the process of learning and memorizing environmental cues that predict the occurrence of unpleasant situations. The main contribution of the proposed model is the integration of fear learning and context awareness in order to fuse emotional and contextual artificial memories. The purpose is to provide robots with more believable social responses, leading to more natural interactions between humans and robots.
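
    A minimal Python sketch of the four-module organization described above; the class names, learning rates, and max-fusion rule are illustrative assumptions, not the authors' implementation.

        # Sketch: stimulus-level and context-level fear are learned in
        # parallel and fused into a single response.

        class SensorySystem:
            def perceive(self, raw_input):
                # Placeholder feature extraction from raw sensor data.
                return raw_input["stimulus"], raw_input["context"]

        class AmygdalaSystem:
            """Stimulus-level fear conditioning (fast, cue-specific)."""
            def __init__(self, lr=0.3):
                self.lr = lr
                self.fear = {}  # stimulus -> fear strength in [0, 1]

            def condition(self, stimulus, unpleasant):
                prev = self.fear.get(stimulus, 0.0)
                self.fear[stimulus] = prev + self.lr * (float(unpleasant) - prev)

            def respond(self, stimulus):
                return self.fear.get(stimulus, 0.0)

        class HippocampalSystem:
            """Context-level fear conditioning (slower, scene-specific)."""
            def __init__(self, lr=0.1):
                self.lr = lr
                self.fear = {}  # context -> fear strength in [0, 1]

            def condition(self, context, unpleasant):
                prev = self.fear.get(context, 0.0)
                self.fear[context] = prev + self.lr * (float(unpleasant) - prev)

            def respond(self, context):
                return self.fear.get(context, 0.0)

        class WorkingMemory:
            """Fuses emotional (stimulus) and contextual memories."""
            def integrate(self, stimulus_fear, context_fear):
                return max(stimulus_fear, context_fear)

    For instance, pairing a loud noise with an unpleasant outcome in a given room would raise both fear traces, and the working memory would report the stronger of the two.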

    Dr. Claude F. Touzet: cognitive science and artificial neural networks

    Claude F. Touzet is an associate professor at the University of Provence, Marseilles, specialized in cognitive science. He is a recognized specialist in artificial neural networks and their industrial applications. Dr. Touzet authored two research papers published in the International Journal of Advanced Robotic Systems (IJARS): “Distributed Lazy Q-Learning for Cooperative Mobile Robots” (2004) and “Modeling and Simulation of Elementary Robot Behaviors Using Associative Memories” (2006).

    Prediction error-driven memory consolidation for continual learning. On the case of adaptive greenhouse models

    This work presents an adaptive architecture that performs online learning and addresses catastrophic forgetting by means of episodic memories and prediction-error-driven memory consolidation. In line with evidence from cognitive science and neuroscience, memories are retained depending on their congruency with the prior knowledge stored in the system, estimated in terms of the prediction error produced by a generative model. Moreover, the system is transferred onto an innovative application in the horticulture industry: the learning and transfer of greenhouse models. This work presents a model trained on data recorded from research facilities and transferred to a production greenhouse. (Revised version; under review at the Springer German Journal on Artificial Intelligence (Künstliche Intelligenz), Special Issue on Developmental Robotics.)
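
    A hedged sketch of the consolidation rule described above: a new observation is kept in episodic memory when the generative model's prediction error marks it as incongruent with prior knowledge. The buffer size, error threshold, and model interface are assumptions for illustration, not the paper's implementation.

        import numpy as np

        class EpisodicMemory:
            def __init__(self, capacity=1000, error_threshold=0.1):
                self.buffer = []
                self.capacity = capacity
                self.error_threshold = error_threshold

            def maybe_consolidate(self, model, x, y):
                # Prediction error of the generative model on the new sample.
                error = float(np.mean((model.predict(x) - y) ** 2))
                if error > self.error_threshold:
                    # Surprising (incongruent) samples are retained for
                    # replay, so rehearsal counters catastrophic forgetting.
                    self.buffer.append((x, y))
                    self.buffer = self.buffer[-self.capacity:]
                return error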

    Towards Lifelong Reasoning with Sparse and Compressive Memory Systems

    Humans have a remarkable ability to remember information over long time horizons. When reading a book, we build up a compressed representation of the past narrative, such as the characters and events that have shaped the story so far. We can do this even when they are separated from the current text by thousands of words, or by long stretches of time between readings. Over our lives, we build up and retain memories that tell us where we live, what we have experienced, and who we are. Adding memory to artificial neural networks has been transformative in machine learning, allowing models to extract structure from temporal data and model the future more accurately. However, the capacity for long-range reasoning in current memory-augmented neural networks is considerably limited in comparison to humans, despite access to powerful modern computers. This thesis explores two prominent approaches to scaling artificial memories to lifelong capacity: sparse access and compressive memory structures. With sparse access, only a very small subset of pertinent memories is inspected, retrieved, and updated. Sparse memory access is found to be beneficial for learning, allowing improved data efficiency and generalisation. From a computational perspective, sparsity allows scaling to memories with millions of entities on a simple CPU-based machine. It is shown that memory systems which compress the past to a smaller set of representations reduce redundancy, can speed up the learning of rare classes, and can improve upon classical data structures in database systems. Compressive memory architectures are also devised for sequence prediction tasks and are observed to significantly advance the state of the art in modelling natural language.
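
    An illustrative sketch of the two mechanisms discussed above, assuming the memory is stored as a matrix of slot vectors; the function names, top-k read, and average-pooling compressor are assumptions for exposition, not the thesis code.

        import numpy as np

        def sparse_read(memory, query, k=4):
            # Sparse access: attend over only the k most relevant slots
            # instead of the full memory, which keeps reads cheap even
            # for memories with millions of entries.
            scores = memory @ query                    # (num_slots,)
            top_k = np.argpartition(scores, -k)[-k:]   # k best slot indices
            weights = np.exp(scores[top_k] - scores[top_k].max())
            weights /= weights.sum()
            return weights @ memory[top_k]             # weighted read vector

        def compress_oldest(memory, rate=2):
            # Compressive memory: rather than discarding old slots, pool
            # groups of `rate` old slots into single coarser vectors.
            n_old = (len(memory) // 2 // rate) * rate  # slots to compress
            pooled = memory[:n_old].reshape(-1, rate, memory.shape[1]).mean(axis=1)
            return np.concatenate([pooled, memory[n_old:]], axis=0)

    The pooling step is the simplest choice of compressor; the general idea is only that old memories are summarized rather than dropped.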

    Using Neural Networks to Simulate the Alzheimer's Disease

    Making use of biologically plausible artificial neural networks that implement Grossberg's presynaptic learning rule, we simulate the possible effects of calcium dysregulation on the neuron's activation function, representing the most widely accepted model of Alzheimer's disease: the calcium dysregulation hypothesis. According to Cudmore and Turrigiano, calcium dysregulation alters the shifting dynamics of the neuron's activation function (intrinsic plasticity). We propose that this alteration might affect the stability of the synaptic weights in which memories are stored. The results of the simulation supported this hypothesis, implying that the emergence of Alzheimer's disease symptoms, such as memory loss and learning problems, might be correlated with impaired intrinsic neuronal plasticity due to calcium dysregulation.
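
    A minimal sketch of the intrinsic-plasticity mechanism at stake, assuming a sigmoid activation whose threshold shifts homeostatically to track a target activity level; the calcium_factor knob standing in for dysregulation is our simplification, not the paper's equations.

        import numpy as np

        def activation(x, threshold, gain=4.0):
            # Sigmoid activation whose threshold can shift over time
            # (intrinsic plasticity).
            return 1.0 / (1.0 + np.exp(-gain * (x - threshold)))

        def shift_threshold(threshold, recent_activity, target=0.5,
                            lr=0.05, calcium_factor=1.0):
            # Healthy homeostasis (calcium_factor = 1.0) shifts the threshold
            # so mean activity tracks the target; a dysregulated factor
            # (e.g. 0.0 or 3.0) impairs or exaggerates that shift, which in
            # turn destabilizes the weights in which memories are stored.
            return threshold + calcium_factor * lr * (recent_activity - target)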