74 research outputs found

    Towards Teaching a Robot to Count Objects

    Get PDF
    We present an example of incremental learning between two computational models dealing with different modalities: a model that switches spatial visual attention and a model that learns the ordinal sequence of spoken number words. Merging them via a common reward signal nevertheless produces a cardinal counting behaviour that can be implemented on a robot.
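
    The counting scheme described here can be sketched in a few lines (a toy illustration of the idea, not the paper's model; all names are ours):

        # Shift attention from object to object (with inhibition of return)
        # while stepping through the ordinal number sequence; the last word
        # uttered gives the cardinal count.
        NUMBER_WORDS = ["one", "two", "three", "four", "five"]

        def count_objects(object_locations):
            remaining = list(object_locations)  # objects not yet attended
            last_word = None
            for word in NUMBER_WORDS:
                if not remaining:
                    break
                remaining.pop(0)                # attend one object, then inhibit it
                last_word = word
            return last_word

        print(count_objects([(1, 2), (3, 4), (5, 6)]))  # -> "three"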

    A Computational Model of Spatial Memory Anticipation during Visual Search

    Get PDF
    Some visual search tasks require memorizing the locations of stimuli that have previously been scanned. Considering eye movements raises the question of how we are able to maintain a coherent memory despite the frequent, drastic changes in perception. In this article, we present a computational model that anticipates the consequences of eye movements on visual perception in order to update a spatial memory.
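
    The anticipation idea can be illustrated with a minimal sketch (assuming a discrete retinotopic grid; np.roll and its wrap-around are our simplifications, not the paper's model):

        import numpy as np

        def remap_memory(memory, saccade):
            """Shift a retinotopic memory map opposite to the saccade vector.

            After an eye movement of `saccade` (rows, cols) pixels, every
            remembered location moves by the opposite amount on the retina.
            Illustrative only: np.roll wraps around, whereas a real model
            would discard locations that leave the field of view.
            """
            dy, dx = saccade
            return np.roll(memory, shift=(-dy, -dx), axis=(0, 1))

        # Example: one remembered stimulus, then a saccade 1 px down, 3 px right.
        memory = np.zeros((10, 10))
        memory[5, 5] = 1.0
        updated = remap_memory(memory, saccade=(1, 3))
        assert updated[4, 2] == 1.0  # the trace moved opposite to the eye movement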

    A Computational Model of Basal Ganglia and its Role in Memory Retrieval in Rewarded Visual Memory Tasks

    Get PDF
    Visual working memory (WM) tasks involve a network of cortical areas such as the inferotemporal, medial temporal and prefrontal cortices. Here we investigate the role of the basal ganglia (BG) in the learning of delayed rewarded tasks through the selective gating of thalamocortical loops. We designed a computational model of the visual loop linking the perirhinal cortex, the BG and the thalamus, biased by sustained representations in prefrontal cortex. This model concurrently learns different delayed rewarded tasks that require maintaining a visual cue and associating it with itself or with another visual object to obtain reward. The retrieval of visual information is achieved through thalamic stimulation of the perirhinal cortex. The input structure of the BG, the striatum, learns to represent visual information based on its association with reward, while the output structure, the substantia nigra pars reticulata, learns to link striatal representations to the disinhibition of the correct thalamocortical loop. In parallel, a dopaminergic cell learns to associate striatal representations with reward and modulates learning of connections within the BG. The model provides testable predictions about the behavior of several areas during such tasks, while proposing a new functional organization of learning within the BG that emphasizes the learning of the striatonigral connections as well as the lateral connections within the substantia nigra pars reticulata. It suggests that the learning of visual WM tasks is achieved rapidly in the BG and used as a teacher for feedback connections from prefrontal cortex to posterior cortices.
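
    The dopamine-modulated learning described here follows the general logic of a three-factor rule; below is a generic textbook sketch (our simplification, not the paper's equations):

        import numpy as np

        rng = np.random.default_rng(0)
        n_cortex, n_striatum = 20, 10
        W = rng.normal(0.0, 0.1, (n_striatum, n_cortex))  # corticostriatal weights
        V = np.zeros(n_striatum)                          # reward predictions
        alpha, beta = 0.05, 0.1

        def trial(cortex, reward):
            """One trial of a generic three-factor (dopamine-gated Hebbian) update."""
            global W, V
            striatum = np.maximum(W @ cortex, 0.0)        # rectified striatal activity
            prediction = V @ striatum                     # expected reward
            dopamine = reward - prediction                # prediction error (DA burst/dip)
            W += alpha * dopamine * np.outer(striatum, cortex)  # pre * post * DA
            V += beta * dopamine * striatum               # update the critic
            return dopamine

        cue = rng.random(n_cortex)
        errors = [trial(cue, reward=1.0) for _ in range(50)]
        # The DA response to the rewarded cue shrinks as the prediction improves.
        print(f"DA response, first vs last trial: {errors[0]:.2f} -> {errors[-1]:.2f}")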

    ANNarchy: A Code Generation Approach to Neural Simulations on Parallel Hardware

    Get PDF
    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which makes it easy to define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while neuron and synapse models are specified using an equation-oriented mathematical description similar to that of the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform the ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
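
    As a flavour of the equation-oriented interface, here is a minimal rate-coded example following ANNarchy's documented style (parameter values are illustrative):

        from ANNarchy import *

        # A simple rate-coded neuron defined by its ODE, Brian-style.
        LeakyNeuron = Neuron(
            parameters="tau = 10.0",
            equations="tau * dr/dt + r = sum(exc)",  # r relaxes towards its excitatory input
        )

        pop1 = Population(geometry=100, neuron=LeakyNeuron)
        pop2 = Population(geometry=100, neuron=LeakyNeuron)

        # All-to-all excitatory projection with fixed weights.
        proj = Projection(pre=pop1, post=pop2, target='exc')
        proj.connect_all_to_all(weights=0.01)

        compile()        # generates and builds the C++ code
        simulate(100.0)  # run for 100 ms on the chosen hardware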

    Implicit neural representations for deep drawing and joining experiments

    Get PDF
    Deep drawing and joining processes are used to produce complex geometries in many industrial applications. Although simulations using the Finite Element Method (FEM) help in steering the design toward the desired result, they are particularly time-consuming for large 3D meshes. Searching for the process parameters that lead to the desired shape of a metal part can become extremely expensive in terms of man-hours and computational resources. We investigated how machine learning models, especially deep neural networks, can speed up the design of deep drawing and joining processes by allowing a fast interpolation of FEM simulations, reducing evaluation times from minutes or hours to seconds. In this study, inspired by implicit representations of 3D objects using neural networks, an implicit approach is used to predict local properties such as the thickness of the metal sheet, its thinning, and its plastic strain, using solely the process parameters defining the experiment. We observe that the low number of trainable parameters of the predicting model ensures generalization to unseen process parameters and ultimately allows for a reliable, fast inspection of the processes.
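
    In the spirit of such implicit representations, a surrogate might look like the following PyTorch sketch (architecture, sizes and names are our assumptions, not the paper's):

        import torch
        import torch.nn as nn

        # Hypothetical implicit model: maps a point on the sheet plus the
        # process parameters to local properties (thickness, thinning, strain).
        class ImplicitSheetModel(nn.Module):
            def __init__(self, n_params=4, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(3 + n_params, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3),  # thickness, thinning, plastic strain
                )

            def forward(self, xyz, params):
                return self.net(torch.cat([xyz, params], dim=-1))

        model = ImplicitSheetModel()
        points = torch.rand(1024, 3)            # query points on the mesh
        params = torch.rand(4).expand(1024, 4)  # one process configuration
        local_props = model(points, params)     # (1024, 3), one pass per point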

    Emergence of Attention within a Neural Population

    Get PDF
    We present a dynamic model of attention based on the Continuum Neural Field Theory that explains attention as an emergent property of a neural population. The model is shown experimentally to be very robust, able to track a static or moving target in the presence of very strong noise or of many distractors, even ones more salient than the target. This attentional property is not restricted to the visual case and can be considered a generic attentional process for any spatio-temporal continuous input.
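
    Such dynamics follow the classical Amari-style neural field equation with local excitation and broader inhibition; a minimal 1D sketch (illustrative parameters, not the paper's) is:

        import numpy as np

        N, tau, dt = 100, 10.0, 1.0
        x = np.arange(N)
        # Circular distance and a difference-of-Gaussians lateral kernel.
        d = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
        w = 1.5 * np.exp(-d**2 / (2 * 3.0**2)) - 0.7 * np.exp(-d**2 / (2 * 10.0**2))

        u = np.zeros(N)
        rng = np.random.default_rng(0)
        for step in range(200):
            inp = np.exp(-(x - 50)**2 / (2 * 2.0**2))   # salient target at position 50
            inp += 0.5 * rng.random(N)                  # strong additive noise
            f = np.clip(u, 0.0, 1.0)                    # firing-rate function
            u += dt / tau * (-u + w @ f / N + inp)      # field dynamics

        print("peak of the field:", np.argmax(u))  # settles on the target despite noise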

    Visual Category Learning by Means of Basal Ganglia

    Get PDF

    A distributed computational model of spatial memory anticipation during a visual search task

    Get PDF
    Some visual search tasks require memorizing the locations of stimuli that have previously been fixated. Considering eye movements raises the question of how we are able to maintain a coherent memory despite the frequent drastic changes in perception. In this article, we present a computational model that is able to anticipate the consequences of eye movements on visual perception in order to update a spatial working memory.

    Reducing connectivity by using cortical modular bands

    Get PDF
    The way information is represented and processed in a neural network may have important consequences for its computational power and complexity. Basically, information representation refers to distributed or localist encoding, and information processing refers to schemes of connectivity that can be complete or minimal. In the past, theoretical and biologically inspired approaches to neural computation have insisted on complementary views (distributed and complete versus localist and minimal, respectively) with complementary arguments (complexity versus expressiveness). In this paper, we report experiments on biologically inspired neural networks performing sensorimotor coordination which indicate that a localist and minimal view can perform well if certain connectivity constraints (also drawn from biological inspiration) are respected.
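
    The wiring savings at stake are easy to quantify (a back-of-the-envelope sketch; the band width is an illustrative parameter, not taken from the paper):

        def complete_connections(n):
            """Every unit connects to every other unit: O(n^2) wiring."""
            return n * (n - 1)

        def band_connections(n, band):
            """Each unit connects only to units within `band` positions
            along a modular band: O(n * band) wiring."""
            return sum(min(i + band, n - 1) - max(i - band, 0) for i in range(n))

        n = 10_000
        print(complete_connections(n))       # 99_990_000
        print(band_connections(n, band=8))   # ~160_000, three orders of magnitude fewer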

    A Neural Spiking Approach Compared to Deep Feedforward Networks on Stepwise Pixel Erasement

    Full text link
    In real-world scenarios, objects are often partially occluded, so object recognition must be robust against such perturbations. Convolutional networks have shown good performance in classification tasks, and their learned convolutional filters resemble the receptive fields of simple cells found in the primary visual cortex. Spiking neural networks, in contrast, are more biologically plausible. We developed a two-layer spiking network, trained on natural scenes with a biologically plausible learning rule, and compared it to two deep convolutional neural networks on a classification task with stepwise pixel erasement on MNIST. In comparison to these networks, the spiking approach achieves good accuracy and robustness.
    Comment: Published in ICANN 2018: Artificial Neural Networks and Machine Learning, https://link.springer.com/chapter/10.1007/978-3-030-01418-6_25. The final authenticated publication is available online at https://doi.org/10.1007/978-3-030-01418-6_25.
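
    The erasement protocol itself is straightforward to reproduce (a sketch under assumed names; `classifier` stands for any trained model returning class labels):

        import numpy as np

        def erase_pixels(image, fraction, rng):
            """Set a random fraction of the pixels to zero."""
            out = image.copy().ravel()
            idx = rng.choice(out.size, size=int(fraction * out.size), replace=False)
            out[idx] = 0.0
            return out.reshape(image.shape)

        def robustness_curve(classifier, images, labels, steps=10, seed=0):
            """Accuracy as an increasing fraction of pixels is erased."""
            rng = np.random.default_rng(seed)
            curve = []
            for k in range(steps + 1):
                frac = k / steps
                erased = np.stack([erase_pixels(im, frac, rng) for im in images])
                preds = classifier(erased)   # assumed to return predicted labels
                curve.append(np.mean(preds == labels))
            return curve  # e.g. [0.98, 0.97, ..., 0.12] as erasement grows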