832 research outputs found

    Multimodal Sequence Learning with a Cortically-Inspired Model

    Get PDF
    Conference paper with proceedings and peer review. We present in this paper a cortically-inspired model that learns to exploit regularities in sequences of perceptions and actions, with regard to motivations. Sequences are learned from continuous multimodal information streams on the basis of a competition mechanism. This approach enables generalisation from perceptive regularities and also ensures that the results of previous learning can be applied to help the learning of new tasks.
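
    The paper's own architecture is not reproduced here; as a generic illustration of sequence learning driven by a competition mechanism, the sketch below quantises a continuous multimodal stream with winner-take-all units and records which unit tends to follow which. All names, sizes, and learning rates are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Competitive units: each unit holds a prototype over the concatenated
# (multimodal) input vector. Dimensions and rates are arbitrary assumptions.
n_units, dim, lr = 8, 6, 0.1
prototypes = rng.normal(size=(n_units, dim))
transitions = np.zeros((n_units, n_units))   # counts of unit-to-unit successions

def winner(x):
    """Competition step: the unit whose prototype is closest to the input wins."""
    return int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))

def observe(stream):
    """Quantise a continuous stream with the competitive units and record
    which unit tends to follow which, i.e. the sequence regularities."""
    prev = None
    for x in stream:
        w = winner(x)
        prototypes[w] += lr * (x - prototypes[w])     # move winner toward input
        if prev is not None:
            transitions[prev, w] += 1                 # learn the succession
        prev = w

# Toy multimodal stream: a repeating pattern of three noisy states.
states = rng.normal(size=(3, dim))
stream = [states[t % 3] + 0.05 * rng.normal(size=dim) for t in range(300)]
observe(stream)

w0 = winner(states[0])
print("most likely successor of unit", w0, "is unit", int(np.argmax(transitions[w0])))
```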

    Specialization within cortical models: An application to causality learning

    Get PDF
    Conference paper with proceedings and peer review. In this paper we present the principle of learning by specialization within a cortically-inspired framework. Specialization of neurons in the cortex has been observed, and many models use such "cortical-like" learning mechanisms, adapted for computational efficiency. These adaptations are discussed in light of experiments with our cortical model addressing causality learning from perceptive sequences.

    Can Self-Organization Emerge through Dynamic Neural Fields Computation?

    Get PDF
    International audience. In this paper, dynamic neural fields are used to develop key features of a cortically-inspired computational module. From the perspective of designing computational systems that can exhibit the flexibility and genericity of the cortical substrate, using a neural field as the competition layer of self-organizing modules has to be considered. However, even though dynamic neural fields are a biologically-inspired model, applying them to drive self-organization is not straightforward. To address this issue, an original method for evaluating neural field equations is proposed, based on statistical measurements of the field's behavior in a number of scenarios. The limitations of classical neural field equations are then quantified, and an original field equation is proposed to overcome these difficulties. The performance of the proposed field model is discussed in comparison with previously considered models, leading to its promotion as a suitable means of processing competition in cortex-like computation for cognitive systems.
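
    The abstract does not give the field equations themselves; as background for readers unfamiliar with dynamic neural fields, the following sketch integrates a classical Amari-style field with local excitation and global inhibition, so that the field settles with its peak on the stronger of two competing inputs. Kernel shape, gains, and time constants are illustrative assumptions and are not the original equation proposed in the paper.

```python
import numpy as np

# Spatial grid of the one-dimensional field.
n = 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

# Local excitatory kernel (Gaussian); inhibition is applied globally below.
# All parameter values are illustrative assumptions.
d = np.linspace(-0.5, 0.5, n)
w_exc = 3.0 * np.exp(-d**2 / (2 * 0.04**2))

def f(u):
    """Sigmoid firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-10.0 * u))

def step(u, inp, dt=0.05, tau=1.0, h=-0.3, g_inh=2.0):
    """One Euler step of tau*du/dt = -u + h + I + w_exc*f(u) - global inhibition."""
    fu = f(u)
    excitation = np.convolve(fu, w_exc, mode="same") * dx
    inhibition = g_inh * np.sum(fu) * dx        # global inhibition drives selection
    return u + dt * (-u + h + inp + excitation - inhibition) / tau

# Two Gaussian input bumps of different strength competing on the field.
inp = 1.0 * np.exp(-(x - 0.3)**2 / (2 * 0.03**2)) \
    + 0.7 * np.exp(-(x - 0.7)**2 / (2 * 0.03**2))

u = np.full(n, -0.3)
for _ in range(600):
    u = step(u, inp)

# The field should settle with its peak on the stronger input (near x = 0.3).
print("field peak at x =", round(float(x[np.argmax(u)]), 2))
```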

    Synthetic-Neuroscore: Using A Neuro-AI Interface for Evaluating Generative Adversarial Networks

    Full text link
    Generative adversarial networks (GANs) are increasingly attracting attention in computer vision, natural language processing, speech synthesis, and related domains. Arguably the most striking results have been in the area of image synthesis. However, evaluating the performance of GANs is still an open and challenging problem. Existing evaluation metrics primarily measure the dissimilarity between real and generated images using automated statistical methods. They often require large sample sizes for evaluation and do not directly reflect human perception of image quality. In this work, we describe an evaluation metric, called Neuroscore, for evaluating the performance of GANs, which more directly reflects psychoperceptual image quality through the use of brain signals. Our results show that Neuroscore outperforms current evaluation metrics in that: (1) it is more consistent with human judgment; (2) the evaluation process requires far fewer samples; and (3) it is able to rank the quality of images on a per-GAN basis. A convolutional neural network (CNN) based neuro-AI interface is proposed to predict Neuroscore from GAN-generated images directly, without the need for neural responses. Importantly, we show that including neural responses during the training phase of the network can significantly improve the prediction capability of the proposed model. Materials related to this work are provided at https://github.com/villawang/Neuro-AI-Interface.
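
    The exact architecture of the proposed neuro-AI interface is not described in the abstract; the sketch below only illustrates the general idea of a CNN regressor that maps a generated image to a single scalar score, trained against target scores that would, in the paper's setting, be derived from neural responses. The layer sizes, training loop, and placeholder tensors are assumptions for illustration (PyTorch is used here purely for convenience).

```python
import torch
import torch.nn as nn

class ScoreRegressor(nn.Module):
    """Small CNN that maps an image to a single scalar quality score.
    This is a generic sketch of the idea described in the abstract, not the
    authors' architecture; layer sizes are arbitrary assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # scalar score per image

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h).squeeze(1)

# Training-loop sketch: regress the predicted score against a target score.
# Here `target_scores` is just a placeholder tensor standing in for scores
# derived from neural responses.
model = ScoreRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 64, 64)          # stand-in for GAN-generated images
target_scores = torch.rand(8)               # stand-in for neural-derived scores

for _ in range(10):
    pred = model(images)
    loss = nn.functional.mse_loss(pred, target_scores)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final MSE:", loss.item())
```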

    Happily entangled: prediction, emotion, and the embodied mind

    Get PDF
    Recent work in cognitive and computational neuroscience depicts the human cortex as a multi-level prediction engine. This ‘predictive processing’ framework shows great promise as a means of both understanding and integrating the core information processing strategies underlying perception, reasoning, and action. But how, if at all, do emotions and sub-cortical contributions fit into this emerging picture? The fit, we shall argue, is both profound and potentially transformative. In the picture we develop, online cognitive function cannot be assigned to either the cortical or the sub-cortical component, but instead emerges from their tight co-ordination. This tight co-ordination involves processes of continuous reciprocal causation that weave together bodily information and ‘top-down’ predictions, generating a unified sense of what’s out there and why it matters. The upshot is a more truly ‘embodied’ vision of the predictive brain in action.

    Competitive Queuing for Planning and Serial Performance

    Full text link

    CNS: a GPU-based framework for simulating cortically-organized networks

    Get PDF
    Computational models whose organization is inspired by the cortex are increasing in both number and popularity. Current instances of such models include convolutional networks, HMAX, Hierarchical Temporal Memory, and deep belief networks. These models present two practical challenges. First, they are computationally intensive. Second, while the operations performed by individual cells, or units, are typically simple, the code needed to keep track of network connectivity can quickly become complicated, leading to programs that are difficult to write and to modify. Massively parallel commodity computing hardware has recently become available in the form of general-purpose GPUs. This helps address the first problem but exacerbates the second. GPU programming adds an extra layer of difficulty, further discouraging exploration. To address these concerns, we have created a programming framework called CNS ('Cortical Network Simulator'). CNS models are automatically compiled and run on a GPU, typically 80-100x faster than on a single CPU, without the user having to learn any GPU programming. A novel scheme for the parametric specification of network connectivity allows the user to focus on writing just the code executed by a single cell. We hope that the ability to rapidly define and run cortically-inspired models will facilitate research in the cortical modeling community. CNS is available under the GNU General Public License.
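
    CNS itself is a GPU framework and its actual API is not shown in the abstract; the plain-Python sketch below only illustrates the design idea it describes: the user writes the code executed by a single cell, while the framework handles the parametric connectivity and the loop over cells (on a GPU, one thread per cell). The function names and the connectivity format here are hypothetical.

```python
import numpy as np

def cell_kernel(inputs, weights):
    """Update rule for a single cell: a weighted sum followed by rectification.
    In a CNS-style framework this is the only piece the user would write;
    everything below stands in for the machinery the framework provides
    (connectivity lookup, batching over cells)."""
    return max(0.0, float(np.dot(inputs, weights)))

def run_layer(pre_activity, connectivity, weights):
    """Framework-side loop (here a plain Python loop; on a GPU each cell would
    be one thread): gather each cell's inputs from a parametric connectivity
    table and apply the user's kernel."""
    out = np.empty(len(connectivity))
    for cell, pre_ids in enumerate(connectivity):
        out[cell] = cell_kernel(pre_activity[pre_ids], weights[cell])
    return out

# Toy example: 4 presynaptic units feeding 2 cells, each cell reading 2 inputs.
pre_activity = np.array([0.2, 0.8, 0.5, 0.1])
connectivity = [np.array([0, 1]), np.array([2, 3])]   # parametric connectivity
weights = [np.array([1.0, -0.5]), np.array([0.7, 0.3])]
print(run_layer(pre_activity, connectivity, weights))
```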
