
    A Computational Model of the Lexical-Semantic System Based on a Grounded Cognition Approach

    This work presents a connectionist model of the semantic-lexical system based on grounded cognition. The model assumes that the lexical and semantic aspects of language are memorized in two distinct stores. The semantic properties of objects are represented as a collection of features, whose number may vary among objects. Features are described as activation of neural oscillators in different sensory-motor areas (one area for each feature), topographically organized to implement a similarity principle. Lexical items are represented as activation of neural groups in a different layer. Lexical and semantic aspects are then linked together on the basis of previous experience, using physiological learning mechanisms. After training, features that frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. The model also includes some inhibitory synapses: features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Moreover, different objects and the corresponding words can be simultaneously retrieved and segmented via a time division in the gamma band. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the same conditions occurring during learning. The model simulates the formation of categories, assuming that objects belong to the same category if they share some features. Simple examples are shown to illustrate how words representing a category can be distinguished from words representing individual members. Finally, the model can be used to simulate patients with focal lesions, assuming an impairment of synaptic strength in specific feature areas.
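    The feature-to-word Hebbian association at the core of this model can be illustrated with a minimal static sketch (object sizes, learning rate, and the cueing procedure are assumptions for illustration; the oscillator dynamics and inhibitory synapses of the full model are omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        n_features, n_words = 20, 3

        # Each object is a binary collection of semantic features (one row per
        # object); each object is paired with one word unit during learning.
        objects = (rng.random((n_words, n_features)) < 0.4).astype(float)

        # Hebbian learning: strengthen a feature->word synapse whenever the
        # feature and the word unit are active together.
        eta, W = 0.5, np.zeros((n_words, n_features))
        for presentation in range(10):
            for w in range(n_words):
                word_unit = np.zeros(n_words)
                word_unit[w] = 1.0
                W += eta * np.outer(word_unit, objects[w])

        # Retrieval from an incomplete cue: drop a third of object 1's features.
        cue = objects[1].copy()
        active = np.flatnonzero(cue)
        cue[active[::3]] = 0.0
        print("retrieved word:", int(np.argmax(W @ cue)))  # expected: 1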

    Neural Networks and Connectivity among Brain Regions

    As is widely understood, brain functioning depends on the interaction among several neural populations, which are linked via complex connectivity circuits and work together (in antagonistic or synergistic ways) to exchange information, synchronize their activity, adapt plastically to external stimuli or internal requirements, and more generally to participate in solving multifaceted cognitive tasks [...]

    Possible mechanisms underlying tilt aftereffect in the primary visual cortex: A critical analysis with the aid of simple computational models

    A mathematical model of orientation selectivity in a single hypercolumn of the primary visual cortex, developed in a previous work [Ursino, M., & La Cara, G.-E. (2004). Comparison of different models of orientation selectivity based on distinct intracortical inhibition rules. Vision Research, 44, 1641–1658], was used to analyze the possible mechanisms underlying the tilt aftereffect (TAE). Two alternative models are considered, based on a different arrangement of intracortical inhibition (an anti-phase model in which inhibition is in phase opposition with excitation, and an in-phase model in which inhibition has the same phase arrangement as excitation but wider orientation selectivity). Different combinations of parameter changes were tested to explain TAE: a threshold increase in excitatory and inhibitory cortical neurons (fatigue), a decrease in intracortical excitation, an increase or a decrease in intracortical inhibition, and a decrease in thalamo-cortical synapses. All synaptic changes were calculated on the basis of Hebbian (or anti-Hebbian) rules. Results demonstrated that the in-phase model accounts for several results in the literature with different combinations of parameter changes, requiring: (i) a depressive mechanism acting on neurons with preferred orientation close to the adaptation orientation (fatigue of excitatory cortical neurons, and/or depression of thalamo-cortical synapses directed to excitatory neurons, and/or depression of intracortical excitatory synapses); and (ii) a facilitatory mechanism acting on neurons with preferred orientation far from the adaptation orientation (fatigue of inhibitory cortical neurons, and/or depression of thalamo-cortical synapses directed to inhibitory neurons, and/or depression of intracortical inhibitory synapses). By contrast, the anti-phase model appeared less suitable to explain the experimental data.
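    As a toy illustration of mechanism (i), the repulsive effect of fatiguing neurons tuned near the adapting orientation can be reproduced with a population-vector readout (tuning widths, fatigue depth, and the decoder are assumptions, not the paper's two-model comparison):

        import numpy as np

        prefs = np.linspace(-90, 90, 181)        # preferred orientations (deg)
        sigma = 20.0

        def responses(theta, gain):
            d = (prefs - theta + 90) % 180 - 90  # wrapped orientation difference
            return gain * np.exp(-d**2 / (2 * sigma**2))

        def decode(r):
            # population vector on the doubled angle (orientation is 180-periodic)
            ang = np.deg2rad(2 * prefs)
            return np.rad2deg(np.arctan2((r * np.sin(ang)).sum(),
                                         (r * np.cos(ang)).sum())) / 2

        adapt, test = 0.0, 10.0
        gain = 1.0 - 0.3 * np.exp(-(prefs - adapt)**2 / (2 * sigma**2))  # fatigue

        print("decoded, unadapted:", round(decode(responses(test, 1.0)), 1))
        print("decoded, adapted:  ", round(decode(responses(test, gain)), 1))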

    Organization, Maturation, and Plasticity of Multisensory Integration: Insights from Computational Modeling Studies

    In this paper, we present two neural network models, devoted to two specific and widely investigated aspects of multisensory integration, in order to demonstrate the potential of computational models to gain insight into the neural mechanisms underlying organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how SC integrative capability, not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space, where multimodal integration occurs, may be modified by experience such as the use of a tool to interact with far space. The utility of the modeling approach rests on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may govern more generally the fusion of senses in the brain, and the learning and plasticity of multisensory integration; (ii) the models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that can help guide future experiments in order to validate, reject, or modify the main assumptions.
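    One shared mechanism, the non-linear (sigmoidal) neuron characteristic, is by itself enough to produce superadditive enhancement for weak cross-modal stimuli (inverse effectiveness); the sketch below uses invented input strengths and sigmoid parameters:

        import numpy as np

        def sigmoid(x, x0=6.0, k=1.0):
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        for v, a in [(3.0, 3.0), (6.0, 6.0)]:    # weak vs. strong unisensory inputs
            multi = sigmoid(v + a)               # cross-modal response
            uni_sum = sigmoid(v) + sigmoid(a)    # sum of unisensory responses
            print(f"v={v}, a={a}: enhancement = "
                  f"{100 * (multi - uni_sum) / uni_sum:+.0f}%")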

    The Representation of Objects in the Brain, and Its Link with Semantic Memory and Language: a Conceptual Theory with the Support of a Neurocomputational Model

    Recognition of objects, their representation and retrieval in memory, and the link of this representation with words is a hard cognitive problem, which can be summarized with the term "lexico-semantic memory". Several recent cognitive theories suggest that the semantic representation of objects is a distributed process, which engages different brain areas in the sensory and motor regions. A further common hypothesis is that each region is organized by conceptual features that are highly correlated and neurally contiguous. These theories may be useful to explain the results of clinical tests on patients with brain lesions, who exhibit deficits in recognizing objects from words or in evoking words from objects, or to explain the use of appropriate words in bilingual subjects. The study of the cognitive aspects of lexico-semantic memory representation may benefit from the use of mathematical models and computer simulations. The aim of this chapter is to describe a theoretical model of the lexico-semantic system, which can be used by cognitive neuroscientists to summarize conceptual theories into a rigorous quantitative framework, to test the ability of these theories to reproduce real pieces of behavior in healthy and pathological subjects, and to suggest new hypotheses for subsequent testing. The chapter is structured as follows: first, the basic assumptions on cognitive aspects of the lexico-semantic memory model are clearly presented; the same aspects are subsequently illustrated via the results of computer simulations using abstract object representations as input to the model. Equations are then reported in an Appendix for readers interested in mathematical issues. The model is based on the following main assumptions: i) an object is represented as a collection of features, topologically ordered according to a similarity principle in different brain areas; ii) the features belonging to the same object are linked together via a Hebbian process during a phase in which objects are presented individually; iii) features are described via neural oscillators in the gamma band; as a consequence, different object representations can be maintained simultaneously in memory via synchronization of the corresponding features (the binding and segmentation problem); iv) words are represented in a lexical area devoted to recognition of words from phonemes; v) words in the lexical area and the features representing objects are linked together via a Hebbian mechanism during a learning phase in which a word is presented together with the corresponding object; vi) the same object representation can be associated with two alternative words (for instance, to represent bilingualism); in this case, the two words are connected via inhibitory synapses, to implement a competition among them; vii) word selection is further constrained by an external inhibitory control system, which suppresses words that do not correspond to the present objective (for instance, to choose between alternative languages). Several examples of the model's capabilities are presented, with the use of abstract words. These examples include: the possibility to retrieve objects and words even in case of incomplete or corrupted information on object features; the possibility to establish a semantic link between words with superimposed features; and the process of learning a second language (L2) with the support of a previously known language (L1), to represent neurocognitive aspects of bilingualism.
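    Assumptions (vi) and (vii), competition between alternative word forms under external inhibitory control, can be sketched with two word units and invented weights (the gamma-band oscillator dynamics are again omitted):

        import numpy as np

        def settle(sem_input, control, steps=200, dt=0.1):
            w = np.zeros(2)                      # activities of the L1/L2 word units
            W_inh = np.array([[0.0, -1.5],       # reciprocal inhibitory synapses
                              [-1.5, 0.0]])
            for _ in range(steps):
                net = sem_input + W_inh @ w + control
                w += dt * (-w + np.clip(net, 0.0, 1.0))
            return w.round(2)

        sem = np.array([1.0, 1.0])               # the object activates both word forms
        print("control suppresses L2:", settle(sem, np.array([0.0, -0.6])))
        print("control suppresses L1:", settle(sem, np.array([-0.6, 0.0])))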

    Multisensory Bayesian inference depends on synapse maturation during training: Theoretical analysis and neural modeling implementation

    Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding: the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two topologically organized chains of unisensory neurons (auditory and visual). They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, explains the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
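    For Gaussian noise, the Bayesian estimate that the trained network approximates reduces to inverse-variance weighting of the two cues; a minimal numerical check (noise levels are illustrative):

        import numpy as np

        rng = np.random.default_rng(1)
        true_pos, sigma_a, sigma_v = 0.0, 8.0, 2.0   # audition noisier than vision

        x_a = true_pos + sigma_a * rng.standard_normal(10_000)
        x_v = true_pos + sigma_v * rng.standard_normal(10_000)

        # Optimal fusion: weight each cue by its reliability (inverse variance);
        # with w_a small, the fused estimate is captured by vision (ventriloquism).
        w_a = sigma_a**-2 / (sigma_a**-2 + sigma_v**-2)
        x_av = w_a * x_a + (1 - w_a) * x_v

        print("variance, audio :", x_a.var().round(2))
        print("variance, visual:", x_v.var().round(2))
        print("variance, fused :", x_av.var().round(2))  # below both unisensory ones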

    Crossmodal Links between Vision and Touch in Spatial Attention: A Computational Modelling Study

    Many studies have revealed that attention operates across different sensory modalities to facilitate the selection of relevant information in the multimodal situations of everyday life. Cross-modal links have been observed both when attention is directed voluntarily (endogenous attention) and when it is captured involuntarily (exogenous attention). The neural basis of cross-modal attention presents a significant challenge to cognitive neuroscience. Here, we used a neural network model to elucidate the neural correlates of visual-tactile interactions in exogenous and endogenous attention. The model includes two unimodal (visual and tactile) areas connected with a bimodal area in each hemisphere, and a competition between the two hemispheres. The model is able to explain cross-modal facilitation in both exogenous and endogenous attention, ascribing it to an advantaged activation of the bimodal area on the attended side (via top-down or bottom-up biasing), with concomitant inhibition towards the opposite side. The model suggests that a competitive/cooperative interaction with biased competition may mediate both forms of cross-modal attention.
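    A compact sketch of the biased-competition account (all weights and stimulus values are invented): a top-down bias to the attended hemisphere's bimodal unit facilitates responses on that side while the two hemispheres inhibit each other:

        import numpy as np

        def bimodal_response(vis, tac, bias):
            # each hemisphere's bimodal unit sums its unimodal inputs plus bias...
            b = np.clip(vis + tac + bias, 0.0, None)
            # ...then the two hemispheres compete via mutual inhibition
            return np.clip(b - 0.3 * b[::-1], 0.0, None).round(2)

        vis = np.array([1.0, 0.0])               # visual stimulus on the left
        tac = np.array([0.4, 0.4])               # weak bilateral tactile input
        print("attend left :", bimodal_response(vis, tac, np.array([0.3, 0.0])))
        print("attend right:", bimodal_response(vis, tac, np.array([0.0, 0.3])))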

    Motor decoding from the posterior parietal cortex using deep neural networks

    Objective. Motor decoding is crucial to translate neural activity for brain-computer interfaces (BCIs) and provides information on how motor states are encoded in the brain. Deep neural networks (DNNs) are emerging as promising neural decoders. Nevertheless, it is still unclear how different DNNs perform in different motor decoding problems and scenarios, and which network could be a good candidate for invasive BCIs. Approach. Fully-connected, convolutional, and recurrent neural networks (FCNNs, CNNs, RNNs) were designed and applied to decode motor states from neurons recorded from area V6A in the posterior parietal cortex (PPC) of macaques. Three motor tasks were considered, involving reaching and reach-to-grasping (the latter under two illumination conditions). DNNs decoded nine reaching endpoints in 3D space or five grip types using a sliding-window approach within the trial course. To evaluate the decoders in a broad variety of scenarios, performance was also analyzed while artificially reducing the number of recorded neurons and trials, and while performing transfer learning from one task to another. Finally, the accuracy time course was used to analyze V6A motor encoding. Main results. DNNs outperformed a classic Naive Bayes classifier, and CNNs additionally outperformed XGBoost and Support Vector Machine classifiers across the motor decoding problems. CNNs were the top-performing DNNs when using fewer neurons and trials, and task-to-task transfer learning improved performance especially in the low-data regime. Lastly, V6A neurons encoded reaching and reach-to-grasping properties even during action planning, with the encoding of grip properties occurring later, closer to movement execution, and appearing weaker in darkness. Significance. Results suggest that CNNs are effective candidates for realizing neural decoders for invasive BCIs in humans from PPC recordings, with transfer learning also reducing BCI calibration times, and that a CNN-based data-driven analysis may provide insights into the encoding properties and functional roles of brain regions.
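    A hedged sketch of what such a CNN decoder might look like for binned firing rates (layer sizes, kernel widths, and the 1D-convolution-over-time layout are assumptions, not the published architecture):

        import torch
        import torch.nn as nn

        n_neurons, n_bins, n_classes = 100, 30, 9   # e.g. nine reaching endpoints

        class SpikeCNN(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv1d(n_neurons, 32, kernel_size=5, padding=2),  # over time
                    nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                    nn.Flatten(),
                    nn.Linear(64, n_classes),
                )

            def forward(self, x):                   # x: (trials, neurons, time bins)
                return self.net(x)

        model = SpikeCNN()
        rates = torch.randn(8, n_neurons, n_bins)   # stand-in binned firing rates
        print(model(rates).shape)                   # torch.Size([8, 9])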

    Decoding sensorimotor information from superior parietal lobule of macaque via Convolutional Neural Networks

    Despite the well-recognized role of the posterior parietal cortex (PPC) in processing sensory information to guide action, the differential encoding properties of this dynamic processing, as operated by different PPC brain areas, are scarcely known. Within the monkey's PPC, the superior parietal lobule hosts areas V6A, PEc, and PE, included in the dorso-medial visual stream that is specialized in planning and guiding reaching movements. Here, a Convolutional Neural Network (CNN) approach is used to investigate how information is processed in these areas. We trained two macaque monkeys to perform a delayed reaching task towards 9 positions (distributed over 3 different depth and direction levels) in the 3D peripersonal space. The activity of single cells was recorded from V6A, PEc, and PE and fed to convolutional neural networks that were designed and trained to exploit the temporal structure of neuronal activation patterns to decode the target positions reached by the monkey. Bayesian Optimization was used to define the main CNN hyper-parameters. In addition to discrete positions in space, we used the same network architecture to decode plausible reaching trajectories. We found that data from the most caudal areas, V6A and PEc, outperformed area PE in spatial position decoding. In all areas, decoding accuracies started to increase at the time the reach target was instructed to the monkey, and reached a plateau at movement onset. The results support a dynamic encoding of the different phases and properties of the reaching movement, differentially distributed over a network of interconnected areas. This study highlights the usefulness of decoding neurons' firing rates via CNNs to improve our understanding of how sensorimotor information is encoded in the PPC to perform reaching movements. The obtained results may have implications for novel neuroprosthetic devices based on the decoding of these rich signals to faithfully carry out a patient's intentions.
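    The accuracy time course described here can be estimated by decoding separately from each sliding window of binned activity; in the synthetic sketch below, a linear classifier stands in for the CNN and all data are fabricated, with target information "switching on" at an assumed cue onset:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_trials, n_neurons, n_bins, n_targets = 180, 50, 40, 9
        y = rng.integers(0, n_targets, n_trials)

        # Synthetic rates: target information appears at bin 15 ("cue onset").
        X = rng.standard_normal((n_trials, n_neurons, n_bins))
        tuning = rng.standard_normal((n_targets, n_neurons))
        X[:, :, 15:] += tuning[y][:, :, None]

        win = 5
        for start in range(0, n_bins - win + 1, 5):
            feats = X[:, :, start:start + win].mean(axis=2)
            acc = cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5)
            print(f"bins {start:2d}-{start + win - 1:2d}: accuracy = {acc.mean():.2f}")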

    Relationship between electroencephalographic data and comfort perception captured in a Virtual Reality design environment of an aircraft cabin

    Successful aircraft cabin design depends on how the different stakeholders are involved from the earliest phases of product development. To predict passenger satisfaction prior to the manufacturing phase, human response was investigated in a Virtual Reality (VR) environment simulating an aircraft cabin. Subjective assessments of the virtual designs were collected via questionnaires, while the underlying neural mechanisms were captured through electroencephalographic (EEG) data. In particular, we focused on the modulation of the EEG alpha rhythm as a valuable marker of the brain's internal state and investigated which changes in alpha power and connectivity can be related to different visual comfort perception, by comparing groups with higher and lower comfort ratings. Results show that alpha-band power decreased in occipital regions during subjects' immersion in the virtual cabin compared with the relaxation state, reflecting attention to the environment. Moreover, alpha-band power was modulated by comfort perception: lower comfort was associated with lower alpha power compared to higher comfort. Further, alpha-band Granger connectivity showed top-down mechanisms in higher-comfort participants, modulating attention and restoring partial relaxation. The present results contribute to understanding the role of the alpha rhythm in visual comfort perception and demonstrate that VR and EEG represent promising tools to quantify human-environment interactions.
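    The alpha-power measure used here can be computed, in its simplest form, with Welch's method; the sketch below uses a synthetic signal and an assumed sampling rate rather than the study's recordings:

        import numpy as np
        from scipy.signal import welch

        fs = 250.0                                   # Hz, assumed sampling rate
        t = np.arange(0, 30, 1 / fs)
        rng = np.random.default_rng(3)

        # 10 Hz alpha rhythm embedded in broadband noise; its amplitude stands
        # in for the occipital alpha modulation discussed above.
        eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

        f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows, 0.5 Hz bins
        band = (f >= 8.0) & (f <= 13.0)
        alpha_power = psd[band].sum() * (f[1] - f[0])    # integrate over 8-13 Hz
        print(f"alpha-band power: {alpha_power:.2f} (arbitrary units)")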