
    A Computational Model of the Lexical-Semantic System Based on a Grounded Cognition Approach

    This work presents a connectionist model of the semantic-lexical system based on grounded cognition. The model assumes that the lexical and semantic aspects of language are held in two distinct stores. The semantic properties of objects are represented as a collection of features, whose number may vary among objects. Features are described as activations of neural oscillators in different sensory-motor areas (one area per feature), topographically organized to implement a similarity principle. Lexical items are represented as activations of neural groups in a separate layer. Lexical and semantic aspects are then linked together on the basis of previous experience, using physiological learning mechanisms. After training, features that frequently occurred together, and the corresponding word-forms, become linked via reciprocal excitatory synapses. The model also includes inhibitory synapses: features in the semantic network tend to inhibit words not associated with them during the previous learning phase. Simulations show that, after learning, presentation of a cue can evoke the overall object and the corresponding word in the lexical area. Moreover, different objects and the corresponding words can be retrieved simultaneously and segmented via time division in the gamma band. Word presentation, in turn, activates the corresponding features in the sensory-motor areas, recreating the conditions that occurred during learning. The model simulates the formation of categories, assuming that objects belong to the same category if they share some features. Simple examples illustrate how words representing a category can be distinguished from words representing individual members. Finally, the model can be used to simulate patients with focal lesions, assuming an impairment of synaptic strength in specific feature areas.
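    As a rough illustration of the linking mechanism described above (our sketch, not the authors' code), the following Python fragment trains Hebbian excitatory links between a binary feature layer and a lexical layer, adds inhibition from features to unassociated words, and retrieves a word from an incomplete cue. All patterns and weights are illustrative assumptions; the oscillator dynamics and gamma-band segmentation of the actual model are omitted.

```python
# Hedged sketch: Hebbian feature<->word linking with retrieval from a partial cue.
import numpy as np

n_feat, n_words = 8, 2
# Two abstract objects, each a binary collection of features (feature 3 is shared)
objects = np.array([[1, 1, 1, 1, 0, 0, 0, 0],    # object 0
                    [0, 0, 0, 1, 1, 1, 1, 0]])   # object 1
words = np.eye(n_words)

# Hebbian training: strengthen word<->feature synapses for co-occurring pairs
W = np.zeros((n_words, n_feat))
for obj, wrd in zip(objects, words):
    W += np.outer(wrd, obj)                      # reciprocal excitatory links

# Inhibitory synapses: features suppress words never paired with them
inhib = -0.5 * (W == 0)

def retrieve_word(cue):
    """Evoke a word from a (possibly incomplete) feature cue."""
    net = (W + inhib) @ cue
    return int(np.argmax(net)), net

cue = np.array([1, 1, 0, 0, 0, 0, 0, 0])         # partial cue: two features of object 0
print(retrieve_word(cue))                         # word 0 wins despite missing features
```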

    Multisensory Bayesian inference depends on synapse maturation during training: Theoretical analysis and neural modeling implementation

    Recent theoretical and experimental studies suggest that in multisensory conditions, the brain performs a near-optimal Bayesian estimate of external events, giving more weight to the more reliable stimuli. However, the neural mechanisms responsible for this behavior, and its progressive maturation in a multisensory environment, are still insufficiently understood. The aim of this letter is to analyze this problem with a neural network model of audiovisual integration, based on probabilistic population coding: the idea that a population of neurons can encode probability functions to perform Bayesian inference. The model consists of two topologically organized chains of unisensory neurons (auditory and visual). They receive the corresponding input through a plastic receptive field and reciprocally exchange plastic cross-modal synapses, which encode the spatial co-occurrence of visual-auditory inputs. A third chain of multisensory neurons performs a simple sum of auditory and visual excitations. The work includes a theoretical part and a computer simulation study. We show how a simple rule for synapse learning (consisting of Hebbian reinforcement and a decay term) can be used during training to shrink the receptive fields and encode the unisensory likelihood functions. Hence, after training, each unisensory area realizes a maximum likelihood estimate of stimulus position (auditory or visual). In cross-modal conditions, the same learning rule can encode information on prior probability into the cross-modal synapses. Computer simulations confirm the theoretical results and show that the proposed network can realize a maximum likelihood estimate of auditory (or visual) positions in unimodal conditions and a Bayesian estimate, with moderate deviations from optimality, in cross-modal conditions. Furthermore, the model explains the ventriloquism illusion and, looking at the activity in the multimodal neurons, the automatic reweighting of auditory and visual inputs on a trial-by-trial basis, according to the reliability of the individual cues.
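    For readers unfamiliar with reliability weighting, the following minimal sketch shows the Gaussian cue-combination rule that, according to the abstract, the trained network approximates: each cue is weighted by its inverse variance, yielding a fused estimate pulled toward the more reliable (here, visual) cue, as in the ventriloquism illusion. The numbers are invented for illustration.

```python
# Hedged sketch of Gaussian Bayesian cue combination (inverse-variance weighting).
import numpy as np

x_v, sigma_v = 10.0, 2.0     # visual cue: position and noise std (reliable)
x_a, sigma_a = 16.0, 6.0     # auditory cue: position and noise std (unreliable)

# Reliability weight for vision = its inverse variance, normalized
w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
x_hat = w_v * x_v + (1 - w_v) * x_a              # fused position estimate
sigma_hat = np.sqrt(1 / (1 / sigma_v**2 + 1 / sigma_a**2))

# The auditory percept is pulled toward vision: the ventriloquism effect
print(f"fused estimate: {x_hat:.2f} (std {sigma_hat:.2f}), visual weight {w_v:.2f}")
```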

    Possible mechanisms underlying tilt aftereffect in the primary visual cortex: A critical analysis with the aid of simple computational models

    A mathematical model of orientation selectivity in a single hypercolumn of the primary visual cortex, developed in a previous work [Ursino, M., & La Cara, G.-E. (2004). Comparison of different models of orientation selectivity based on distinct intracortical inhibition rules. Vision Research, 44, 1641–1658], was used to analyze the possible mechanisms underlying the tilt aftereffect (TAE). Two alternative models are considered, based on different arrangements of intracortical inhibition (an anti-phase model, in which inhibition is in phase opposition with excitation, and an in-phase model, in which inhibition has the same phase arrangement as excitation but wider orientation selectivity). Different combinations of parameter changes were tested to explain TAE: a threshold increase in excitatory and inhibitory cortical neurons (fatigue), a decrease in intracortical excitation, an increase or a decrease in intracortical inhibition, and a decrease in thalamo-cortical synapses. All synaptic changes were calculated on the basis of Hebbian (or anti-Hebbian) rules. Results demonstrate that the in-phase model accounts for several literature results with different combinations of parameter changes, requiring: (i) a depressive mechanism acting on neurons whose preferred orientation is close to the adaptation orientation (fatigue of excitatory cortical neurons, and/or depression of thalamo-cortical synapses directed to excitatory neurons, and/or depression of intracortical excitatory synapses); and (ii) a facilitatory mechanism acting on neurons whose preferred orientation is far from the adaptation orientation (fatigue of inhibitory cortical neurons, and/or depression of thalamo-cortical synapses directed to inhibitory neurons, and/or depression of intracortical inhibitory synapses). By contrast, the anti-phase model appeared less suitable to explain the experimental data.
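    A toy demonstration of the basic repulsive effect (ours, far simpler than the paper's hypercolumn model): a population of orientation-tuned units whose gain is fatigued near the adapting orientation decodes a nearby test orientation as shifted away from the adapter. Tuning widths, fatigue depth, and angles below are arbitrary assumptions.

```python
# Hedged sketch: gain fatigue near the adapter yields a repulsive tilt aftereffect.
import numpy as np

prefs = np.linspace(0, 180, 180, endpoint=False)   # preferred orientations (deg)

def circ_diff(a, b):                               # orientation difference, 180-deg periodic
    return (a - b + 90) % 180 - 90

def responses(theta, gain):                        # Gaussian tuning curves
    return gain * np.exp(-circ_diff(prefs, theta)**2 / (2 * 15.0**2))

def decode(r):                                     # population vector on doubled angles
    ang = np.deg2rad(2 * prefs)
    return np.rad2deg(np.arctan2((r * np.sin(ang)).sum(),
                                 (r * np.cos(ang)).sum())) / 2 % 180

adapt, test = 90.0, 100.0
gain = 1.0 - 0.3 * np.exp(-circ_diff(prefs, adapt)**2 / (2 * 20.0**2))  # fatigue dip
print(decode(responses(test, 1.0)))    # ~100.0 before adaptation
print(decode(responses(test, gain)))   # > 100: perceived tilt repelled from adapter
```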

    Organization, Maturation, and Plasticity of Multisensory Integration: Insights from Computational Modeling Studies

    In this paper, we present two neural network models, devoted to two specific and widely investigated aspects of multisensory integration, to demonstrate the potential of computational models for gaining insight into the neural mechanisms underlying the organization, development, and plasticity of multisensory integration in the brain. The first model considers visual-auditory interaction in a midbrain structure named the superior colliculus (SC). The model is able to reproduce and explain the main physiological features of multisensory integration in SC neurons and to describe how SC integrative capability, which is not present at birth, develops gradually during postnatal life depending on sensory experience with cross-modal stimuli. The second model tackles the problem of how tactile stimuli on a body part and visual (or auditory) stimuli close to the same body part are integrated in multimodal parietal neurons to form the perception of peripersonal (i.e., near) space. The model investigates how the extension of peripersonal space, where multimodal integration occurs, may be modified by experience, such as the use of a tool to interact with far space. The utility of the modeling approach relies on several aspects: (i) the two models, although devoted to different problems and simulating different brain regions, share some common mechanisms (lateral inhibition and excitation, non-linear neuron characteristics, recurrent connections, competition, Hebbian rules of potentiation and depression) that may more generally govern the fusion of the senses in the brain, and the learning and plasticity of multisensory integration; (ii) the models may help interpret behavioral and psychophysical responses in terms of neural activity and synaptic connections; (iii) the models can make testable predictions that can help guide future experiments aimed at validating, rejecting, or modifying the main assumptions.
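    To make one of the shared mechanisms concrete, here is a minimal sketch (not taken from either model) of a one-dimensional layer of sigmoidal neurons with lateral excitation and inhibition arranged as a "Mexican hat": a localized input is sharpened into an activation bubble while its flanks are suppressed. All parameters are our assumptions.

```python
# Hedged sketch: recurrent layer with near excitation and far inhibition.
import numpy as np

N = 40
x = np.arange(N)
d = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
L = 1.2 * np.exp(-d**2 / (2 * 2.0**2)) - 0.6 * np.exp(-d**2 / (2 * 8.0**2))
np.fill_diagonal(L, 0)                           # no self-connection

def sigmoid(u):
    return 1 / (1 + np.exp(-(u - 4.0)))          # non-linear neuron characteristic

z = np.zeros(N)
stim = 6.0 * np.exp(-d[20]**2 / (2 * 1.5**2))    # localized external input at unit 20
for _ in range(100):                             # leaky rate dynamics to steady state
    z += 0.1 * (-z + sigmoid(stim + L @ z))

print(z.round(2))   # an "activation bubble" around unit 20, flanks suppressed
```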

    The Representation of Objects in the Brain, and Its Link with Semantic Memory and Language: A Conceptual Theory with the Support of a Neurocomputational Model

    The recognition of objects, their representation and retrieval in memory, and the link of this representation with words constitute a hard cognitive problem, which can be summarized with the term "lexico-semantic memory". Several recent cognitive theories suggest that the semantic representation of objects is a distributed process, which engages different brain areas in the sensory and motor regions. A further common hypothesis is that each region is organized by conceptual features that are highly correlated and neurally contiguous. These theories may be useful to explain the results of clinical tests on patients with brain lesions, who exhibit deficits in recognizing objects from words or in evoking words from objects, or to explain the use of appropriate words in bilingual subjects. The study of the cognitive aspects of lexico-semantic memory representation may benefit from the use of mathematical models and computer simulations. The aim of this chapter is to describe a theoretical model of the lexico-semantic system that cognitive neuroscientists can use to summarize conceptual theories in a rigorous quantitative framework, to test the ability of these theories to reproduce real pieces of behavior in healthy and pathological subjects, and to suggest new hypotheses for subsequent testing. The chapter is structured as follows: first, the basic assumptions on the cognitive aspects of the lexico-semantic memory model are presented; the same aspects are subsequently illustrated via the results of computer simulations using abstract object representations as input to the model. Equations are reported in an Appendix for readers interested in mathematical issues. The model is based on the following main assumptions: i) an object is represented as a collection of features, topologically ordered according to a similarity principle in different brain areas; ii) the features belonging to the same object are linked together via a Hebbian process during a phase in which objects are presented individually; iii) features are described via neural oscillators in the gamma band; as a consequence, different object representations can be maintained simultaneously in memory via synchronization of the corresponding features (binding and segmentation problem); iv) words are represented in a lexical area devoted to the recognition of words from phonemes; v) words in the lexical area and the features representing objects are linked together via a Hebbian mechanism during a learning phase in which a word is presented together with the corresponding object; vi) the same object representation can be associated with two alternative words (for instance, to represent bilingualism); in this case, the two words are connected via inhibitory synapses, to implement a competition between them; vii) the choice of words is further selected by an external inhibitory control system, which suppresses words that do not correspond to the present objective (for instance, to choose between alternative languages). Several examples of the model's capabilities are presented, using abstract words. These examples include: the retrieval of objects and words even in case of incomplete or corrupted information on object features; the establishment of a semantic link between words with superimposed features; and the process of learning a second language (L2) with the support of a previously known language (L1), to represent neurocognitive aspects of bilingualism.
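    A hedged sketch of assumptions vi) and vii) above: two word units associated with the same object (an L1 and an L2 name) compete through reciprocal inhibitory synapses, and an external inhibitory control signal suppresses the currently unwanted language. The dynamics and parameters below are toy assumptions, not the chapter's equations.

```python
# Hedged sketch: word competition via mutual inhibition plus external control.
import numpy as np

def compete(sem_input, control, steps=200, dt=0.05):
    w = np.zeros(2)                       # activity of word-L1, word-L2
    inhib = np.array([[0.0, 1.5],         # reciprocal inhibitory synapses
                      [1.5, 0.0]])
    for _ in range(steps):                # leaky rate dynamics with rectification
        net = sem_input - inhib @ w - control
        w += dt * (-w + np.clip(net, 0, None))
    return w

sem = np.array([1.0, 1.0])                # object features excite both names equally
print(compete(sem, control=np.array([0.0, 0.8])))  # control suppresses L2 -> L1 wins
```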

    Mathematical models of cognitive processes

    The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and their validation by means of data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical, and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the development of networks of neural oscillators, to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language, and attention; 2) the mathematical modelling of multisensory integration processes (e.g., visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA).

    PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so that the characteristics of the same object are grouped together (binding problem) and the properties belonging to different, simultaneously present objects are kept segregated (segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators (see the sketch after this abstract) which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).

    PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known is the superior colliculus (SC). This midbrain structure receives auditory, visual, and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the response to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models, which can place the mass of data accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
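    The sketch referred to above: a single Wilson-Cowan excitatory-inhibitory pair, the building block of the oscillator networks in PART 1. The coupling constants follow the classic Wilson-Cowan limit-cycle regime; the membrane time constant is our own assumption and determines whether the measured rhythm lands in the gamma band.

```python
# Hedged sketch: one Wilson-Cowan E-I oscillator, simulated with forward Euler.
import numpy as np

def S(x, a, th):                           # sigmoidal response function
    return 1 / (1 + np.exp(-a * (x - th)))

tau = 5e-3                                 # 5 ms time constant (our assumption)
dt, T = 0.05e-3, 0.5                       # 0.05 ms step, 0.5 s of simulation
E, I, P = 0.1, 0.05, 1.25                  # states and constant drive to E

trace = []
for _ in range(int(T / dt)):
    dE = (-E + S(16 * E - 12 * I + P, a=1.3, th=4.0)) / tau
    dI = (-I + S(15 * E - 3 * I, a=2.0, th=3.7)) / tau
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

# Estimate the oscillation frequency from upward crossings of the mean
tr = np.array(trace)
m = tr.mean()
freq = np.sum((tr[:-1] < m) & (tr[1:] >= m)) / T
print(f"E-I limit cycle at ~{freq:.0f} Hz")   # shorter tau pushes this upward
```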

    A neurocomputational analysis of visual bias on bimanual tactile spatial perception during a crossmodal exposure

    Vision and touch both support spatial information processing. These sensory systems also exhibit highly specific interactions in spatial perception, which may reflect multisensory representations that are learned through visuotactile (VT) experience. Recently, Wani and colleagues reported that task-irrelevant visual cues bias tactile perception, in a brightness-dependent manner, on a task requiring participants to detect unimanual and bimanual cues. Importantly, tactile performance remained spatially biased after VT exposure, even when no visual cues were presented. These effects on bimanual touch conceivably reflect cross-modal learning, but the neural substrates that are changed by VT experience are unclear. We previously described a neural network capable of simulating VT spatial interactions. Here, we exploited this model to test different hypotheses regarding potential network-level changes that may underlie the VT learning effects. Simulation results indicated that the VT learning effects are inconsistent with plasticity restricted to unisensory visual and tactile hand representations. Similarly, the VT learning effects were also inconsistent with changes restricted to the strength of inter-hemispheric inhibitory interactions. Instead, we found that both the hand representations and the inter-hemispheric inhibitory interactions need to be plastic to fully recapitulate the VT learning effects. Our results imply that cross-modal learning of bimanual spatial perception involves multiple changes distributed over a VT processing cortical network.
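    The following toy computation (ours, not the published network) illustrates the paper's conclusion: with equal tactile input to both hands, changing only the gain of one hand representation produces a modest bias, whereas additionally making the inter-hemispheric inhibition asymmetric produces a much stronger one. All numbers are illustrative assumptions, not fitted values.

```python
# Hedged sketch: two hand units with reciprocal inter-hemispheric inhibition.
import numpy as np

def bimanual_response(g, k):
    """Steady state of two hand units (left, right) under equal tactile input."""
    h = np.zeros(2)
    for _ in range(200):                   # leaky rate dynamics with rectification
        net = g * 1.0 - k @ h
        h += 0.05 * (-h + np.clip(net, 0, None))
    return h

k0 = np.array([[0.0, 0.6], [0.6, 0.0]])    # symmetric baseline inhibition

print(bimanual_response(np.array([1.0, 1.0]), k0))   # baseline: symmetric response
print(bimanual_response(np.array([1.3, 1.0]), k0))   # gain change only: modest bias
k1 = np.array([[0.0, 0.4], [0.8, 0.0]])              # inhibition made asymmetric too
print(bimanual_response(np.array([1.3, 1.0]), k1))   # combined: much stronger bias
```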

    An Emergent Model of Multisensory Integration in Superior Colliculus Neurons

    Neurons in the cat superior colliculus (SC) integrate information from different senses to enhance their responses to cross-modal stimuli. These multisensory SC neurons receive multiple converging unisensory inputs from many sources; those received from association cortex are critical for the manifestation of multisensory integration. The mechanisms underlying this characteristic property of SC neurons are not completely understood, but can be clarified with the use of mathematical models and computer simulations. Thus, the objective of the current effort was to present a plausible model that can explain the main physiological features of multisensory integration based on the current neurological literature regarding the influences received by the SC from cortical and subcortical sources. The model assumes the presence of competitive mechanisms between inputs and nonlinearities in NMDA receptor responses, and provides a priori synaptic weights to mimic the normal responses of SC neurons. As a result, it provides a basis for understanding the dependence of multisensory enhancement on an intact association cortex, and simulates the changes in the SC response that occur during NMDA receptor blockade. Finally, it makes testable predictions about why significant response differences are obtained in multisensory SC neurons when they are confronted with pairs of cross-modal and within-modal stimuli. By postulating plausible biological mechanisms to complement those that are already known, the model provides a basis for understanding how SC neurons are capable of engaging in this remarkable process.
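    As a minimal illustration of two of the signatures mentioned above (a sketch under our own assumptions, not the article's model), a saturating sigmoidal input-output function is enough to produce multisensory enhancement and inverse effectiveness when converging excitatory inputs are summed before the nonlinearity:

```python
# Hedged sketch: enhancement and inverse effectiveness from a sigmoid nonlinearity.
import numpy as np

def sc_response(drive):                     # saturating input-output function
    return 100 / (1 + np.exp(-(drive - 6.0)))

def enhancement(v, a):
    uni = max(sc_response(v), sc_response(a))
    multi = sc_response(v + a)              # converging excitatory inputs sum
    return 100 * (multi - uni) / uni        # % multisensory enhancement

print(f"weak pair:   {enhancement(3.0, 3.0):.0f}% enhancement")   # very large
print(f"strong pair: {enhancement(6.0, 6.0):.0f}% enhancement")   # much smaller
```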

    Sicurezza Elettrica (Electrical Safety)


    esercizi (Exercises)
