5 research outputs found

    A Semantic Model to Study Neural Organization of Language in Bilingualism

    A neural network model of object semantic representation is used to simulate the learning of new words from a foreign language. The network consists of feature areas, devoted to the description of object properties, and a lexical area, devoted to the representation of words. Neurons in the feature areas are implemented as Wilson-Cowan oscillators, so that different simultaneously presented objects can be segmented via gamma-band synchronization. Excitatory synapses among neurons in the feature and lexical areas are learned, during a training phase, via a Hebbian rule. In this work, we first assume that some words in the first language (L1) and the corresponding object representations are learned during a preliminary training phase. Subsequently, second-language (L2) words are learned by presenting each new word together with its L1 counterpart. A competitive mechanism between the two words is also implemented through inhibitory interneurons. Simulations show that, after weak training, the L2 word allows retrieval of the object properties but requires engagement of the first language. Conversely, after prolonged training, the L2 word becomes able to retrieve the object per se. In this case, a conflict between the words can occur, requiring a higher-level decision mechanism.
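The abstract names Wilson-Cowan oscillators but reports no equations. The sketch below simulates a single excitatory-inhibitory Wilson-Cowan pair using the classic limit-cycle parameter set from Wilson & Cowan (1972), not the authors' actual network; the Euler step and simulation length are illustrative assumptions.

```python
import math

def S(x, a, theta):
    # Wilson-Cowan sigmoid, shifted so that S(0) = 0
    return 1.0 / (1.0 + math.exp(-a * (x - theta))) - 1.0 / (1.0 + math.exp(a * theta))

def simulate(P=1.25, T=100.0, dt=0.01):
    # Classic limit-cycle parameter set from Wilson & Cowan (1972);
    # P is the constant external drive to the excitatory population.
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
    E, I = 0.1, 0.05
    trace = []
    for _ in range(int(T / dt)):
        # forward-Euler integration of the two population equations
        dE = -E + (1.0 - E) * S(c1 * E - c2 * I + P, a_e, th_e)
        dI = -I + (1.0 - I) * S(c3 * E - c4 * I, a_i, th_i)
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return trace
```

With this drive the pair settles onto a limit cycle, i.e. the excitatory activity keeps oscillating rather than converging to a fixed point, which is the behaviour the model exploits for gamma-band segmentation.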

    The Representation of Objects in the Brain, and Its Link with Semantic Memory and Language: a Conceptual Theory with the Support of a Neurocomputational Model

    Recognition of objects, their representation and retrieval in memory, and the link of this representation with words is a hard cognitive problem, which can be summarized with the term “lexico-semantic memory”. Several recent cognitive theories suggest that the semantic representation of objects is a distributed process, which engages different brain areas in the sensory and motor regions. A further common hypothesis is that each region is organized by conceptual features that are highly correlated and neurally contiguous. These theories may be useful to explain the results of clinical tests on patients with brain lesions, who exhibit deficits in recognizing objects from words or in evoking words from objects, or to explain the use of appropriate words in bilingual subjects. The study of the cognitive aspects of lexico-semantic memory may benefit from the use of mathematical models and computer simulations. The aim of this chapter is to describe a theoretical model of the lexico-semantic system, which cognitive neuroscientists can use to summarize conceptual theories into a rigorous quantitative framework, to test the ability of these theories to reproduce real pieces of behavior in healthy and pathological subjects, and to suggest new hypotheses for subsequent testing. The chapter is structured as follows: first, the basic assumptions on the cognitive aspects of the lexico-semantic memory model are presented; the same aspects are subsequently illustrated via the results of computer simulations using abstract object representations as input to the model. The equations are reported in an Appendix for readers interested in the mathematical details.
The model is based on the following main assumptions: i) an object is represented as a collection of features, topologically ordered according to a similarity principle in different brain areas; ii) the features belonging to the same object are linked together via a Hebbian process during a phase in which objects are presented individually; iii) features are described via neural oscillators in the gamma band; as a consequence, different object representations can be maintained simultaneously in memory via synchronization of the corresponding features (the binding and segmentation problem); iv) words are represented in a lexical area devoted to the recognition of words from phonemes; v) words in the lexical area and the features representing objects are linked together via a Hebbian mechanism during a learning phase in which a word is presented together with the corresponding object; vi) the same object representation can be associated with two alternative words (for instance, to represent bilingualism), in which case the two words are connected via inhibitory synapses, implementing a competition between them; vii) the choice of words is further selected by an external inhibitory control system, which suppresses words that do not correspond to the present objective (for instance, to choose between alternative languages). Several examples of the model's possibilities are presented, with the use of abstract words. These examples include: the possibility to retrieve objects and words even in the case of incomplete or corrupted information on object features; the possibility to establish a semantic link between words with superimposed features; and the process of learning a second language (L2) with the support of a previously known language (L1), to represent neurocognitive aspects of bilingualism.
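Assumptions v) and vi) describe Hebbian association between word units and feature units. The toy sketch below (feature vector, learning rate, thresholds and training durations are all invented for illustration, not taken from the model) shows the qualitative effect: a strongly trained word retrieves its full feature pattern, while a weakly trained second-language word does not yet.

```python
# abstract object: a binary vector of active semantic features (hypothetical)
obj = [1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0, 1.0, 0.0]

w_l1 = [0.0] * len(obj)   # synapses: L1 word unit -> feature area
w_l2 = [0.0] * len(obj)   # synapses: L2 word unit -> feature area

def hebbian_step(w, word_activity, features, eta=0.2):
    # strengthen co-active pre (word) / post (feature) pairs, clipped to [0, 1]
    return [min(1.0, wi + eta * word_activity * fi) for wi, fi in zip(w, features)]

# prolonged training: the L1 word is presented together with the object
for _ in range(20):
    w_l1 = hebbian_step(w_l1, 1.0, obj)

# weak training: the L2 word is presented alongside L1, inheriting the pattern briefly
for _ in range(2):
    w_l2 = hebbian_step(w_l2, 1.0, obj)

def retrieve(w, threshold=0.5):
    # a feature is retrieved when its synapse exceeds the threshold
    return [1.0 if wi > threshold else 0.0 for wi in w]

print(retrieve(w_l1) == obj)   # True: the L1 word alone retrieves the object
print(sum(retrieve(w_l2)))     # 0.0: the weakly trained L2 word does not
```

The inhibitory competition between the two word units (assumption vi) and the external control system (assumption vii) are omitted here for brevity.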

    A neural network model of semantic memory linking feature-based object representation and words

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory, in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area, devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule, during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits).
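The segmentation-by-synchronization idea can be illustrated with a deliberately simplified phase model (Kuramoto-style coupling, not the Wilson-Cowan network the paper actually uses): features that Hebbian learning has linked to the same object are coupled and therefore phase-lock, while features of different objects retain a persistent phase offset, so the two objects occupy different phases of the gamma cycle. All parameters below are illustrative.

```python
import math

def kuramoto(phases, groups, K=50.0, omega=2 * math.pi * 40.0, dt=1e-4, steps=5000):
    # Euler integration of phase oscillators at a 40 Hz "gamma" frequency,
    # coupled only within a group (a stand-in for within-object Hebbian links)
    phases = list(phases)
    n = len(phases)
    for _ in range(steps):
        new = []
        for i in range(n):
            coupling = sum(math.sin(phases[j] - phases[i])
                           for j in range(n)
                           if j != i and groups[j] == groups[i])
            new.append(phases[i] + dt * (omega + K * coupling))
        phases = new
    return phases

def offset(p, q):
    # absolute phase difference folded into [0, pi]
    d = (p - q) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

# two abstract objects, three features each, started out of phase
groups = [0, 0, 0, 1, 1, 1]
final = kuramoto([0.0, 0.5, 1.0, 2.0, 2.5, 3.0], groups)
```

After the run, features within each group are tightly phase-locked, while the two groups remain separated in phase: a minimal picture of binding within an object and segmentation between objects.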

    Cognitive Maps


    Mathematical models of cognitive processes

    The research activity carried out during the PhD course was focused on the development of mathematical models of some cognitive processes and their validation by means of data available in the literature, with a twofold aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes from different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity was focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to external-world stimuli. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, such as perception and recognition, involves distributed processes in different cortical areas.
One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, so that the characteristics of the same object are grouped together (the binding problem) while the properties belonging to different objects simultaneously present are kept segregated (the segmentation problem). Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called “assembly coding” theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) the recognition of the object is realized by means of the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition, based on the “assembly coding” hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level “Gestalt rules” (the similarity and previous-knowledge rules), to realize the functional link between elements of different cortical areas representing properties of the same object (the binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz), to segregate in time the representations of different objects simultaneously present (the segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory, in which sensory-motor representations of objects are linked with words.
To this aim, the previously developed network, devoted to the representation of objects as a collection of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (the lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule, during a training period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with some shared features. These results support the idea that semantic memory can be described as an integrated process, whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance, concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas, and is involved in the control of orientation to external events (Wallace et al., 1993).
SC neurons respond to each of these sensory inputs separately, but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than that to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex, primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS). If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of its individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; this model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is achieved. This work was carried out in collaboration with Professor B.E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas that are devoted to a topological representation of external stimuli.
Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under cortical activation and deactivation. The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
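The role of response non-linearities can be illustrated with a toy static sigmoid, far simpler than the network model described above (the slope and center values are arbitrary assumptions): driving the sigmoid in its expansive lower range makes weak paired inputs combine superadditively, while strong pairs approach saturation, reproducing the qualitative signatures of enhancement and inverse effectiveness.

```python
import math

def sigmoid(x, slope=1.0, center=5.0):
    # static nonlinearity standing in for an SC neuron's input-output curve
    return 1.0 / (1.0 + math.exp(-slope * (x - center)))

def sc_response(visual, auditory):
    # below the sigmoid's center the curve is expansive, so weak paired
    # inputs combine superadditively; near saturation they combine
    # subadditively -- the signature of inverse effectiveness
    return sigmoid(visual + auditory)

def enhancement(v, a):
    # percent multisensory enhancement over the best unisensory response
    best = max(sc_response(v, 0.0), sc_response(0.0, a))
    return 100.0 * (sc_response(v, a) - best) / best

# weak stimuli combine superadditively...
assert sc_response(2, 2) > sc_response(2, 0) + sc_response(0, 2)
# ...and the weaker pair yields proportionally larger enhancement
assert enhancement(2, 2) > enhancement(6, 6)
```

This single-curve sketch says nothing about the cortical gating described above; reproducing the effects of AES/rLS deactivation requires the competitive interneuron circuitry of the full model.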