Perception-action circuits for word learning and semantic grounding: a neurocomputational model and neuroimaging study
A neurocomputational architecture of the left-hemispheric areas of the brain is presented which was used to simulate and explain neural correlates of word learning and semantic grounding. The model’s main distinguishing features are that (i) it replicates connectivity and anatomical structure of the relevant brain areas, and (ii) it implements only functional mechanisms reflecting known cellular- and synaptic-level properties of the cerebral cortex. Stimulation of the “sensorimotor” model areas (mimicking early stages of word acquisition) leads to the spontaneous formation of cell assemblies (CAs), network correlates of memory traces for meaningful words. Preliminary results of a recent functional magnetic resonance imaging study confirm the model's predictions, and, for the first time, localise the neural correlates of semantic grounding of novel spoken items in primary visual cortex. Taken together, these results provide strong support for perceptual accounts of word meaning acquisition in the brain, and point to a unifying theory of cognition based on action-perception circuits whose emergence, dynamics and interactions are grounded in known neuroanatomy and neurobiological learning mechanisms.
Semantic grounding of novel spoken words in the primary visual cortex
Embodied theories of grounded semantics postulate that, when word meaning is first acquired, a link is established between symbol (word form) and corresponding semantic information present in modality-specific – including primary – sensorimotor cortices of the brain. Direct experimental evidence documenting the emergence of such a link (i.e., showing that presentation of a previously unknown, meaningless word sound induces, after learning, category-specific reactivation of relevant primary sensory or motor brain areas), however, is still missing. Here, we present new neuroimaging results that provide such evidence.
We taught participants aspects of the referential meaning of previously unknown, senseless novel spoken words (such as “Shruba” or “Flipe”) by associating them with either a familiar action or a familiar object. After training, we used functional magnetic resonance imaging to analyse the participants’ brain responses to the new speech items. We found that hearing the newly learnt object-related word sounds selectively triggered activity in primary visual cortex, as well as secondary and higher visual areas.
These results for the first time directly document the formation of a link between novel, previously meaningless spoken items and corresponding semantic information in primary sensory areas in a category-specific manner, providing experimental support for perceptual accounts of word meaning acquisition in the brain.
Intensive communicative therapy reduces symptoms of depression in chronic non-fluent aphasia
Background. Patients with brain lesions and resultant chronic aphasia frequently suffer from depression. However, no effective interventions are available to target neuropsychiatric symptoms in patients with aphasia who have severe language and communication deficits. Objective. The present study aimed to investigate the efficacy of 2 different methods of speech and language therapy in reducing symptoms of depression in aphasia, as measured with the Beck Depression Inventory (BDI), in a secondary analysis of the BILAT-1 trial. Methods. In a crossover randomized controlled trial, 18 participants with chronic nonfluent aphasia following left-hemispheric brain lesions were assigned to 2 consecutive treatments: (1) intensive language-action therapy (ILAT), emphasizing communicative language use in social interaction, and (2) intensive naming therapy (INT), an utterance-centered standard method. Patients were randomly assigned to 2 groups, receiving both treatments in counterbalanced order. Both interventions were applied for 3.5 hours daily over a period of 6 consecutive working days. Outcome measures included depression scores on the BDI and a clinical language test (Aachen Aphasia Test). Results. Patients showed a significant decrease in symptoms of depression after ILAT but not after INT, which paralleled changes on clinical language tests. Treatment-induced decreases in depression scores persisted when controlling for individual changes in language performance. Conclusions. Intensive training of behaviorally relevant verbal communication in social interaction might help reduce symptoms of depression in patients with chronic nonfluent aphasia.
Influence of verbal labels on concept formation and perception in a deep unsupervised neural network model
OBJECTIVES/RESEARCH QUESTION:
Whether language influences perception and thought remains a subject of intense debate. Does the presence or absence of a linguistic label facilitate or hinder the acquisition of new concepts? Here, we address this question in a neurocomputational model.
METHODS:
We used a computational brain model of fronto-occipital (extrasylvian) and fronto-temporal (perisylvian) cortex including spiking neurons. With Hebbian learning, the network was trained to associate word forms (phonological patterns, or “labels”) in perisylvian areas with semantic grounding information (sensory-motor patterns, or “percepts”) in extrasylvian areas. To study the effects of labels on the network’s ability to spontaneously develop distinct semantic representations from the multiple perceptual instances of a concept, we modelled each to-be-learned concept as a triplet of partly overlapping percepts and trained the model under two conditions: each instance of a perceptual triplet (patterns in extrasylvian areas) was repeatedly paired with patterns in perisylvian areas consisting of either (1) a corresponding word form (label condition), or (2) white noise (no-label condition).
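The pairing protocol above can be sketched with a plain Hebbian outer-product rule. This is an illustrative toy, not the model's actual implementation: layer size, learning rate, and the binary patterns are all invented assumptions. The point is that a fixed word-form pattern lets percept/label correlations accumulate, whereas fresh noise on every pairing cannot build a consistent association.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 64        # units per model area (hypothetical size)
ETA = 0.05    # Hebbian learning rate (assumed)
EPOCHS = 50

def train_condition(percept_triplet, word_form=None):
    """Hebbian association of extrasylvian 'percepts' with perisylvian input.

    Label condition: every percept is paired with the same fixed word-form
    pattern, so the percept/word correlation accumulates in the weights.
    No-label condition (word_form=None): each pairing uses fresh noise,
    so no consistent association can build up.
    Plain Hebb rule: dW = eta * post (perisylvian) x pre (extrasylvian).
    """
    w = np.zeros((N, N))
    for _ in range(EPOCHS):
        for percept in percept_triplet:
            peri = word_form if word_form is not None else (rng.random(N) < 0.5)
            w += ETA * np.outer(peri.astype(float), percept.astype(float))
    return w

# One concept = a triplet of partly overlapping binary percepts (toy stimuli)
base = rng.random(N) < 0.3
triplet = [base ^ (rng.random(N) < 0.1) for _ in range(3)]
label = rng.random(N) < 0.3   # fixed word-form pattern for this concept

w_label = train_condition(triplet, word_form=label)
w_noise = train_condition(triplet, word_form=None)
```

In the label condition all weight growth concentrates on the rows driven by the word-form units, which is the toy analogue of a word-specific cell assembly forming.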
To quantify the emergence of neuronal representations for the conceptually-related percepts, we measured the dissimilarity (Euclidean distance) of neuronal activation vectors during perceptual stimulation. Category learning performance was measured as the difference between within- and between-concept dissimilarity values (DissimDiff) of perceptual activation patterns.
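The DissimDiff measure can be sketched as below. The sign convention is an assumption (between-concept minus within-concept distance, so that larger values mean better category separation), and the clustered activation vectors are invented purely for illustration.

```python
import numpy as np
from itertools import combinations

def dissim_diff(activations):
    """Mean between-concept minus mean within-concept Euclidean distance.

    `activations` is a list of concepts, each a list of activation vectors
    (one per perceptual instance). Positive DissimDiff means instances of
    the same concept evoke more similar activity than instances of
    different concepts, i.e. the network has separated the categories.
    """
    within, between = [], []
    for i, group in enumerate(activations):
        within += [np.linalg.norm(a - b) for a, b in combinations(group, 2)]
        for other in activations[i + 1:]:
            between += [np.linalg.norm(a - b) for a in group for b in other]
    return float(np.mean(between) - np.mean(within))

# Toy data: three concepts, each a triplet of noisy perceptual instances
rng = np.random.default_rng(1)
centres = rng.normal(size=(3, 20))
acts = [[c + 0.1 * rng.normal(size=20) for _ in range(3)] for c in centres]
separation = dissim_diff(acts)   # well above zero for clustered data
```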
RESULTS:
The presence or absence of a linguistic label had a significant main effect on category learning (F=2476, p<0.0001, DissimDiff with labels m=0.92, SD=0.32; no-labels m=0.36, SD=0.21). DissimDiff values were also significantly larger in areas most important for semantic processing, so-called semantic hubs, than in sensorimotor areas (main effect of centrality, F=2535, p<0.0001). Finally, a significant interaction between centrality and label type (F=711, p<0.0001) revealed that the label-related learning advantage was most pronounced in semantic hubs.
CONCLUSION:
These results suggest that providing a referential verbal label during the acquisition of a new concept significantly improves the cortex’s ability to develop distinct semantic-category representations from partly-overlapping (and non-overlapping) perceptual instances. Crucially, this effect is most pronounced in higher-order semantic-hub areas of the network. In sum, our results provide the first neurocomputational evidence for a “Whorfian” effect of language on perception and concept formation.
Influence of language on concept formation and perception in a brain-constrained deep neural network model
Whether language influences perception and thought remains a subject of intense debate (1, 2). We address this question in a brain-constrained neurocomputational model (3) of fronto-occipital (extrasylvian) and fronto-temporal (perisylvian) cortex including spiking neurons. The unsupervised neural network was simultaneously presented with word forms (phonological patterns, “labels”) in perisylvian areas and semantic grounding information (sensory-motor patterns, “percepts”) in extrasylvian areas representing either concrete or abstract concepts. Following the approach used in a previous simulation (4), each to-be-learned concept was modeled as a triplet of partly overlapping percepts; the model was trained under two conditions: each instance of a perceptual triplet (patterns in extrasylvian areas) was repeatedly paired with patterns in perisylvian areas consisting of either (a) a corresponding word form (label condition), or (b) noise (no-label condition). We quantified the emergence of neuronal representations for the conceptually-related percepts using dissimilarity (Euclidean distance) of neuronal activation vectors during perceptual stimulation. Category learning was measured as the difference between within- and between-concept dissimilarity values (DissimDiff) of perceptual activation patterns.
A repeated-measures ANOVA with factors SemanticType (concrete/abstract) and Labelling showed main effects of both SemanticType and Label, and a significant interaction. We also quantified the “label effect” as the percentage change from NoLabel to Label conditions, separately for between- and within-category dissimilarities. This showed that the label effect was mainly driven by changes in between-category dissimilarity, was significantly larger for abstract than concrete concepts, and became even larger in the “deeper” layers of the model.
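The percentage label effect is a simple relative change. As a worked illustration only, the snippet below plugs in the DissimDiff group means reported in the preceding abstract (0.36 without labels, 0.92 with labels); these are not the per-layer dissimilarity values the analysis actually used.

```python
def label_effect_pct(no_label, label):
    """Label effect as percentage change from the NoLabel condition:
    100 * (label - no_label) / no_label."""
    return 100.0 * (label - no_label) / no_label

# DissimDiff means reused purely as an illustration:
# 0.36 without labels vs 0.92 with labels -> roughly a 156% label effect
effect = label_effect_pct(0.36, 0.92)
```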
Providing a referential verbal label during the acquisition of a new concept significantly improves the cortex’s ability to develop distinct semantic-category representations from partly-overlapping (and non-overlapping) perceptual instances. Crucially, this effect is most pronounced in higher-order semantic-hub areas of the network. These results provide the first neurocomputational evidence for a “Whorfian” effect of language on perception and concept formation.
Sensorimotor semantics on the spot: brain activity dissociates between conceptual categories within 150 ms
Although semantic processing has traditionally been associated with brain responses maximal at 350–400 ms, recent studies reported that words of different semantic types elicit topographically distinct brain responses substantially earlier, at 100–200 ms. These earlier responses have, however, been obtained using insufficiently precise source localisation techniques, therefore casting doubt on reported differences in brain generators. Here, we used high-density MEG-EEG recordings in combination with individual MRI images and state-of-the-art source reconstruction techniques to compare localised early activations elicited by words from different semantic categories in different cortical areas. Reliable neurophysiological word-category dissociations emerged bilaterally at ~ 150 ms, at which point action-related words most strongly activated frontocentral motor areas and visual object-words occipitotemporal cortex. These data now show that different cortical areas are activated rapidly by words with different meanings and that aspects of their category-specific semantics are reflected by dissociating neurophysiological sources in motor and visual brain systems.
Conceptual grounding of language in action and perception: a neurocomputational model of the emergence of category specificity and semantic hubs
Current neurobiological accounts of language and cognition offer diverging views on the questions of ‘where’ and ‘how’ semantic information is stored and processed in the human brain. Neuroimaging data showing consistent activation of different multi-modal areas during word and sentence comprehension suggest that all meanings are processed indistinctively, by a set of general semantic centres or ‘hubs’. However, words belonging to specific semantic categories selectively activate modality-preferential areas; for example, action-related words spark activity in dorsal motor cortex, whereas object-related ones activate ventral visual areas. The evidence for category-specific and category-general semantic areas begs for a unifying explanation, able to integrate the emergence of both. Here, a neurobiological model offering such an explanation is described. Using a neural architecture replicating anatomical and neurophysiological features of frontal, occipital and temporal cortices, basic aspects of word learning and semantic grounding in action and perception were simulated. As the network underwent training, distributed lexico-semantic circuits spontaneously emerged. These circuits exhibited different cortical distributions that reached into dorsal-motor or ventral-visual areas, reflecting the correlated category-specific sensorimotor patterns that co-occurred during action- or object-related semantic grounding, respectively. Crucially, substantial numbers of neurons of both types of distributed circuits emerged in areas interfacing between modality-preferential regions, i.e. in multimodal connection hubs, which therefore became loci of general semantic binding. By relating neuroanatomical structure and cellular-level learning mechanisms with system-level cognitive function, this model offers a neurobiological account of category-general and category-specific semantic areas based on the different cortical distributions of the underlying semantic circuits.
Long-term stability of short-term intensive language–action therapy in chronic aphasia: A 1–2 year follow-up study
Background. Intensive aphasia therapy can improve language functions in chronic aphasia over a short therapy interval of 2-4 weeks. For one intensive method, intensive language-action therapy (ILAT), beneficial effects are well documented by a range of randomized controlled trials. However, it is unclear to date whether therapy-related improvements are maintained over years. Objective. The current study aimed at investigating long-term stability of ILAT treatment effects over circa 1-2 years (8-30 months). Methods. 38 patients with chronic aphasia participated in ILAT and were re-assessed at a follow-up assessment 8-30 months after treatment, which had been delivered 6-12.5 hours per week for 2-4 weeks. Results. A standardized clinical aphasia battery, the Aachen Aphasia Test, revealed significant improvements with ILAT that were maintained for up to 2.5 years. Improvements were relatively better preserved in comparatively young patients (<60 years). Measures of communicative efficacy confirmed improvements during intensive therapy but showed inconsistent long-term stability effects. Conclusions. The present data indicate that gains resulting from intensive speech-language therapy with ILAT are maintained up to 2.5 years after the end of treatment. We discuss this novel finding in light of a possible move from sparse to intensive therapy regimes in clinical practice.