Recognizing Speech in a Novel Accent: The Motor Theory of Speech Perception Reframed
The motor theory of speech perception holds that we perceive the speech of
another in terms of a motor representation of that speech. However, when we
have learned to recognize a foreign accent, it seems plausible that recognition
of a word rarely involves reconstruction of the speech gestures of the speaker
rather than the listener. To better assess the motor theory and this
observation, we proceed in three stages. Part 1 places the motor theory of
speech perception in a larger framework based on our earlier models of the
adaptive formation of mirror neurons for grasping, and for viewing extensions
of that mirror system as part of a larger system for neuro-linguistic
processing, augmented by the present consideration of recognizing speech in a
novel accent. Part 2 then offers a novel computational model of how a listener
comes to understand the speech of someone speaking the listener's native
language with a foreign accent. The core tenet of the model is that the
listener uses hypotheses about the word the speaker is currently uttering to
update probabilities linking the sound produced by the speaker to phonemes in
the native language repertoire of the listener. This, on average, improves the
recognition of later words. This model is neutral regarding the nature of the
representations it uses (motor vs. auditory). It serves as a reference point for
the discussion in Part 3, which proposes a dual-stream neuro-linguistic
architecture to revisit claims for and against the motor theory of speech
perception and the relevance of mirror neurons, and extracts some implications
for the reframing of the motor theory.
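The model's core tenet, using hypotheses about the current word to update probabilities linking speaker sounds to native phonemes, can be sketched as a simple count-based estimator. This is an illustrative toy under stated assumptions, not the paper's implementation: the `AccentAdapter` class, its Laplace smoothing, and the miniature sound/phoneme inventory are all invented for the sketch.

```python
from collections import defaultdict

class AccentAdapter:
    """Toy sketch of hypothesis-driven accent adaptation (not the
    paper's code): maintain P(native phoneme | speaker sound) and
    strengthen links each time a word hypothesis aligns heard sounds
    with native phonemes."""

    def __init__(self, smoothing=1.0):
        self.counts = defaultdict(lambda: defaultdict(float))
        self.smoothing = smoothing

    def prob(self, sound, phoneme, inventory):
        # Laplace-smoothed estimate of P(phoneme | sound).
        c = self.counts[sound]
        total = sum(c.values()) + self.smoothing * len(inventory)
        return (c.get(phoneme, 0.0) + self.smoothing) / total

    def update(self, sounds, hypothesized_phonemes):
        # After hypothesizing a word, align its phonemes with the
        # sounds actually heard and strengthen those links.
        for s, p in zip(sounds, hypothesized_phonemes):
            self.counts[s][p] += 1.0

    def recognize(self, sounds, lexicon, inventory):
        # Score each length-matched candidate word by the product of
        # per-position sound-to-phoneme probabilities.
        def score(word):
            sc = 1.0
            for s, p in zip(sounds, word):
                sc *= self.prob(s, p, inventory)
            return sc
        candidates = [w for w in lexicon if len(w) == len(sounds)]
        return max(candidates, key=score)
```

After one hypothesized alignment of a heard [z] with a native /th/, the link strengthens, so later words containing the accented sound are, on average, recognized more reliably, matching the model's claim that updates improve recognition of later words.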
The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain
Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed that categorical patterns of response to intact images emerged ∼80–100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80–100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident only while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for the variance in the intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing depend on lower-level or higher-level properties of the image.
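The multivariate pattern analysis described here can be illustrated with a generic time-resolved decoder: train a classifier at each time point and track when category information becomes decodable. Below is a minimal numpy sketch using a nearest-centroid classifier; the function name, the split scheme, and the classifier choice are assumptions for the sketch, not the study's actual pipeline.

```python
import numpy as np

def timecourse_decoding(epochs, labels, train_frac=0.5, seed=0):
    """Toy time-resolved MVPA: at each time point, classify held-out
    trials by their nearest class-mean pattern across channels.

    epochs: array (n_trials, n_channels, n_times)
    labels: integer array (n_trials,)
    Returns decoding accuracy at each time point.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_train = int(train_frac * len(labels))
    tr, te = idx[:n_train], idx[n_train:]
    classes = np.unique(labels)
    acc = np.zeros(epochs.shape[2])
    for t in range(epochs.shape[2]):
        x_tr, x_te = epochs[tr, :, t], epochs[te, :, t]
        # Class centroids estimated from training trials only.
        centroids = np.stack(
            [x_tr[labels[tr] == c].mean(axis=0) for c in classes]
        )
        # Assign each test trial to the nearest centroid.
        dist = np.linalg.norm(x_te[:, None, :] - centroids[None, :, :], axis=2)
        acc[t] = (classes[dist.argmin(axis=1)] == labels[te]).mean()
    return acc
```

On simulated data in which a category-specific pattern appears only after a given latency, the accuracy time course rises above chance at that latency, which is the logic behind the ∼80–100 ms onset reported above.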
The roles of the "ventral" semantic and "dorsal" pathways in conduite d'approche: A neuroanatomically-constrained computational modeling investigation
Ever since the 19th century, the standard model for spoken language processing has assumed two pathways for repetition, a phonological pathway and a semantic pathway, and this idea has gained further support in the last decade. First, recent in vivo tractography studies have demonstrated both a "dorsal" pathway (via the arcuate fasciculus) and a "ventral" pathway (via the extreme capsule and uncinate fasciculus) connecting the primary auditory area to the speech-motor area; the latter passes through a brain area associated with semantic processing (the anterior temporal lobe). Second, neuropsychological evidence for the role of semantics in repetition comes from conduite d'approche, a successive phonological improvement (or sometimes non-improvement) in aphasic patients' responses as they repeat a target several times in succession. Crucially, conduite d'approche is observed in patients with neurological damage in or around the arcuate fasciculus. Successful conduite d'approche is especially clear in semantically intact patients, and it occurs for real words rather than for non-words. These features have led researchers to hypothesize that the patients' disrupted phonological output is "cleaned up" by intact lexical-semantic information before the next repetition. We tested this hypothesis using the neuroanatomically-constrained dual dorsal-ventral pathway computational model. The results showed that (a) damage to the dorsal pathway impaired repetition; (b) in the context of recovery, the model learned to compute a correct repetition response following its own noisy speech output (i.e., successful conduite d'approche); (c) this behavior was more evident for real words than for non-words; and (d) activation from the ventral pathway contributed to the increased rate of successful conduite d'approche for real words. These results suggest that lexical-semantic "clean-up" is key to this self-correcting mechanism, supporting the classic proposal of two pathways for repetition.
Spring School on Language, Music, and Cognition: Organizing Events in Time
The interdisciplinary spring school "Language, music, and cognition: Organizing events in time" was held from February 26 to March 2, 2018 at the Institute of Musicology of the University of Cologne. Language, speech, and music as events in time were explored from different perspectives including evolutionary biology, social cognition, developmental psychology, cognitive neuroscience of speech, language, and communication, as well as computational and biological approaches to language and music. There were 10 lectures, 4 workshops, and 1 student poster session.
Overall, the spring school investigated language and music as neurocognitive systems and focused on a mechanistic approach exploring the neural substrates underlying musical, linguistic, social, and emotional processes and behaviors. In particular, researchers approached questions concerning cognitive processes, computational procedures, and neural mechanisms underlying the temporal organization of language and music, mainly from two perspectives: one was concerned with syntax or structural representations of language and music as neurocognitive systems (i.e., an intrapersonal perspective), while the other emphasized social interaction and emotions in their communicative function (i.e., an interpersonal perspective). The spring school not only acted as a platform for knowledge transfer and exchange but also generated a number of important research questions as challenges for future investigations.
Oscillatory dynamics of perceptual to conceptual transformations in the ventral visual pathway
Object recognition requires dynamic transformations of low-level visual inputs into complex semantic representations. While this process depends on the ventral visual pathway (VVP), we lack an incremental account of these transformations from low-level inputs to semantic representations, and of the mechanistic details of their dynamics. Here we combine computational models of vision and semantics, and test the output of the incremental model against patterns of neural oscillations recorded with MEG in humans. Representational similarity analysis showed that visual information was represented in alpha activity throughout the VVP, and that semantic information was represented in theta activity. Furthermore, informational connectivity analyses showed that visual information travels through feedforward connections, while it is transformed into semantic representations through feedforward and feedback activity centered on the anterior temporal lobe. Our research highlights that the complex transformations between visual and semantic information are driven by feedforward and recurrent dynamics, resulting in object-specific semantics.
The brain structure during language development: neural correlates of sentence comprehension in preschool children
Language skills increase as the brain matures, and language specialization is linked to the left hemisphere. Among distinct language domains, sentence comprehension is particularly vital in language acquisition and, by comparison, requires a much longer time-span before full mastery in children. Although accumulating studies have revealed the neural mechanisms underlying the acquisition of sentence comprehension, the development of the brain's gray matter and its relation to sentence comprehension has not been fully understood.
This thesis employs structural magnetic resonance imaging and diffusion-weighted imaging data to investigate the neural correlates of sentence comprehension in preschoolers both cross-sectionally and longitudinally. The first study examines how cortical thickness covariance is relevant for syntax in preschoolers and how it changes across development. Results suggest that the cortical thickness covariance of brain regions relevant for syntax increases from preschoolers to adults, while preschoolers with superior language abilities show a more adult-like covariance pattern. Reconstructing the white-matter fiber tract connecting the left inferior frontal and superior temporal cortices using diffusion-weighted imaging data, the second study suggests that the reduced cortical thickness covariance in the left frontotemporal regions is likely due to immature white-matter connectivity during preschool. The third study then investigates cortical thickness asymmetry and its relation to sentence comprehension abilities. Results show that longitudinal cortical thickness asymmetry in the inferior frontal cortex was associated with improvements in sentence comprehension, further suggesting a crucial role of the inferior frontal cortex in the acquisition of sentence comprehension.
Taken together, evidence from gray and white matter data provides new insights into the neuroscientific model of language acquisition and the emergence of syntactic processing during language development.
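The first study's central measure, cortical thickness covariance, is obtained by correlating regional thickness across subjects. A minimal numpy sketch of that general approach follows; the function names and the group-comparison step are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

def thickness_covariance(thickness):
    """Structural covariance sketch: correlate regional cortical
    thickness across subjects.
    thickness: array (n_subjects, n_regions).
    Returns an (n_regions, n_regions) correlation matrix."""
    return np.corrcoef(thickness, rowvar=False)

def covariance_similarity(cov_a, cov_b):
    """Compare two groups' covariance patterns (e.g. children vs
    adults) via the Pearson correlation of their off-diagonal
    entries; higher values mean a more similar (e.g. more
    adult-like) covariance pattern."""
    iu = np.triu_indices_from(cov_a, k=1)
    return np.corrcoef(cov_a[iu], cov_b[iu])[0, 1]
```

In this framing, "a more adult-like covariance pattern" corresponds to a higher `covariance_similarity` between a child group's matrix and the adult group's matrix.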
Object-specific semantic coding in human perirhinal cortex.
Category-specificity has been demonstrated in the human posterior ventral temporal cortex for a variety of object categories. Although object representations within the ventral visual pathway must be sufficiently rich and complex to support the recognition of individual objects, little is known about how specific objects are represented. Here, we used representational similarity analysis to determine what different kinds of object information are reflected in fMRI activation patterns and uncover the relationship between categorical and object-specific semantic representations. Our results show a gradient of informational specificity along the ventral stream from representations of image-based visual properties in early visual cortex, to categorical representations in the posterior ventral stream. A key finding showed that object-specific semantic information is uniquely represented in the perirhinal cortex, which was also increasingly engaged for objects that are more semantically confusable. These findings suggest a key role for the perirhinal cortex in representing and processing object-specific semantic information that is more critical for highly confusable objects. Our findings extend current distributed models by showing coarse dissociations between objects in posterior ventral cortex, and fine-grained distinctions between objects supported by the anterior medial temporal lobes, including the perirhinal cortex, which serve to integrate complex object information.
This work was supported by funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007–2013) / ERC grant agreement no. 249640 to L.K.T.
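Representational similarity analysis, the method used in this and the preceding study, compares a neural representational dissimilarity matrix (RDM) against model RDMs. Below is a minimal numpy sketch of the generic technique; the function names and the rank-based (Spearman) second-order comparison follow common RSA practice, but the details here are illustrative, not this study's pipeline.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 minus the Pearson
    correlation between activation patterns for each pair of
    conditions. patterns: array (n_conditions, n_features)."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs,
    the standard second-order comparison in RSA. Ranks are computed
    by a double argsort (ties broken by index), which is adequate
    for continuous neural RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    ranks = lambda v: np.argsort(np.argsort(v)).astype(float)
    return np.corrcoef(ranks(rdm_a[iu]), ranks(rdm_b[iu]))[0, 1]
```

A region whose RDM correlates with a categorical model RDM (low dissimilarity within a category, high between categories) carries categorical information; object-specific semantic coding is probed the same way, with a finer-grained model RDM.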
Mapping visual symbols onto spoken language along the ventral visual stream.
Reading involves transforming arbitrary visual symbols into sounds and meanings. This study interrogated the neural representations in ventral occipitotemporal cortex (vOT) that support this transformation process. Twenty-four adults learned to read 2 sets of 24 novel words that shared phonemes and semantic categories but were written in different artificial orthographies. Following 2 wk of training, participants read the trained words while neural activity was measured with functional MRI. Representational similarity analysis on item pairs from the same orthography revealed that right vOT and posterior regions of left vOT were sensitive to basic visual similarity. Left vOT encoded letter identity and representations became more invariant to position along a posterior-to-anterior hierarchy. Item pairs that shared sounds or meanings, but were written in different orthographies with no letters in common, evoked similar neural patterns in anterior left vOT. These results reveal a hierarchical, posterior-to-anterior gradient in vOT, in which representations of letters become increasingly invariant to position and are transformed to convey spoken language information.