
    Semantic radical consistency and character transparency effects in Chinese: an ERP study

    BACKGROUND: This event-related potential (ERP) study aims to investigate the representation and temporal dynamics of Chinese orthography-to-semantics mappings by simultaneously manipulating character transparency and semantic radical consistency. Character components, referred to as radicals, make up the building blocks used dur...

    Developmental refinement of cortical systems for speech and voice processing

    Development typically leads to optimized and adaptive neural mechanisms for the processing of voice and speech. In this fMRI study we investigated how this adaptive processing reaches its mature efficiency by examining the effects of task, age and phonological skills on cortical responses to voice and speech in children (8-9 years), adolescents (14-15 years) and adults. Participants listened to vowels (/a/, /i/, /u/) spoken by different speakers (boy, girl, man) and performed delayed-match-to-sample tasks on vowel and speaker identity. Across age groups, similar behavioral accuracy and comparable sound-evoked auditory cortical fMRI responses were observed. Analysis of task-related modulations indicated a developmental enhancement of responses in the (right) superior temporal cortex during the processing of speaker information. This effect was most evident in an analysis based on individually determined voice-sensitive regions. Analysis of age effects indicated that the recruitment of regions in the temporal-parietal cortex and posterior cingulate/cingulate gyrus decreased with development. Beyond age-related changes, the strength of speech-evoked activity in left posterior and right middle superior temporal regions scaled significantly with individual differences in phonological skills. Together, these findings suggest a prolonged development of the cortical functional network for speech and voice processing. This development includes a progressive refinement of the neural mechanisms for the selection and analysis of auditory information relevant to the ongoing behavioral task.

    Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks

    A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by overtly repeating the last vowel of a target pair, by overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.

    Force Amplitude Modulation of Tongue and Hand Movements

    Rapid, precise movements of the hand and tongue are necessary to complete a wide range of tasks in everyday life. However, the understanding of normal neural control of force production is limited, particularly for the tongue. Functional neuroimaging studies of incremental hand pressure production in healthy adults revealed scaled activations in the basal ganglia, but no imaging studies of tongue force regulation have been reported. The purposes of this study were (1) to identify the neural substrates controlling tongue force for speech and nonspeech tasks, (2) to determine which activations scaled to the magnitude of force produced, and (3) to assess whether positional modifications influenced maximum pressures and the accuracy of pressure target matching for hand and tongue movements. Healthy older adults compressed small plastic bulbs in the oral cavity (for speech and nonspeech tasks) and in the hand at specified fractions of maximum voluntary contraction while magnetic resonance images were acquired. Volume-of-interest analysis at individual and group levels outlined a network of neural substrates controlling tongue speech and nonspeech movements. Repeated-measures analysis revealed differences in percentage signal change and activation volume across task and effort level in some brain regions. Actual pressures and the accuracy of pressure matching were influenced by effort level in all tasks and by body position in the hand squeeze task. The current results can serve as a basis for comparison for tongue movement control in individuals with neurological disease. Group differences in motor control mechanisms may help explain the differential response of limb and tongue movements to medical interventions, as occurs in Parkinson disease (PD), and ultimately may lead to more focused interventions for dysarthria in several conditions such as PD.
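
    The volume-of-interest analysis described above rests on a standard computation: averaging the BOLD time course within a region and expressing it as percent signal change relative to a baseline. The sketch below is a minimal illustration of that step only, not the study's actual pipeline; the file names, the mask, the baseline volumes, and the effort-level block boundaries are hypothetical placeholders.

```python
# Minimal sketch: percent signal change within a volume of interest.
# File names, mask, baseline window, and block timing are hypothetical.
import numpy as np
import nibabel as nib

bold = nib.load("sub01_tongue_task_bold.nii.gz").get_fdata()          # hypothetical 4D run (x, y, z, t)
roi = nib.load("basal_ganglia_mask.nii.gz").get_fdata().astype(bool)  # hypothetical VOI mask

roi_ts = bold[roi].mean(axis=0)          # mean time course across voxels in the VOI
baseline = roi_ts[:10].mean()            # assume the first 10 volumes are rest
pct_change = 100.0 * (roi_ts - baseline) / baseline

# Mean percent signal change per effort level (block boundaries are assumptions)
blocks = {"25% MVC": slice(10, 30), "50% MVC": slice(40, 60), "75% MVC": slice(70, 90)}
for label, sl in blocks.items():
    print(label, round(float(pct_change[sl].mean()), 2))
```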

    Sensorimotor Modulations by Cognitive Processes During Accurate Speech Discrimination: An EEG Investigation of Dorsal Stream Processing

    Internal models mediate the transmission of information between anterior and posterior regions of the dorsal stream in support of speech perception, though it remains unclear how this mechanism responds to cognitive processes in service of task demands. The purpose of the current study was to identify the influences of attention and working memory on sensorimotor activity across the dorsal stream during speech discrimination, with set size and signal clarity employed to modulate stimulus predictability and the time course of increased task demands, respectively. Independent Component Analysis of 64-channel EEG data identified bilateral sensorimotor mu and auditory alpha components from a cohort of 42 participants, indexing activity from anterior (mu) and posterior (auditory) aspects of the dorsal stream. Time-frequency (ERSP) analysis evaluated task-related changes in focal activation patterns, with phase coherence measures employed to track patterns of information flow across the dorsal stream. ERSP decomposition of mu clusters revealed event-related desynchronization (ERD) in beta and alpha bands, which were interpreted as evidence of forward (beta) and inverse (alpha) internal modeling across the time course of perception events. Stronger pre-stimulus mu alpha ERD in small-set discrimination tasks was interpreted as more efficient attentional allocation due to the reduced sensory search space enabled by predictable stimuli. Mu-alpha and mu-beta ERD in the peri- and post-stimulus periods were interpreted within the framework of Analysis by Synthesis as evidence of working memory activity for stimulus processing and maintenance, with weaker activity in degraded conditions suggesting that covert rehearsal mechanisms are sensitive to the quality of the stimulus being retained in working memory. Similar ERSP patterns across conditions, despite the differences in stimulus predictability and clarity, suggest that subjects may have adapted to the tasks. In light of this, future studies of sensorimotor processing should consider the ecological validity of the tasks employed, as well as the larger cognitive environment in which tasks are performed. The absence of interpretable patterns of mu-auditory coherence modulation across the time course of speech discrimination highlights the need for more sensitive analyses to probe dorsal stream connectivity.
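
    For readers unfamiliar with this kind of EEG workflow, the sketch below shows the general shape of an ICA plus time-frequency (ERSP-style) analysis using MNE-Python. It is a hedged illustration, not the study's pipeline: the file name, event code, frequency range, and baseline window are assumptions, and the study analyzed clustered mu and auditory-alpha components rather than channel-level data as done here.

```python
# Minimal sketch of an ICA + time-frequency (ERSP-style) EEG analysis with MNE-Python.
# The file name, event code, frequencies, and baseline window are illustrative assumptions.
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.time_frequency import tfr_morlet

raw = mne.io.read_raw_fif("speech_discrimination_raw.fif", preload=True)  # hypothetical recording
raw.filter(1.0, 40.0)  # band-pass before ICA

# Decompose the 64-channel EEG into independent components (e.g. sensorimotor mu, auditory alpha)
ica = ICA(n_components=30, random_state=42)
ica.fit(raw)

# Epoch around stimulus onset (event code 1 is an assumption)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"vowel": 1}, tmin=-1.0, tmax=2.0,
                    baseline=None, preload=True)

# Time-frequency decomposition over alpha and beta bands; baseline-normalized power
freqs = np.arange(8.0, 31.0, 1.0)
power = tfr_morlet(epochs, freqs=freqs, n_cycles=freqs / 2.0,
                   return_itc=False, average=True)
power.apply_baseline(baseline=(-0.8, -0.3), mode="logratio")  # ERD appears as negative values
```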

    Neuronal underpinnings of stuttering

    Fluent speech production depends on robust connections between brain regions that are crucial for auditory processing, motor planning and execution. The ability of the speech apparatus to produce an effortless, continuous and uninterrupted flow of speech is compromised in people who stutter (PWS). Stuttering is a multifactorial speech fluency disorder that results in unintended occurrences of sound and syllable repetitions, prolongations, and blocks, particularly on the initial part of words and sentences. Decades of research on the topic have produced an extensive amount of data, but the mechanism behind the symptoms associated with stuttering is not clear. The aim of the present study was to investigate the neuronal basis of stuttering by examining the brain's neurochemistry with proton magnetic resonance spectroscopy (1H-MRS). In particular, we looked at N-acetyl aspartate (NAA), an aggregate of glutamate and glutamine (Glx), and myo-inositol (mI) as potential candidates for understanding the biochemical manifestations of stuttering. We also collected behavioral data from the PWS group and correlated it with their spectroscopy results. Finally, we combined measurements of the neuronal activity behind speech production, probed with functional magnetic resonance imaging (fMRI), with the 1H-MRS measurements to obtain information on the interaction between neuronal activation and underlying neurochemical function. The inferior frontal gyrus (IFG) was chosen as the target region for this investigation, given its involvement in speech motor control. The metabolite mI showed the main group effect: the cerebral metabolite pattern of PWS is characterized by a pronounced reduction in myo-inositol level in the IFG. Myo-inositol is considered a glial marker, and its concentration may reflect the condition of myelin in the brain. Myelination is a maturation process of nerve fibers that facilitates rapid neural innervation of the speech muscles underlying speech fluency. Hence, given the existing literature on the topic and our main findings, we suggest that delayed or impaired myelination of the speech-related neuronal network in the postnatal period might be responsible for the later development of stuttering.

    Leveraging Spatiotemporal Relationships of High-frequency Activation in Human Electrocorticographic Recordings for Speech Brain-Computer-Interface

    Speech production is one of the most intricate yet natural human behaviors and is most keenly appreciated when it becomes difficult or impossible, as is the case for patients suffering from locked-in syndrome. Burgeoning understanding of the various cortical representations of language has brought into question the viability of a speech neuroprosthesis using implanted electrodes. The temporal resolution of intracranial electrophysiological recordings, frequently billed as a great asset of electrocorticography (ECoG), has actually been a hindrance as speech decoders have struggled to take advantage of this timing information. There have been few demonstrations of how well a speech neuroprosthesis will realistically generalize across contexts when constructed using causal feature extraction and language models that can be applied and adapted in real time. The research detailed in this dissertation aims primarily to characterize the spatiotemporal relationships of high-frequency activity across ECoG arrays during word production. Once identified, these relationships map to motor and semantic representations of speech through the use of algorithms and classifiers that rapidly quantify these relationships in single trials. The primary hypothesis put forward by this dissertation is that the onset, duration and temporal profile of high-frequency activity in ECoG recordings is a useful feature for speech decoding. These features have rarely been used in state-of-the-art speech decoders, which tend to produce output from instantaneous high-frequency power across cortical sites, or rely upon precise behavioral time-locking to take advantage of high-frequency activity at several time points relative to behavioral onset times. This hypothesis was examined in three separate studies. First, software was created that rapidly characterizes spatiotemporal relationships of neural features. Second, semantic representations of speech were examined using these spatiotemporal features. Finally, utterances were discriminated in single trials with low latency and high accuracy using spatiotemporal matched filters in a neural keyword-spotting paradigm. Outcomes from this dissertation inform implant placement for a human speech prosthesis and provide the scientific and methodological basis to motivate further research on an implant specifically for speech-based brain-computer interfaces.
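
    As a concrete, hedged illustration of the spatiotemporal matched-filter idea used for keyword spotting, the sketch below slides a (channels x time) template of high-frequency activity over a continuous feature stream and scores each alignment by normalized cross-correlation; detections are declared where the score crosses a threshold. The array shapes, the synthetic data, and the threshold are assumptions for illustration, not the dissertation's implementation.

```python
# Minimal sketch: spatiotemporal matched filtering of high-frequency ECoG features.
# Shapes, the threshold, and the synthetic data are assumptions for illustration.
import numpy as np

def matched_filter_scores(features, template):
    """Slide a (channels x time) template over continuous (channels x time) features
    and return a normalized cross-correlation score for every alignment."""
    _, t_len = template.shape
    n_steps = features.shape[1] - t_len + 1
    tmpl = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.empty(n_steps)
    for t in range(n_steps):
        window = features[:, t:t + t_len]
        win = (window - window.mean()) / (window.std() + 1e-12)
        scores[t] = np.sum(win * tmpl) / tmpl.size  # normalized cross-correlation
    return scores

# Toy usage: 64 electrodes, 10 s of high-gamma power sampled at 100 Hz, a 500 ms word template
rng = np.random.default_rng(0)
features = rng.standard_normal((64, 1000))
template = rng.standard_normal((64, 50))
scores = matched_filter_scores(features, template)
detections = np.flatnonzero(scores > 0.1)  # keyword "hits"; threshold is arbitrary here
```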

    The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study

    Carminati MN, Knoeferle P. The Processing of Emotional Sentences by Young and Older Adults: A Visual World Eye-movement Study. Presented at Architectures and Mechanisms for Language Processing (AMLaP), Riva del Garda, Italy.

    The cognitive and neural mechanisms involved in motor imagery of speech

    Inner speech is a common phenomenon that influences motivation, problem-solving and self-awareness. Motor imagery of speech refers to the simulation of speech that gives rise to the experience of inner speech. Substantial evidence exists that several cortical areas are recruited in general motor imagery processes, including visual and speech motor imagery, but the evidence for primary motor cortex involvement is less clear. One influential model proposes that motor cortex is recruited during speech motor imagery, while another prominent model suggests motor cortex is bypassed. This thesis presents six experiments that explore the role of motor cortex in speech motor imagery. Experiments 1-3 build on established visual motor imagery tasks and expand these tasks to the speech motor imagery domain for the first time, using behavioural (Experiments 1 and 2) and neuroimaging methods (Experiment 3). Experiment 4 uses transcranial magnetic stimulation to explore motor cortex recruitment during a speech imagery condition, relative to a motor execution and a baseline condition, in hand and lip muscles. Experiments 5 and 6 use transcranial magnetic stimulation to explore speech motor imagery in tongue muscles relative to a hearing and a baseline condition. The results show that recruitment of motor cortex during speech motor imagery is modulated by task demands: simple speech stimuli do not recruit motor cortex, while complex speech stimuli are more likely to do so. The results have consequences specifically for models that always or never implicate motor cortex: it appears that complex stimuli require more active simulation than simple stimuli. In turn, the results suggest that complex inner speech experiences are linked to motor cortex recruitment. These findings have important ramifications for atypical populations whose inner speech experience may be impaired, such as those who experience auditory verbal hallucinations, or those with autism spectrum disorder.