
    Semantic context and visual feature effects in object naming: an fMRI study using arterial spin labeling

    Previous behavioral studies have reported a robust effect of increased naming latencies when objects to be named were blocked within semantic category, compared to items blocked between categories. This semantic context effect has been attributed to various mechanisms, including inhibition or excitation of lexico-semantic representations and incremental learning of associations between semantic features and names, and is hypothesized to increase demands on verbal self-monitoring during speech production. Objects within categories also share many visual structural features, introducing a potential confound when interpreting the level at which the context effect might occur. Consistent with previous findings, we report a significant increase in response latencies when naming categorically related objects within blocks, an effect associated with increased perfusion fMRI signal bilaterally in the hippocampus and in the left middle to posterior superior temporal cortex. No perfusion changes were observed in the middle section of the left middle temporal cortex, a region associated with retrieval of lexical-semantic information in previous object naming studies. Although a manipulation of visual feature similarity did not influence naming latencies, we observed perfusion increases in the perirhinal cortex for naming objects with similar visual features, which interacted with the semantic context in which objects were named. These results support the view that the semantic context effect in object naming arises from an incremental learning mechanism and involves increased demands on verbal self-monitoring.

    Spoken language processing: piecing together the puzzle

    Attempting to understand the fundamental mechanisms underlying spoken language processing, whether it is viewed as behaviour exhibited by human beings or as a faculty simulated by machines, is one of the greatest scientific challenges of our age. Despite tremendous achievements over the past 50 or so years, there is still a long way to go before we reach a comprehensive explanation of human spoken language behaviour and can create a technology with performance approaching or exceeding that of a human being. It is argued that progress is hampered by the fragmentation of the field across many different disciplines, coupled with a failure to create an integrated view of the fundamental mechanisms that underpin one organism's ability to communicate with another. This paper weaves together accounts from a wide variety of disciplines concerned with the behaviour of living systems, many of them outside the normal realms of spoken language, and compiles them into a new model: PRESENCE (PREdictive SENsorimotor Control and Emulation). It is hoped that the results of this research will provide a sufficient glimpse into the future to give breath to a new generation of research into spoken language processing by mind or machine.

    Towards a new model of verbal monitoring

    Like all human activities, verbal communication is fraught with errors. Humans are estimated to produce around 16,000 words per day, but the word selected for production is not always correct, nor is articulation always flawless. To facilitate communication, however, it is important to limit the number of errors, which is accomplished by the verbal monitoring mechanism. A century of research has uncovered a number of properties of the mechanisms at work during verbal monitoring, and over a dozen routes for verbal monitoring have been postulated. To date, however, a complete account of verbal monitoring does not exist. In the current paper we first outline the properties of verbal monitoring that have been empirically demonstrated. This is followed by a discussion of current verbal monitoring models: the perceptual loop theory, conflict monitoring, the hierarchical state feedback control model, and the forward model theory. Each of these models is evaluated against empirical findings and theoretical considerations. We then outline lacunae of current theories, which we address with a proposal for a new model of verbal monitoring for production and perception, based on conflict monitoring models. This novel model also suggests a mechanism by which a detected error leads to a correction. The proposed error resolution mechanism is then tested in a computational model. Finally, we outline the advances and predictions of the model.

    Expectancy changes the self-monitoring of voice identity

    Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed investigating how levels of self-voice certainty alter activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. fMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during action was demonstrated only when hearing the voice of another speaker and not when attribution was uncertain. Further research on the link between right aSTG and IFG is required and may establish a threshold for monitoring voice identity in action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.

    Inner Speech's Relationship With Overt Speech in Poststroke Aphasia

    PURPOSE: Relatively preserved inner speech alongside poor overt speech has been documented in some persons with aphasia (PWA), but the relationship of overt speech with inner speech is still largely unclear, as few studies have directly investigated these factors. The present study investigates the relationship of relatively preserved inner speech in aphasia with selected measures of language and cognition. METHODS: Thirty-eight persons with chronic aphasia (27 men, 11 women; average age 64.53 ± 13.29 years, time since stroke 8–111 months) were classified as having relatively preserved inner and overt speech (n = 21), relatively preserved inner speech with poor overt speech (n = 8), or not classified due to insufficient measurements of inner and/or overt speech (n = 9). Inner speech scores (by group) were correlated with selected measures of language and cognition from the Comprehensive Aphasia Test (Swinburn et al., 2004). RESULTS: The group with poor overt speech showed a significant relationship of inner speech with overt naming (r = .95, p < .01) and with mean length of utterance produced during a written picture description (r = .96, p < .01). Correlations between inner speech and language and cognition factors were not significant for the group with relatively good overt speech. CONCLUSIONS: As in previous research, we show that relatively preserved inner speech is found alongside otherwise severe production deficits in PWA. PWA with poor overt speech may rely more on preserved inner speech for overt picture naming (perhaps due to shared resources with verbal working memory) and for written picture description (perhaps due to perceived task difficulty). Assessments of inner speech may be useful as a standard component of aphasia screening, and therapy focused on improving and using inner speech may prove clinically worthwhile.

    The linguistic and cognitive mechanisms underlying language tests in healthy adults: a principal component analysis

    For a more accurate and time-efficient language assessment process, it is important to identify the cognitive mechanisms that sustain commonly used language tasks. One way to do so is to explore the shared variance across language tasks using principal component analysis. Few studies have applied this technique to investigate these mechanisms in normal language functioning. Our main goal was therefore to explore how a set of language tasks group together, in order to investigate the cognitive mechanisms underlying commonly used tasks. We assessed 201 healthy participants aged between 18 and 75 years (mean = 45.29, SD = 15.06) with between 5 and 23 years of formal education (mean = 11.10, SD = 4.68); 62.87% were female. We used two language batteries: the Montreal-Toulouse language assessment battery and the Montreal Communication Evaluation Battery – brief version. Using a principal component analysis with Direct Oblimin rotation, we identified four language components: pictorial semantics (auditory comprehension, oral naming, and written naming tasks), language-executive (unconstrained, semantic, and phonological verbal fluency tasks), transcoding and semantics (reading, dictation, and semantic judgment tasks), and pragmatics (indirect speech act interpretation and metaphor interpretation tasks). 
    These four components explained 59.64% of the total variance. Secondarily, we verified the association between these components and two executive measures in a subset of 33 participants. Cognitive flexibility was assessed with the time B minus time A score of the Trail Making Test, and working memory with the total number of correct answers on the n-back test. The language-executive component was associated with better cognitive flexibility (r = -.355) and the transcoding and semantics component with better working memory performance (r = .397). Our findings confirm the heterogeneity of the processes underlying language tasks and their intrinsic relationship with other cognitive components, such as executive functions.
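The component-to-executive associations above are simple Pearson correlations between component scores and the executive measures (TMT time B minus time A; n-back correct answers). As a minimal sketch with hypothetical data (not the study's), Pearson's r can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: a component score against a TMT B-A time (seconds).
# A negative r, as in the reported r = -.355, means higher component scores
# go with smaller B-A differences, i.e. better cognitive flexibility.
component = [0.2, 1.1, -0.5, 0.8, -1.2, 0.4]
tmt_b_minus_a = [38, 22, 55, 30, 61, 35]
print(round(pearson_r(component, tmt_b_minus_a), 3))
```

Note the sign convention: because a larger B minus A difference indicates worse flexibility, a negative correlation with a component score reflects a positive association with the underlying ability.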

    The effect of articulation and word-meaning on gait and balance in people with Parkinson’s disease

    Performing two tasks simultaneously is ubiquitous in everyday life, and the resulting interference may degrade performance on one or both of the tasks. This is potentially important, as diminished performance of a postural task places an individual at greater risk for falling, especially in a movement-impaired population such as individuals with Parkinson’s disease (PD). Many secondary tasks have been shown to reduce the performance of gait and balance, but to date only one study has investigated the effects of a verbal secondary task that systematically controls articulatory, cognitive, and linguistic demands. Previous research suggested that these components have independent effects on gait and balance within a sample of healthy young adults. The purpose of the present study was to replicate this previous research within a sample of healthy older adults (n = 20) and a sample of individuals with PD (n = 20), and to evaluate the effects of individual differences in information processing speed on dual-task interference. Results suggested that oral-motor movement significantly affected parameters of gait and balance, with men displaying significantly more dual-task interference than women. The addition of speech and lexicality to the secondary task did not significantly increase interference during the gait or balance protocol. Results also indicated that dual-task interference is directly related to individual differences in information processing speed, a finding that supports the capacity-sharing model of dual-task interference.

    At the interface: Dynamic interactions of explicit and implicit language knowledge.

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/139748/1/AttheInterface.pd