
    Functional role of delta and theta band oscillations for auditory feedback processing during vocal pitch motor control

    The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP) and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ±100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5-8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1-4 Hz) that emerged at approximately 1 second after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control.
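
    As a rough illustration of the analysis idea (not the authors' pipeline), the sketch below separates evoked (phase-locked) from induced (non-phase-locked) band power in epoched EEG: evoked power is computed from the trial average, while induced power is computed after subtracting that average from every trial. The sampling rate, channel and data array are placeholder assumptions.

        # Minimal sketch, assuming `epochs` is a (n_trials, n_samples) array from one
        # fronto-central channel sampled at `fs`; data here are random placeholders.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_power(x, fs, lo, hi):
            """Band-pass a signal and return its instantaneous power envelope."""
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

        fs = 500.0                                   # assumed sampling rate (Hz)
        epochs = np.random.randn(120, int(2 * fs))   # placeholder: 120 trials, 2 s each

        erp = epochs.mean(axis=0)                    # phase-locked (evoked) average

        # Evoked theta (5-8 Hz): band power of the trial average.
        evoked_theta = band_power(erp, fs, 5.0, 8.0)

        # Induced delta (1-4 Hz): remove the ERP from every trial, then average
        # the single-trial power envelopes.
        induced_delta = band_power(epochs - erp, fs, 1.0, 4.0).mean(axis=0)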

    Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Background: The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in the recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect has been suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production, in comparison with passive listening to the playback of identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch-shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results: Suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of the PSS increased to 200 cents, and was almost completely eliminated in response to 400-cent stimuli. Conclusions: The findings suggest that the brain utilizes motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents, and its elimination for 400-cent pitch-shifted voice feedback, support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to detect and correct unexpected errors in the feedback of self-produced voice pitch more effectively than in externally generated sounds.
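
    The contrast described above can be illustrated with a hedged sketch: N1 suppression is quantified per pitch-shift magnitude as the difference between the passive-listening and active-vocalization ERPs in an assumed 80-120 ms window. Condition names, the channel, the window and the data are placeholders, not the study's code.

        import numpy as np

        fs = 500.0
        t = np.arange(-0.1, 0.5, 1 / fs)                  # epoch time axis (s)
        n1_win = (t >= 0.08) & (t <= 0.12)                # assumed N1 window: 80-120 ms

        magnitudes = [0, 50, 100, 200, 400]               # pitch-shift stimulus size (cents)
        # erp[condition][magnitude] -> averaged ERP at a fronto-central site (placeholder data)
        erp = {cond: {m: np.random.randn(t.size) for m in magnitudes}
               for cond in ("speak", "listen")}

        for m in magnitudes:
            n1_speak = erp["speak"][m][n1_win].mean()
            n1_listen = erp["listen"][m][n1_win].mean()
            # Positive values mean the N1 is less negative during vocal production,
            # i.e. the speaking-induced suppression is larger.
            suppression = n1_speak - n1_listen
            print(f"PSS {m:+d} cents: N1 suppression = {suppression:.2f} (a.u.)")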

    The effects of stimulus complexity on the preattentive processing of self-generated and nonself voices: an ERP study

    The ability to differentiate one's own voice from the voice of somebody else plays a critical role in successful verbal self-monitoring processes and in communication. However, most of the existing studies have only focused on the sensory correlates of self-generated voice processing, whereas the effects of attentional demands and stimulus complexity on self-generated voice processing remain largely unknown. In this study, we investigated the effects of stimulus complexity on the preattentive processing of self and nonself voice stimuli. Event-related potentials (ERPs) were recorded from 17 healthy males who watched a silent movie while ignoring prerecorded self-generated (SGV) and nonself (NSV) voice stimuli, consisting of a vocalization (vocalization category condition: VCC) or of a disyllabic word (word category condition: WCC). All voice stimuli were presented as standard and deviant events in four distinct oddball sequences. The mismatch negativity (MMN) ERP component peaked earlier for NSV than for SGV stimuli. Moreover, when compared with SGV stimuli, the P3a amplitude was increased for NSV stimuli in the VCC only, whereas in the WCC no significant differences were found between the two voice types. These findings suggest differences in the time course of automatic detection of a change in voice identity. In addition, they suggest that stimulus complexity modulates the magnitude of the orienting response to SGV and NSV stimuli, extending previous findings on self-voice processing. This work was supported by Grant Numbers IF/00334/2012, PTDC/PSI-PCL/116626/2010, and PTDC/MHN-PCN/3606/2012, funded by the Fundação para a Ciência e a Tecnologia (FCT, Portugal) and the Fundo Europeu de Desenvolvimento Regional through the European programs Quadro de Referência Estratégico Nacional and Programa Operacional Factores de Competitividade, awarded to A.P.P., and by FCT Doctoral Grant Number SFRH/BD/77681/2011, awarded to T.C.
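
    A hedged sketch of the core oddball measures follows: the MMN and P3a are read from the deviant-minus-standard difference wave, with assumed peak windows. Arrays and windows below are placeholders rather than the study's parameters.

        import numpy as np

        fs = 500.0
        t = np.arange(-0.1, 0.6, 1 / fs)

        # Averaged ERPs at a frontal site for one voice type (e.g. SGV) in one category
        # (e.g. VCC); placeholder data stand in for real standard and deviant averages.
        erp_standard = np.random.randn(t.size)
        erp_deviant = np.random.randn(t.size)

        diff_wave = erp_deviant - erp_standard

        # MMN: most negative deflection in an assumed 100-250 ms window.
        mmn_win = (t >= 0.10) & (t <= 0.25)
        mmn_amplitude = diff_wave[mmn_win].min()
        mmn_latency = t[mmn_win][np.argmin(diff_wave[mmn_win])]

        # P3a: most positive deflection in an assumed 250-400 ms window.
        p3a_win = (t >= 0.25) & (t <= 0.40)
        p3a_amplitude = diff_wave[p3a_win].max()

        print(f"MMN {mmn_amplitude:.2f} at {mmn_latency * 1000:.0f} ms; P3a {p3a_amplitude:.2f}")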

    Voice-selective prediction alterations in nonclinical voice hearers

    Auditory verbal hallucinations (AVH) are a cardinal symptom of psychosis but also occur in 6-13% of the general population. Voice perception is thought to engage an internal forward model that generates predictions, preparing the auditory cortex for upcoming sensory feedback. Impaired processing of sensory feedback in vocalization seems to underlie the experience of AVH in psychosis, but whether this is the case in nonclinical voice hearers remains unclear. The current study used electroencephalography (EEG) to investigate whether and how hallucination predisposition (HP) modulates the internal forward model in response to self-initiated tones and self-voices. Participants varying in HP (based on the Launay-Slade Hallucination Scale) listened to self-generated and externally generated tones or self-voices. HP did not affect responses to self vs. externally generated tones. However, HP altered the processing of the self-generated voice: increased HP was associated with increased pre-stimulus alpha power and increased N1 response to the self-generated voice. HP did not affect the P2 response to voices. These findings confirm that both prediction and comparison of predicted and perceived feedback to a self-generated voice are altered in individuals with AVH predisposition. Specific alterations in the processing of self-generated vocalizations may establish a core feature of the psychosis continuum. The authors gratefully acknowledge all the participants who collaborated in the study, and particularly Dr. Franziska Knolle for feedback on stimulus generation, Carla Barros for help with scripts for EEG time-frequency analysis, and Dr. Celia Moreira for her advice on mixed linear models. This work was supported by the Portuguese Foundation for Science and Technology (FCT; grant numbers PTDC/PSI-PCL/116626/2010, IF/00334/2012, PTDC/MHCPCN/0101/2014) awarded to APP.
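
    The alpha-power finding can be illustrated with a hedged sketch that relates single-subject pre-stimulus alpha (8-12 Hz) power to a hallucination-predisposition score; the data, scale range and spectral-estimation parameters are assumptions, not the study's analysis.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import spearmanr

        fs = 500.0
        n_subjects, n_trials = 30, 100
        # Pre-stimulus segments (-500 to 0 ms): placeholder (subjects, trials, samples) data.
        prestim = np.random.randn(n_subjects, n_trials, int(0.5 * fs))
        lshs = np.random.uniform(0, 48, n_subjects)        # placeholder LSHS scores

        freqs, psd = welch(prestim, fs=fs, nperseg=128, axis=-1)
        alpha = (freqs >= 8) & (freqs <= 12)
        alpha_power = psd[..., alpha].mean(axis=-1).mean(axis=-1)  # band mean, then trial mean

        rho, p = spearmanr(lshs, alpha_power)
        print(f"HP vs. pre-stimulus alpha power: rho = {rho:.2f}, p = {p:.3f}")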

    Human Auditory Cortical Activation during Self-Vocalization

    During speaking, auditory feedback is used to adjust vocalizations. The brain systems mediating this integrative ability have been investigated using a wide range of experimental strategies. In this report we examined how vocalization alters speech-sound processing within auditory cortex by directly recording evoked responses to vocalizations and playback stimuli using intracranial electrodes implanted in neurosurgery patients. Several new findings resulted from these high-resolution invasive recordings in human subjects. Suppressive effects of vocalization were found to occur only within circumscribed areas of auditory cortex. In addition, at a smaller number of sites, the opposite pattern was seen; cortical responses were enhanced during vocalization. This increase in activity was reflected in high gamma power changes, but was not evident in the averaged evoked potential waveforms. These new findings support forward models for vocal control in which efference copies of premotor cortex activity modulate sub-regions of auditory cortex.
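
    A minimal sketch of the high-gamma measure mentioned above, assuming band limits of 70-150 Hz and placeholder epochs rather than the actual intracranial recordings:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def high_gamma_power(x, fs, lo=70.0, hi=150.0):
            """Band-pass in the high-gamma range and return the mean power envelope."""
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return np.mean(np.abs(hilbert(filtfilt(b, a, x))) ** 2)

        fs = 2000.0                                   # assumed intracranial sampling rate
        speak = np.random.randn(60, int(fs))          # placeholder: 60 vocalization epochs, 1 s
        listen = np.random.randn(60, int(fs))         # placeholder: 60 playback epochs, 1 s

        hg_speak = np.mean([high_gamma_power(tr, fs) for tr in speak])
        hg_listen = np.mean([high_gamma_power(tr, fs) for tr in listen])
        # Ratios below 1 indicate vocalization-related suppression at this site,
        # ratios above 1 indicate enhancement.
        print("high-gamma ratio (speak / listen):", hg_speak / hg_listen)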

    The role of the cerebellum in adaptation: ALE meta‐analyses on sensory feedback error

    It is widely accepted that unexpected sensory consequences of self‐action engage the cerebellum. However, we currently lack consensus on where in the cerebellum we find fine‐grained differentiation to unexpected sensory feedback. This may result from methodological diversity in task‐based human neuroimaging studies that experimentally alter the quality of self‐generated sensory feedback. We gathered existing studies that manipulated sensory feedback using a variety of methodological approaches and performed activation likelihood estimation (ALE) meta‐analyses. Only half of these studies reported cerebellar activation, with considerable variation in spatial location. Consequently, the ALE analyses did not reveal a significantly increased likelihood of activation in the cerebellum, despite the broad scientific consensus on the cerebellum's involvement. In light of the high degree of methodological variability in published studies, we tested for statistical dependence between methodological factors that varied across the published studies. Experiments that elicited an adaptive response to continuously altered sensory feedback reported activation in the cerebellum more frequently than experiments that did not induce adaptation. These findings may explain the surprisingly low rate of significant cerebellar activation across brain-imaging studies investigating unexpected sensory feedback. Furthermore, limitations of functional magnetic resonance imaging in probing the cerebellum could also play a role, as climbing-fiber activity associated with feedback-error processing may not be captured by it. We provide methodological recommendations that may guide future studies.
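
    The ALE logic can be sketched conceptually: each experiment's reported foci are modelled as Gaussian blobs, the blobs are combined into a per-experiment modelled-activation map, and the ALE map is the voxel-wise probabilistic union across experiments. The toy grid, kernel width and foci below are illustrative; real analyses use dedicated tools such as GingerALE or NiMARE.

        import numpy as np

        GRID = (20, 20, 20)                           # toy voxel grid

        def modeled_activation(foci, sigma=2.0):
            """Voxel-wise maximum of Gaussian blobs centred on one experiment's foci."""
            zz, yy, xx = np.indices(GRID)
            ma = np.zeros(GRID)
            for fz, fy, fx in foci:
                d2 = (zz - fz) ** 2 + (yy - fy) ** 2 + (xx - fx) ** 2
                ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
            return ma

        # Foci (voxel coordinates) reported by three hypothetical experiments.
        experiments = [
            [(10, 10, 10), (5, 12, 8)],
            [(11, 9, 10)],
            [(10, 11, 11), (15, 4, 6)],
        ]

        ma_maps = np.stack([modeled_activation(f) for f in experiments])
        ale = 1.0 - np.prod(1.0 - ma_maps, axis=0)    # probabilistic union across experiments
        peak = np.unravel_index(ale.argmax(), ale.shape)
        print("peak ALE value:", round(float(ale.max()), 3), "at voxel", peak)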

    Auditory event-related potentials

    Auditory event-related potentials are electric potentials (AERP, AEP) and magnetic fields (AEF) generated by the synchronous activity of large neural populations in the brain, which are time-locked to an actual or expected sound event.
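
    As a toy illustration of the time-locking idea, averaging many stimulus-locked epochs cancels background EEG that is not phase-locked to sound onset, leaving the evoked response; the waveform and noise level below are invented for demonstration.

        import numpy as np

        fs = 500.0
        t = np.arange(-0.1, 0.4, 1 / fs)
        n_trials = 200

        # Toy N1-like deflection at ~100 ms, buried in much larger ongoing-EEG noise.
        evoked = -2e-6 * np.exp(-((t - 0.1) ** 2) / (2 * 0.02 ** 2))
        epochs = evoked + 20e-6 * np.random.randn(n_trials, t.size)

        erp = epochs.mean(axis=0)   # time-locked averaging recovers the evoked potential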

    Local discriminant wavelet packet basis for voice pathology classification

    Diagnosis of pathological voice is one of the most important issues in biomedical applications of speech technology. Several approaches exist for separating pathological from normal voice signals, but few are sophisticated enough to distinguish two or more kinds of speech pathology from each other. This paper introduces an algorithm that discriminates voice pathology signals from each other via adaptive growth of a wavelet packet tree, based on the criterion of local discriminant bases (LDB). Moreover, a genetic algorithm is employed to select the best feature set, and a support vector machine (SVM) serves as the classifier to obtain the best possible results. To evaluate the proposed approach, we apply the algorithm to separate polyp from other pathologies such as keratosis leukoplakia, adductor spasmodic dysphonia, etc. Experimental results show the superior performance of this combinational approach against its incomplete versions; for example, in the case of separating polyp and nodule, the proposed approach achieves 85% performance against 80% when only the complete wavelet packet features are used without applying the genetic algorithm.
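
    A hedged sketch of the general pipeline follows: wavelet packet decomposition of voice frames, sub-band energies as features, and an SVM classifier. The adaptive LDB-driven tree growth and the genetic-algorithm feature selection described in the paper are omitted here; the wavelet choice, frame length and data are assumptions.

        import numpy as np
        import pywt
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def wp_energies(frame, wavelet="db4", level=4):
            """Energy of each terminal wavelet packet node of a 1-D voice frame."""
            wp = pywt.WaveletPacket(data=frame, wavelet=wavelet, maxlevel=level)
            return np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="natural")])

        # Placeholder data: 40 frames per class (e.g. polyp vs. nodule), 1024 samples each.
        rng = np.random.default_rng(0)
        frames = rng.standard_normal((80, 1024))
        labels = np.repeat([0, 1], 40)

        X = np.vstack([wp_energies(f) for f in frames])
        scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
        print("cross-validated accuracy:", scores.mean())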
