
    Acoustic Detail But Not Predictability of Task-Irrelevant Speech Disrupts Working Memory

    Attended speech is comprehended better not only if more acoustic detail is available, but also if it is semantically highly predictable. But can more acoustic detail or higher predictability turn into a disadvantage and distract a listener if the speech signal is to be ignored? And does the degree of distraction increase for older listeners, who typically show a decline in attentional control? Adopting the irrelevant-speech paradigm, we tested whether younger (age 23–33 years) and older (60–78 years) listeners’ working memory for the serial order of spoken digits would be disrupted by task-irrelevant speech varying in acoustic detail (using noise-vocoding) and semantic predictability (of sentence endings). More acoustic detail, but not higher predictability, of task-irrelevant speech aggravated memory interference. This pattern did not differ between younger and older listeners, despite generally lower performance in the older group. Our findings suggest that the focus of attention determines how acoustics and predictability affect the processing of speech. First, whereas more acoustic detail is known to enhance comprehension of and memory for attended speech, we demonstrate here that more acoustic detail of ignored speech also increases distraction. Second, although higher predictability is known to enhance comprehension of attended speech under acoustically adverse conditions, higher predictability of ignored speech exerted no distracting effect on working memory performance in younger or older listeners. Features that make attended speech easier to comprehend thus do not necessarily enhance distraction by ignored speech.
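    Noise-vocoding, the manipulation of acoustic detail used in this study, replaces spectral fine structure with band-limited noise carriers while preserving each band's amplitude envelope; fewer bands means less acoustic detail. Below is a minimal sketch in Python, assuming scipy and a mono signal array; the channel count, band edges, and filter order are illustrative choices, not the study's exact parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=4, f_lo=80.0, f_hi=7000.0):
        """Replace spectral detail with envelope-modulated noise bands.

        Illustrative parameters; assumes fs > 2 * f_hi. Fewer channels
        yield a signal with less acoustic detail.
        """
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
        noise = np.random.randn(len(signal))
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            env = np.abs(hilbert(band))        # amplitude envelope of this band
            carrier = sosfiltfilt(sos, noise)  # noise restricted to the same band
            out += env * carrier               # envelope-modulated noise carrier
        return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
    ```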

    Perception of acoustically complex phonological features in vowels is reflected in the induced brain-magnetic activity

    A central issue in speech recognition is which basic units of speech are extracted by the auditory system and used for lexical access. One suggestion is that complex acoustic-phonetic information is mapped onto abstract phonological representations of speech and that a finite set of phonological features guides speech perception. Previous studies analyzing the N1m component of the auditory evoked field have shown that this holds for the acoustically simple feature place of articulation. Brain-magnetic correlates indexing the extraction of acoustically more complex features, such as lip rounding (ROUND) in vowels, have not yet been identified. The present study uses magnetoencephalography (MEG) to describe the spatiotemporal neural dynamics underlying the extraction of phonological features. We examined the induced electromagnetic brain response to German vowels and found the event-related desynchronization in the upper beta band to be prolonged for those vowels that exhibit the lip-rounding feature (ROUND). It was the presence of this feature, rather than single acoustic parameters such as formant frequencies, that explained the differences between the experimental conditions. We conclude that the prolonged event-related desynchronization in the upper beta band reflects the computational effort of extracting acoustically complex phonological features from the speech signal. The results provide an additional biomagnetic parameter for studying mechanisms of speech perception.
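    Event-related desynchronization (ERD), the dependent measure here, is conventionally quantified as the percentage change of band power relative to a pre-stimulus baseline, with negative values indicating desynchronization. A minimal sketch, assuming epoched single-sensor data as a (trials × samples) array; the beta-band limits and window settings are illustrative, not the study's.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def erd_percent(trials, fs, onset_idx, band=(20.0, 30.0), baseline=(-0.5, 0.0)):
        """ERD% = (event power - baseline power) / baseline power * 100.

        trials: (n_trials, n_samples) epochs aligned to stimulus onset.
        onset_idx: sample index of stimulus onset within each epoch.
        Negative values indicate event-related desynchronization.
        """
        sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
        power = sosfiltfilt(sos, trials, axis=-1) ** 2  # instantaneous band power
        avg = power.mean(axis=0)                        # average across trials
        b0 = onset_idx + int(baseline[0] * fs)          # baseline window (samples)
        b1 = onset_idx + int(baseline[1] * fs)
        ref = avg[b0:b1].mean()                         # mean baseline power
        return (avg - ref) / ref * 100.0                # time course of ERD%
    ```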

    Attentional influences on functional mapping of speech sounds in human auditory cortex

    BACKGROUND: The speech signal contains both information about phonological features, such as place of articulation, and non-phonological features, such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated, as they may be processed in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. RESULTS: During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged, but sources were shifted towards more posterior and more superior locations. CONCLUSIONS: These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules, activated in synchrony but within different regions, for identifying phonological information and for associating it with a particular speaker, suggesting that 'what' processing is more adequately modeled as a stream of parallel stages. The relative activation of these parallel stages can be modulated by attentional or task demands.

    Alpha Rhythms in Audition: Cognitive and Clinical Perspectives

    Like the visual and sensorimotor systems, the auditory system exhibits pronounced alpha-like resting oscillatory activity. Due to the relatively small spatial extent of auditory cortical areas, this rhythmic activity is less obvious and frequently masked by non-auditory alpha generators when recording non-invasively with magnetoencephalography (MEG) or electroencephalography (EEG). Following stimulation with sounds, marked desynchronizations can be observed between 6 and 12 Hz, which can be localized to the auditory cortex. However, knowledge about the functional relevance of the auditory alpha rhythm has remained scarce. Results from the visual and sensorimotor systems have fuelled the hypothesis that alpha activity reflects a state of functional inhibition. The current article has several aims. (1) First, we review and present our own evidence (MEG, EEG, sEEG) for the existence of an auditory alpha-like rhythm independent of visual or motor generators, something that is occasionally met with skepticism. (2) In a second part, we discuss tinnitus and how this audiological symptom may relate to reduced background alpha. The clinical part introduces a method that aims to modulate the neurophysiological activity hypothesized to underlie this distressing disorder: using neurofeedback, one can directly target the relevant oscillatory activity. Preliminary data point to a high potential of this approach for treating tinnitus. (3) Finally, in a cognitive-neuroscientific part, we show that auditory alpha is modulated by anticipation/expectations with and without auditory stimulation. We also introduce ideas and initial evidence that alpha oscillations are involved in the most complex capability of the auditory system, namely speech perception. The evidence presented in this article corroborates findings from other modalities, indicating that alpha-like activity serves a universal inhibitory role across sensory modalities.
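    The neurofeedback method mentioned above rests on a simple loop: estimate ongoing alpha-band power from short EEG buffers and feed a normalized value back to the participant. A minimal sketch of that core computation, assuming single-sensor buffers and scipy; Welch's method and the 8–12 Hz band are generic choices, not the specific protocol referenced in this article.

    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_power(buffer, fs, band=(8.0, 12.0)):
        """Alpha-band power of one EEG buffer via Welch's periodogram."""
        freqs, psd = welch(buffer, fs=fs, nperseg=min(len(buffer), int(fs)))
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[mask], freqs[mask])  # integrate PSD over the band

    def feedback_value(buffer, fs, baseline_power):
        """Current alpha power relative to a resting baseline; values above 1
        indicate alpha enhancement relative to rest."""
        return alpha_power(buffer, fs) / baseline_power
    ```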

    The representation of the verb's argument structure as disclosed by fMRI

    BACKGROUND: In the composition of an event, the verb's argument structure defines the number of participants and their relationships. Previous studies indicated distinct brain responses depending on how many obligatory arguments a verb takes. The present functional magnetic resonance imaging (fMRI) study served to verify the neural structures involved in the processing of German verbs with a one-argument (e.g. "snore") or three-argument (e.g. "gives") structure. In a silent-reading design, verbs were presented either in isolation or with a minimal syntactic context ("snore" vs. "Peter snores"). RESULTS: Reading of isolated one-argument verbs ("snore") produced stronger BOLD responses than three-argument verbs ("gives") in the inferior temporal fusiform gyrus (BA 37) of the left hemisphere, validating previous magnetoencephalographic findings. When presented in context, one-argument verbs ("Peter snores") induced more pronounced activity in the inferior frontal gyrus (IFG) of the left hemisphere than three-argument verbs ("Peter gives"). CONCLUSION: In line with previous studies, our results corroborate the left temporal lobe as the site of representation and the IFG as the site of processing of verbs' argument structure.

    Bir köylü

    The novel Bir Köylü by Şems, serialized in Ahenk. Owing to gaps in the archive, the full text of the novel could not be provided. See the serialization information form.

    Explaining flexible continuous speech comprehension from individual motor rhythms

    When speech is too fast, the tracking of the acoustic signal along the auditory pathway deteriorates, leading to suboptimal speech segmentation and decoding of speech information. Thus, speech comprehension is limited by the temporal constraints of the auditory system. Here we ask whether individual differences in auditory-motor coupling strength in part shape these temporal constraints. In two behavioural experiments, we characterize individual differences in the comprehension of naturalistic speech as a function of the individual synchronization between the auditory and motor systems and the preferred frequencies of these systems. As expected, speech comprehension declined at higher speech rates. Importantly, however, both higher auditory-motor synchronization and higher spontaneous speech motor production rates were predictive of better speech-comprehension performance. Furthermore, performance increased with higher working memory capacity (Digit Span) and higher linguistic, model-based sentence predictability, particularly so at higher speech rates and for individuals with high auditory-motor synchronization. The data provide evidence for a model of speech comprehension in which not only the flexibility of the motor system but also the strength of auditory-motor synchronization plays a modulatory role.
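    The predictor structure described above (speech rate, auditory-motor synchronization, spontaneous motor production rate, Digit Span, sentence predictability, plus the reported interactions) maps onto an ordinary regression. A minimal sketch with statsmodels, assuming a per-trial data frame with hypothetical column and file names; this shows the general form of such a model, not the authors' exact specification.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: 'comprehension' (accuracy), 'rate' (syllables/s),
    # 'sync' (auditory-motor synchronization strength), 'smr' (spontaneous
    # speech motor rate), 'digit_span', and 'predictability' (model-based).
    df = pd.read_csv("comprehension_trials.csv")  # placeholder file name

    # Interactions with rate capture the "particularly at higher speech rates"
    # pattern reported in the abstract.
    model = smf.ols(
        "comprehension ~ rate * sync + rate * predictability + smr + digit_span",
        data=df,
    ).fit()
    print(model.summary())
    ```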

    Frequency-Specific Effects in Infant Electroencephalograms Do Not Require Entrained Neural Oscillations: A Commentary on Köster et al. (2019)

    First paragraph: In our current efforts to understand how psychological phenomena arise from brain activity, neural oscillations have taken center stage. A wide range of findings has linked modulations of oscillatory power, phase, and frequency to various cognitive functions, such as attention, language, and memory (Wang, 2010). Exciting new research has recently focused on the developmental origins and trajectories of neural oscillations: how does the neural oscillatory landscape emerge over development (Schaworonkow & Voytek, 2021), and how do the relationships between oscillations and cognitive function in the adult brain come about?

    Integration of iconic gestures and speech in left superior temporal areas boosts speech comprehension under adverse listening conditions

    Iconic gestures are spontaneous hand movements that illustrate certain contents of speech and, as such, are an important part of face-to-face communication. This experiment targets the brain bases of how iconic gestures and speech are integrated during comprehension. Areas of integration were identified on the basis of two classic properties of multimodal integration: bimodal enhancement and inverse effectiveness (i.e., greater enhancement for the unimodally least effective stimuli). Participants underwent fMRI while being presented with videos of gesture-supported sentences as well as their unimodal components, which allowed us to identify areas showing bimodal enhancement. Additionally, we manipulated the signal-to-noise ratio of the speech (either moderate or good) to probe for integration areas exhibiting the inverse-effectiveness property. Bimodal enhancement was found at the posterior end of the superior temporal sulcus and adjacent superior temporal gyrus (pSTS/STG) in both hemispheres, indicating that the integration of iconic gestures and speech takes place in these areas. Furthermore, we found that the left pSTS/STG specifically showed a pattern of inverse effectiveness, i.e., the neural enhancement for bimodal stimulation was greater under adverse listening conditions. This indicates that activity in this area is boosted when an iconic gesture accompanies an utterance that is otherwise difficult to comprehend. This neural response pattern paralleled the observed behavioral data. The present data extend results from previous gesture-speech integration studies by showing that the pSTS/STG plays a key role in the facilitation of speech comprehension through simultaneous gestural input.
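    The two integration criteria used here can be stated as simple inequalities over condition responses: bimodal enhancement requires the audiovisual response to exceed the stronger unimodal response, and inverse effectiveness requires that enhancement to grow as unimodal effectiveness shrinks. A minimal sketch over per-region mean responses, using illustrative numbers rather than the study's actual estimates.

    ```python
    def enhancement(av, a, v):
        """Multisensory enhancement relative to the best unimodal response."""
        best_unimodal = max(a, v)
        return (av - best_unimodal) / best_unimodal

    # Hypothetical mean responses (arbitrary units) for one region:
    good_snr = enhancement(av=1.10, a=1.00, v=0.60)  # clear speech
    poor_snr = enhancement(av=0.95, a=0.70, v=0.60)  # degraded speech

    bimodal_enhancement = good_snr > 0 and poor_snr > 0
    inverse_effectiveness = poor_snr > good_snr  # larger gain when unimodal input is weaker
    print(bimodal_enhancement, inverse_effectiveness)
    ```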

    Brain regions essential for improved lexical access in an aged aphasic patient: a case report

    BACKGROUND: The relationship between functional recovery after brain injury and concomitant neuroplastic changes is emphasized in recent research. In the present study, we aimed to delineate brain regions essential for language performance in aphasia using functional magnetic resonance imaging with a temporal sparse-sampling acquisition, which allows overt verbal responses to be monitored during scanning. CASE PRESENTATION: An 80-year-old patient with chronic aphasia (2 years post-onset) was investigated before and after intensive language training using an overt picture-naming task. Differential brain activation in the right inferior frontal gyrus for correct word retrieval and errors was found. Improved language performance following therapy was mirrored by increased fronto-thalamic activation, while stability in more general measures of attention/concentration and working memory was assured. Three healthy age-matched control subjects did not show behavioral changes or increased activation when tested repeatedly within the same 2-week interval. CONCLUSION: The results are significant in that the reported changes in brain activation can be unequivocally attributed to the short-term training program and a language domain-specific plasticity process. Moreover, they further challenge the claim of a limited recovery potential in chronic aphasia, even at very old age. Delineating brain regions essential for performance on a single-case basis might have major implications for treatment using transcranial magnetic stimulation.