
    Oscillatory gamma-band activity during the processing of auditory stimuli in short-term memory in MEG

    Recent studies have suggested an important role of cortical gamma oscillatory activity (30-100 Hz) as a correlate of encoding, maintaining and retrieving auditory, visual or tactile information in and from memory. These cortical stimulus representations were shown to be modulated by attentional processes. Gamma-band activity (GBA) occurred as an induced response peaking approximately 200-300 ms after stimulus presentation; induced cortical responses appear as non-phase-locked activity and are assumed to reflect active cortical processing rather than passive perception. Induced GBA has been assumed to reflect differences between experimental conditions containing various stimuli; by contrast, the relationship between specific oscillatory signals and the representation of individual stimuli has remained unclear. The present study aimed to identify such stimulus-specific gamma-band components. We used magnetoencephalography (MEG) to assess gamma activity during an auditory spatial delayed matching-to-sample task. Twenty-eight healthy adults were assigned to one of two groups, R and L, who were presented with only right- or left-lateralized sounds, respectively; the stimuli differed only in their spatial characteristics, while their sound patterns were identical. Two sample stimuli S1 with lateralization angles of either 15° or 45° deviation from the midsagittal plane were used in each group. Participants had to memorize the lateralization angle of S1 and compare it to a second lateralized sound S2 presented after an 800-ms delay phase. S2 had either the same lateralization angle as S1 or a different one. After the presentation of S2, subjects had to indicate whether S1 and S2 matched. Statistical probability mapping was applied to the signals at sensor level to identify spectral amplitude differences between 15° and 45° stimuli. We found distinct gamma-band components reflecting each sample stimulus, with center frequencies ranging between 59 and 72 Hz, in different sensors over parieto-occipital cortex contralateral to the side of stimulation. These oscillations showed maximal spectral amplitudes during the middle 200-300 ms of the delay phase and decreased again towards its end. Additionally, we investigated correlations between the activation strength of the gamma-band components and memory task performance. The magnitude of differentiation between oscillatory components representing 'preferred' and 'nonpreferred' stimuli during the final 100 ms of the delay phase correlated positively with task performance. These findings suggest that the observed gamma-band components reflect the activity of neuronal networks tuned to specific auditory spatial stimulus features. The activation of these networks seems to contribute to the maintenance of task-relevant information in short-term memory
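    Since the abstract's key methodological step is separating induced (non-phase-locked) from evoked gamma activity, here is a minimal Python sketch of that decomposition for a single MEG sensor. It is an illustration only: the 65 Hz centre frequency, the 600 Hz sampling rate, the 7-cycle Morlet wavelet and the random data are assumptions, not parameters reported by the study.

```python
import numpy as np

def morlet(freq, sfreq, n_cycles=7):
    """Complex Morlet wavelet at `freq` Hz, `n_cycles` cycles long."""
    sigma_t = n_cycles / (2 * np.pi * freq)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / sfreq)
    return np.exp(2j * np.pi * freq * t) * np.exp(-(t ** 2) / (2 * sigma_t ** 2))

def induced_amplitude(trials, freq, sfreq):
    """trials: (n_trials, n_times) array from one sensor.
    Induced activity = total amplitude minus the evoked (phase-locked) amplitude."""
    w = morlet(freq, sfreq)
    # transform every single trial, then average -> total gamma amplitude
    total = np.mean([np.abs(np.convolve(tr, w, mode="same")) for tr in trials], axis=0)
    # transform the trial average -> only the phase-locked (evoked) part survives
    evoked = np.abs(np.convolve(trials.mean(axis=0), w, mode="same"))
    return total - evoked  # what remains is the induced, non-phase-locked part

sfreq = 600.0                                            # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, int(1.2 * sfreq)))     # fabricated single-sensor trials
gba = induced_amplitude(trials, freq=65.0, sfreq=sfreq)  # 65 Hz lies in the reported 59-72 Hz range
```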

    Neural dynamics of selective attention to speech in noise

    This thesis investigates how the neural system instantiates selective attention to speech in challenging acoustic conditions, such as spectral degradation and the presence of background noise. Four studies using behavioural measures and magneto- and electroencephalography (M/EEG) recordings were conducted in younger (20–30 years) and older participants (60–80 years). The overall results can be summarized as follows. An EEG experiment demonstrated that slow negative potentials reflect participants' enhanced allocation of attention when they are faced with more degraded acoustics. This basic mechanism of attention allocation was preserved at an older age. A follow-up experiment in younger listeners indicated that attention allocation can be further enhanced in a context of increased task-relevance through monetary incentives. A subsequent study focused on brain oscillatory dynamics in a demanding speech comprehension task. The power of neural alpha oscillations (~10 Hz) reflected a decrease in demands on attention with increasing acoustic detail and, critically, also with increasing predictiveness of the upcoming speech content. Older listeners' behavioural responses and alpha power dynamics were more strongly affected by acoustic detail than younger listeners', indicating that selective attention at an older age is particularly dependent on the sensory input signal. An additional analysis of listeners' neural phase-locking to the temporal envelopes of attended speech and unattended background speech revealed that younger and older listeners show a similar segregation of attended and unattended speech on a neural level. A dichotic listening experiment in the MEG aimed to investigate how neural alpha oscillations support selective attention to speech. Lateralized alpha power modulations in parietal and auditory cortex regions predicted listeners' focus of attention (i.e., left vs right). This suggests that alpha oscillations implement an attentional filter mechanism to enhance the signal and to suppress noise. A final behavioural study asked whether acoustic and semantic aspects of task-irrelevant speech determine how much it interferes with attention to task-relevant speech. Results demonstrated that younger and older adults were more distracted when the acoustic detail of irrelevant speech was enhanced, whereas the predictiveness of irrelevant speech had no effect. All findings of this thesis are integrated in an initial framework for the role of attention in speech comprehension under demanding acoustic conditions
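    The dichotic listening finding rests on lateralized alpha power. As a hedged sketch of how such a lateralization index can be computed, the Python snippet below band-passes toy sensor data in an assumed 8–12 Hz band, extracts power via the Hilbert transform and contrasts the two hemispheres; the sensor grouping, filter order and fabricated data are illustrative assumptions rather than the thesis pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(x, sfreq, band=(8.0, 12.0)):
    """Instantaneous alpha power via band-pass filtering plus the Hilbert transform."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    analytic = hilbert(filtfilt(b, a, x, axis=-1), axis=-1)
    return np.abs(analytic) ** 2

def lateralization_index(left_sensors, right_sensors, sfreq):
    """(right - left) / (right + left); positive values mean more alpha power on the right."""
    p_left = alpha_power(left_sensors, sfreq).mean()
    p_right = alpha_power(right_sensors, sfreq).mean()
    return (p_right - p_left) / (p_right + p_left)

sfreq = 250.0
rng = np.random.default_rng(1)
left = rng.standard_normal((20, int(2 * sfreq)))   # fabricated left-hemisphere sensors
right = rng.standard_normal((20, int(2 * sfreq)))  # fabricated right-hemisphere sensors
ali = lateralization_index(left, right, sfreq)     # sign indicates the attended side
```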

    Shaping memory consolidation via targeted memory reactivation during sleep

    Recent studies have shown that the reactivation of specific memories during sleep can be modulated using external stimulation. Specifically, it has been reported that matching a sensory stimulus (e.g., odor or sound cue) with target information (e.g., pairs of words, pictures, and motor sequences) during wakefulness, and then presenting the cue alone during sleep, facilitates memory of the target information. Thus, presenting learned cues while asleep may reactivate related declarative, procedural, and emotional material, and facilitate the neurophysiological processes underpinning memory consolidation in humans. This paradigm, which has been named targeted memory reactivation, has been successfully used to improve visuospatial and verbal memories, strengthen motor skills, modify implicit social biases, and enhance fear extinction. However, these studies also show that results depend on the type of memory investigated, the task employed, the sensory cue used, and the specific sleep stage of stimulation. Here, we present a review of how memory consolidation may be shaped using noninvasive sensory stimulation during sleep

    The multisensory function of the human primary visual cortex

    It has been nearly 10 years since Ghazanfar and Schroeder (2006) proposed that the neocortex is essentially multisensory in nature. However, it is only recently that sufficient hard evidence supporting this proposal has accrued. We review evidence that activity within the human primary visual cortex plays an active role in multisensory processes and directly impacts behavioural outcomes. This evidence emerges from a full palette of human brain imaging and brain mapping methods with which multisensory processes are quantitatively assessed by taking advantage of the particular strengths of each technique as well as advances in signal analyses. Several general conclusions about multisensory processes in the primary visual cortex of humans are supported relatively solidly. First, haemodynamic methods (fMRI/PET) show that both convergence and integration occur within primary visual cortex. Second, primary visual cortex is involved in multisensory processes during early post-stimulus stages (as revealed by EEG/ERPs/ERFs as well as TMS). Third, multisensory effects in primary visual cortex directly impact behaviour and perception, as revealed by correlational (EEG/ERPs/ERFs) as well as more causal measures (TMS/tACS). While the provocative claim of Ghazanfar and Schroeder (2006) that the whole of the neocortex is multisensory in function has yet to be demonstrated, it can now be considered established in the case of the human primary visual cortex

    Seeing a talking face matters to infants, children and adults: behavioural and neurophysiological studies

    Everyday conversations typically occur face-to-face. Over and above auditory information, visual information from a speaker's face, e.g., the lips and eyebrows, contributes to speech perception and comprehension. The facilitation that visual speech cues bring, termed the visual speech benefit, is experienced by infants, children and adults. Even so, studies on speech perception have largely focused on auditory-only speech, leaving a relative paucity of research on the visual speech benefit. Central to this thesis are the behavioural and neurophysiological manifestations of the visual speech benefit. As the visual speech benefit assumes that a listener is attending to a speaker's talking face, the investigations are conducted in relation to the possible modulating effects of gaze behaviour. Three investigations were conducted. Collectively, these studies demonstrate that visual speech information facilitates speech perception, which has implications for individuals who do not have clear access to the auditory speech signal. The results, for instance the enhancement of 5-month-olds' cortical tracking by visual speech cues and the effect of idiosyncratic differences in gaze behaviour on speech processing, expand knowledge of auditory-visual speech processing and provide a firm basis for new directions in this burgeoning and important area of research
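    The cortical tracking result mentioned above is commonly operationalized as a lagged correlation between the speech amplitude envelope and the neural signal. The sketch below shows this under stated assumptions (both signals resampled to 100 Hz, a 0–400 ms lag range, fabricated data); it is an illustrative stand-in, not the analysis used in the thesis.

```python
import numpy as np
from scipy.signal import hilbert

def envelope(audio):
    """Broadband amplitude envelope via the Hilbert transform."""
    return np.abs(hilbert(audio))

def tracking_curve(env, eeg, sfreq, max_lag_s=0.4):
    """Pearson correlation between envelope and EEG at each neural delay (EEG lags audio)."""
    lags = np.arange(int(max_lag_s * sfreq))
    r = np.empty(len(lags))
    for i, lag in enumerate(lags):
        a, b = env[: len(env) - lag] if lag else env, eeg[lag:]
        n = min(len(a), len(b))
        r[i] = np.corrcoef(a[:n], b[:n])[0, 1]
    return lags / sfreq, r

sfreq = 100.0                                   # assume both signals resampled to 100 Hz
rng = np.random.default_rng(2)
audio = rng.standard_normal(int(60 * sfreq))
# toy EEG that tracks the envelope at a 150 ms delay, buried in noise
eeg = np.roll(envelope(audio), 15) + rng.standard_normal(len(audio))
delays, r = tracking_curve(envelope(audio), eeg, sfreq)
best_delay = delays[np.argmax(r)]               # expected near 0.15 s for this toy data
```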

    Being first matters: topographical representational similarity analysis of ERP signals reveals separate networks for audiovisual temporal binding depending on the leading sense

    In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Inter-sensory timing is crucial in this process, as only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window (TBW), revealing asymmetries in its size and plasticity depending on the leading input (auditory-visual, AV; visual-auditory, VA). We here tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV/VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of the AV/VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from the real data to index the similarity between AV- and VA-maps at each time point (500 ms window post-stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps vs. AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate the different information parsing strategies of the auditory and visual sensory systems
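    To make the tRSA logic concrete, the toy sketch below builds a spatial cross-correlation matrix between AV and VA topographies and contrasts same-time with different-time similarity as a simplified stand-in for the model-matrix comparison described above; the channel count, number of time points and random data are assumptions, not the study's ERPs.

```python
import numpy as np

rng = np.random.default_rng(3)
n_chan, n_times = 64, 50
av = rng.standard_normal((n_times, n_chan))  # fabricated AV multisensory ERP maps over time
va = rng.standard_normal((n_times, n_chan))  # fabricated VA multisensory ERP maps over time

# Entry (i, j): spatial correlation, computed across channels, between the
# AV map at time point i and the VA map at time point j.
full = np.corrcoef(np.vstack([av, va]))
xcorr = full[:n_times, n_times:]

# Under an "AVmaps = VAmaps" model, same-time (diagonal) similarity should
# exceed different-time (off-diagonal) similarity; under "AVmaps != VAmaps"
# it should not. The study's results favored the latter pattern.
diag_sim = np.diag(xcorr).mean()
offdiag_sim = xcorr[~np.eye(n_times, dtype=bool)].mean()
favours_same_network = diag_sim > offdiag_sim
```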

    EEG, MEG and neuromodulatory approaches to explore cognition: Current status and future directions

    Neural oscillations and their association with brain states and cognitive functions have been the object of extensive investigation over the last decades. Several electroencephalography (EEG) and magnetoencephalography (MEG) analysis approaches have been explored and oscillatory properties have been identified, in parallel with technical and computational advances. This review provides an up-to-date account of how EEG/MEG oscillations have contributed to the understanding of cognition. Methodological challenges, recent developments and translational potential, along with future research avenues, are discussed. Keywords: Cognition; Electrophysiology; Event-related potentials; Neural oscillations; Neural synchronisation; Neuromodulation

    Investigating the Neural Basis of Audiovisual Speech Perception with Intracranial Recordings in Humans

    Speech is inherently multisensory, containing auditory information from the voice and visual information from the mouth movements of the talker. Hearing the voice is usually sufficient to understand speech; however, in noisy environments or when audition is impaired due to aging or disabilities, seeing mouth movements greatly improves speech perception. Although behavioral studies have firmly established this perceptual benefit, it is still not clear how the brain processes visual information from mouth movements to improve speech perception. To clarify this issue, I studied the neural activity recorded from the brain surfaces of human subjects using intracranial electrodes, a technique known as electrocorticography (ECoG). First, I studied responses to noisy speech in the auditory cortex, specifically in the superior temporal gyrus (STG). Previous studies identified the anterior parts of the STG as unisensory, responding only to auditory stimuli. On the other hand, posterior parts of the STG are known to be multisensory, responding to both auditory and visual stimuli, which makes them a key region for audiovisual speech perception. I examined how these different parts of the STG respond to clear versus noisy speech. I found that noisy speech decreased the amplitude and increased the across-trial variability of the response in the anterior STG. However, possibly due to its multisensory composition, the posterior STG was not as sensitive to auditory noise as the anterior STG and responded similarly to clear and noisy speech. I also found that these two response patterns in the STG were separated by a sharp boundary demarcated by the posterior-most portion of Heschl's gyrus. Second, I studied responses to silent speech in the visual cortex. Previous studies demonstrated that the visual cortex shows response enhancement when the auditory component of speech is noisy or absent; however, it was not clear which regions of the visual cortex specifically show this response enhancement and whether it results from top-down modulation from a higher region. To test this, I first mapped the receptive fields of different regions in the visual cortex and then measured their responses to visual (silent) and audiovisual speech stimuli. I found that visual regions with central receptive fields show greater response enhancement to visual speech, possibly because these regions receive more visual information from mouth movements. I found similar response enhancement to visual speech in the frontal cortex, specifically in the inferior frontal gyrus, premotor and dorsolateral prefrontal cortices, which have been implicated in speech reading in previous studies. I showed that these frontal regions display strong functional connectivity with visual regions that have central receptive fields during speech perception
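    Two of the measures this dissertation contrasts for clear versus noisy speech are the amplitude and the across-trial variability of the high-gamma response. The sketch below computes both for a single electrode, assuming a 70–150 Hz band, a 1 kHz sampling rate and fabricated trials; none of these details are taken from the dissertation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(trials, sfreq, band=(70.0, 150.0)):
    """trials: (n_trials, n_times) -> per-trial high-gamma amplitude envelope."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    return np.abs(hilbert(filtfilt(b, a, trials, axis=-1), axis=-1))

sfreq = 1000.0
rng = np.random.default_rng(4)
clear = rng.standard_normal((30, int(1.5 * sfreq)))  # fabricated trials, clear speech
noisy = rng.standard_normal((30, int(1.5 * sfreq)))  # fabricated trials, noisy speech

for name, trials in (("clear", clear), ("noisy", noisy)):
    env = high_gamma_envelope(trials, sfreq)
    amplitude = env.mean()                # mean response amplitude across trials and time
    variability = env.mean(axis=1).std()  # across-trial variability of the mean response
    print(f"{name}: amplitude={amplitude:.3f}, variability={variability:.3f}")
```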