
    Inducing Task-Relevant Responses to Speech in the Sleeping Brain

    Falling asleep leads to a loss of sensory awareness and to the inability to interact with the environment [1]. While this was traditionally thought to be a consequence of the brain shutting down to external inputs, it is now acknowledged that incoming stimuli can still be processed, at least to some extent, during sleep [2]. For instance, sleeping participants can create novel sensory associations between tones and odors [3] or reactivate existing semantic associations, as evidenced by event-related potentials [4; 5; 6; 7]. Yet, the extent to which the brain continues to process external stimuli remains largely unknown. In particular, it remains unclear whether sensory information can be processed in a flexible and task-dependent manner by the sleeping brain, all the way up to the preparation of relevant actions. Here, using semantic categorization and lexical decision tasks, we studied task-relevant responses triggered by spoken stimuli in the sleeping brain. Awake participants classified words as either animals or objects (experiment 1) or as either words or pseudowords (experiment 2) by pressing a button with their right or left hand, while transitioning toward sleep. The lateralized readiness potential (LRP), an electrophysiological index of response preparation, revealed that task-specific preparatory responses are preserved during sleep. These findings demonstrate that despite the absence of awareness and behavioral responsiveness, sleepers can still extract task-relevant information from external stimuli and covertly prepare for appropriate motor responses.
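    The LRP mentioned above is conventionally derived by the double-subtraction method: averaging the contralateral-minus-ipsilateral voltage difference over left-hand and right-hand response trials, which cancels activity that does not depend on response side. A minimal sketch of that computation, using synthetic placeholder data in place of real C3/C4 recordings (array names and trial counts are illustrative, not taken from the study):

    ```python
    import numpy as np

    # Hypothetical single-trial EEG at electrodes C3 (left hemisphere) and
    # C4 (right hemisphere), time-locked to stimulus onset.
    # Shapes: (n_trials, n_samples).
    rng = np.random.default_rng(0)
    n_trials, n_samples = 100, 500
    c3_left = rng.normal(size=(n_trials, n_samples))   # trials requiring a LEFT-hand response
    c4_left = rng.normal(size=(n_trials, n_samples))
    c3_right = rng.normal(size=(n_trials, n_samples))  # trials requiring a RIGHT-hand response
    c4_right = rng.normal(size=(n_trials, n_samples))

    # Double-subtraction LRP: average the contralateral-minus-ipsilateral
    # difference across the two response sides, cancelling side-unspecific activity.
    lrp = 0.5 * ((c4_left - c3_left).mean(axis=0)
                 + (c3_right - c4_right).mean(axis=0))
    print(lrp.shape)  # one LRP waveform, one value per time sample
    ```

    With real data, a negative-going deflection before the response indicates preparation of the correct hand; here the random placeholder data simply yields a near-flat waveform.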

    Expectations enhance the reconstruction of auditory features in electrophysiological responses to noisy speech

    Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
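    Envelope reconstruction of the kind described above is typically done with a backward (decoding) model: a ridge-regularized linear mapping from time-lagged EEG channels onto the speech envelope, scored by the correlation between actual and reconstructed envelopes. A minimal sketch under that assumption (the study's exact pipeline, lag range, and regularization are not specified here; all numbers below are illustrative):

    ```python
    import numpy as np

    # Synthetic stand-in data: an "envelope" signal and EEG channels that
    # carry delayed, noisy copies of it (purely for illustration).
    rng = np.random.default_rng(1)
    n_ch, n_t, n_lags = 8, 2000, 16          # hypothetical channels, samples, lags
    envelope = rng.normal(size=n_t)
    eeg = np.stack([np.roll(envelope, c) for c in range(n_ch)]) \
          + 0.5 * rng.normal(size=(n_ch, n_t))

    # Lagged design matrix: every channel at every lag becomes one predictor.
    X = np.stack([np.roll(eeg, -lag, axis=1) for lag in range(n_lags)], axis=0)
    X = X.reshape(n_ch * n_lags, n_t).T       # (n_t, n_ch * n_lags)

    lam = 1.0                                 # ridge penalty (assumed value)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
    reconstruction = X @ w

    # Decoding accuracy: Pearson correlation between actual and reconstructed envelope.
    r = np.corrcoef(envelope, reconstruction)[0, 1]
    print(round(r, 2))
    ```

    In a real analysis the decoder would be trained and evaluated on separate data (e.g. leave-one-trial-out) to avoid the overfitting that in-sample correlation hides.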

    A Thalamocortical Neural Mass Model of the EEG during NREM Sleep and Its Response to Auditory Stimulation

    Few models exist that accurately reproduce the complex rhythms of the thalamocortical system that are apparent in measured scalp EEG and, at the same time, are suitable for large-scale simulations of brain activity. Here, we present a neural mass model of the thalamocortical system during natural non-REM sleep, which is able to generate fast sleep spindles (12–15 Hz), slow oscillations (<1 Hz) and K-complexes, as well as their distinct temporal relations, and response to auditory stimuli. We show that with the inclusion of detailed calcium currents, the thalamic neural mass model is able to generate different firing modes, and validate the model with EEG data from a recent sleep study in humans, where closed-loop auditory stimulation was applied. The model output relates directly to the EEG, which makes it a useful basis to develop new stimulation protocols.
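    A neural mass model, in general, evolves population-averaged firing rates through coupled differential equations with a sigmoid rate function. The following is a deliberately minimal Wilson-Cowan-style sketch of that idea only; the paper's thalamocortical model, with its calcium currents and multiple populations, is far more detailed, and every parameter value below is an assumption:

    ```python
    import numpy as np

    def sigmoid(x):
        # Population firing-rate function, bounded in (0, 1).
        return 1.0 / (1.0 + np.exp(-x))

    def simulate(t_max=2.0, dt=1e-3, stim=0.0):
        """Euler-integrate excitatory (E) and inhibitory (I) population rates."""
        n = int(t_max / dt)
        E, I = np.zeros(n), np.zeros(n)
        tau_e, tau_i = 0.01, 0.02              # population time constants in s (assumed)
        w_ee, w_ei, w_ie = 12.0, 10.0, 10.0    # coupling weights (assumed)
        for k in range(n - 1):
            E[k + 1] = E[k] + dt / tau_e * (-E[k] + sigmoid(w_ee * E[k] - w_ei * I[k] + stim))
            I[k + 1] = I[k] + dt / tau_i * (-I[k] + sigmoid(w_ie * E[k]))
        return E, I

    # A constant input plays the role of an external (e.g. auditory) drive.
    E, I = simulate(stim=1.0)
    print(E.shape)  # one rate value per time step
    ```

    Depending on the coupling weights and time constants, such a system settles into a fixed point or oscillates; richer rhythms like spindles and slow oscillations require the additional intrinsic currents the paper introduces.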

    The sleeping brain at work: perceptual processing and learning during sleep in humans

    Every night we fall asleep and every morning we wake up. Of what happens in the meantime, little is remembered. Others may say that we have moved, talked, laughed or cried, that the strongest and most vivid emotions took control of our body without leaving the faintest memory behind. Or others may have moved, talked, laughed or cried without our slightest notice. On the contrary, we can emerge from the most fantastic adventure in a quiet bed, cradled by a peaceable ticking clock. Without causing us much alarm, it seems that sleep entails a dissociation between what happens in our environment and within our mind. Yet, at any moment, we can wake up and immediately regain consciousness of the surrounding world. Interestingly, it seems that certain sounds are more likely to wake us than others. Thus, are we completely disconnected from our environment when we sleep?

    The Dreamscape Project: Mapping the phenomenological and neurophysiological features of subjective experience during sleep

    We spend a third of our lives asleep, and sleep teems with vivid dreams. While most people rarely remember their dreams, laboratory studies using timed awakenings show that healthy adults experience multiple dreams per night. Dreams have been hypothesised to play a key role in the development of consciousness and cognition [1; 2]. Dreams are believed to contribute to sleep-related memory consolidation [3] and emotional processing [4], maintaining a healthy emotional balance and influencing waking mood and well-being [5]. Dreams are also believed to change in response to real-life events, which in turn can impact sleep quality as well as mood and cognitive functioning in wakefulness. The emotional significance and apparent meaningfulness of dreams has fuelled fascination with dreams for centuries. The flurry of interest in social and print media and numerous scientific studies, including CI Windt's own work, on how the COVID-19 pandemic has influenced our dream lives is just the latest testament to this fact [6; 7].

    Perceptual learning of acoustic noise generates memory-evoked potentials

    Experience continuously imprints on the brain at all stages of life. The traces it leaves behind can produce perceptual learning [1], which drives adaptive behavior to previously encountered stimuli. Recently, it has been shown that even random noise, a type of sound devoid of acoustic structure, can trigger fast and robust perceptual learning after repeated exposure [2]. Here, by combining psychophysics, electroencephalography (EEG), and modeling, we show that the perceptual learning of noise is associated with evoked potentials, without any salient physical discontinuity or obvious acoustic landmark in the sound. Rather, the potentials appeared whenever a memory trace was observed behaviorally. Such memory-evoked potentials were characterized by early latencies and auditory topographies, consistent with a sensory origin. Furthermore, they were generated even under conditions of diverted attention. The EEG waveforms could be modeled as standard evoked responses to auditory events (N1-P2) [3], triggered by idiosyncratic perceptual features acquired through learning. Thus, we argue that the learning of noise is accompanied by the rapid formation of sharp neural selectivity to arbitrary and complex acoustic patterns, within sensory regions. Such a mechanism bridges the gap between the short-term and longer-term plasticity observed in the learning of noise [2; 4–6]. It could also be key to the processing of natural sounds within auditory cortices [7], suggesting that the neural code for sound source identification will be shaped by experience as well as by acoustics.
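    The core logic behind any evoked potential, including the memory-evoked potentials described above, is averaging epochs time-locked to an event so that event-locked activity survives while unrelated noise cancels by roughly the square root of the epoch count. A self-contained sketch with a synthetic N1-P2-like template embedded in noise (event times, waveform shape, and signal-to-noise ratio are all assumed, not taken from the study):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 250                                    # Hz, hypothetical sampling rate
    t = np.arange(int(0.4 * fs)) / fs           # 400 ms epoch
    # Biphasic N1-P2-like template: negative dip ~100 ms, positive peak ~200 ms.
    template = (-np.exp(-((t - 0.1) / 0.03) ** 2)
                + 0.8 * np.exp(-((t - 0.2) / 0.05) ** 2))

    # Embed the response at known "feature occurrence" times in a noisy stream.
    onsets = np.arange(50, 50_000, 500)
    eeg = rng.normal(scale=2.0, size=60_000)
    for o in onsets:
        eeg[o:o + template.size] += template

    # Average the time-locked epochs: noise shrinks by ~sqrt(n_epochs).
    epochs = np.stack([eeg[o:o + template.size] for o in onsets])
    erp = epochs.mean(axis=0)
    r = np.corrcoef(erp, template)[0, 1]
    print(round(r, 2))
    ```

    In the study's setting the "events" are not physical landmarks in the sound but learned perceptual features, which is precisely why an evoked response emerging at behaviorally observed memory traces is informative.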