6 research outputs found

    Auditory stimulation and deep learning predict awakening from coma after cardiac arrest.

    Assessing the integrity of neural functions in coma after cardiac arrest remains an open challenge. Prognostication of coma outcome relies mainly on visual expert scoring of physiological signals, which is prone to subjectivity and leaves a considerable number of patients in a 'grey zone' with uncertain prognosis. Quantitative analysis of EEG responses to auditory stimuli can provide a window into neural functions in coma and information about patients' chances of awakening. However, responses to standardized auditory stimulation are far from routine clinical use because of heterogeneous and cumbersome protocols. Here, we hypothesized that convolutional neural networks (CNNs) can assist in extracting interpretable patterns of EEG responses to auditory stimuli during the first day of coma that are predictive of patients' chances of awakening and survival at 3 months. We used CNNs to model single-trial EEG responses to auditory stimuli in the first day of coma, under standardized sedation and targeted temperature management, in a multicentre, multiprotocol patient cohort, and to predict outcome at 3 months. CNNs achieved a positive predictive value for awakening of 0.83 ± 0.04 and 0.81 ± 0.06 and an area under the curve for outcome prediction of 0.69 ± 0.05 and 0.70 ± 0.05 for patients undergoing therapeutic hypothermia and normothermia, respectively. These results persisted in the subset of patients in a clinical 'grey zone'. The network's confidence in predicting outcome was based on interpretable features: it correlated strongly with the neural synchrony and complexity of EEG responses and was modulated by independent clinical evaluations, such as EEG reactivity, background burst suppression, or motor responses. Our results highlight the strong potential of interpretable deep learning algorithms, in combination with auditory stimulation, to improve prognostication of coma outcome.
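The two reported metrics (positive predictive value for awakening and area under the ROC curve) can be computed directly from network output scores. A minimal numpy sketch with hypothetical label and score arrays — the study's actual CNN pipeline is not reproduced here:

```python
import numpy as np

def positive_predictive_value(y_true, y_pred):
    """Fraction of predicted awakenings (y_pred == 1) that actually awoke."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fp)

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney rank formulation."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# hypothetical outcomes (1 = awakening) and network confidence scores
y_true = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.6, 0.3, 0.1])
y_pred = (scores >= 0.5).astype(int)

ppv = positive_predictive_value(y_true, y_pred)  # 2 of 3 predicted awakenings correct
roc_auc = auc(y_true, scores)
```

The rank-sum form of the AUC avoids explicitly sweeping thresholds and handles tied scores with a 0.5 credit per tie.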

    Intrinsic neural timescales in the temporal lobe support an auditory processing hierarchy

    During rest, intrinsic neural dynamics manifest at multiple timescales, which progressively increase along visual and somatosensory hierarchies. Theoretically, intrinsic timescales are thought to facilitate processing of external stimuli at multiple stages. However, direct links between timescales at rest and sensory processing, as well as translation to the auditory system, are lacking. Here, we measured intracranial electroencephalography in 11 human patients with epilepsy (4 women) while they listened to pure tones. We show that in the auditory network, intrinsic neural timescales progressively increase, while the spectral exponent flattens, from temporal to entorhinal cortex, hippocampus, and amygdala. Within the neocortex, intrinsic timescales exhibit spatial gradients that follow the temporal lobe anatomy. Crucially, intrinsic timescales at baseline can explain the latency of auditory responses: as intrinsic timescales increase, so do the single-electrode response onset and peak latencies. Our results suggest that the human auditory network exhibits a repertoire of intrinsic neural dynamics, which manifest in cortical gradients with millimeter resolution and may provide a variety of temporal windows to support auditory processing. SIGNIFICANCE STATEMENT: Endogenous neural dynamics are often characterized by their intrinsic timescales. These are thought to facilitate processing of external stimuli. However, a direct link between intrinsic timing at rest and sensory processing is missing. Here, with intracranial electroencephalography (iEEG), we show that intrinsic timescales progressively increase from temporal to entorhinal cortex, hippocampus, and amygdala. Intrinsic timescales at baseline can explain the variability in the timing of iEEG responses to sounds: cortical electrodes with fast timescales also show fast and short-lasting responses to auditory stimuli, which progressively increase in the hippocampus and amygdala. Our results suggest that a hierarchy of neural dynamics in the temporal lobe manifests across cortical and limbic structures and can explain the temporal richness of auditory responses.

    Complementary roles of neural synchrony and complexity for indexing consciousness and chances of surviving in acute coma.

    An open challenge in consciousness research is understanding how neural functions are altered by pathological loss of consciousness. To maintain consciousness, the brain needs synchronized communication of information across brain regions and sufficient complexity in neural activity. Coordination of brain activity, typically indexed through measures of neural synchrony, has been shown to decrease when consciousness is lost and to reflect the clinical state of patients with disorders of consciousness. Moreover, when consciousness is lost, neural activity loses complexity, while the levels of neural noise, indexed by the slope of the electroencephalography (EEG) power spectrum (the spectral exponent), decrease. Although these properties have been well investigated in resting-state activity, it remains unknown whether the sensory processing network, which has been shown to be preserved in coma, suffers from a loss of synchronization or information content. Here, we focused on acute coma and hypothesized that neural synchrony in response to auditory stimuli would reflect coma severity, while complexity, or neural noise, would reflect the presence or loss of consciousness. Results showed that neural synchrony of EEG signals was stronger for survivors than non-survivors and predictive of patients' outcome, but indistinguishable between survivors and healthy controls. Measures of neural complexity and neural noise were not informative of patients' outcome and took either higher or lower values in patients compared with controls. Our results suggest different roles for neural synchrony and complexity in acute coma: synchrony represents a precondition for consciousness, while complexity needs an equilibrium between high and low values to support conscious cognition.
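Neural complexity in this line of work is often indexed by Lempel-Ziv complexity of the binarized EEG signal. A minimal sketch of the classic LZ76 phrase-counting scheme — an assumption for illustration, since the abstract does not name the exact complexity measure used:

```python
def lz76_complexity(s):
    """Count distinct phrases in the LZ76 parsing of a binary string.

    Regular, predictable signals parse into few phrases; richer,
    less predictable activity parses into many.
    """
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it already occurred earlier
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

# a constant binarized signal yields very few phrases,
# while an irregular one yields more
low = lz76_complexity("0000000000000000")
high = lz76_complexity("0110100110010110")
```

For EEG, the signal is typically binarized around its median amplitude before parsing, and the raw count is normalized by n / log2(n) to compare segments of different lengths.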

    Sleep research in the era of AI

    The field of sleep research is both broad and rapidly evolving. It spans from the diagnosis of sleep-related disorders to investigations of how sleep supports memory consolidation. The study of sleep includes a variety of approaches, starting with a sole focus on the visual interpretation of polysomnography characteristics and extending to the emergent use of advanced signal processing tools. Insights gained using artificial intelligence (AI) are rapidly reshaping the understanding of sleep-related disorders and enabling new approaches to basic neuroscientific studies. In this opinion article, we explore the emergent role of AI in sleep research along two axes: one clinical and one fundamental. In clinical research, we emphasize the use of AI for automated sleep scoring, diagnosing sleep-wake disorders, and assessing measurements from wearable devices. In fundamental research, we highlight the use of AI to better understand the functional role of sleep in consolidating memories. While AI is likely to facilitate new advances in the field of sleep research, we also address challenges, such as bridging the gap between AI innovation and the clinic and mitigating inherent biases in AI models. AI has already contributed to major advances in the field of sleep research, and mindful deployment has the potential to enable further progress in the understanding of the neuropsychological benefits and functions of sleep.

    Neural complexity and the spectral slope characterise auditory processing in wakefulness and sleep.

    Auditory processing and the complexity of neural activity can both indicate residual consciousness levels and differentiate states of arousal. However, how measures of neural signal complexity manifest in neural activity following environmental stimulation and, more generally, how the electrophysiological characteristics of auditory responses change in states of reduced consciousness remain under-explored. Here, we tested the hypothesis that measures of neural complexity and the spectral slope would discriminate stages of sleep and wakefulness not only in baseline electroencephalography (EEG) activity but also in EEG signals following auditory stimulation. High-density EEG was recorded in 21 participants to determine the spatial relationship between these measures, both in EEG recorded before and after auditory stimulation. Results showed that complexity and the spectral slope in the 2-20 Hz range discriminated between sleep stages and were highly correlated in sleep. In wakefulness, complexity was strongly correlated with the 20-40 Hz spectral slope. Auditory stimulation reduced complexity in sleep compared with pre-stimulation EEG activity and modulated the spectral slope in wakefulness. These findings confirm our hypothesis that electrophysiological markers of arousal are sensitive to sleep/wake states in EEG activity at baseline and following auditory stimulation. Our results have direct applications to studies using auditory stimulation to probe neural functions in states of reduced consciousness.
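The spectral slope in a given band is typically obtained by a linear fit of log power against log frequency. A minimal numpy sketch assuming a precomputed power spectrum (the `freqs`/`psd` arrays here are illustrative, not the study's data):

```python
import numpy as np

def spectral_slope(freqs, psd, fmin=2.0, fmax=20.0):
    """Slope of log10(power) vs log10(frequency) within [fmin, fmax] Hz."""
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

# an ideal 1/f**2 spectrum has slope exactly -2 on log-log axes
freqs = np.linspace(1.0, 40.0, 400)
psd = freqs ** -2.0
slope = spectral_slope(freqs, psd)
```

Changing `fmin`/`fmax` to 20 and 40 gives the 20-40 Hz slope discussed for wakefulness; in real data the PSD would first be estimated, e.g. with a Welch-style average over windows.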

    Feasibility, Safety, and Performance of Full-Head Subscalp EEG Using Minimally Invasive Electrode Implantation.

    BACKGROUND AND OBJECTIVES: Current practice in clinical neurophysiology is limited to short recordings with conventional EEG (days) that fail to capture a range of brain (dys)functions at longer timescales (months). The future ability to optimally manage chronic brain disorders, such as epilepsy, hinges on finding methods to monitor electrical brain activity in daily life. We developed a device for full-head subscalp EEG (Epios) and tested the feasibility of safely inserting the electrode leads beneath the scalp with a minimally invasive technique (primary outcome). As a secondary outcome, we verified the noninferiority of subscalp EEG in measuring physiologic brain oscillations and pathologic discharges compared with scalp EEG, the established standard of care. METHODS: Eight participants with pharmacoresistant epilepsy undergoing intracranial EEG received, in the same surgery, subscalp electrodes tunneled between the scalp and the skull with custom-made tools. Postoperative safety was monitored on an inpatient ward for up to 9 days. Sleep-wake, ictal, and interictal EEG signals from subscalp, scalp, and intracranial electrodes were compared quantitatively using windowed multitaper transforms and spectral coherence. Noninferiority was tested for pairs of neighboring subscalp and scalp electrodes with a Bland-Altman analysis for measurement bias and calculation of the intraclass correlation coefficient (ICC). RESULTS: As the primary outcome, up to 28 subscalp electrodes could be safely placed over the entire head through 1-cm scalp incisions in a ∌1-hour procedure. Five of 10 observed perioperative adverse events were linked to the investigational procedure, but none were serious, and all resolved. As a secondary outcome, subscalp electrodes advantageously recorded EEG percutaneously without requiring any maintenance and were noninferior to scalp electrodes for measuring (1) variably strong, stage-specific brain oscillations (alpha in wake; delta, sigma, and beta in sleep) and (2) interictal spike peak potentials and ictal signals coherent with seizure propagation in different brain regions (ICC > 0.8 and absence of bias). DISCUSSION: Recording full-head subscalp EEG for localization and monitoring purposes is feasible for up to 9 days in humans using minimally invasive techniques and is noninferior to the current standard of care. A longer prospective ambulatory study of the full system will be necessary to establish the safety and utility of this innovative approach. TRIAL REGISTRATION INFORMATION: clinicaltrials.gov/study/NCT04796597
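The noninferiority statistics named above — Bland-Altman bias with limits of agreement, plus a two-way intraclass correlation — can be sketched in numpy. The paired subscalp/scalp values below are hypothetical, and this is not the study's analysis code:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired data."""
    d = a - b
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

def icc_2_1(ratings):
    """Two-way random-effects, single-measure ICC(2,1) for (subjects x raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    msr = ss_subj / (n - 1)
    msc = ss_rater / (k - 1)
    mse = (ss_total - ss_subj - ss_rater) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical paired band-power measurements from neighbouring electrodes;
# identical readings should give zero bias and ICC of 1
subscalp = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
scalp = subscalp.copy()
bias, lo, hi = bland_altman(subscalp, scalp)
icc = icc_2_1(np.column_stack([subscalp, scalp]))
```

ICC(2,1) treats both electrode types as random "raters" of the same underlying signal, which matches a noninferiority framing; an ICC above 0.8, as reported, is conventionally read as excellent agreement.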