
    Deep learning for EEG-based prognostication after cardiac arrest: from current research to future clinical applications.

    Outcome prognostication in comatose patients after cardiac arrest (CA) remains a challenge to date. The major determinant of clinical outcome is the post-hypoxic/ischemic encephalopathy. Electroencephalography (EEG) is routinely used to assess neural functions in comatose patients. Currently, EEG-based outcome prognosis relies on visual evaluation by medical experts, which is time-consuming, prone to subjectivity, and oblivious to complex patterns. The field of deep learning has given rise to powerful algorithms for detecting patterns in large amounts of data. Analyzing the EEG signals of coma patients with deep neural networks to assist outcome prognosis is therefore a natural application of these algorithms. Here, we provide the first narrative literature review on the use of deep learning for prognostication after CA. Existing studies show overall high performance in predicting outcome, relying either on spontaneous or on auditory evoked EEG signals. Moreover, the literature is concerned with algorithmic interpretability and has shown that, by and large, deep neural networks base their decisions on clinically or neurophysiologically meaningful features. We conclude this review by discussing considerations that the fields of artificial intelligence and neurology will need to jointly address in the future, in order for deep learning algorithms to break the publication barrier and be integrated into clinical practice.

    A linear model for event-related respiration responses

    Background: Cognitive processes influence respiratory physiology. This may allow inferring cognitive states from measured respiration. Here, we take a first step towards this goal and investigate whether event-related respiratory responses can be identified, and whether they are accessible to a model-based approach.
    New method: We regard respiratory responses as the output of a linear time-invariant system that receives brief inputs after psychological events. We derive average responses to visual targets, aversive stimulation, and viewing of arousing pictures, in interpolated respiration period (RP), respiration amplitude (RA), and respiratory flow rate (RFR). We then base a Psychophysiological Model (PsPM) on these averaged event-related responses. The PsPM is inverted to yield estimates of cognitive input into the respiratory system. This method is validated in an independent data set.
    Results: All three measures show event-related responses, which are captured as non-zero response amplitudes in the PsPM. Amplitude estimates for RA and RFR distinguish between picture viewing and the other tasks. This pattern is replicated in the validation experiment.
    Comparison with existing methods: Existing respiratory measures are based on relatively short time intervals after an event, while the new method is based on the entire duration of respiratory responses.
    Conclusion: Our findings suggest that interpolated respiratory measures show replicable event-related response patterns. PsPM inversion is a suitable approach to analysing these patterns, with a potential to infer cognitive processes from respiration.
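    To make the model-based idea concrete, here is a minimal sketch of the LTI assumption: event onsets are treated as brief inputs, convolved with a canonical response function, and the resulting regressor is fitted to the measured signal to recover the response amplitude. The Gaussian-shaped kernel, the event times, and the simulated data are illustrative assumptions; the actual PsPM basis functions and inversion scheme are more elaborate.

```python
import numpy as np

def canonical_response(t, peak=6.0, width=2.0):
    # Hypothetical Gaussian-shaped response function; the real PsPM
    # bases are derived from averaged event-related responses.
    return np.exp(-0.5 * ((t - peak) / width) ** 2)

fs = 10.0                           # sampling rate of interpolated signal (Hz)
t = np.arange(0, 120.0, 1 / fs)     # two minutes of recording

# Brief inputs at psychological event onsets (stick function).
events = np.zeros_like(t)
for onset in (10.0, 45.0, 80.0):    # example event times in seconds
    events[int(onset * fs)] = 1.0

# LTI assumption: measured response = input * canonical response + noise.
kernel = canonical_response(np.arange(0, 20, 1 / fs))
regressor = np.convolve(events, kernel)[: len(t)]

# Simulate a noisy observation with true amplitude 2.5, then invert the
# model by ordinary least squares to estimate the cognitive input.
rng = np.random.default_rng(0)
signal = 2.5 * regressor + rng.normal(0, 0.3, len(t))

X = np.column_stack([regressor, np.ones_like(t)])   # amplitude + baseline
beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
print(f"estimated response amplitude: {beta[0]:.2f}")
```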

    Neural detection of complex sound sequences in the absence of consciousness

    Neural responses to violations of global regularities are thought to require consciousness. However, Tzovara et al. show that some comatose patients can also detect deviations in sequences composed of repeated groups of sounds, suggesting that the unconscious brain has a greater capacity to track sensory inputs than previously believed.

    Asymmetric representation of aversive prediction errors in Pavlovian threat conditioning

    Learning to predict threat is important for survival. Such learning may be driven by differences between expected and encountered outcomes, termed prediction errors (PEs). While PEs are crucial for reward learning, the role of putative PE signals in aversive learning is less clear. Here, we used functional magnetic resonance imaging in humans to investigate neural PE signals. Four cues, each with a different probability of being followed by an aversive outcome, were presented multiple times. We found that neural activity related to PEs in the medial prefrontal cortex only at omission, but not at occurrence, of predicted threat: the more expected the omitted threat, the higher the neural activity. In no brain region did neural activity fulfill the necessary computational criteria for a full signed PE representation. Our results suggest that, unlike reward learning, aversive learning may not be primarily driven by PE signals in a single brain region.
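    To illustrate the computational criteria at stake, the sketch below computes a signed PE under a simple Rescorla-Wagner-style delta rule: a full signed PE representation should scale positively with the outcome at occurrence and negatively with the expectation at omission. The cue labels and probabilities are hypothetical, not the study's design values.

```python
# Minimal sketch of a signed aversive prediction error (assumption: a
# Rescorla-Wagner-style delta rule; names are illustrative, not the
# authors' model).
def signed_pe(outcome: float, expectation: float) -> float:
    """delta = lambda - V: positive when threat occurs unexpectedly,
    negative when an expected threat is omitted."""
    return outcome - expectation

# Four cues with different threat probabilities (hypothetical values).
cue_probabilities = {"CS25": 0.25, "CS50": 0.50, "CS75": 0.75, "CS100": 1.00}

for cue, p in cue_probabilities.items():
    print(cue,
          f"PE at occurrence: {signed_pe(1.0, p):+.2f}",
          f"PE at omission: {signed_pe(0.0, p):+.2f}")
```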

    Deep Generative Models: The winning key for large and easily accessible ECG datasets?

    Large high-quality datasets are essential for building powerful artificial intelligence (AI) algorithms capable of supporting advancement in cardiac clinical research. However, researchers working with electrocardiogram (ECG) signals struggle to get access to such datasets and/or to build one. The aim of the present work is to shed light on a potential solution to the lack of large and easily accessible ECG datasets. Firstly, the main causes of this lack are identified and examined. Afterward, the potentials and limitations of cardiac data generation via deep generative models (DGMs) are analyzed in depth. These very promising algorithms have been found capable not only of generating large quantities of ECG signals but also of supporting data anonymization processes, simplifying data sharing while respecting patients' privacy. Their application could help research progress and cooperation in the name of open science. However, several aspects, such as standardized evaluation of synthetic data quality and algorithm stability, need to be further explored.
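    As an illustration of what a DGM for ECG can look like, below is a minimal sketch of a 1-D convolutional GAN generator that maps latent noise to a single-lead waveform. Layer sizes and names are assumptions for illustration and are not taken from any specific published model; a complete pipeline would also include a discriminator and adversarial training.

```python
import torch
import torch.nn as nn

LATENT_DIM, SIGNAL_LEN = 64, 512    # illustrative sizes

class ECGGenerator(nn.Module):
    """Maps a latent code to a single-lead ECG-like waveform."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 128 * (SIGNAL_LEN // 8))
        self.net = nn.Sequential(
            nn.Unflatten(1, (128, SIGNAL_LEN // 8)),
            # Each transposed convolution doubles the temporal length.
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(32, 1, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),              # amplitude-normalized output in [-1, 1]
        )

    def forward(self, z):
        return self.net(self.fc(z))

z = torch.randn(8, LATENT_DIM)      # a batch of latent codes
fake_ecg = ECGGenerator()(z)        # -> (8, 1, 512) synthetic signals
print(fake_ecg.shape)
```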

    Progression of auditory discrimination based on neural decoding predicts awakening from coma

    Auditory evoked potentials are informative of intact cortical functions of comatose patients. The integrity of auditory functions evaluated using mismatch negativity paradigms has been associated with their chances of survival. However, because auditory discrimination is assessed at various delays after coma onset, it is still unclear whether this impairment depends on the time of the recording. We hypothesized that impairment in auditory discrimination capabilities is indicative of coma progression, rather than of the comatose state itself, and that rudimentary auditory discrimination remains intact during acute stages of coma. We studied 30 post-anoxic comatose patients resuscitated from cardiac arrest and five healthy, age-matched controls. Using a mismatch negativity paradigm, we performed two electroencephalography recordings with a standard 19-channel clinical montage: the first within 24 h after coma onset and under mild therapeutic hypothermia, and the second after 1 day and under normothermic conditions. We analysed electroencephalography responses with a multivariate decoding algorithm that automatically quantifies neural discrimination at the single-patient level. Results showed high average decoding accuracy in discriminating sounds, both for control subjects and comatose patients. Importantly, accurate decoding was largely independent of patients' chance of survival. However, the progression of auditory discrimination between the first and second recordings was informative of a patient's chance of survival: a deterioration of auditory discrimination was observed in all non-survivors (equivalent to 100% positive predictive value for survivors). We show, for the first time, evidence of intact auditory processing even in comatose patients who do not survive, and that the progression of sound discrimination over time is informative of a patient's chance of survival. Tracking auditory discrimination in comatose patients could provide new insight into the chances of awakening in a quantitative and automatic fashion during early stages of coma.
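    The flavour of such a multivariate decoding analysis can be sketched as follows: single-trial EEG epochs are flattened into feature vectors, and neural discrimination is quantified as cross-validated classification performance. This is a generic illustration on simulated data with a linear classifier; the published algorithm differs in its details.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for one patient's epoched EEG: 19-channel clinical montage.
n_trials, n_channels, n_times = 200, 19, 50
X = rng.normal(size=(n_trials, n_channels, n_times))   # simulated epochs
y = rng.integers(0, 2, n_trials)                       # 0 = standard, 1 = deviant

# Flatten channels x time into one feature vector per trial and quantify
# neural discrimination as cross-validated decoding performance (AUC).
scores = cross_val_score(LogisticRegression(max_iter=1000),
                         X.reshape(n_trials, -1), y,
                         cv=5, scoring="roc_auc")
print(f"decoding AUC: {scores.mean():.2f} +/- {scores.std():.2f}")
```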

    Auditory stimulation and deep learning predict awakening from coma after cardiac arrest.

    Assessing the integrity of neural functions in coma after cardiac arrest remains an open challenge. Prognostication of coma outcome relies mainly on visual expert scoring of physiological signals, which is prone to subjectivity and leaves a considerable number of patients in a 'grey zone' with uncertain prognosis. Quantitative analysis of EEG responses to auditory stimuli can provide a window into neural functions in coma and information about patients' chances of awakening. However, responses to standardized auditory stimulation are far from being used in clinical routine, due to heterogeneous and cumbersome protocols. Here, we hypothesize that convolutional neural networks can assist in extracting interpretable patterns of EEG responses to auditory stimuli during the first day of coma that are predictive of patients' chances of awakening and survival at 3 months. We used convolutional neural networks (CNNs) to model single-trial EEG responses to auditory stimuli in the first day of coma, under standardized sedation and targeted temperature management, in a multicentre and multiprotocol patient cohort, and to predict outcome at 3 months. The use of CNNs resulted in a positive predictive power for predicting awakening of 0.83 ± 0.04 and 0.81 ± 0.06 and an area under the curve in predicting outcome of 0.69 ± 0.05 and 0.70 ± 0.05, for patients undergoing therapeutic hypothermia and normothermia, respectively. These results also persisted in a subset of patients who were in a clinical 'grey zone'. The network's confidence in predicting outcome was based on interpretable features: it strongly correlated with the neural synchrony and complexity of EEG responses and was modulated by independent clinical evaluations, such as EEG reactivity, background burst-suppression, or motor responses. Our results highlight the strong potential of interpretable deep learning algorithms in combination with auditory stimulation to improve prognostication of coma outcome.
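    A minimal sketch of the modelling idea, i.e., a small 1-D CNN over single-trial EEG epochs whose trial-level outputs are aggregated into a patient-level prediction, is shown below. The architecture, all sizes, and the aggregation rule are illustrative assumptions, not the study's network.

```python
import torch
import torch.nn as nn

N_CHANNELS, N_TIMES = 19, 100       # clinical montage x samples per epoch

# Illustrative CNN: convolutions over time, pooled into one logit per trial.
model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 1),               # logit: favourable vs unfavourable outcome
)

epochs = torch.randn(16, N_CHANNELS, N_TIMES)   # a batch of single trials
trial_logits = model(epochs)                    # -> (16, 1)

# Trial-level probabilities can be averaged per patient; thresholding the
# average yields a patient-level call from which PPV and AUC are computed.
patient_prob = torch.sigmoid(trial_logits).mean()
print(float(patient_prob))
```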

    Intrinsic neural timescales in the temporal lobe support an auditory processing hierarchy

    During rest, intrinsic neural dynamics manifest at multiple timescales, which progressively increase along visual and somatosensory hierarchies. Theoretically, intrinsic timescales are thought to facilitate processing of external stimuli at multiple stages. However, direct links between timescales at rest and sensory processing, as well as translation to the auditory system, are lacking. Here, we measured intracranial electroencephalography in 11 human patients with epilepsy (4 women) while they listened to pure tones. We show that in the auditory network, intrinsic neural timescales progressively increase, while the spectral exponent flattens, from temporal to entorhinal cortex, hippocampus, and amygdala. Within the neocortex, intrinsic timescales exhibit spatial gradients that follow the temporal lobe anatomy. Crucially, intrinsic timescales at baseline can explain the latency of auditory responses: as intrinsic timescales increase, so do the single-electrode response onset and peak latencies. Our results suggest that the human auditory network exhibits a repertoire of intrinsic neural dynamics, which manifest in cortical gradients with millimeter resolution and may provide a variety of temporal windows to support auditory processing.
    SIGNIFICANCE STATEMENT: Endogenous neural dynamics are often characterized by their intrinsic timescales. These are thought to facilitate processing of external stimuli. However, a direct link between intrinsic timing at rest and sensory processing is missing. Here, with intracranial electroencephalography (iEEG), we show that intrinsic timescales progressively increase from temporal to entorhinal cortex, hippocampus, and amygdala. Intrinsic timescales at baseline can explain the variability in the timing of iEEG responses to sounds: cortical electrodes with fast timescales also show fast and short-lasting responses to auditory stimuli, which progressively increase in the hippocampus and amygdala. Our results suggest that a hierarchy of neural dynamics in the temporal lobe manifests across cortical and limbic structures and can explain the temporal richness of auditory responses.
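    Intrinsic neural timescales of this kind are commonly estimated by fitting an exponential decay to the autocorrelation function (ACF) of baseline activity. Below is a minimal sketch of that procedure on a simulated AR(1) signal; the sampling rate, lag range, and fitting choices are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

fs = 1000.0                          # sampling rate (Hz)
rng = np.random.default_rng(0)

# Simulate baseline activity as an AR(1) process with a known timescale.
tau_true, n = 0.05, 20000            # 50 ms timescale, 20 s of data
phi = np.exp(-1 / (tau_true * fs))
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

# Empirical ACF up to 200 ms of lag.
max_lag = int(0.2 * fs)
x0 = x - x.mean()
acf = np.array([np.dot(x0[: -k or None], x0[k:]) for k in range(max_lag)])
acf /= acf[0]

# Fit acf(lag) ~ exp(-lag / tau) to recover the intrinsic timescale.
lags = np.arange(max_lag) / fs
(tau_hat,), _ = curve_fit(lambda t, tau: np.exp(-t / tau), lags, acf,
                          p0=[0.02])
print(f"estimated timescale: {tau_hat * 1000:.1f} ms (true: 50 ms)")
```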

    U-Sleep's resilience to AASM guidelines

    AASM guidelines are the result of decades of efforts aimed at standardizing the sleep scoring procedure, with the final goal of sharing a worldwide common methodology. The guidelines cover several aspects, from technical/digital specifications, e.g., recommended EEG derivations, to detailed sleep scoring rules according to age. Automated sleep scoring systems have always largely exploited these standards as fundamental guidelines. In this context, deep learning has demonstrated better performance compared to classical machine learning. Our present work shows that a deep learning based sleep scoring algorithm may not need to fully exploit the clinical knowledge or to strictly adhere to the AASM guidelines. Specifically, we demonstrate that U-Sleep, a state-of-the-art sleep scoring algorithm, is strong enough to solve the scoring task even using clinically non-recommended or non-conventional derivations, with no need to exploit information about the chronological age of the subjects. Finally, we strengthen a well-known finding: using data from multiple data centers always results in a better performing model than training on a single cohort. Indeed, we show that this statement remains valid even when increasing the size and heterogeneity of the single data cohort. In all our experiments we used 28,528 polysomnography studies from 13 different clinical studies.
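    The experimental logic, i.e., scoring the same recordings with recommended and non-recommended derivations and comparing agreement with expert labels, can be sketched as follows. The `score_epochs` function is a hypothetical placeholder for any trained scorer and is not the U-Sleep API; all data here are simulated.

```python
import numpy as np

def score_epochs(derivation: np.ndarray) -> np.ndarray:
    # Placeholder: a real model would map each 30-s epoch of the
    # derivation to one of the 5 AASM stages (W, N1, N2, N3, REM).
    rng = np.random.default_rng(int(np.abs(derivation).sum()) % 2**32)
    return rng.integers(0, 5, size=derivation.shape[0])

n_epochs, epoch_len = 960, 30 * 128              # 8 h scored at 128 Hz
rng = np.random.default_rng(0)
channels = {name: rng.normal(size=(n_epochs, epoch_len))
            for name in ("F3", "C3", "O1", "M2")}
expert = rng.integers(0, 5, size=n_epochs)       # stand-in hypnogram

# Compare a recommended derivation against a non-conventional one.
derivations = {
    "C3-M2 (recommended)": channels["C3"] - channels["M2"],
    "F3-O1 (non-conventional)": channels["F3"] - channels["O1"],
}
for name, deriv in derivations.items():
    acc = (score_epochs(deriv) == expert).mean()
    print(f"{name}: agreement with expert scoring = {acc:.2f}")
```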