30 research outputs found

    Development and validation of a multimodal neuroimaging biomarker for electroconvulsive therapy outcome in depression: A multicenter machine learning analysis

    Background: Electroconvulsive therapy (ECT) is the most effective intervention for patients with treatment-resistant depression. A clinical decision support tool could guide patient selection to improve the overall response rate and avoid ineffective treatments with adverse effects. Initial small-scale, monocenter studies indicate that both structural magnetic resonance imaging (sMRI) and functional MRI (fMRI) biomarkers may predict ECT outcome, but it is not known whether those results generalize to data from other centers. The objective of this study was to develop and validate neuroimaging biomarkers for ECT outcome in a multicenter setting. Methods: Multimodal data (i.e., clinical, sMRI, and resting-state fMRI) were collected from seven centers of the Global ECT-MRI Research Collaboration (GEMRIC). We used data from 189 depressed patients to evaluate which data modalities, or combinations thereof, provide the best predictions of treatment remission (HAM-D score ≤ 7) using a support vector machine classifier. Results: Remission classification using a combination of gray matter volume and functional connectivity yielded well-performing models, with an average area under the curve (AUC) of 0.82–0.83 when trained and tested on samples from the three largest centers (N = 109), and performance remained acceptable under leave-one-site-out cross-validation (AUC 0.70–0.73). Conclusions: These results show that multimodal neuroimaging data can be used to predict remission with ECT for individual patients across different treatment centers, despite significant variability in clinical characteristics across centers. Future development of a clinical decision support tool applying these biomarkers may be feasible.
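As a rough sketch of the kind of pipeline described above, multimodal features can be concatenated and validated leave-one-site-out with scikit-learn. All data below are synthetic, and the feature counts, kernel choice, and preprocessing are assumptions for illustration, not the GEMRIC implementation:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_gmv, n_fc = 189, 50, 100            # assumed feature counts
X_gmv = rng.normal(size=(n_patients, n_gmv))      # gray matter volume features
X_fc = rng.normal(size=(n_patients, n_fc))        # functional connectivity features
X = np.hstack([X_gmv, X_fc])                      # multimodal concatenation
y = rng.integers(0, 2, size=n_patients)           # remission (HAM-D <= 7) labels
site = rng.integers(0, 7, size=n_patients)        # seven collaborating centers

model = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))

aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=site):
    if len(np.unique(y[test_idx])) < 2:
        continue  # AUC is undefined if a held-out site has only one class
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"mean leave-one-site-out AUC: {np.mean(aucs):.2f}")
```

With random labels the AUC hovers around chance; the point of the sketch is the grouping structure, which keeps every patient from a held-out center out of the training fold.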

    Longitudinal predictive modeling of tau progression along the structural connectome

    Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer's disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is therefore vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies suggests that tau is transmitted along the brain's preexisting neural connectivity conduits. We present here an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that lead to active generation and clearance of tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess the goodness of model fit. Three-timepoint tau PET scans were used to assess predictive accuracy via comparison of predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau and differential tau from region-based analysis. While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is a promising personalizable predictive framework for AD.
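The diffusion-plus-source idea can be illustrated on a toy connectome. The sketch below is not the authors' exact formulation: all values are synthetic, and it integrates dx/dt = -βLx + s using the eigendecomposition of the symmetric graph Laplacian for the homogeneous heat kernel and a simple quadrature for a constant source term:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10                                        # assumed number of brain regions
C = rng.random((n, n))
C = (C + C.T) / 2                             # symmetric structural connectome
np.fill_diagonal(C, 0.0)
L = np.diag(C.sum(axis=1)) - C                # graph Laplacian of the connectome
beta = 0.5                                    # assumed diffusion rate
x0 = np.zeros(n); x0[0] = 1.0                 # tau seeded in a single region
s = np.full(n, 0.01)                          # constant local generation (source term)

# L is symmetric, so exp(-beta*L*t) = V diag(exp(-beta*lam*t)) V^T.
lam, V = np.linalg.eigh(L)

def heat_kernel(t):
    return V @ np.diag(np.exp(-beta * lam * t)) @ V.T

def tau_at(t, dt=0.01):
    # homogeneous diffusion of the seed plus numerically integrated source term
    x = heat_kernel(t) @ x0
    for tk in np.arange(0.0, t, dt):
        x = x + heat_kernel(t - tk) @ s * dt
    return x

x1 = tau_at(1.0)
```

Because the Laplacian conserves mass, the total tau after time t is the seeded amount plus the integrated source, which makes the toy model easy to sanity-check.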

    AI-Driven sleep staging from actigraphy and heart rate.

    Sleep is an important indicator of a person's health, and its accurate and cost-effective quantification is of great value in healthcare. The gold standard for sleep assessment and the clinical diagnosis of sleep disorders is polysomnography (PSG). However, PSG requires an overnight clinic visit and trained technicians to score the obtained multimodality data. Wrist-worn consumer devices, such as smartwatches, are a promising alternative to PSG because of their small form factor, continuous monitoring capability, and popularity. Unlike PSG, however, wearables-derived data are noisier and far less information-rich because of the smaller number of modalities and the less accurate measurements their form factor allows. Given these challenges, most consumer devices perform two-stage (i.e., sleep-wake) classification, which is inadequate for deep insights into a person's sleep health. The challenging multi-class (three-, four-, or five-class) staging of sleep using data from wrist-worn wearables remains unresolved. The difference in data quality between consumer-grade wearables and lab-grade clinical equipment is the motivation behind this study. In this paper, we present an artificial intelligence (AI) technique termed sequence-to-sequence LSTM for automated mobile sleep staging (SLAMSS), which can perform three-class (wake, NREM, REM) and four-class (wake, light, deep, REM) sleep classification from activity (i.e., wrist-accelerometry-derived locomotion) and two coarse heart rate measures, both of which can be reliably obtained from a consumer-grade wrist-wearable device. Our method relies on raw time-series datasets and obviates the need for manual feature selection. We validated our model using actigraphy and coarse heart rate data from two independent study populations: the Multi-Ethnic Study of Atherosclerosis (MESA; N = 808) cohort and the Osteoporotic Fractures in Men (MrOS; N = 817) cohort. SLAMSS achieves an overall accuracy of 79%, weighted F1 score of 0.80, 77% sensitivity, and 89% specificity for three-class sleep staging, and an overall accuracy of 70–72%, weighted F1 score of 0.72–0.73, 64–66% sensitivity, and 89–90% specificity for four-class sleep staging in the MESA cohort. It yielded an overall accuracy of 77%, weighted F1 score of 0.77, 74% sensitivity, and 88% specificity for three-class sleep staging, and an overall accuracy of 68–69%, weighted F1 score of 0.68–0.69, 60–63% sensitivity, and 88–89% specificity for four-class sleep staging in the MrOS cohort. These results were achieved with feature-poor, low-temporal-resolution inputs. In addition, we extended our three-class staging model to an unrelated Apple Watch dataset. Importantly, SLAMSS predicts the duration of each sleep stage with high accuracy. This is especially significant for four-class sleep staging, where deep sleep is severely underrepresented. We show that, by appropriately choosing the loss function to address the inherent class imbalance, our method can accurately estimate deep sleep time (SLAMSS/MESA: 0.61±0.69 hours, PSG/MESA ground truth: 0.60±0.60 hours; SLAMSS/MrOS: 0.53±0.66 hours, PSG/MrOS ground truth: 0.55±0.57 hours). Deep sleep quality and quantity are vital metrics and early indicators for a number of diseases. Our method, which enables accurate deep sleep estimation from wearables-derived data, is therefore promising for a variety of clinical applications requiring long-term deep sleep monitoring.
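The class-imbalance handling highlighted above can be sketched in a few lines of inverse-frequency weighting for a cross-entropy loss. The label proportions below are assumptions chosen to mimic underrepresented deep sleep, not the MESA/MrOS distributions, and the sketch is not the SLAMSS training code:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic epoch labels: wake=0, light=1, deep=2, REM=3; deep sleep is made
# rare to mimic the imbalance described above (proportions are assumptions).
labels = rng.choice(4, size=10_000, p=[0.30, 0.40, 0.05, 0.25])

# Inverse-frequency weights: rarer classes get proportionally larger weights.
counts = np.bincount(labels, minlength=4)
weights = counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(probs, y, w):
    """Mean class-weighted cross-entropy over a batch of softmax outputs."""
    return float(np.mean(w[y] * -np.log(probs[np.arange(len(y)), y])))

# A classifier that hedges uniformly: every class gets probability 0.25.
probs = np.full((len(labels), 4), 0.25)
loss = weighted_cross_entropy(probs, labels, weights)
```

With these weights, misclassified deep-sleep epochs contribute much more to the loss than wake epochs, which is what pushes a trained model away from simply ignoring the rare class.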

    MrOS three-class sleep staging.

    Comparison of MAE for clinical sleep metrics for four classifiers against PSG: SLAMSS with activity, HRM, and HRSD inputs (SLAMSS-Act-HR), LSTM with activity, HRM, and HRSD inputs (LSTM-Act-HR), SLAMSS with HRM and HRSD inputs (SLAMSS-HR), and SLAMSS with an activity input (SLAMSS-Act). MAE values are provided in the format: mean (s.d.).

    MrOS four-class sleep staging.

    Comparison of clinical sleep metrics for four-class sleep staging using SLAMSS with an inverse-frequency-weighted cross-entropy loss function (SLAMSS-IF) and SLAMSS with a real-world-weighted cross-entropy loss function (SLAMSS-RW). The orange dotted line corresponds to the PSG (assumed ground truth) value of each metric.

    MrOS three-class sleep staging.

    Comparison of clinical sleep metrics for four classifiers: SLAMSS with activity, HRM, and HRSD inputs (SLAMSS-Act-HR), LSTM with activity, HRM, and HRSD inputs (LSTM-Act-HR), SLAMSS with HRM and HRSD inputs (SLAMSS-HR), and SLAMSS with an activity input (SLAMSS-Act). The orange dotted line corresponds to the PSG (assumed ground truth) value of each metric.

    MrOS four-class sleep staging.

    Confusion matrices for four-class sleep staging using SLAMSS with an inverse-frequency-weighted cross-entropy loss function (SLAMSS-IF) and SLAMSS with a real-world-weighted cross-entropy loss function (SLAMSS-RW). It should be noted that, for four-class staging, category assignment by random chance would lead to a value of 25% for the diagonal elements of these matrices.

    Confusion matrices for three-class sleep staging using SLAMSS (based on a standard IF-weighted cross-entropy loss function) with activity, HRM, and HRSD inputs (SLAMSS-Act-HR), and SLAMSS with activity, HRM, HRSD, and raw clock time inputs (SLAMSS-Act-HR-Clock) with PSG being used as the ground truth.
