Fast Predictive Simple Geodesic Regression
Deformable image registration and regression are important tasks in medical
image analysis. However, they are computationally expensive, especially when
analyzing large-scale datasets that contain thousands of images. Hence, cluster
computing is typically used, making the approaches dependent on such
computational infrastructure. Even larger computational resources are required
as study sizes increase. This limits the use of deformable image registration
and regression for clinical applications and as component algorithms for other
image analysis approaches. We therefore propose using a fast predictive
approach to perform image registrations. In particular, we employ these fast
registration predictions to approximate a simplified geodesic regression model
to capture longitudinal brain changes. The resulting method is orders of
magnitude faster than the standard optimization-based regression model and
hence facilitates large-scale analysis on a single graphics processing unit
(GPU). We evaluate our results on 3D brain magnetic resonance images (MRI) from
the ADNI datasets.
Comment: 19 pages, 10 figures, 13 tables
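To make the regression step concrete, here is a minimal sketch of how fast registration predictions could feed a simplified geodesic regression; the predict_momentum network, the closed-form time-weighted slope, and all array shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict_momentum(baseline, followup):
    # Hypothetical stand-in for a pretrained fast registration-prediction
    # network that returns the initial momentum deforming `baseline` toward
    # `followup`; a dummy field is returned here so the sketch runs.
    return np.random.randn(*baseline.shape)

def simple_geodesic_regression(baseline, followups, times, t0=0.0):
    # Predict one momentum per follow-up image, then fit a least-squares
    # slope through the baseline (an assumed simplification):
    #   m_reg = sum_i (t_i - t0) * m_i / sum_i (t_i - t0)^2
    momenta = [predict_momentum(baseline, img) for img in followups]
    dts = np.asarray(times, dtype=float) - t0
    weighted_sum = sum(dt * m for dt, m in zip(dts, momenta))
    return weighted_sum / np.sum(dts ** 2)

# Example with toy 3D volumes at three follow-up time points (in years).
baseline = np.zeros((8, 8, 8))
followups = [np.zeros((8, 8, 8)) for _ in range(3)]
m_reg = simple_geodesic_regression(baseline, followups, times=[0.5, 1.0, 2.0])
```

The regression momentum then parameterizes a single geodesic summarizing longitudinal change, avoiding the per-subject iterative optimization that makes the standard approach expensive.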
Multi-Channel Stochastic Variational Inference for the Joint Analysis of Heterogeneous Biomedical Data in Alzheimer's Disease
The joint analysis of biomedical data in Alzheimer's Disease (AD) is
important for better clinical diagnosis and to understand the relationship
between biomarkers. However, jointly accounting for heterogeneous measures
poses important challenges related to the modeling of the variability and the
interpretability of the results. We address these issues by proposing a novel multi-channel stochastic generative model. We assume that a latent variable generates the data observed through different channels (e.g., clinical scores, imaging, ...) and describe an efficient way to jointly estimate the distribution of the latent variable and of the data generative process. Experiments on synthetic data show that the multi-channel formulation allows superior data reconstruction compared to the single-channel one. Moreover, the derived
lower bound of the model evidence represents a promising model selection
criterion. Experiments on AD data show that the model parameters can be used
for unsupervised patient stratification and for the joint interpretation of the
heterogeneous observations. Because of its general and flexible formulation, we
believe that the proposed method can find important applications as a general
data fusion technique.
Comment: accepted for presentation at the MLCN 2018 workshop, in conjunction with MICCAI 2018, September 20, Granada, Spain
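As a rough illustration of the kind of objective such a multi-channel model optimizes, one possible evidence lower bound, in which each channel c has an encoder q_c(z | x_c) and a decoder p_c(x_c | z) and every encoder is asked to reconstruct every channel, can be written as follows (a sketch under these assumptions, not necessarily the paper's exact bound):

```latex
\mathcal{L} \;=\; \frac{1}{C}\sum_{c=1}^{C}\left[
  \sum_{c'=1}^{C} \mathbb{E}_{q_c(z\mid x_c)}\,\log p_{c'}(x_{c'}\mid z)
  \;-\; \mathrm{KL}\!\left(q_c(z\mid x_c)\,\middle\|\,p(z)\right)
\right]
```

Maximizing such a bound with stochastic (reparameterized) gradient estimates yields both the latent distribution and the per-channel generative parameters, and the bound itself can serve as a model selection criterion, as the abstract notes.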
CSF and Brain Structural Imaging Markers of the Alzheimer's Pathological Cascade
DOI: 10.1371/journal.pone.0047406, PLoS ONE 7(12)
(Extra)Ordinary Gauge/Anomaly Mediation
We study anomaly mediation models with gauge mediation effects from
messengers which have a general renormalizable mass matrix with a
supersymmetry-breaking spurion. Our models lead to a rich structure of
supersymmetry breaking terms in the visible sector. We derive sum rules among
the soft scalar masses for each generation. Our sum rules for the first and
second generations are the same as those in general gauge mediation, but the
sum rule for the third generation is different because of the top Yukawa
coupling. We find the parameter space where the tachyonic slepton problem is
solved. We also explore the case in which gauge mediation leads to anomalously small gaugino masses. Since anomaly mediation effects on the gaugino masses still exist, we can obtain a viable mass spectrum for the visible-sector fields.
Comment: 24 pages, 10 figures
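For context, the per-generation sum rules of general gauge mediation that the first and second generations are said to satisfy are the standard ones, quoted here for reference (not taken from this paper); the third-generation rule is modified by the top Yukawa coupling:

```latex
\mathrm{Tr}\!\left[Y\, m^2\right] =
  m^2_{\tilde Q} - 2m^2_{\tilde u} + m^2_{\tilde d} - m^2_{\tilde L} + m^2_{\tilde e} = 0,
\qquad
\mathrm{Tr}\!\left[(B-L)\, m^2\right] =
  2m^2_{\tilde Q} - m^2_{\tilde u} - m^2_{\tilde d} - 2m^2_{\tilde L} + m^2_{\tilde e} = 0 .
```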
Combining Anomaly and Z' Mediation of Supersymmetry Breaking
We propose a scenario in which the supersymmetry breaking effect mediated by
an additional U(1)' is comparable with that of anomaly mediation. We argue that
such a scenario can be naturally realized in a large class of models. Combining
anomaly with Z' mediation allows us to solve the tachyonic slepton problem of
the former and avoid significant fine tuning in the latter. We focus on an
NMSSM-like scenario where U(1)' gauge invariance is used to forbid a tree-level
mu term, and present concrete models, which admit successful dynamical
electroweak symmetry breaking. Gaugino masses are somewhat lighter than the
scalar masses, and the third generation squarks are lighter than the first two.
In the specific class of models under consideration, the gluino is light since
it only receives a contribution from 2-loop anomaly mediation, and it decays
dominantly into third generation quarks. Gluino production leads to distinct
LHC signals and prospects of early discovery. In addition, there is a
relatively light Z', with mass in the range of several TeV. Discovering and
studying its properties can reveal important clues about the underlying model.Comment: Minor changes: references added, typos corrected, journal versio
Generation and quality control of lipidomics data for the Alzheimer's Disease Neuroimaging Initiative cohort.
Alzheimer's disease (AD) is a major public health priority with a large socioeconomic burden and complex etiology. The Alzheimer's Disease Metabolomics Consortium (ADMC) and the Alzheimer's Disease Neuroimaging Initiative (ADNI) aim to gain new biological insights into the disease etiology. We report here an untargeted lipidomics analysis of serum specimens from 806 subjects within the ADNI1 cohort (188 AD, 392 mild cognitive impairment and 226 cognitively normal subjects), along with 83 quality control samples. Lipids were detected and measured using an ultra-high-performance liquid chromatography quadrupole time-of-flight mass spectrometry (UHPLC-QTOF MS) instrument operated in both negative and positive electrospray ionization modes. The dataset includes a total of 513 unique lipid species, of which 341 are known lipids. For over 95% of the detected lipids, a relative standard deviation of better than 20% was achieved in the quality control samples, indicating high technical reproducibility. Association modeling of this dataset with the available clinical, metabolomics and drug-use data will provide novel insights into AD etiology. These datasets are available at the ADNI repository at http://adni.loni.usc.edu/
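As an illustration of the reported quality-control criterion (relative standard deviation below 20% in the QC samples), here is a minimal sketch of how such a check might be computed from a lipid-intensity table; the file name and column layout are assumptions, not part of the released dataset:

```python
import pandas as pd

# Hypothetical layout: rows = QC sample runs, columns = lipid species intensities.
qc = pd.read_csv("qc_lipid_intensities.csv", index_col=0)  # assumed file name

# Relative standard deviation (coefficient of variation, %) per lipid across QC runs.
rsd = qc.std(ddof=1) / qc.mean() * 100.0

passing = (rsd < 20.0).mean() * 100.0
print(f"{passing:.1f}% of lipids have RSD < 20% in the QC samples")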
Contrasting prefrontal cortex contributions to episodic memory dysfunction in behavioural variant frontotemporal dementia and Alzheimer's disease
Recent evidence has questioned the integrity of episodic memory in behavioural variant frontotemporal dementia (bvFTD), where recall performance is impaired to the same extent as in Alzheimer's disease (AD). While these deficits appear to be mediated by divergent patterns of brain atrophy, there is evidence to suggest that certain prefrontal regions are implicated in both patient groups. In this study, we sought to further elucidate the dorsolateral (DLPFC) and ventromedial (VMPFC) prefrontal contributions to episodic memory impairment in bvFTD and AD. Performance on episodic memory tasks and on neuropsychological measures typically tapping either DLPFC or VMPFC functions was assessed in 22 bvFTD patients, 32 AD patients and 35 age- and education-matched controls. Behaviourally, the patient groups did not differ on measures of episodic memory recall or DLPFC-mediated executive functions, whereas bvFTD patients were significantly more impaired on measures of VMPFC-mediated executive functions. Composite measures of the recall, DLPFC and VMPFC task scores were covaried against the T1 MRI scans of all participants to identify regions of atrophy correlating with performance on these tasks. The imaging analysis showed that impaired recall performance is associated with divergent patterns of PFC atrophy in bvFTD and AD: whereas in bvFTD the PFC atrophy covariates for recall encompassed both DLPFC and VMPFC regions, only the DLPFC was implicated in AD. Our results suggest that episodic memory deficits in bvFTD and AD are underpinned by divergent prefrontal mechanisms. Moreover, we argue that these differences are not adequately captured by existing neuropsychological measures.
Diagnostic and economic evaluation of new biomarkers for Alzheimer's disease: the research protocol of a prospective cohort study
Background: New research criteria for the diagnosis of Alzheimer's disease (AD) have recently been developed to enable an early diagnosis of AD pathophysiology by relying on emerging biomarkers. To enable efficient allocation of health care resources, evidence is needed to support decision makers on the adoption of emerging biomarkers in clinical practice. The research goals are to 1) assess the diagnostic test accuracy of the current clinical diagnostic work-up and of emerging biomarkers in MRI, PET and CSF, 2) perform a cost-consequence analysis, and 3) assess long-term cost-effectiveness with an economic model. Methods/design: In a cohort design, 241 consecutive patients suspected of having a primary neurodegenerative disease are approached in four academic memory clinics and followed for two years. Clinical data and data on quality of life, costs and emerging biomarkers are gathered. Diagnostic test accuracy is determined by relating the clinical-practice and new-research-criteria diagnoses to a reference diagnosis. The clinical practice diagnosis at baseline is reflected by a consensus procedure among experts using clinical information only (no biomarkers). The diagnosis based on the new research criteria is reflected by decision rules that combine clinical and biomarker information. The reference diagnosis is determined by a consensus procedure among experts based on clinical information on the course of symptoms over a two-year period. A decision-analytic model combining available evidence from different sources, including (accuracy) results from the study, the literature and expert opinion, is built to assess the long-term cost-effectiveness of the emerging biomarkers. Discussion: Several other multi-centre trials study the relative value of new biomarkers for the early evaluation of AD and related disorders. The uniqueness of this study is the assessment of resource utilization and quality of life to enable an economic evaluation. The study results are generalizable to a population of patients who are referred to a memory clinic because of their memory problems. Trial registration: NCT0145089
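To illustrate the diagnostic test accuracy step described above, a minimal sketch of computing sensitivity and specificity against the two-year consensus reference diagnosis; the function and its inputs are illustrative assumptions, not the study's analysis code:

```python
import numpy as np

def diagnostic_accuracy(index_dx, reference_dx):
    # index_dx: boolean array, True = AD according to the index diagnosis
    # (clinical work-up or biomarker-based research criteria).
    # reference_dx: boolean array, True = AD according to the reference diagnosis.
    index_dx = np.asarray(index_dx, dtype=bool)
    reference_dx = np.asarray(reference_dx, dtype=bool)
    tp = np.sum(index_dx & reference_dx)
    tn = np.sum(~index_dx & ~reference_dx)
    fp = np.sum(index_dx & ~reference_dx)
    fn = np.sum(~index_dx & reference_dx)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```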
Predicting progression of mild cognitive impairment to dementia using neuropsychological data: a supervised learning approach using time windows
Background: Predicting progression from a stage of Mild Cognitive Impairment (MCI) to dementia is a major pursuit in current research. It is broadly accepted that cognition declines along a continuum between MCI and dementia. As such, cohorts of MCI patients are usually heterogeneous, containing patients at different stages of the neurodegenerative process. This hampers the prognostic task. Nevertheless, when learning prognostic models, most studies use the entire cohort of MCI patients regardless of their disease stages. In this paper, we propose a Time Windows approach to predict conversion to dementia, learning with patients stratified using time windows, thus fine-tuning the prognosis regarding the time to conversion. Methods: In the proposed Time Windows approach, we grouped patients based on the clinical information of whether they converted (converter MCI) or remained MCI (stable MCI) within a specific time window. We tested time windows of 2, 3, 4 and 5 years. We developed a prognostic model for each time window using clinical and neuropsychological data and compared this approach with the one commonly used in the literature, where all patients are used to learn the models, here referred to as the First Last approach. This enables us to move from the traditional question "Will an MCI patient convert to dementia somewhere in the future?" to the question "Will an MCI patient convert to dementia in a specific time window?". Results: The proposed Time Windows approach outperformed the First Last approach. The results showed that we can predict conversion to dementia as early as 5 years before the event with an AUC of 0.88 in the cross-validation set and 0.76 in an independent validation set. Conclusions: Prognostic models using time windows achieve higher performance when predicting progression from MCI to dementia than the prognostic approach commonly used in the literature. Furthermore, the proposed Time Windows approach is more relevant from a clinical point of view, predicting conversion within a temporal interval rather than at some point in the future and allowing clinicians to adjust treatments and clinical appointments in a timely manner.
Funding: FCT under the Neuroclinomics2 project [PTDC/EEI-SII/1937/2014, SFRH/BD/95846/2013]; INESC-ID plurianual [UID/CEC/50021/2013]; LASIGE Research Unit [UID/CEC/00408/2013]
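A minimal sketch of the time-window stratification described above; the file name, column names and the choice of a random-forest classifier are illustrative assumptions standing in for the paper's prognostic models:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per MCI patient with baseline neuropsychological
# features plus two bookkeeping columns: `converted` (0/1) and `time_years`
# (time to conversion for converters, time to last follow-up for stable MCI).
df = pd.read_csv("mci_baseline.csv")  # assumed file name and columns

def time_window_split(df, window_years):
    # Keep converters who converted within the window (label 1) and patients
    # who remained MCI for at least the window length (label 0); drop the rest.
    converters = df[(df["converted"] == 1) & (df["time_years"] <= window_years)]
    stable = df[(df["converted"] == 0) & (df["time_years"] >= window_years)]
    kept = pd.concat([converters, stable])
    X = kept.drop(columns=["converted", "time_years"])
    y = kept["converted"]
    return X, y

for window in (2, 3, 4, 5):
    X, y = time_window_split(df, window)
    auc = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{window}-year window: cross-validated AUC = {auc:.2f}")
```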
Robust automated detection of microstructural white matter degeneration in Alzheimer’s disease using machine learning classification of multicenter DTI data
Diffusion tensor imaging (DTI) based assessment of white matter fiber tract integrity can support the diagnosis of Alzheimer's disease (AD). The use of DTI as a biomarker, however, depends on its applicability in a multicenter setting, accounting for the effects of different MRI scanners. We applied multivariate machine learning (ML) to a large multicenter sample from the recently created framework of the European DTI Study on Dementia (EDSD). We hypothesized that ML approaches may compensate for effects of multicenter acquisition. We included a sample of 137 patients with clinically probable AD (MMSE 20.6±5.3) and 143 healthy elderly controls, scanned on nine different scanners. For diagnostic classification we used the DTI indices fractional anisotropy (FA) and mean diffusivity (MD) and, for comparison, gray matter and white matter density maps from anatomical MRI. Data were classified using a Support Vector Machine (SVM) and a Naïve Bayes (NB) classifier. We used two cross-validation approaches: (i) test and training samples randomly drawn from the entire data set (pooled cross-validation) and (ii) the data from each scanner used as the test set, with the data from the remaining scanners as the training set (scanner-specific cross-validation). In the pooled cross-validation, the SVM achieved an accuracy of 80% for FA and 83% for MD. Accuracies for NB were significantly lower, ranging between 68% and 75%. Removing variance components arising from scanners using principal component analysis did not significantly change the classification results for either classifier. For the scanner-specific cross-validation, the classification accuracy was reduced for both SVM and NB. After mean correction, classification accuracy reached a level comparable to the results obtained from the pooled cross-validation. Our findings support the notion that machine learning classification allows robust classification of DTI data sets arising from multiple scanners, even if a new data set comes from a scanner that was not part of the training sample.
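A minimal sketch of the scanner-specific (leave-one-scanner-out) cross-validation described above, using a linear SVM and a simple per-scanner mean correction; the feature matrix, labels and correction scheme are illustrative assumptions, not the study's pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import accuracy_score

def leave_one_scanner_out(X, y, scanner_ids, mean_correct=True):
    # X: numpy array, subjects x DTI features (e.g. voxel-wise FA or MD values);
    # y: diagnosis labels; scanner_ids: scanner label per subject.
    accuracies = []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=scanner_ids):
        X_train, X_test = X[train_idx].copy(), X[test_idx].copy()
        if mean_correct:
            # Subtract each scanner's mean feature vector, a simple proxy for
            # the mean correction discussed in the abstract (an assumption).
            for ids, Xs in ((scanner_ids[train_idx], X_train),
                            (scanner_ids[test_idx], X_test)):
                for s in np.unique(ids):
                    Xs[ids == s] -= Xs[ids == s].mean(axis=0)
        clf = SVC(kernel="linear").fit(X_train, y[train_idx])
        accuracies.append(accuracy_score(y[test_idx], clf.predict(X_test)))
    return float(np.mean(accuracies))
```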
