An efficient multi-core implementation of a novel HSS-structured multifrontal solver using randomized sampling
We present a sparse linear system solver that is based on a multifrontal
variant of Gaussian elimination, and exploits low-rank approximation of the
resulting dense frontal matrices. We use hierarchically semiseparable (HSS)
matrices, which have low-rank off-diagonal blocks, to approximate the frontal
matrices. For HSS matrix construction, a randomized sampling algorithm is used
together with interpolative decompositions. The combination of the randomized
compression with a fast ULV HSS factorization leads to a solver with lower
computational complexity than the standard multifrontal method for many
applications, resulting in speedups of up to 7-fold for problems in our test
suite. The implementation targets many-core systems by using task parallelism
with dynamic runtime scheduling. Numerical experiments show performance
improvements over state-of-the-art sparse direct solvers. The implementation
achieves high performance and good scalability on a range of modern shared
memory parallel systems, including the Intel Xeon Phi (MIC). The code is part
of a software package called STRUMPACK -- STRUctured Matrices PACKage, which
also has a distributed memory component for dense rank-structured matrices.
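The compression kernel the abstract builds on is randomized sampling of a numerically low-rank block: multiply the block by a random test matrix, orthonormalize the result, and use that basis as the compressed factor. Below is a minimal NumPy sketch of this generic randomized range finder (not STRUMPACK code; the rank k, oversampling p, and matrix sizes are illustrative assumptions), applied to the kind of dense low-rank block that appears off the diagonal of a frontal matrix.

```python
import numpy as np

def randomized_lowrank(A, k, p=10, seed=None):
    """Randomized range finder: approximate A ~ Q @ B, where Q has k+p
    orthonormal columns spanning (approximately) the range of A."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + p))  # random Gaussian test matrix
    Y = A @ Omega                            # sample the range of A
    Q, _ = np.linalg.qr(Y)                   # orthonormal basis for the samples
    B = Q.T @ A                              # small (k+p) x n factor
    return Q, B

# Toy check on a numerically low-rank dense block (rank 60).
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 60)) @ rng.standard_normal((60, 500))
Q, B = randomized_lowrank(A, k=60, seed=1)
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))  # ~1e-15
```

In the HSS setting described above, the sampling step is the same idea, but the samples are reused across the hierarchy and interpolative decompositions replace the plain QR factor.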
Bayesian latent time joint mixed-effects model of progression in the Alzheimer's Disease Neuroimaging Initiative.
Introduction: We characterize long-term disease dynamics from cognitively healthy to dementia using data from the Alzheimer's Disease Neuroimaging Initiative.
Methods: We apply a latent time joint mixed-effects model to 16 cognitive, functional, biomarker, and imaging outcomes in the Alzheimer's Disease Neuroimaging Initiative. Markov chain Monte Carlo methods are used for estimation and inference.
Results: We find good concordance between latent time and diagnosis. Change in amyloid positron emission tomography shows a moderate correlation with change in cerebrospinal fluid tau (ρ = 0.310) and phosphorylated tau (ρ = 0.294) and a weaker correlation with amyloid-β 42 (ρ = 0.176). Compared with amyloid positron emission tomography, change in volumetric magnetic resonance imaging summaries is more strongly correlated with cognitive measures (e.g., ρ = 0.731 for ventricles and the Alzheimer's Disease Assessment Scale). The average disease trends are consistent with the amyloid cascade hypothesis.
Discussion: The latent time joint mixed-effects model can (1) uncover long-term disease trends; (2) estimate the sequence of pathological abnormalities; and (3) provide subject-specific prognostic estimates of the time until onset of symptoms.
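To make the latent-time idea concrete, here is a small sketch that simulates several outcomes sharing one subject-specific shift along a common disease timeline, then recovers the shifts by alternating least squares. It is a point-estimate caricature of the model rather than the paper's Bayesian MCMC fit, and every variable name and parameter value is an invented illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_vis, n_out = 50, 4, 3

# Simulate: outcome k for subject i at study time t follows
#   y = alpha_k + beta_k * (t + delta_i) + noise,
# where delta_i is subject i's latent shift along the shared disease timeline.
delta_true = rng.normal(0.0, 3.0, n_subj)
alpha_true = np.array([0.0, 1.0, -0.5])
beta_true = np.array([0.8, 0.4, 1.2])
t = np.tile(np.arange(n_vis, dtype=float), (n_subj, 1))        # (subj, visit)
lat = t + delta_true[:, None]                                  # latent disease time
y = (alpha_true + beta_true * lat[..., None]
     + rng.normal(0.0, 0.3, (n_subj, n_vis, n_out)))

# Alternating least squares: given the shifts, refit (alpha_k, beta_k) per
# outcome; given (alpha_k, beta_k), refit each subject's shift in closed form.
d = np.zeros(n_subj)
alpha, beta = np.zeros(n_out), np.ones(n_out)
for _ in range(50):
    s = (t + d[:, None]).ravel()
    X = np.column_stack([np.ones_like(s), s])
    for k in range(n_out):
        alpha[k], beta[k] = np.linalg.lstsq(X, y[:, :, k].ravel(), rcond=None)[0]
    resid = y - (alpha + beta * t[..., None])                  # leaves beta_k * d_i
    d = (resid * beta).sum(axis=(1, 2)) / (n_vis * (beta**2).sum())
    d -= d.mean()                                              # anchor the timeline

print(np.corrcoef(d, delta_true)[0, 1])  # close to 1: shifts are recovered
```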
The relative efficiency of time-to-progression and continuous measures of cognition in presymptomatic Alzheimer's disease.
Introduction: Clinical trials in preclinical Alzheimer's disease are challenging because of the slow rate of disease progression. We use a simulation study to demonstrate that models of repeated cognitive assessments detect treatment effects more efficiently than models of time to progression.
Methods: Multivariate continuous data are simulated from a Bayesian joint mixed-effects model fit to data from the Alzheimer's Disease Neuroimaging Initiative. Simulated progression events are algorithmically derived from the continuous assessments using a random forest model fit to the same data.
Results: We find that power is approximately doubled with models of repeated continuous outcomes compared with the time-to-progression analysis. The simulations also demonstrate that a plausible informative missing-data pattern can induce a bias that inflates treatment effects, yet the 5% type I error rate is maintained.
Discussion: Given the relative inefficiency of time to progression, it should be avoided as a primary analysis approach in clinical trials in preclinical Alzheimer's disease.
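A stripped-down version of such a power simulation is sketched below. A t-test on per-subject slopes stands in for the repeated continuous-outcome model, and a test on dichotomized progression events stands in for the time-to-progression analysis (the paper derives events with a random forest and uses survival methods; this is only the skeleton of the comparison). The effect size, visit schedule, and progression threshold are assumed values, so the power numbers will not match the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def one_trial(n_per_arm=100, n_vis=5, effect=0.3, threshold=-1.0):
    """One two-arm trial with linear cognitive decline; treatment slows the slope."""
    t = np.arange(n_vis, dtype=float)
    slopes, progressed = [], []
    for slope_mean in (-0.5, -0.5 * (1 - effect)):             # control, treated
        true_slope = rng.normal(slope_mean, 0.3, n_per_arm)
        y = true_slope[:, None] * t + rng.normal(0, 0.5, (n_per_arm, n_vis))
        slopes.append(np.polyfit(t, y.T, 1)[0])                # per-subject slope
        progressed.append(y.min(axis=1) < threshold)           # ever crossed threshold
    p_cont = stats.ttest_ind(slopes[0], slopes[1]).pvalue
    table = [[progressed[g].sum(), (~progressed[g]).sum()] for g in (0, 1)]
    p_prog = stats.fisher_exact(table)[1]
    return p_cont < 0.05, p_prog < 0.05

power = np.mean([one_trial() for _ in range(500)], axis=0)
print(f"power -- continuous slopes: {power[0]:.2f}, progression events: {power[1]:.2f}")
```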
Predicting the course of Alzheimer's progression.
Alzheimer's disease is the most common neurodegenerative disease and is characterized by the accumulation of amyloid-beta peptides leading to the formation of plaques and tau protein tangles in the brain. These neuropathological features precede cognitive impairment and Alzheimer's dementia by many years. To better understand and predict the course of disease from early-stage asymptomatic to late-stage dementia, it is critical to study the patterns of progression of multiple markers. In particular, we aim to predict the likely future course of progression for individuals given only a single observation of their markers. Improved individual-level prediction may lead to improved clinical care and clinical trials. We propose a two-stage approach to modeling and predicting measures of cognition, function, brain imaging, fluid biomarkers, and diagnosis of individuals using multiple domains simultaneously. In the first stage, joint (or multivariate) mixed-effects models are used to simultaneously model multiple markers over time. In the second stage, random forests are used to predict categorical diagnoses (cognitively normal, mild cognitive impairment, or dementia) from predictions of continuous markers based on the first-stage model. The combination of the two models allows one to leverage their key strengths in order to obtain improved accuracy. We characterize the predictive accuracy of this two-stage approach using data from the Alzheimer's Disease Neuroimaging Initiative. The two-stage approach using a single joint mixed-effects model for all continuous outcomes yields better diagnostic classification accuracy compared to using separate univariate mixed-effects models for each of the continuous outcomes. Overall prediction accuracy above 80% was achieved over a period of 2.5 years. The results further indicate that overall accuracy is improved when markers from multiple assessment domains, such as cognition, function, and brain imaging, are used in the prediction algorithm as compared to the use of markers from a single domain only.
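The two-stage pipeline can be sketched with scikit-learn. Stage 1 below is an ordinary multivariate linear regression used purely as a placeholder for the joint mixed-effects model of the continuous markers; stage 2 is the random forest that turns predicted markers into a three-class diagnosis. The synthetic data generator and all parameter values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 600

# Synthetic stand-in for ADNI-style data: one latent severity drives both the
# continuous markers and the diagnosis (0 = CN, 1 = MCI, 2 = dementia).
severity = rng.normal(0, 1, n)
markers_base = severity[:, None] + rng.normal(0, 0.5, (n, 4))      # baseline markers
markers_future = 1.2 * severity[:, None] + rng.normal(0, 0.5, (n, 4))
diagnosis = np.digitize(severity + rng.normal(0, 0.3, n), [-0.5, 0.8])

Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    markers_base, markers_future, diagnosis, random_state=0)

# Stage 1: predict future continuous markers from baseline observations.
stage1 = LinearRegression().fit(Xb_tr, Xf_tr)

# Stage 2: random forest classifies diagnosis from stage-1 marker predictions.
stage2 = RandomForestClassifier(n_estimators=200, random_state=0)
stage2.fit(stage1.predict(Xb_tr), y_tr)

print("held-out diagnostic accuracy:", stage2.score(stage1.predict(Xb_te), y_te))
```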
Numerical Predictions of Three-Dimensional Velocity Field and Bed Shear Stress around Bridge Piers
Source: ICHE Conference Archive - https://mdi-de.baw.de/icheArchiv
Smooth tail index estimation
Both parametric distribution functions appearing in extreme value theory -
the generalized extreme value distribution and the generalized Pareto
distribution - have log-concave densities if the extreme value index gamma is
in [-1,0]. Replacing the order statistics in tail index estimators by their
corresponding quantiles from the distribution function that is based on the
estimated log-concave density leads to novel smooth quantile and tail index
estimators. These new estimators aim at estimating the tail index especially in
small samples. Acting as a smoother of the empirical distribution function, the
log-concave distribution function estimator reduces estimation variability to a
much greater extent than it introduces bias. As a consequence, Monte Carlo
simulations demonstrate that the smoothed versions of the estimators are
clearly superior to their non-smoothed counterparts in terms of mean squared
error.
Comment: 17 pages, 5 figures. Slightly changed the Pickands estimator, added some more introduction and discussion.
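For reference, the classical Pickands estimator that the smoothed variant modifies combines three upper order statistics, as in the NumPy sketch below. In the smoothed version of the abstract, those order statistics would be replaced by the corresponding quantiles of the log-concave distribution function estimate (not implemented here); the sample distribution and the choice of k are illustrative.

```python
import numpy as np

def pickands(x, k):
    """Classical Pickands estimator of the extreme value index gamma,
    built from the order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1)."""
    xs = np.sort(x)
    n = xs.size
    assert 4 * k <= n, "need 4k <= n"
    a, b, c = xs[n - k], xs[n - 2 * k], xs[n - 4 * k]
    return np.log((a - b) / (b - c)) / np.log(2.0)

# The uniform distribution has a finite upper endpoint, so gamma = -1.
rng = np.random.default_rng(4)
x = rng.uniform(size=2000)
print(pickands(x, k=100))  # near -1, up to the estimator's sampling noise
```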
Serum Non-high-density lipoprotein cholesterol concentration and risk of death from cardiovascular diseases among U.S. adults with diagnosed diabetes: the Third National Health and Nutrition Examination Survey linked mortality study
Background: Non-high-density lipoprotein cholesterol (non-HDL-C) measures all atherogenic apolipoprotein B-containing lipoproteins and predicts risk of cardiovascular disease (CVD). The association of non-HDL-C with risk of death from CVD in diabetes is not well understood. This study assessed the hypothesis that, among adults with diabetes, non-HDL-C may be related to the risk of death from CVD.
Methods: We analyzed data from 1,122 adults aged 20 years and older with diagnosed diabetes who participated in the Third National Health and Nutrition Examination Survey linked mortality study (299 deaths from CVD according to underlying cause of death; median follow-up, 12.4 years).
Results: Compared with participants with serum non-HDL-C concentrations of 35 to 129 mg/dL, those with higher serum levels had a higher risk of death from total CVD: the RRs were 1.34 (95% CI: 0.75-2.39) and 2.25 (95% CI: 1.30-3.91) for non-HDL-C concentrations of 130-189 mg/dL and 190-403 mg/dL, respectively (P = 0.003 for linear trend), after adjustment for demographic characteristics and selected risk factors. In subgroup analyses, significant linear trends were identified for the risk of death from ischemic heart disease: the RRs were 1.59 (95% CI: 0.76-3.32) and 2.50 (95% CI: 1.28-4.89) (P = 0.006 for linear trend), and from stroke: the RRs were 3.37 (95% CI: 0.95-11.90) and 5.81 (95% CI: 1.96-17.25) (P = 0.001 for linear trend).
Conclusions: Among adults with diabetes, higher serum non-HDL-C concentrations were significantly associated with an increased risk of death from CVD. Our prospective data support the notion that reducing serum non-HDL-C concentrations may be beneficial in the prevention of excess deaths from CVD among affected adults.
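Non-HDL-C is total cholesterol minus HDL cholesterol, so the study's exposure bands can be reproduced directly; the function name and example values below are invented for illustration.

```python
import numpy as np

def non_hdl_bands(total_chol, hdl):
    """Non-HDL-C (mg/dL) = total cholesterol - HDL-C, assigned to the
    study's exposure bands: 35-129 (reference), 130-189, 190-403."""
    non_hdl = np.asarray(total_chol, float) - np.asarray(hdl, float)
    band = np.select([non_hdl < 130, non_hdl < 190],
                     ["35-129 (reference)", "130-189"], default="190-403")
    return non_hdl, band

print(non_hdl_bands([180, 250, 230], [60, 45, 35]))
# 120 -> reference band; 205 and 195 -> 190-403 band
```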