424 research outputs found
Precision Medicine: From Science To Value.
Precision medicine is making an impact on patients, health care delivery systems, and research participants in ways that were only imagined fifteen years ago when the human genome was first sequenced. Discovery of disease-causing and drug-response genetic variants has accelerated, while adoption into clinical medicine has lagged. We define precision medicine and the stakeholder community required to enable its integration into research and health care. We explore the intersection of data science, analytics, and precision medicine in the formation of health systems that carry out research in the context of clinical care and that optimize the tools and information used to deliver improved patient outcomes. We provide examples of real-world impact and conclude with a policy and economic agenda necessary for the adoption of this new paradigm of health care, both in the United States and globally.
Do we undertreat hyperlipidemia? The use of lipid-lowering agents in patients with coronary artery disease
Taking Cardiovascular Genetic Association Studies to the Next Level
Genetic information is beginning to have a direct impact on patient care, and it is important that cardiologists appreciate the value of, and approaches to, associating genetic variation with health outcomes. Genetic associations should be based on compelling genetic and biological hypotheses and should be statistically sound, so as to reduce the possibility of “false discovery” in the setting of testing multiple hypotheses. Study designs should clearly define cases and controls and the measurement of phenotypes. Finally, findings should be replicated in at least one independent cohort. Consideration of these principles should provide insight into disease biology based on genetic findings and encourage their meaningful adoption into clinical practice.
Latent protein trees
Unbiased, label-free proteomics is becoming a powerful technique for measuring protein expression in almost any biological sample. The output of these measurements, after preprocessing, is a collection of features and their associated intensities for each sample. Subsets of features within the data are from the same peptide, subsets of peptides are from the same protein, and subsets of proteins are in the same biological pathways; there is therefore the potential for very complex and informative correlational structure inherent in these data. Recent attempts to utilize these data often focus on the identification of single features that are associated with a particular phenotype relevant to the experiment. However, to date, no published approaches directly model what we know to be multiple different levels of correlation structure. Here we present a hierarchical Bayesian model specifically designed to model such correlation structure in unbiased, label-free proteomics. This model utilizes partial identification information from peptide sequencing and database lookup, as well as the observed correlation in the data, to appropriately compress features into latent proteins and to estimate their correlation structure. We demonstrate the effectiveness of the model using artificial/benchmark data and in the context of a series of proteomics measurements of blood plasma from a collection of volunteers who were infected with two different strains of viral influenza.
Comment: Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics; http://dx.doi.org/10.1214/13-AOAS639
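The feature → peptide → protein hierarchy the abstract describes can be illustrated with a deliberately simplified sketch. The paper fits a hierarchical Bayesian model; the toy function below (all names hypothetical) only mimics the compression step, collapsing a samples-by-features intensity matrix into per-protein summaries by averaging within each group, under the assumption that every feature is already mapped to a peptide and every peptide to a protein.

```python
import numpy as np

def compress_features(intensities, feature_to_peptide, peptide_to_protein):
    """Collapse a (samples x features) intensity matrix into latent
    per-protein summaries by averaging within each mapping group.
    NOTE: a naive stand-in for the paper's Bayesian compression."""
    n_samples, n_features = intensities.shape
    # features -> peptides: average all features mapped to the same peptide
    peptides = sorted(set(feature_to_peptide))
    pep_mat = np.column_stack([
        intensities[:, [j for j in range(n_features)
                        if feature_to_peptide[j] == p]].mean(axis=1)
        for p in peptides
    ])
    # peptides -> proteins: average all peptides mapped to the same protein
    proteins = sorted({peptide_to_protein[p] for p in peptides})
    prot_mat = np.column_stack([
        pep_mat[:, [i for i, p in enumerate(peptides)
                    if peptide_to_protein[p] == pr]].mean(axis=1)
        for pr in proteins
    ])
    return proteins, prot_mat
```

The real model replaces these hard averages with latent variables whose posterior also captures the correlation structure between proteins, and it handles features whose peptide or protein identity is only partially known.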
Novel—and “Neu”—Therapeutic Possibilities for Heart Failure⁎
⁎Editorials published in the Journal of the American College of Cardiology reflect the views of the authors and do not necessarily represent the views of JACC or the American College of Cardiology.
Lessons learned from pre-implementation activities to integrate a web-based personalized health risk assessment program in diverse primary care settings
Consideration of patient preferences and challenges in storage and access of pharmacogenetic test results
Pharmacogenetic (PGx) testing is one of the primary drivers of personalized medicine. The use of PGx testing may provide a lifetime of benefits by tailoring drug dosing and medication selection to improve therapeutic outcomes and reduce adverse responses. We aimed to assess public interest and concerns regarding the sharing and storage of PGx test results that would facilitate the re-use of PGx data across a lifetime of care.
Unsupervised Bayesian linear unmixing of gene expression microarrays
Background: This paper introduces a new constrained model and the corresponding algorithm, called unsupervised Bayesian linear unmixing (uBLU), to identify biological signatures from high-dimensional assays such as gene expression microarrays. The basis for uBLU is a Bayesian model in which the data samples are represented as an additive mixture of random positive gene signatures, called factors, with random positive mixing coefficients, called factor scores, that specify the relative contribution of each signature to a specific sample. A distinguishing feature of the proposed method is that uBLU constrains the factor loadings to be non-negative and the factor scores to be probability distributions over the factors. Furthermore, it also provides estimates of the number of factors. A Gibbs sampling strategy is adopted here to generate random samples according to the posterior distribution of the factors, factor scores, and number of factors. These samples are then used to estimate all the unknown parameters. Results: First, the proposed uBLU method is applied to several simulated datasets with known ground truth and compared with previous factor decomposition methods, such as principal component analysis (PCA), non-negative matrix factorization (NMF), Bayesian factor regression modeling (BFRM), and the gradient-based algorithm for general matrix factorization (GB-GMF). Second, we illustrate the application of uBLU on a real, time-evolving gene expression dataset from a recent viral challenge study in which individuals were inoculated with influenza A/H3N2/Wisconsin. We show that the uBLU method significantly outperforms the other methods on the simulated and real datasets considered here. Conclusions: The results obtained on synthetic and real data illustrate the accuracy of the proposed uBLU method when compared to other factor decomposition methods from the literature (PCA, NMF, BFRM, and GB-GMF).
The uBLU method identifies an inflammatory component closely associated with clinical symptom scores collected during the study. Using a constrained model allows recovery of all the inflammatory genes in a single factor.
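The constraints the abstract describes can be made concrete with a generative sketch of the mixture model itself (not the authors' Gibbs sampler): observed expression X is an additive mixture M·A of non-negative gene signatures M, where each column of the score matrix A is a probability distribution over the factors. All dimensions and distribution choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_factors, n_samples = 50, 3, 10

# Non-negative factor loadings: each column is a gene signature.
M = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, n_factors))

# Factor scores: each column lies on the probability simplex,
# i.e. it is a distribution over the n_factors signatures.
A = rng.dirichlet(alpha=np.ones(n_factors), size=n_samples).T

# Observed data: additive mixture plus small Gaussian noise.
X = M @ A + 0.01 * rng.normal(size=(n_genes, n_samples))
```

Inference in uBLU runs in the opposite direction, using Gibbs sampling to recover M, A, and the number of factors from X alone; the simplex constraint on A is what lets a single factor absorb a coherent signature, such as the inflammatory component mentioned above.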