
    Spatial normalization improves the quality of genotype calling for Affymetrix SNP 6.0 arrays

    Background: Microarray measurements are susceptible to a variety of experimental artifacts, some of which give rise to systematic biases that are spatially dependent in a unique way on each chip. It is likely that such artifacts affect many SNP arrays, but the normalization methods used in currently available genotyping algorithms make no attempt at spatial bias correction. Here, we propose an effective single-chip spatial bias removal procedure for Affymetrix 6.0 SNP arrays or platforms with similar design features. This procedure deals with both extreme and subtle biases and is intended to be applied before standard genotype calling algorithms. Results: Application of the spatial bias adjustments on HapMap samples resulted in higher genotype call rates with equal or even better accuracy for thousands of SNPs. Consequently, the normalization procedure is expected to lead to more meaningful biological inferences and could be valuable for genome-wide SNP analysis. Conclusions: Spatial normalization can potentially rescue thousands of SNPs in a genetic study at the small cost of computational time. The approach is implemented in R and available from the authors upon request.
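    The procedure itself is distributed by the authors, but the general idea of single-chip spatial bias removal can be sketched: estimate a smooth intensity trend over the chip's (x, y) coordinates and subtract it. A minimal illustration in R using 2-D loess follows; the smoothing approach and all names here are assumptions for illustration, not the authors' actual algorithm.

```r
## Illustrative only: smooth log2 intensities over (x, y) chip coordinates
## with 2-D loess and subtract the fitted spatial surface. This is a generic
## spatial-bias correction, not necessarily the paper's exact procedure.
spatial_normalize <- function(intensity, x, y, span = 0.25) {
  log_int <- log2(intensity)
  fit <- loess(log_int ~ x * y, span = span, degree = 1,
               family = "symmetric")          # robust to outlier probes
  bias <- predict(fit)                        # estimated spatial trend
  2^(log_int - bias + mean(log_int))          # recentre and back-transform
}

## Example with simulated probe intensities on a 100 x 100 grid
set.seed(1)
grid <- expand.grid(x = 1:100, y = 1:100)
true <- rlnorm(nrow(grid), meanlog = 7)
biased <- true * 2^(0.01 * grid$x)            # smooth spatial gradient
cleaned <- spatial_normalize(biased, grid$x, grid$y)
```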

    Analysis of time-to-event for observational studies: Guidance to the use of intensity models

    This paper provides guidance for researchers with some mathematical background on the conduct of time-to-event analysis in observational studies based on intensity (hazard) models. Discussions of basic concepts such as the time axis, event definition, and censoring are given. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit, and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time-dependent covariates. We discuss prediction based on hazard models and the difficulties of drawing proper causal conclusions from such models. Finally, we present a series of examples where the methods and checklists are exemplified. Computational details and implementation using the freely available R software are documented in the Supplementary Material (see http://publicifsv.sund.ku.dk/~pka/STRATOSTG8). The paper was prepared as part of the STRATOS initiative.
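    To make the immortal time bias point concrete, the sketch below uses the survival package's counting-process formulation to code a treatment that starts mid-follow-up as a time-dependent covariate; the data and variable names are hypothetical, and this is one standard recipe rather than the paper's full checklist.

```r
library(survival)

## Hypothetical data: one row per subject with follow-up time, event status,
## and the time treatment started (NA if never treated).
base <- data.frame(id = 1:4,
                   futime   = c(10, 8, 12, 5),
                   event    = c(1, 0, 1, 1),
                   trt_time = c(3, NA, 6, NA))

## tmerge() splits follow-up at treatment start, so treatment enters as a
## time-dependent covariate rather than a fixed baseline one. Coding it as
## fixed would misclassify pre-treatment person-time and induce immortal
## time bias.
td <- tmerge(base, base, id = id, death = event(futime, event))
td <- tmerge(td, base, id = id, treated = tdc(trt_time))

fit <- coxph(Surv(tstart, tstop, death) ~ treated, data = td)
summary(fit)
```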

    A Bayesian Approach to Multi-State Hidden Markov Models: Application to Dementia Progression

    People are living longer than ever before, and with this come new complications and challenges for humanity. Among the most pressing of these challenges is understanding the role of aging in the development of dementia. This paper is motivated by the Mayo Clinic Study of Aging data on 4742 subjects collected since 2004, and how it can be used to draw inference on the role of aging in the development of dementia. We construct a hidden Markov model (HMM) to represent progression of dementia from states associated with the buildup of amyloid plaque in the brain and the loss of cortical thickness. A hierarchical Bayesian approach is taken to estimate the parameters of the HMM with a truly time-inhomogeneous infinitesimal generator matrix and cut-point-agnostic response functions for the continuous-valued biomarker measurements. A Bayesian approach with these features could be useful in many disease progression models. Additionally, an approach is illustrated for correcting a common bias in delayed enrollment studies, in which some or all subjects are not observed at baseline. Standard software is incapable of accounting for this critical feature, so code to perform the estimation of the model described below is made available online.
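    For orientation, the continuous-time machinery behind such a model can be sketched: a time-inhomogeneous infinitesimal generator Q(t) yields transition probabilities over short intervals via the matrix exponential. The three-state structure and rate functions below are illustrative assumptions, not the paper's specification.

```r
library(expm)   # provides the matrix exponential expm()

## Illustrative 3-state progression model (e.g., healthy -> impaired ->
## dementia) with an age-dependent (time-inhomogeneous) generator.
## The rate functions are placeholders, not the paper's estimates.
Q_at <- function(age) {
  q12 <- exp(-6 + 0.05 * age)   # healthy  -> impaired, rises with age
  q23 <- exp(-5 + 0.04 * age)   # impaired -> dementia
  rbind(c(-q12,  q12,    0),
        c(   0, -q23,  q23),
        c(   0,    0,    0))    # dementia is absorbing
}

## Transition probabilities over [age, age + h], treating Q as constant
## on the short interval (piecewise-constant approximation).
P_interval <- function(age, h = 1) expm(Q_at(age + h / 2) * h)

P_interval(75)   # one-year transition probabilities at age 75
```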

    Adjusted Survival Curves


    Quality assessment metrics for whole genome gene expression profiling of paraffin embedded samples

    BACKGROUND: Formalin-fixed, paraffin-embedded tissues are most commonly used for routine pathology analysis and for long-term tissue preservation in the clinical setting. Many institutions have large archives of formalin-fixed, paraffin-embedded tissues that provide a unique opportunity for understanding genomic signatures of disease. However, genome-wide expression profiling of formalin-fixed, paraffin-embedded samples has been challenging due to RNA degradation. Because of the significant heterogeneity in tissue quality, normalization and analysis of these data present particular challenges. The distributions of intensity values from archival tissues are inherently noisy and skewed due to differential sample degradation, raising two primary concerns: whether a highly skewed array will unduly influence initial normalization of the data, and whether outlier arrays can be reliably identified. FINDINGS: Two simple extensions of common regression diagnostic measures are introduced that measure the stress an array undergoes during normalization and how much a given array deviates from the remaining arrays post-normalization. These metrics are applied to a study involving 1618 formalin-fixed, paraffin-embedded HER2-positive breast cancer samples from the N9831 adjuvant trial processed with Illumina's cDNA-mediated Annealing, Selection, Extension, and Ligation (DASL) assay. CONCLUSION: Proper assessment of array quality within a research study is crucial for controlling unwanted variability in the data. The metrics proposed in this paper have direct biological interpretations and can be used to identify arrays that should either be removed from analysis altogether or down-weighted to reduce their influence in downstream analyses.
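    The paper defines its two metrics precisely; as a rough illustration of the idea, the sketch below computes simplified stand-ins on simulated data: how far quantile normalization moves each array (stress) and how far each normalized array sits from a consensus profile (deviation).

```r
library(preprocessCore)   # Bioconductor; provides normalize.quantiles()

## raw: matrix of log2 intensities, probes in rows, arrays in columns
set.seed(2)
raw <- matrix(rnorm(5000 * 20, mean = 8), nrow = 5000, ncol = 20)
raw[, 20] <- raw[, 20] * 1.6 - 4      # one degraded, skewed array

normed <- normalize.quantiles(raw)

## "Stress": how much each array was moved by normalization.
stress <- colMeans(abs(normed - raw))

## "Deviation": distance of each normalized array from the consensus
## (median) array. Both are simplified analogues of the paper's metrics.
consensus <- apply(normed, 1, median)
deviation <- colMeans(abs(normed - consensus))

which(stress > median(stress) + 3 * mad(stress))   # flag outlier arrays
```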

    Factors associated with hospital readmission in sickle cell disease

    Background: Sickle cell disease is the most frequent hereditary disease in Brazil, and people with the disease may be hospitalised several times in the course of their lives. The purpose of this study was to estimate the hazard ratios of factors associated with the time between hospital admissions. Methods: The study sample comprised all patients admitted, from 2000 to 2004, to a university hospital in Rio de Janeiro State, south-east Brazil, as a result of acute complications of sickle cell disease (SCD). Given the statistical problem of studying individuals with multiple events over time, the following extensions of the Cox proportional hazards model were compared: the independent-increment marginal model (Andersen-Gill) and the random effects (frailty) model. Results: The study considered 71 patients, who were admitted 223 times for acute events related to SCD. The hazard ratios for hospital readmission were statistically significant for the prior occurrence of vaso-occlusive crisis and the development of renal failure. However, analysis of residuals of the marginal model revealed evidence of non-proportionality for some covariates. Conclusion: The results from applying the two models were generally similar, indicating that the findings are not highly sensitive to the choice of approach. The better fit of the frailty model suggests that there are unmeasured individual factors with an impact on hospital readmission.
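    Both models compared here are available in R's survival package; a minimal sketch with simulated counting-process data follows (one row per at-risk interval per patient; all variable names are illustrative).

```r
library(survival)

## Simulate recurrent-event data: id, interval (tstart, tstop], event
## indicator, and two illustrative covariates standing in for prior
## vaso-occlusive crisis (voc) and renal failure (renal).
set.seed(3)
readmit <- do.call(rbind, lapply(1:60, function(i) {
  k <- rpois(1, 2) + 1                   # number of intervals for patient i
  stops <- cumsum(rexp(k, 0.1))
  data.frame(id = i,
             tstart = c(0, head(stops, -1)),
             tstop  = stops,
             event  = c(rep(1, k - 1), rbinom(1, 1, 0.5)),
             voc    = rbinom(k, 1, 0.4),
             renal  = rbinom(k, 1, 0.1))
}))

## Andersen-Gill marginal model: robust variance clustered on patient.
ag <- coxph(Surv(tstart, tstop, event) ~ voc + renal + cluster(id),
            data = readmit)

## Shared frailty model: a random effect per patient captures unmeasured
## patient-level risk of readmission.
fr <- coxph(Surv(tstart, tstop, event) ~ voc + renal + frailty(id),
            data = readmit)

cox.zph(ag)   # residual check for proportional hazards, as in the paper
```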

    3' tag digital gene expression profiling of human brain and universal reference RNA using Illumina Genome Analyzer

    Background: Massively parallel sequencing has the potential to replace microarrays as the method for transcriptome profiling. Currently there are two protocols: full-length RNA sequencing (RNA-seq) and 3'-tag digital gene expression (DGE). In this preliminary effort, we evaluated the 3' DGE approach using two reference RNA samples from the MicroArray Quality Control Consortium (MAQC). Results: Using the brain RNA sample across multiple runs, we demonstrated that the transcript profiles from 3' DGE were highly reproducible between technical and biological replicates, between libraries constructed by the same lab and by different labs, and between two generations of Illumina's Genome Analyzers. Approximately 65% of all sequence reads mapped to mitochondrial genes, ribosomal RNAs, and canonical transcripts. Comparison of the expression profiles of brain RNA and universal human reference RNA demonstrated that DGE was also highly quantitative, with excellent correlation of differential expression with quantitative real-time PCR. Furthermore, one lane of 3' DGE sequencing, using the current sequencing chemistry and image-processing software, had a wider dynamic range for transcriptome profiling and was able to detect genes expressed at levels normally below the detection threshold of microarrays. Conclusion: 3'-tag DGE profiling with massively parallel sequencing achieved high sensitivity and reproducibility for transcriptome profiling. Although it lacks the ability to detect alternative splicing events, in contrast to RNA-seq, it is much more affordable and clearly outperformed microarrays (Affymetrix) in detecting low-abundance transcripts.
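    Downstream analysis of 3' tag counts is count-based rather than intensity-based; as one common route (not necessarily the authors' pipeline), the sketch below runs a two-group differential-expression test on simulated tag counts with the Bioconductor package edgeR.

```r
library(edgeR)   # Bioconductor package for count-based expression data

## Simulated tag counts: rows are transcripts, columns are libraries
## (two brain, two universal reference). Real data would come from
## mapping 3' tags to canonical transcripts.
set.seed(4)
counts <- matrix(rnbinom(4000 * 4, mu = 50, size = 5), ncol = 4,
                 dimnames = list(NULL, c("brain1", "brain2", "uhr1", "uhr2")))
group <- factor(c("brain", "brain", "uhr", "uhr"))

y <- DGEList(counts = counts, group = group)
y <- calcNormFactors(y)          # library-size (TMM) normalization
y <- estimateDisp(y)             # common + tagwise dispersion
et <- exactTest(y)               # two-group test: uhr vs brain
topTags(et)                      # top differentially expressed transcripts
```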

    Defining imaging biomarker cut points for brain aging and Alzheimer's disease

    Introduction: Our goal was to develop cut points for amyloid positron emission tomography (PET), tau PET, fluorodeoxyglucose (FDG) PET, and MRI cortical thickness. Methods: We examined five methods for determining cut points. Results: The reliable worsening method produced a cut point only for amyloid PET. The specificity, sensitivity, and accuracy of cognitively impaired versus young clinically normal (CN) methods labeled the most people abnormal, and all gave similar cut points for tau PET, FDG PET, and cortical thickness. Cut points defined using the accuracy of cognitively impaired versus age-matched CN method labeled fewer people abnormal. Discussion: In the future, we will use a single cut point for amyloid PET (standardized uptake value ratio, 1.42; centiloid, 19) based on the reliable worsening cut point method. We will base lenient cut points for tau PET, FDG PET, and cortical thickness on the accuracy of cognitively impaired versus young CN method, and base conservative cut points on the accuracy of cognitively impaired versus age-matched CN method.
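    The "accuracy of cognitively impaired versus CN" family of methods amounts to choosing the threshold that best separates the two groups; a minimal sketch on simulated biomarker values follows (the study's actual estimation is more involved, and all numbers here are made up).

```r
## Choose the biomarker threshold maximizing classification accuracy for
## cognitively impaired (CI) versus clinically normal (CN). Illustrative only.
set.seed(5)
suvr_cn <- rnorm(200, mean = 1.2, sd = 0.15)   # e.g., amyloid PET SUVR
suvr_ci <- rnorm(100, mean = 1.7, sd = 0.30)

accuracy_cut <- function(cn, ci) {
  cuts <- sort(unique(c(cn, ci)))
  acc <- vapply(cuts, function(cp) {
    (sum(ci >= cp) + sum(cn < cp)) / (length(ci) + length(cn))
  }, numeric(1))
  cuts[which.max(acc)]             # threshold with the highest accuracy
}

accuracy_cut(suvr_cn, suvr_ci)     # compare with the paper's 1.42 for amyloid PET
```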