
    Regression Analysis In Longitudinal Studies With Non-ignorable Missing Outcomes

    One difficulty in regression analysis for longitudinal data is that the outcomes are often missing in a non-ignorable way (Little & Rubin, 1987). Likelihood-based approaches to deal with non-ignorable missing outcomes can be divided into selection models and pattern mixture models based on the way the joint distribution of the outcome and the missing-data indicators is partitioned. One new approach from each of these two classes of models is proposed. In the first approach, a normal copula-based selection model is constructed to combine the distribution of the outcome of interest and that of the missing-data indicators given the covariates. Parameters in the model are estimated by a pseudo maximum likelihood method (Gong & Samaniego, 1981). In the second approach, a pseudo maximum likelihood method introduced by Gourieroux et al. (1984) is used to estimate the identifiable parameters in a pattern mixture model. This procedure provides consistent estimators when the mean structure is correctly specified for each pattern, with further information on the variance structure giving an efficient estimator. A Hausman-type test (Hausman, 1978) of model misspecification is also developed for model simplification to improve efficiency. Separate simulations are carried out to assess the performance of the two approaches, followed by applications to real data sets from an epidemiological cohort study investigating dementia, including Alzheimer's disease.
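As a minimal illustration of why non-ignorable (MNAR) missingness is a problem, the following sketch (synthetic data; all numbers are assumed for illustration, not taken from the paper) shows that a complete-case mean is biased when the probability of being missing depends on the unobserved outcome itself, which is the situation the two proposed model classes are designed to handle:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy outcome with true mean 2.0
y = rng.normal(loc=2.0, scale=1.0, size=n)

# Non-ignorable (MNAR) missingness: larger outcomes are more likely to be
# missing, so missingness depends on the unobserved value itself.
p_miss = 1.0 / (1.0 + np.exp(-(y - 2.0)))   # logistic in y
missing = rng.random(n) < p_miss

true_mean = y.mean()
complete_case_mean = y[~missing].mean()      # biased downward under MNAR

print(f"true mean          : {true_mean:.3f}")
print(f"complete-case mean : {complete_case_mean:.3f}")
```

Because high values are preferentially dropped, the complete-case analysis underestimates the mean; selection and pattern mixture models address this by jointly modelling the outcome and the missing-data indicators.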

    Inverse probability weighting for covariate adjustment in randomized studies

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed so that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented.
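A minimal sketch of the general idea, not the paper's exact two-stage procedure: the treatment indicator is modelled on baseline covariates only (so no outcome data are seen at the adjustment stage), and the fitted probabilities are then used as inverse probability weights in a Horvitz-Thompson contrast. All data and model choices below are illustrative assumptions:

```python
import numpy as np

def fit_logistic(X, z, iters=25):
    """Plain Newton/IRLS logistic regression; X includes an intercept column."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(H + 1e-8 * np.eye(X.shape[1]), X.T @ (z - p))
    return beta

def ipw_effect(y, z, x):
    """Stage 1: model treatment on baseline covariates only (no outcome).
    Stage 2: inverse-probability-weighted difference of means."""
    X = np.column_stack([np.ones_like(x), x])
    p = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, z)))
    mu1 = np.sum(z * y / p) / np.sum(z / p)
    mu0 = np.sum((1 - z) * y / (1 - p)) / np.sum((1 - z) / (1 - p))
    return mu1 - mu0

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)                        # baseline covariate
z = rng.integers(0, 2, size=n).astype(float)  # 1:1 randomization
y = 1.0 * z + 2.0 * x + rng.normal(size=n)    # true treatment effect = 1

print(f"IPW estimate: {ipw_effect(y, z, x):.3f}")
```

Weighting by estimated (rather than known) randomization probabilities corrects chance covariate imbalance, which is the source of the precision gain.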

    Estimation of treatment effect in a subpopulation: An empirical Bayes approach

    It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches that estimate treatment effects in subpopulations. In particular, due to the limited sample size of subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals for the treatment effect that are often noninformative. We propose an empirical Bayes approach that combines information embedded in a target subpopulation with information from other subjects to construct confidence intervals for the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
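A minimal sketch of the empirical Bayes idea, with all numbers assumed for illustration: the noisy subgroup estimate is shrunk toward the overall estimate with a weight determined by the subgroup's sampling variance relative to the between-subgroup variance, yielding a narrower interval than the subgroup-only analysis:

```python
import numpy as np

def eb_shrinkage_interval(theta_sub, se_sub, theta_all, tau, z=1.96):
    """Shrink a subgroup treatment-effect estimate toward the overall one.
    B = se^2 / (se^2 + tau^2) is the shrinkage weight, where tau^2 is the
    between-subgroup variance (estimated from all subgroups in practice)."""
    B = se_sub**2 / (se_sub**2 + tau**2)
    post_mean = B * theta_all + (1 - B) * theta_sub
    post_sd = np.sqrt((1 - B) * se_sub**2)
    return post_mean, (post_mean - z * post_sd, post_mean + z * post_sd)

# Noisy subgroup estimate 2.5 (SE 1.0), overall effect 1.0, tau = 0.5
est, (lo, hi) = eb_shrinkage_interval(2.5, 1.0, 1.0, 0.5)
print(f"shrunken estimate {est:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

The resulting interval is both recentered and shorter than the unshrunk interval 2.5 +/- 1.96, which is how borrowing information across subjects makes the subgroup conclusion more informative.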

    Doubly Robust Estimation of Causal Effect: Upping the Odds of Getting the Right Answers

    Propensity score–based methods or multiple regressions of the outcome are often used for confounding adjustment in the analysis of observational studies. In either approach, a model is needed: a model describing the relationship between the treatment assignment and covariates in the propensity score–based method, or a model for the outcome and covariates in the multiple regressions. The 2 models are usually unknown to the investigators and must be estimated. Correct model specification, therefore, is essential for the validity of the final causal estimate. We describe in this article a doubly robust estimator that combines both models to offer analysts 2 chances of obtaining a valid causal estimate, and we demonstrate its use through a data set from the Lindner Center Study.
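A minimal sketch of the standard augmented inverse probability weighting (AIPW) form of a doubly robust estimator, on synthetic data (all numbers assumed). The "2 chances" property is visible directly: here the propensity model is deliberately wrong, yet the estimate stays consistent because the outcome model is right:

```python
import numpy as np

def aipw(y, z, e, m1, m0):
    """Augmented IPW estimate of E[Y(1)] - E[Y(0)]; consistent if EITHER
    the propensity scores e OR the outcome predictions (m1, m0) are correct."""
    t1 = z * (y - m1) / e + m1
    t0 = (1 - z) * (y - m0) / (1 - e) + m0
    return np.mean(t1 - t0)

rng = np.random.default_rng(2)
n = 50_000
x = rng.normal(size=n)
e_true = 1.0 / (1.0 + np.exp(-x))             # confounded treatment
z = (rng.random(n) < e_true).astype(float)
y = 2.0 * z + 3.0 * x + rng.normal(size=n)    # true causal effect = 2

# Correct outcome model, deliberately WRONG propensity (constant 0.5):
est = aipw(y, z, np.full(n, 0.5), 2.0 + 3.0 * x, 3.0 * x)
print(f"doubly robust estimate: {est:.3f}")
```

Swapping the misspecification (correct propensity, wrong outcome model) would likewise leave the estimator consistent; only when both models fail does the estimate break down.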

    Model-based peak alignment of metabolomic profiling from comprehensive two-dimensional gas chromatography mass spectrometry

    Background: Comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS) has been used for metabolite profiling in metabolomics. However, much experimental variation remains to be controlled, both within-experiment and between-experiment. For efficient analysis, an ideal peak alignment method to deal with such variations is in great need.
    Results: Using experimental data from a mixture of metabolite standards, we demonstrated that our method performs better than an existing method that is not model-based. We then applied our method to data generated from the plasma of a rat, which also demonstrates the good performance of our model.
    Conclusions: We developed a model-based peak alignment method to process both homogeneous and heterogeneous experimental data. Our method is unique in being the only model-based peak alignment method coupled with metabolite identification in a unified framework. Through comparison with an existing method, we demonstrated that our method performs better. Data are available at http://stage.louisville.edu/faculty/x0zhan17/software/software-development/mspa. The R source code is available at http://www.biostat.iupui.edu/~ChangyuShen/CodesPeakAlignment.zip.
    Trial Registration: 2136949528613691

    An empirical Bayes model using a competition score for metabolite identification in gas chromatography mass spectrometry

    Background: Mass spectrometry (MS) based metabolite profiling has become increasingly popular for scientific and biomedical studies, primarily due to recent technological developments such as comprehensive two-dimensional gas chromatography time-of-flight mass spectrometry (GCxGC/TOF-MS). Nevertheless, the identification of metabolites from complex samples is subject to errors. Statistical/computational approaches to improve the accuracy of the identifications and the false positive estimate are in great need. We propose an empirical Bayes model that accounts for a competition score in addition to the similarity score to tackle this problem. The competition score characterizes the propensity of a candidate metabolite to be matched to some spectrum, based on the metabolite's similarity scores with the other spectra in the library searched against. The competition score allows the model to properly assess the evidence for the presence/absence status of a metabolite based on whether or not the metabolite is matched to some sample spectrum.
    Results: With a mixture of metabolite standards, we demonstrated that our method has better identification accuracy than four other existing methods. Moreover, our method provides a reliable false discovery rate estimate. We also applied our method to data collected from the plasma of a rat and identified some metabolites from the plasma under false discovery rate control.
    Conclusions: We developed an empirical Bayes model for metabolite identification and validated the method through a mixture of metabolite standards and rat plasma. The results show that our hierarchical model improves identification accuracy compared with methods that do not structurally model the involved variables. The improvement in identification accuracy is likely to facilitate downstream analyses such as peak alignment and biomarker identification. Raw data and result matrices can be found at http://www.biostat.iupui.edu/~ChangyuShen/index.htm.
    Trial Registration: 2123938128573429
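A heavily simplified sketch of the Bayesian scoring idea, not the paper's full hierarchical model (which also folds in the competition score): the posterior probability that a matched metabolite is truly present follows from a two-component mixture over similarity scores. All distributional parameters below are assumed for illustration:

```python
import math

def posterior_present(sim, prior=0.5, f1=(0.8, 0.1), f0=(0.4, 0.15)):
    """Posterior probability that a matched metabolite is truly present,
    from a two-component normal mixture over similarity scores.
    f1/f0 are assumed (mean, sd) of the score under presence/absence."""
    def npdf(s, mu, sd):
        return math.exp(-0.5 * ((s - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
    num = prior * npdf(sim, *f1)
    den = num + (1 - prior) * npdf(sim, *f0)
    return num / den

print(f"{posterior_present(0.75):.3f}")  # high-similarity match
print(f"{posterior_present(0.45):.3f}")  # low-similarity match
```

Ranking matches by such posterior probabilities is what makes a principled false discovery rate estimate possible, in contrast to thresholding raw similarity scores.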

    Design and implementation of wire tension measurement system for MWPCs used in the STAR iTPC upgrade

    The STAR experiment at RHIC is planning to upgrade the Time Projection Chamber (TPC) which lies at the heart of the detector. We have designed an instrument to measure the tension of the wires in the multi-wire proportional chambers (MWPCs) that will be used in the TPC upgrade. The wire tension measurement system causes the wires to vibrate and then measures the fundamental frequency of the oscillation via a laser-based optical platform. The platform can scan the entire wire plane automatically in a single run and obtain the tension of each wire with high precision. In this paper, the measurement method and the system setup are described in detail. In addition, test results for a prototype MWPC to be used in the STAR-iTPC upgrade are presented. Comment: 6 pages, 10 figures, to appear in NIM.
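The physics behind the measurement is the standard vibrating-string relation: the fundamental frequency of an ideal stretched wire is f = (1/2L) * sqrt(T/mu), so the measured frequency gives the tension as T = 4 * mu * L^2 * f^2. A short sketch, with wire parameters that are illustrative assumptions rather than the STAR iTPC specifications:

```python
def tension_from_frequency(f_hz, length_m, mu_kg_per_m):
    """Tension of an ideal stretched wire from its measured fundamental
    frequency: T = 4 * mu * L^2 * f^2 (inverting f = (1/2L) sqrt(T/mu))."""
    return 4.0 * mu_kg_per_m * length_m**2 * f_hz**2

mu = 1.5e-4   # linear density of a thin wire, kg/m (assumed)
L = 1.2       # wire length, m (assumed)
f = 100.0     # measured fundamental frequency, Hz (assumed)

print(f"tension = {tension_from_frequency(f, L, mu):.2f} N")  # 8.64 N
```

Because T grows with the square of f, even a modest frequency resolution from the laser platform translates into a precise tension measurement.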

    Subgroup selection in adaptive signature designs of confirmatory clinical trials

    The increasing awareness of treatment effect heterogeneity has motivated flexible designs of confirmatory clinical trials that prospectively allow investigators to test for treatment efficacy in a subpopulation of patients in addition to the entire population. If a target subpopulation is not well characterized in the design stage, it can be developed at the end of a broad eligibility trial under an adaptive signature design. The paper proposes new procedures for subgroup selection and treatment effect estimation (for the selected subgroup) under an adaptive signature design. We first provide a simple and general characterization of the optimal subgroup that maximizes the power for demonstrating treatment efficacy, or the expected gain based on a specified utility function. This characterization motivates a procedure for subgroup selection that involves prediction modelling, augmented inverse probability weighting and low dimensional maximization. A cross-validation procedure can be used to remove or reduce any resubstitution bias that may result from subgroup selection, and a bootstrap procedure can be used to make inference about the treatment effect in the selected subgroup. The proposed approach is evaluated in simulation studies and illustrated with real examples.
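A deliberately minimal sketch of the selection step on synthetic trial data (all numbers assumed; the paper's procedure additionally uses prediction modelling, cross-validation and a full augmented estimator): each patient gets an inverse-probability-weighted pseudo-outcome whose conditional mean is the individual treatment effect, and a one-dimensional scan over biomarker cutoffs picks the subgroup with the largest power-like gain:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=n)                          # biomarker
z = rng.integers(0, 2, size=n).astype(float)    # 1:1 randomization
effect = np.where(x > 0, 2.0, 0.0)              # benefit only when x > 0
y = effect * z + x + rng.normal(size=n)

# IPW pseudo-outcome with known randomization probability 0.5; its
# conditional mean given x equals the individual treatment effect.
pseudo = z * y / 0.5 - (1 - z) * y / 0.5

# Low-dimensional maximization: scan cutoffs c and score each candidate
# subgroup {x > c} by a power-like gain, mean effect times sqrt(size).
cutoffs = np.quantile(x, np.linspace(0.05, 0.8, 16))
best = max(cutoffs, key=lambda c: pseudo[x > c].mean() * np.sqrt((x > c).sum()))
print(f"selected subgroup: x > {best:.2f}")     # near the true cutoff 0
```

The gain criterion trades subgroup size against effect size, which is why it peaks near the true cutoff rather than at an extreme subgroup with a noisy large mean; in practice the resubstitution bias of this scan is what the cross-validation step is meant to correct.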

    A Penalized Mixture Model Approach in Genotype/Phenotype Association Analysis for Quantitative Phenotypes

    A mixture normal model has been developed to partition genotypes in predicting quantitative phenotypes. Its estimation and inference are performed through an EM algorithm. This approach can conduct simultaneous genotype clustering and hypothesis testing. It is a valuable method for predicting the distribution of quantitative phenotypes among multi-locus genotypes across genes or within a gene. The mixture model's performance is evaluated in data analyses for two pharmacogenetics studies. In one example, thirty-five CYP2D6 genotypes were partitioned into three groups to predict the pharmacokinetics of a breast cancer drug, Tamoxifen, a CYP2D6 substrate (p-value = 0.04). In a second example, seventeen CYP2B6 genotypes were categorized into three clusters to predict CYP2B6 protein expression (p-value = 0.002). The biological validity of both partitions is examined using the established functions of CYP2D6 and CYP2B6 alleles. In both examples, we observed that genotypes clustered in the same group have high functional similarities. The power and recovery rate of the true partition for the mixture model approach are investigated in statistical simulation studies, where it outperforms another published method.
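The core machinery, a normal mixture fitted by EM, can be sketched in a few lines. This is a plain unpenalized EM on toy data (two clusters of phenotype values, all numbers assumed), not the paper's penalized procedure or its hypothesis tests:

```python
import numpy as np

def em_normal_mixture(y, k=2, iters=200):
    """Plain EM for a k-component normal mixture with a shared variance,
    clustering per-genotype phenotype values."""
    mu = np.quantile(y, np.linspace(0.25, 0.75, k))  # deterministic init
    pi, sig2 = np.full(k, 1.0 / k), y.var()
    for _ in range(iters):
        # E-step: posterior component memberships for every observation
        d = (y[:, None] - mu[None, :]) ** 2
        w = pi * np.exp(-0.5 * d / sig2)
        w /= w.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and the common variance
        nk = w.sum(axis=0)
        pi = nk / len(y)
        mu = (w * y[:, None]).sum(axis=0) / nk
        sig2 = (w * d).sum() / len(y)
    return pi, mu, sig2

rng = np.random.default_rng(4)
# Toy "genotype effects": phenotype values from two well-separated groups
y = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
pi, mu, sig2 = em_normal_mixture(y)
print(np.sort(mu))   # component means near 0 and 5
```

Assigning each genotype to its highest-posterior component then yields the partition on which the downstream hypothesis testing operates.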