
    Biological variability dominates and influences analytical variance in HPLC-ECD studies of the human plasma metabolome

    Background: Biomarker-based assessments of biological samples are widespread in clinical, pre-clinical, and epidemiological investigations. We previously developed serum metabolomic profiles, assessed by HPLC separations coupled with coulometric array detection, that can accurately identify ad libitum-fed and caloric-restricted rats. These profiles are being adapted for human epidemiology studies, given the importance of energy balance in human disease.
    Methods: Human plasma samples were biochemically analyzed using HPLC separations coupled with coulometric electrode array detection.
    Results: We identified these markers/metabolites in human plasma, and then used them to determine which human samples represent blinded duplicates with 100% accuracy (N = 30 of 30). At least 47 of 61 metabolites tested were sufficiently stable for use even after 48 hours of exposure to shipping conditions. Stability of some metabolites differed between individuals (N = 10 at 0, 24, and 48 hours), suggesting the influence of some biological factors on parameters normally considered analytical.
    Conclusion: Overall analytical precision (mean median CV, ~9%) and total between-person variation (median CV, ~50–70%) appear well suited to enable use of metabolomic markers in human clinical trials and epidemiological studies, including studies of the effect of caloric intake and balance on long-term cancer risk.

    Regression calibration with more surrogates than mismeasured variables

    In a recent paper (Weller EA, Milton DK, Eisen EA, Spiegelman D. Regression calibration for logistic regression with multiple surrogates for one exposure. Journal of Statistical Planning and Inference 2007; 137: 449-461), the authors discussed fitting logistic regression models when a scalar main explanatory variable is measured with error by several surrogates, that is, a situation with more surrogates than variables measured with error. They compared two methods of adjusting for measurement error, treating a regression calibration approximate model as if it were exact. One is the standard regression calibration approach, which consists of substituting an estimated conditional expectation of the true covariate given the observed data into the logistic regression. The other is a novel two-stage approach in which the logistic regression is fitted to the multiple surrogates and a linear combination of the estimated slopes is then formed as the estimate of interest. Applying estimated asymptotic variances for both methods in a single data set, with some sensitivity analysis, the authors asserted the superiority of their two-stage approach. We investigate this claim in some detail. A troubling aspect of the proposed two-stage method is that, unlike standard regression calibration and a natural form of maximum likelihood, the resulting estimates are not invariant to reparameterization of the nuisance parameters in the model. We show, however, that, under the regression calibration approximation, the two-stage method is asymptotically equivalent to a maximum likelihood formulation, and is therefore in theory superior to standard regression calibration. However, our extensive finite-sample simulations, in the practically important parameter space where the regression calibration model provides a good approximation, failed to uncover such superiority of the two-stage method. We also discuss extensions to different data structures. © 2012 John Wiley & Sons, Ltd.
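The standard regression calibration approach can be sketched numerically. The toy simulation below is a hedged illustration, not the paper's method: all variable names and parameter values are invented, and a linear outcome model stands in for the paper's logistic setting to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(0, 1, n)          # true exposure (unobserved in practice)
w1 = x + rng.normal(0, 0.8, n)   # surrogate 1, classical measurement error
w2 = x + rng.normal(0, 0.8, n)   # surrogate 2

# Stage 1 (calibration): estimate E[X | W1, W2] by linear regression;
# in a real study this is fitted in a validation substudy where X is observed.
W = np.column_stack([np.ones(n), w1, w2])
beta = np.linalg.lstsq(W, x, rcond=None)[0]
x_hat = W @ beta

# Stage 2: substitute x_hat for x in the outcome model.
y = 1.5 * x + rng.normal(0, 1, n)   # true slope = 1.5
b_naive = np.polyfit(w1, y, 1)[0]   # attenuated: regresses on one error-prone surrogate
b_rc = np.polyfit(x_hat, y, 1)[0]   # regression-calibrated estimate
```

The naive slope is attenuated toward zero by roughly Var(X)/(Var(X) + Var(error)), while the calibrated slope recovers the true coefficient up to sampling error.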

    Measurement error models with interactions

    © 2015 Published by Oxford University Press. An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of X given W and Z and use it to extend the method of regression calibration to this class of measurement error models. We apply the model to dietary data and test whether self-reported dietary intake includes an interaction between true intake and body mass index. We also perform simulations to compare the model to simpler approximate calibration models.
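As a rough illustration of why the interaction matters, the toy simulation below (hypothetical parameter values; an in-sample least-squares calibration stands in for the paper's conditional-distribution derivation) generates W with an X-by-Z interaction and checks whether a calibration model that includes a W*Z term predicts the true covariate better than one without it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(27, 4, n)   # covariate in the error model, e.g. body mass index
x = rng.normal(0, 1, n)    # true intake (standardized, unobserved in practice)
# Hypothetical error model in which the scaling of X depends on Z:
w = 0.2 + (0.4 + 0.02 * z) * x + rng.normal(0, 0.5, n)

# Calibration estimates E[X | W, Z]; adding a W*Z term mirrors the
# interaction in the error model.
def in_sample_mse(design):
    b = np.linalg.lstsq(design, x, rcond=None)[0]
    return np.mean((x - design @ b) ** 2)

mse_no_int = in_sample_mse(np.column_stack([np.ones(n), w, z]))
mse_int = in_sample_mse(np.column_stack([np.ones(n), w, z, w * z]))
```

When the generating error model truly contains the interaction, the richer calibration model gives a smaller prediction error for the true covariate.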

    A bivariate measurement error model for semicontinuous and continuous variables: Application to nutritional epidemiology

    © 2016, The International Biometric Society. Semicontinuous data in the form of a mixture of a large portion of zero values and continuously distributed positive values frequently arise in many areas of biostatistics. This article is motivated by the analysis of relationships between disease outcomes and intakes of episodically consumed dietary components. An important aspect of studies in nutritional epidemiology is that true diet is unobservable and commonly evaluated by food frequency questionnaires with substantial measurement error. Following the regression calibration approach for measurement error correction, unknown individual intakes in the risk model are replaced by their conditional expectations given mismeasured intakes and other model covariates. Those regression calibration predictors are estimated using short-term unbiased reference measurements in a calibration substudy. Since dietary intakes are often "energy-adjusted," e.g., by using ratios of the intake of interest to total energy intake, the correct estimation of the regression calibration predictor for each energy-adjusted episodically consumed dietary component requires modeling short-term reference measurements of the component (a semicontinuous variable) and energy (a continuous variable) simultaneously in a bivariate model. In this article, we develop such a bivariate model, together with its application to regression calibration. We illustrate the new methodology using data from the NIH-AARP Diet and Health Study (Schatzkin et al., 2001, American Journal of Epidemiology 154, 1119-1125), and also evaluate its performance in a simulation study.

    Validating an FFQ for intake of episodically consumed foods: Application to the National Institutes of Health-AARP Diet and Health Study

    Objective: To develop a method to validate an FFQ for reported intake of episodically consumed foods when the reference instrument measures short-term intake, and to apply the method in a large prospective cohort.
    Design: The FFQ was evaluated in a sub-study of cohort participants who, in addition to the questionnaire, were asked to complete two non-consecutive 24 h dietary recalls (24HR). FFQ-reported intakes of twenty-nine food groups were analysed using a two-part measurement error model that allows for non-consumption on a given day, using the 24HR as a reference instrument under the assumption that the 24HR is unbiased for true intake at the individual level.
    Setting: The National Institutes of Health-AARP Diet and Health Study, a cohort of 567 169 participants living in the USA and aged 50-71 years at baseline in 1995.
    Subjects: A sub-study of the cohort consisting of 2055 participants.
    Results: Estimated correlations of true and FFQ-reported energy-adjusted intakes were 0·5 or greater for most of the twenty-nine food groups evaluated, and estimated attenuation factors (a measure of bias in estimated diet-disease associations) were 0·4 or greater for most food groups.
    Conclusions: The proposed methodology extends the class of foods and nutrients for which an FFQ can be evaluated in studies with short-term reference instruments. Although violations of the assumption that the 24HR is unbiased could be inflating some of the observed correlations and attenuation factors, results suggest that the FFQ is suitable for testing many, but not all, diet-disease hypotheses in a cohort of this size. © 2011 The Authors.
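Under the classical error model with an unbiased short-term reference, the attenuation factor and the correlation with true intake can be estimated from covariances alone. A minimal simulated sketch follows; the parameter values are invented, not estimates from the study, and the continuous setup ignores the two-part (non-consumption) structure the paper handles.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
t = rng.normal(0, 1, n)                    # true usual intake (unobserved)
q = 0.3 + 0.5 * t + rng.normal(0, 0.7, n)  # FFQ report: biased and error-prone
r1 = t + rng.normal(0, 1.0, n)             # 24HR, repeat 1 (unbiased reference)
r2 = t + rng.normal(0, 1.0, n)             # 24HR, repeat 2

cov = np.cov(np.column_stack([q, r1, r2]), rowvar=False)
atten = cov[0, 1] / cov[0, 0]                  # attenuation factor: Cov(Q,T)/Var(Q)
var_t = cov[1, 2]                              # Cov(R1,R2) estimates Var(T)
rho = cov[0, 1] / np.sqrt(cov[0, 0] * var_t)   # corr(T, Q)
```

Cov(Q, R1) estimates Cov(Q, T) because the recall errors are independent of the FFQ, and Cov(R1, R2) estimates Var(T) because the two recalls share only the true-intake component.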

    The impact of stratification by implausible energy reporting status on estimates of diet-health relationships

    © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. The food frequency questionnaire (FFQ) is known to be prone to measurement error. Researchers have suggested excluding implausible energy reporters (IERs) of FFQ total energy when examining the relationship between a health outcome and FFQ-reported intake, in order to obtain less biased estimates of the effect of the error-prone exposure measure; however, the statistical properties of stratifying by IER status have not been studied. Under certain assumptions, including nondifferential error, we show that when stratifying by IER status, the attenuation of the estimated relative risk will be either greater in both strata (implausible and plausible reporters) or less in both strata than in the nonstratified model, contrary to the common belief that the attenuation will be less among plausible reporters and greater among IERs. Whether there is more or less attenuation depends on the pairwise correlations between true exposure, observed exposure, and the stratification variable. Thus, exclusion of IERs is inadvisable, but stratification by IER status can sometimes help. We also address the case of differential error. Examples from the Observing Protein and Energy Nutrition Study and simulations illustrate these results.
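The same-direction behaviour of attenuation across strata can be seen in a deliberately simple simulation. Here the stratification flag is built directly from the (in practice unobservable) error sign, purely to make the point; real IER status is derived from reported versus predicted energy, and the paper's conditions are more general. In this toy setup both strata come out less attenuated than the pooled analysis.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
t = rng.normal(0, 1, n)   # true exposure
u = rng.normal(0, 1, n)   # nondifferential reporting error
w = t + u                 # observed (FFQ-like) exposure
# Toy stratification flag based on the error sign; real IER status is
# derived from reported vs. predicted energy, not from the error itself.
ier = u > 0

def attenuation(mask):
    c = np.cov(t[mask], w[mask])
    return c[0, 1] / c[1, 1]   # slope bias factor Cov(T,W)/Var(W)

lam_all = attenuation(np.ones(n, dtype=bool))
lam_plausible = attenuation(~ier)
lam_ier = attenuation(ier)
```

Conditioning on the error sign shrinks the within-stratum error variance (to 1 - 2/pi here) while leaving Cov(T, W) unchanged, so the attenuation factor rises in both strata relative to the pooled value of about 0.5.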

    A statistical model for measurement error that incorporates variation over time in the target measure, with application to nutritional epidemiology

    Most statistical methods that adjust analyses for measurement error assume that the target exposure T is a fixed quantity for each individual. However, in many applications, the value of T for an individual varies with time. We develop a model that accounts for such variation, describing the model within the framework of a meta-analysis of validation studies of dietary self-report instruments, where the reference instruments are biomarkers. We demonstrate that in this application, the estimates of the attenuation factor and correlation with true intake, key parameters quantifying the accuracy of the self-report instrument, are sometimes substantially modified under the time-varying exposure model compared with estimates obtained under a traditional fixed-exposure model. We conclude that accounting for the time element in measurement error problems is potentially important. © 2015 John Wiley & Sons, Ltd.

    Fitting a bivariate measurement error model for episodically consumed dietary components

    There has been great public health interest in estimating usual, i.e., long-term average, intake of episodically consumed dietary components that are not consumed daily by everyone, e.g., fish, red meat and whole grains. Short-term measurements of episodically consumed dietary components have zero-inflated skewed distributions. So-called two-part models have been developed for such data in order to correct for measurement error due to within-person variation and to estimate the distribution of usual intake of the dietary component in the univariate case. However, there is arguably much greater public health interest in the usual intake of an episodically consumed dietary component adjusted for energy (caloric) intake, e.g., ounces of whole grains per 1000 kilo-calories, which reflects usual dietary composition and adjusts for different total amounts of caloric intake. Because of this public health interest, it is important to have models to fit such data, and it is important that the model-fitting methods can be applied to all episodically consumed dietary components. We have recently developed a nonlinear mixed effects model (Kipnis et al., 2010), and have fit it by maximum likelihood using nonlinear mixed effects programs and methodology (the SAS NLMIXED procedure). Maximum likelihood fitting of such a nonlinear mixed model is generally slow because of 3-dimensional adaptive Gaussian quadrature, and there are times when the programs either fail to converge or converge to models with a singular covariance matrix. For these reasons, we develop a Markov chain Monte Carlo (MCMC) method for fitting this model, which allows for both frequentist and Bayesian inference. There are technical challenges to developing this solution because one of the covariance matrices in the model is patterned.
Our main application is to the National Institutes of Health (NIH)-AARP Diet and Health Study, where we illustrate our methods for modeling the energy-adjusted usual intake of fish and whole grains. We demonstrate numerically that our methods lead to increased speed of computation, converge to reasonable solutions, and have the flexibility to be used in either a frequentist or a Bayesian manner. © 2011 Berkeley Electronic Press. All rights reserved.
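A toy version of the zero-inflated, two-part structure these models address can clarify why ordinary models fail on such data. The sketch below only generates data with the two-part shape (person-level random effects, invented parameter values); it performs no measurement error correction and is not the paper's mixed model.

```python
import numpy as np

rng = np.random.default_rng(4)
n, days = 5000, 2                        # people, short-term repeats per person

# Person-level random effects drive both parts of the model.
u_p = rng.normal(0.0, 1.0, n)            # random effect: consumption propensity
u_a = rng.normal(0.0, 0.4, n)            # random effect: log amount when consumed
p = 1.0 / (1.0 + np.exp(-(-1.0 + u_p)))  # part 1: P(consume on a given day)
mu = 1.0 + u_a                           # part 2: mean log amount on consumption days

consumed = rng.random((n, days)) < p[:, None]
daily = np.where(
    consumed,
    np.exp(mu[:, None] + rng.normal(0.0, 0.6, (n, days))),  # within-person noise
    0.0,
)

# True usual intake: P(consume) times E[amount | consume] (lognormal mean).
usual = p * np.exp(mu + 0.6 ** 2 / 2)
frac_zero = (daily == 0).mean()          # zero inflation in the short-term data
```

A large share of short-term observations are exact zeros even though every person has a positive usual intake, which is precisely why a single continuous model is inadequate and a two-part model is needed.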

    Taking advantage of the strengths of 2 different dietary assessment instruments to improve intake estimates for nutritional epidemiology

    With the advent of Internet-based 24-hour recall (24HR) instruments, it is now possible to envision their use in cohort studies investigating the relation between nutrition and disease. Recognizing that all dietary assessment instruments are subject to measurement error, and correcting for it under the assumption that the 24HR is unbiased for usual intake, the authors simultaneously address precision, power, and sample size under the following 3 conditions: 1) 1-12 24HRs; 2) a single calibrated food frequency questionnaire (FFQ); and 3) a combination of 24HR and FFQ data. Using data from the Eating at America's Table Study (1997-1998), the authors found that 4-6 administrations of the 24HR are optimal for most nutrients and food groups and that combined use of multiple 24HR and FFQ data sometimes provides data superior to use of either method alone, especially for foods that are not regularly consumed. For all food groups but the most rarely consumed, use of 2-4 recalls alone, with or without additional FFQ data, was superior to use of FFQ data alone. Thus, if self-administered automated 24HRs are to be used in cohort studies, 4-6 administrations of the 24HR should be considered along with administration of an FFQ. © 2012 The Author.
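The finding that a handful of recalls captures most of the attainable precision follows from how within-person day-to-day variance shrinks when recalls are averaged. A hedged numpy sketch (the variance components are invented, chosen so within-person variance exceeds between-person variance, as is typical for foods that are not eaten daily):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
var_t, var_day = 1.0, 2.0                # between-person vs. within-person variance
t = rng.normal(0.0, np.sqrt(var_t), n)   # true usual intake

def corr_with_truth(k):
    """Correlation of the mean of k unbiased 24HRs with true usual intake."""
    rbar = t + rng.normal(0.0, np.sqrt(var_day / k), n)  # mean of k recalls
    return np.corrcoef(t, rbar)[0, 1]

corrs = {k: corr_with_truth(k) for k in (1, 2, 4, 6, 12)}
```

Theoretically the correlation is sqrt(var_t / (var_t + var_day / k)), so it climbs quickly through k = 4-6 and then flattens, which is consistent with the diminishing returns of additional administrations reported above.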