A reference relative time-scale as an alternative to chronological age for cohorts with long follow-up
Background: Epidemiologists have debated the appropriate time-scale for cohort survival studies, chronological age and time-on-study being two such time-scales. Importantly, assessment of risk factors may depend on the choice of time-scale. Recently, chronological (attained) age has gained support, but a case can be made for a ‘reference relative time-scale’ as an alternative that circumvents difficulties arising with this and other scales. The reference relative time of an individual participant is the integral of a reference population hazard function between the individual's time of entry and time of exit. The objective here is to describe the reference relative time-scale, illustrate its use, compare it with attained age by simulation and explain its relationship to modern and traditional epidemiologic methods.
Results: A comparison was made between two models: a stratified Cox model with age as the time-scale versus an unstratified Cox model using the reference relative time-scale. The illustrative comparison used a UK cohort of cotton workers with differing ages at entry to the study, accrual over an extended period and long follow-up. Additionally, exponential and Weibull models were fitted, since analysis on the reference relative time-scale need not be restricted to the Cox model. A simulation study showed that analysis using the reference relative time-scale and analysis using chronological age had very similar power to detect a significant risk factor, and both were equally unbiased. Further, the analysis using the reference relative time-scale supported fully parametric survival modelling and allowed percentile predictions and mortality curves to be constructed.
Conclusions: The reference relative time-scale was a viable alternative to chronological age, led to simplification of the modelling process and possessed the features of a good time-scale as defined in reliability theory. The reference relative time-scale has several interpretations and provides a unifying concept that links contemporary approaches in survival and reliability analysis to the traditional epidemiologic methods of Poisson regression and standardised mortality ratios. The community of practitioners has not previously made this connection.
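The core quantity in this abstract is straightforward to compute: the reference relative time is the cumulative reference-population hazard accrued between an individual's entry and exit. Below is a minimal sketch in Python; the Gompertz form and parameters of the reference hazard `h_ref` are illustrative assumptions, not values from the paper, which would instead draw the hazard from a published reference population.

```python
# Minimal sketch of the reference relative time-scale described above.
# The Gompertz reference hazard and its parameters are illustrative
# assumptions; in practice h_ref would come from reference-population
# life tables.
import numpy as np
from scipy.integrate import quad

def h_ref(age, a=1e-4, b=0.085):
    """Hypothetical Gompertz reference-population hazard: a * exp(b * age)."""
    return a * np.exp(b * age)

def reference_relative_time(age_entry, age_exit):
    """Integral of the reference hazard from entry age to exit age,
    i.e. the cumulative reference hazard accrued during follow-up."""
    value, _ = quad(h_ref, age_entry, age_exit)
    return value

# Two participants with the same follow-up length but different entry ages
# accrue different reference relative times, because the reference hazard
# rises with age.
print(reference_relative_time(30, 50))  # entered young
print(reference_relative_time(50, 70))  # entered older
```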
Assessment of the Variability in Influenza A(H1N1) Vaccine Effectiveness Estimates Dependent on Outcome and Methodological Approach
Estimation of influenza vaccine effectiveness (VE) varies with study design, clinical outcome considered and statistical methodology used. By estimating VE using differing outcomes and statistical methods on the same cohort of individuals, the variability in the estimates produced can be better understood. The Pandemic Influenza Primary Care Reporting (PIPeR) cohort of approximately 193,000 individuals was used to estimate pandemic VE in Scotland during season 2009-10. VE results for three outcomes were considered: influenza-related consultations, virologically confirmed influenza and death. Use of individualised records allowed all models to be adjusted for age, sex, deprivation, risk status relating to chronic illnesses, seasonal vaccination status and a marker of the individual’s propensity to consult. For the consultation and death outcomes, VE was calculated by comparing rates in the unvaccinated and vaccinated groups, adjusted for the listed factors, using both Cox and Poisson regression models. For the consultation outcome, the unvaccinated group was split into individuals before vaccination and those never vaccinated, to allow for potential differences in the health-seeking behaviour of these groups. For the virology outcome, estimates were calculated using a generalised additive logistic regression model. All models were adjusted for time. Vaccine effect was demonstrated for the influenza-like illness consultation outcome using the Cox model (VE=49%, 95% CI (19%, 67%)), with lower estimates from the model splitting the before and never vaccinated groups (VE=34.2%, 95% CI (-0.5%, 58.9%)). Vaccine effect was also illustrated for overall mortality (VE=40%, 95% CI (18%, 56%)) and a virologically confirmed subset of symptomatic individuals (VE=60%, 95% CI (-38%, 89%)).
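As a rough illustration of the Cox-model route to VE described above, the sketch below fits a proportional hazards model on synthetic individual records and reports VE = (1 − adjusted hazard ratio) × 100. The data frame, column names and effect sizes are invented for illustration; the PIPeR analysis adjusted for more covariates and also fitted Poisson and generalised additive models.

```python
# Minimal sketch: vaccine effectiveness from an adjusted Cox model,
# VE = (1 - hazard ratio) * 100. All data below are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "vaccinated": rng.integers(0, 2, n),
    "age": rng.uniform(0, 90, n),
    "sex": rng.integers(0, 2, n),
})
# Simulate event times with a protective vaccine effect (true HR ~ 0.5).
hazard = 0.01 * np.exp(-0.7 * df["vaccinated"] + 0.01 * df["age"])
df["time"] = rng.exponential(1 / hazard)
df["event"] = (df["time"] < 180).astype(int)  # administrative censoring
df["time"] = df["time"].clip(upper=180)

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")  # adjusts for age, sex
hr = cph.hazard_ratios_["vaccinated"]
print(f"VE = {(1 - hr) * 100:.1f}%")
```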
Low dose radiation and cancer in A-bomb survivors: latency and non-linear dose-response in the 1950–90 mortality cohort
BACKGROUND: Analyses of Japanese A-bomb survivors' cancer mortality risks are used to establish recommended annual dose limits, currently set at 1 mSv (public) and 20 mSv (occupational). Do radiation doses below 20 mSv have a significant impact on cancer mortality in Japanese A-bomb survivors, and is the dose-response linear? METHODS: I analyse stomach, liver, lung, colon, uterus, and all-solid cancer mortality in the 0–20 mSv colon dose subcohort of the 1950–90 (grouped) mortality cohort, by Poisson regression using a time-lagged colon dose to detect latency, while controlling for gender, attained age, and age-at-exposure. I compare linear and non-linear models, including one adapted from the cellular bystander effect for α particles. RESULTS: With a lagged linear model, Excess Relative Risk (ERR) for the liver and all-solid cancers is significantly positive and several orders of magnitude above extrapolations from the Life Span Study Report 12 analysis of the full cohort. Non-linear models are strongly superior to the linear model for the stomach (latency 11.89 years), liver (36.90), lung (13.60) and all-solid (43.86) in fitting the 0–20 mSv data and show significant positive ERR at 0.25 mSv and 10 mSv lagged dose. The slope of the dose-response near zero is several orders of magnitude above the slope at high doses. CONCLUSION: The standard linear model applied to the full 1950–90 cohort greatly underestimates the risks at low doses, which are significant when the 0–20 mSv subcohort is modelled with latency. Non-linear models give a much better fit and are compatible with a bystander effect.
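The modelling setup described here (grouped person-year data, Poisson regression, a time-lagged dose, control for gender, attained age and age-at-exposure) can be sketched as below. This uses a log-linear dose term as a simplified stand-in; the paper's linear and bystander-type ERR dose-response models would require a custom likelihood not shown here. All data, column names and coefficients are synthetic.

```python
# Minimal sketch: Poisson regression on grouped person-year data with a
# lagged dose covariate and a log person-years offset. Synthetic data only;
# a simplified log-linear stand-in for the paper's ERR models.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 400
cells = pd.DataFrame({
    "lagged_dose": rng.uniform(0, 0.02, n),   # Sv, lagged by assumed latency
    "attained_age": rng.uniform(30, 85, n),
    "age_at_exposure": rng.uniform(5, 60, n),
    "male": rng.integers(0, 2, n),
    "person_years": rng.uniform(100, 5000, n),
})
# Simulate deaths from an assumed baseline rate rising with age and dose.
rate = 0.002 * np.exp(0.04 * (cells["attained_age"] - 60)
                      + 5.0 * cells["lagged_dose"])
cells["deaths"] = rng.poisson(rate * cells["person_years"])

model = smf.glm(
    "deaths ~ lagged_dose + attained_age + age_at_exposure + male",
    data=cells,
    family=sm.families.Poisson(),
    offset=np.log(cells["person_years"]),  # rates, not raw counts
).fit()
print(model.summary())
```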
Establishing cytogenetic biodosimetry laboratory in Saudi Arabia and producing preliminary calibration curve of dicentric chromosomes as biomarker for medical dose estimation in response to radiation emergencies
Results of the Analysis of the Blood Beryllium Lymphocyte Proliferation Test Data from the Oak Ridge Y-12 Study
The potential hazards from exposure to beryllium or beryllium compounds in the workplace were first reported in the 1930s. The tritiated thymidine beryllium lymphocyte proliferation test (BeLPT) is an in vitro blood test that is widely used to screen beryllium-exposed workers in the nuclear industry for sensitivity to beryllium. Newman [18] has discussed the clinical significance of the BeLPT and described a standard protocol that was developed in the late 1980s. Cell proliferation is measured by the incorporation of tritiated thymidine into dividing cells on two culture dates and using three concentrations of beryllium sulfate. Results are expressed as a "stimulation index" (SI): the amount of tritiated thymidine (measured by beta counts) in the stimulated cells divided by the counts for the unstimulated cells on the same culture day. Several statistical methods for use in the routine analysis of the BeLPT were considered in the early 1990s by Frome et al. [7]. The least absolute values (LAV) method was recommended for routine analysis of the BeLPT. The purposes of this report are to further evaluate the LAV method using new data, and to describe a new method for identification of an abnormal or borderline test. This new statistical biological positive (SBP) method reflects the clinical judgment that (1) at least two SIs show a "positive" response to beryllium, and (2) the maximum of the six SIs must exceed a cut point that is determined from a reference data set of normal individuals whose blood has been tested by the same method in the same serum. The new data are from the Y-12 facility in Oak Ridge and consist of 1080 worker and 33 non-exposed control BeLPTs (all tested in the same serum). Graphical results are presented to explain the statistical method, and the new SBP method is applied to the Y-12 group. The true positive rate and specificity of the new method were estimated to be 86 percent and 97 percent, respectively.
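The SI computation and the two-part SBP decision rule are concrete enough to sketch directly. In the sketch below, the per-SI "positive" threshold and the maximum-SI cut point are hypothetical placeholders; in the study they are derived from a reference data set of non-exposed individuals tested in the same serum.

```python
# Minimal sketch of the stimulation index and the SBP decision rule.
# Threshold values are illustrative placeholders, not the study's values.
import numpy as np

def stimulation_indices(stimulated_counts, unstimulated_counts):
    """SI = beta counts of beryllium-stimulated cells / counts of
    unstimulated cells from the same culture day. Two culture days
    times three BeSO4 concentrations give six SIs per test."""
    return np.asarray(stimulated_counts) / np.asarray(unstimulated_counts)

def sbp_positive(si, si_threshold=3.0, max_cut_point=5.0):
    """Statistical biological positive: at least two SIs show a
    'positive' response AND the maximum of the six SIs exceeds a cut
    point estimated from reference (normal) individuals."""
    si = np.asarray(si)
    return (np.sum(si > si_threshold) >= 2) and (si.max() > max_cut_point)

six_si = stimulation_indices(
    stimulated_counts=[5200, 6100, 4800, 900, 1100, 7200],
    unstimulated_counts=[1000, 1000, 1000, 1000, 1000, 1000],
)
print(sbp_positive(six_si))
```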
Statistical Methods and Software for the Analysis of Occupational Exposure Data with Non-detectable Values
Environmental exposure measurements are, in general, positive and may be subject to left censoring; i.e., the measured value is less than a "detection limit". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. Parametric methods used to determine acceptable levels of exposure are often based on a two-parameter lognormal distribution. The mean exposure level, an upper percentile, and the exceedance fraction are used to characterize exposure levels, and confidence limits are used to describe the uncertainty in these estimates. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left-censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left-censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on an upper percentile (i.e., the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known, but computational complexity has limited their use in routine data analysis with left-censored data. The recent development of the R environment for statistical data analysis and graphics has greatly enhanced the availability of high-quality non-proprietary (open source) software that serves as the basis for implementing the methods in this paper.
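The censored-lognormal likelihood described here is straightforward to maximise numerically: each detected value contributes a normal log-density on the log scale, and each non-detect contributes the log-probability of falling below the detection limit. The sketch below uses synthetic data and an assumed single detection limit (the report's R-based methods additionally provide confidence limits and nonparametric alternatives); the exposure limit used at the end is likewise a placeholder.

```python
# Minimal sketch: maximum likelihood for left-censored lognormal exposure
# data. Synthetic data; a single assumed detection limit.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
true_mu, true_sigma, dl = np.log(0.5), 0.8, 0.3   # dl = detection limit
x = rng.lognormal(true_mu, true_sigma, 200)
detected = x >= dl
y_obs = np.log(x[detected])            # observed log-values
n_cens = (~detected).sum()             # values reported only as "< dl"

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)          # keep sigma positive
    ll_obs = stats.norm.logpdf(y_obs, mu, sigma).sum()
    ll_cens = n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -(ll_obs + ll_cens)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Quantities used to characterise the exposure profile:
limit = 1.0                                         # placeholder exposure limit
mean_exposure = np.exp(mu_hat + sigma_hat**2 / 2)   # lognormal mean
exceedance = 1 - stats.norm.cdf((np.log(limit) - mu_hat) / sigma_hat)
p95 = np.exp(mu_hat + 1.645 * sigma_hat)            # 95th percentile
print(mean_exposure, exceedance, p95)
```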