Evidence-based sizing of non-inferiority trials using decision models
Abstract
Background
There are significant challenges to the successful conduct of non-inferiority trials because they require large numbers to demonstrate that an alternative intervention is "not too much worse" than the standard. In this paper, we present a novel strategy for designing non-inferiority trials using an approach for determining the appropriate non-inferiority margin (δ), which explicitly balances the benefits of interventions in the two arms of the study (e.g. lower recurrence rate or better survival) with the burden of interventions (e.g. toxicity, pain), and early and late-term morbidity.
Methods
We use a decision analytic approach to simulate a trial using a fixed value for the trial outcome of interest (e.g. cancer incidence or recurrence) under the standard intervention (pS) and systematically varying the incidence of the outcome in the alternative intervention (pA). The non-inferiority margin, pA − pS = δ, is reached when the lower event rate of the standard therapy counterbalances the higher event rate but improved morbidity burden of the alternative. We consider the appropriate non-inferiority margin as the tipping point at which the quality-adjusted life-years saved in the two arms are equal.
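The tipping-point idea above can be sketched in a few lines: sweep the alternative arm's event rate upward until its lower intervention burden is exactly offset by its extra events. All parameter values below are illustrative assumptions, not figures from the paper.

```python
# Hypothetical tipping-point search for the non-inferiority margin delta.
# Every numeric value here is an assumption for illustration only.

def qaly_loss(event_rate, qaly_per_event, intervention_burden):
    """Expected QALYs lost = outcome losses plus intervention burden."""
    return event_rate * qaly_per_event + intervention_burden

P_S = 0.005            # assumed event rate under the standard arm
QALY_PER_EVENT = 5.0   # assumed QALYs lost per cancer diagnosis
BURDEN_S = 0.02        # assumed burden of standard arm (more procedures)
BURDEN_A = 0.001       # assumed burden of the less intensive alternative

loss_standard = qaly_loss(P_S, QALY_PER_EVENT, BURDEN_S)

# Sweep p_A upward until the alternative's higher event rate offsets
# its lower burden; the tipping point defines delta = p_A - p_S.
p_a, step = P_S, 1e-5
while qaly_loss(p_a, QALY_PER_EVENT, BURDEN_A) < loss_standard:
    p_a += step
delta = p_a - P_S
print(f"non-inferiority margin delta ~ {delta:.4%}")
```

With these toy inputs the margin also has a closed form, delta = (BURDEN_S − BURDEN_A) / QALY_PER_EVENT, which the sweep should reproduce.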
Results
Using the European Polyp Surveillance non-inferiority trial as an example, our decision analytic approach suggests an appropriate non-inferiority margin, defined here as the difference between the two study arms in the 10-year risk of being diagnosed with colorectal cancer, of 0.42% rather than the 0.50% used to design the trial. The size of the non-inferiority margin was smaller for higher assumed burden of colonoscopies.
Conclusions
The example demonstrates that applying our proposed method appears feasible in real-world settings and offers the benefits of more explicit and rigorous quantification of the various considerations relevant for determining a non-inferiority margin and associated trial sample size.
https://deepblue.lib.umich.edu/bitstream/2027.42/146777/1/12874_2018_Article_643.pd
Combining Information from Two Surveys to Estimate County-Level Prevalence Rates of Cancer Risk Factors and Screening
Cancer surveillance requires estimates of the prevalence of cancer risk factors and screening for small areas such as counties. Two popular data sources are the Behavioral Risk Factor Surveillance System (BRFSS), a telephone survey conducted by state agencies, and the National Health Interview Survey (NHIS), an area probability sample survey conducted through face-to-face interviews. Both data sources have advantages and disadvantages. The BRFSS is a larger survey, and almost every county is included in the survey; but it has lower response rates as is typical with telephone surveys, and it does not include subjects who live in households with no telephones. On the other hand, the NHIS is a smaller survey, with the majority of counties not included; but it includes both telephone and non-telephone households and has higher response rates. A preliminary analysis shows that the distributions of cancer screening and risk factors are different for telephone and non-telephone households. Thus, information from the two surveys may be combined to address both nonresponse and noncoverage errors. A hierarchical Bayesian approach that combines information from both surveys is used to construct county-level estimates. The proposed model incorporates potential noncoverage and nonresponse biases in the BRFSS as well as complex sample design features of both surveys. A Markov Chain Monte Carlo method is used to simulate draws from the joint posterior distribution of unknown quantities in the model based on the design-based direct estimates and county-level covariates. Yearly prevalence estimates at the county level for 49 states, as well as for the entire state of Alaska and the District of Columbia, are developed for six outcomes using BRFSS and NHIS data from the years 1997-2000. The outcomes include smoking and use of common cancer screening procedures. The NHIS/BRFSS combined county-level estimates are substantially different from those based on BRFSS alone.
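The full hierarchical Bayesian model is beyond a short example, but the core intuition of borrowing strength across two surveys can be shown with a much simpler stand-in: a precision-weighted (inverse-variance) combination of two direct estimates. The function and all numbers below are illustrative assumptions, not the paper's model or data.

```python
# Simplified, illustrative stand-in for combining two survey estimates
# of a county-level prevalence: inverse-variance (precision) weighting.
# All values are made up for demonstration.

def combine(est_a, se_a, est_b, se_b):
    """Precision-weighted combination of two independent estimates."""
    w_a, w_b = 1.0 / se_a**2, 1.0 / se_b**2
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    se = (w_a + w_b) ** -0.5
    return est, se

# Hypothetical inputs: a BRFSS-like estimate (larger sample, smaller SE,
# possible noncoverage bias) and an NHIS-like estimate (smaller sample,
# but covering non-telephone households).
est, se = combine(0.22, 0.015, 0.26, 0.030)
print(f"combined prevalence = {est:.3f} (SE {se:.3f})")
```

The combined estimate is pulled toward the more precise source while retaining a smaller standard error than either input; the actual model additionally adjusts for noncoverage/nonresponse bias and design features, which this sketch omits.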
Confidence intervals for ranks of age-adjusted rates across states or counties
Health indices provide information to the general public on the health condition of the community. They can also be used to inform the government's policy making, to evaluate the effect of a current policy or healthcare program, or for program planning and priority setting. It is a common practice that the health indices across different geographic units are ranked and the ranks are reported as fixed values. We argue that the ranks should be viewed as random and hence should be accompanied by an indication of precision (i.e., confidence intervals). A technical difficulty in doing so is how to account for the dependence among the ranks in the construction of confidence intervals. In this paper, we propose a novel Monte Carlo method for constructing the individual and simultaneous confidence intervals of ranks for age-adjusted rates. The proposed method uses as input age-specific counts (of cases of disease or deaths) and their associated populations. We have further extended it to the case in which only the age-adjusted rates and confidence intervals are available. Finally, we demonstrate the proposed method by analyzing US age-adjusted cancer incidence rates and mortality rates for cancer and other diseases by states and counties within a state, using a website that will be publicly available. The results show that for rare or relatively rare diseases (especially at the county level), ranks are essentially meaningless because of their large variability, while for more common diseases in larger geographic units, ranks can be effectively utilized.
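A minimal Monte Carlo sketch of rank intervals, under strong simplifying assumptions (each area's rate treated as normal with a known standard error; made-up rates), illustrates the idea: resample all rates jointly, rank each replicate, and take percentiles of each area's simulated ranks. This is not the authors' exact algorithm.

```python
# Illustrative Monte Carlo confidence intervals for ranks.
# Rates and standard errors below are hypothetical.
import random

random.seed(1)
rates = [50.0, 52.0, 53.0, 70.0]   # hypothetical rates per 100,000
ses   = [4.0, 4.0, 4.0, 4.0]       # hypothetical standard errors
n_sims = 5000

rank_draws = [[] for _ in rates]
for _ in range(n_sims):
    # Draw one replicate of all rates, then rank them jointly so the
    # dependence among ranks is carried through automatically.
    draw = [random.gauss(m, s) for m, s in zip(rates, ses)]
    order = sorted(range(len(draw)), key=lambda i: draw[i])
    for rank, i in enumerate(order, start=1):
        rank_draws[i].append(rank)

for i, d in enumerate(rank_draws):
    d.sort()
    lo, hi = d[int(0.025 * n_sims)], d[int(0.975 * n_sims) - 1]
    print(f"area {i}: point rank {sorted(rates).index(rates[i]) + 1}, "
          f"95% CI for rank [{lo}, {hi}]")
```

With these toy inputs the three closely spaced areas get wide, overlapping rank intervals, while the clear outlier's interval collapses to a single rank, mirroring the paper's finding that ranks are only informative when rates are well separated relative to their variability.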
The Joinpoint-Jump and Joinpoint-Comparability Ratio Model for Trend Analysis with Applications to Coding Changes in Health Statistics
Analysis of trends in health data collected over time can be affected by instantaneous changes in coding that cause sudden increases/decreases, or "jumps," in data. Despite these sudden changes, the underlying continuous trends can present valuable information related to the changing risk profile of the population, the introduction of screening, new diagnostic technologies, or other causes. The joinpoint model is a well-established methodology for modeling trends over time using connected linear segments, usually on a logarithmic scale. Joinpoint models that ignore data jumps due to coding changes may produce biased estimates of trends. In this article, we introduce methods to incorporate a sudden discontinuous jump in an otherwise continuous joinpoint model. The size of the jump is either estimated directly (the Joinpoint-Jump model) or estimated using supplementary data (the Joinpoint-Comparability Ratio model). Examples using ICD-9/ICD-10 cause of death coding changes, and coding changes in the staging of cancer, illustrate the use of these models.
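A toy version of the jump idea can be written as ordinary least squares on a log scale: fit log(rate) = a + b·t + c·1[t ≥ t0], where t0 is the known coding-change year and exp(c) estimates the jump (i.e., a comparability-ratio-like factor). This single-segment sketch with simulated data is illustrative only, not the authors' implementation (which also estimates joinpoints).

```python
# Toy jump model on a log scale, fit by ordinary least squares.
# Data are simulated: a 2%/yr decline with a 10% upward coding jump.
import math

years = list(range(2000, 2012))
t0 = 2006                       # assumed coding-change year
rates = [30.0 * 0.98 ** (y - 2000) * (1.10 if y >= t0 else 1.0)
         for y in years]

# Design matrix columns: intercept, time, jump indicator.
X = [[1.0, y - 2000, 1.0 if y >= t0 else 0.0] for y in years]
z = [math.log(r) for r in rates]

# Solve the 3x3 normal equations X'X beta = X'z by Gaussian elimination.
XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)]
       for i in range(3)]
Xtz = [sum(row[i] * zk for row, zk in zip(X, z)) for i in range(3)]
for i in range(3):                       # forward elimination
    for j in range(i + 1, 3):
        f = XtX[j][i] / XtX[i][i]
        XtX[j] = [XtX[j][m] - f * XtX[i][m] for m in range(3)]
        Xtz[j] -= f * Xtz[i]
beta = [0.0, 0.0, 0.0]
for i in (2, 1, 0):                      # back substitution
    beta[i] = (Xtz[i] - sum(XtX[i][j] * beta[j]
                            for j in range(i + 1, 3))) / XtX[i][i]

print(f"annual % change ~ {100 * (math.exp(beta[1]) - 1):.2f}")
print(f"estimated jump factor ~ {math.exp(beta[2]):.3f}")
```

Because the simulated data are exactly log-linear apart from the jump, the fit recovers both the −2% annual trend and the 1.10 jump factor; ignoring the indicator column would instead bias the slope upward.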
Using Gini coefficient to determine optimal cluster reporting sizes for spatial scan statistics
Background: Spatial and space-time scan statistics are widely used in disease surveillance to identify geographical areas of elevated disease risk and for the early detection of disease outbreaks. With a scan statistic, a scanning window of variable location and size moves across the map to evaluate thousands of overlapping windows as potential clusters, adjusting for the multiple testing. Almost always, the method will find many very similar overlapping clusters, and it is not useful to report all of them. This paper proposes to use the Gini coefficient to help select which of the many overlapping clusters to report. Methods: The Gini coefficient provides a quick and intuitive way to evaluate the degree of heterogeneity of the collection of clusters, which is useful to explain how well the cluster collection reveals the underlying true cluster patterns. Using simulation studies and real cancer mortality data, it is compared with the traditional approach for reporting non-overlapping clusters. Results: The Gini coefficient can identify a more refined collection of non-overlapping clusters to report. For example, it is able to determine when it makes more sense to report a collection of smaller non-overlapping clusters versus a single large cluster containing all of them. It also fulfils a set of desirable theoretical properties, such as being invariant under a uniform multiplication of the population numbers by the same constant. Conclusions: The Gini coefficient can be used to determine which set of non-overlapping clusters to report. It has been implemented in the free SaTScan™ software version 9.3 (www.satscan.org).
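For readers unfamiliar with the statistic itself, here is the standard Gini coefficient on a set of values, using the common sorted-rank closed form. This shows the general measure of heterogeneity only, not SaTScan's internal cluster-selection computation; the example values are made up.

```python
# Standard Gini coefficient on a list of non-negative values.
# A higher value indicates a more heterogeneous (unequal) collection.

def gini(values):
    """Gini coefficient via the sorted closed form:
    sum_i (2i - n - 1) * v_i / (n * sum(v)), with v sorted ascending."""
    v = sorted(values)
    n, total = len(v), sum(v)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(v)) / (n * total)

print(gini([1, 1, 1, 1]))             # perfectly equal -> 0.0
print(round(gini([0, 0, 0, 10]), 2))  # fully concentrated -> 0.75
```

In the paper's setting the coefficient is applied across candidate cluster collections to judge which collection best separates high-risk areas from the background; the arithmetic above is the same.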
The impact of overdiagnosis on the selection of efficient lung cancer screening strategies
Peer Reviewed
https://deepblue.lib.umich.edu/bitstream/2027.42/136362/1/ijc30602_am.pdf
https://deepblue.lib.umich.edu/bitstream/2027.42/136362/2/ijc30602.pd
Comparative economic evaluation of data from the ACRIN national CT colonography trial with three cancer intervention and surveillance modeling network microsimulations
Purpose: To estimate the cost-effectiveness of computed tomographic (CT) colonography for colorectal cancer (CRC) screening in average-risk asymptomatic subjects in the United States aged 50 years. Materials and Methods: Enrollees in the American College of Radiology Imaging Network National CT Colonography Trial provided informed consent, and approval was obtained from the institutional review board at each site. CT colonography performance estimates from the trial were incorporated into three Cancer Intervention and Surveillance Modeling Network CRC microsimulations. Simulated survival and lifetime costs for screening 50-year-old subjects in the United States with CT colonography every 5 or 10 years were compared with those for guideline-concordant screening with colonoscopy, flexible sigmoidoscopy plus either sensitive unrehydrated fecal occult blood testing (FOBT) or fecal immunochemical testing (FIT), and no screening. Perfect and reduced screening adherence scenarios were considered. Incremental cost-effectiveness and net health benefits were estimated from the U.S. health care sector perspective, assuming a 3% discount rate. Results: CT colonography at 5- and 10-year screening intervals was more costly and less effective than FOBT plus flexible sigmoidoscopy in all three models in both 100% and 50% adherence scenarios. Colonoscopy also was more costly and less effective than FOBT plus flexible sigmoidoscopy, except in the CRC-SPIN model assuming 100% adherence (incremental cost-effectiveness ratio: 50 000 per life-year gained). Conclusion: All three models predict CT colonography to be more costly and less effective than non-CT colonographic screening but net beneficial compared with no screening given model assumptions.
Comparing Benefits from Many Possible Computed Tomography Lung Cancer Screening Programs: Extrapolating from the National Lung Screening Trial Using Comparative Modeling
Background: The National Lung Screening Trial (NLST) demonstrated that in current and former smokers aged 55 to 74 years, with at least 30 pack-years of cigarette smoking history and who had quit smoking no more than 15 years ago, 3 annual computed tomography (CT) screens reduced lung cancer-specific mortality by 20% relative to 3 annual chest X-ray screens. We compared the benefits achievable with 576 lung cancer screening programs that varied CT screen number and frequency, ages of screening, and eligibility based on smoking. Methods and Findings: We used five independent microsimulation models with lung cancer natural history parameters previously calibrated to the NLST to simulate life histories of the US cohort born in 1950 under all 576 programs. "Efficient" (within model) programs prevented the greatest number of lung cancer deaths, compared to no screening, for a given number of CT screens. Among 120 "consensus efficient" (identified as efficient across models) programs, the average starting age was 55 years, the stopping age was 80 or 85 years, the average minimum pack-years was 27, and the maximum years since quitting was 20. Among consensus efficient programs, 11% to 40% of the cohort was screened, and 153 to 846 lung cancer deaths were averted per 100,000 people. In all models, annual screening based on age and smoking eligibility in NLST was not efficient; continuing screening to age 80 or 85 years was more efficient. Conclusions: Consensus results from five models identified a set of efficient screening programs that include annual CT lung cancer screening using criteria like NLST eligibility but extended to older ages. Guidelines for screening should also consider harms of screening and individual patient characteristics.
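The notion of an "efficient" program (most deaths averted for a given number of screens) can be sketched as a simple non-dominated-set filter: drop any program for which some other program averts at least as many deaths with no more screens. The program names and all numbers below are hypothetical, not NLST/CISNET results, and this weak-dominance filter is a simplification of the frontier construction used across the five models.

```python
# Illustrative non-dominated ("efficient") screening-program filter.
# (screens, deaths averted) pairs are hypothetical per 100,000 people.

programs = {
    "A (triennial 60-75)": (200_000, 300),
    "B (biennial 55-80)":  (450_000, 520),
    "C (annual 55-75)":    (900_000, 600),
    "D (annual 55-80)":    (1_100_000, 700),
    "E (annual 50-80)":    (1_300_000, 690),   # dominated by D
}

def efficient(progs):
    """Keep programs not dominated by another program that uses no
    more screens while averting at least as many deaths."""
    keep = {}
    for name, (screens, averted) in progs.items():
        dominated = any(
            s <= screens and a >= averted and (s, a) != (screens, averted)
            for s, a in progs.values())
        if not dominated:
            keep[name] = (screens, averted)
    return keep

for name in efficient(programs):
    print(name)
```

Here program E is excluded because D averts more deaths with fewer screens; the surviving programs trace the screens-vs-benefit frontier from which guideline trade-offs are read.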