
    Comparison of tests for spatial heterogeneity on data with global clustering patterns and outliers

Abstract Background The ability to evaluate geographic heterogeneity of cancer incidence and mortality is important in cancer surveillance. Many statistical methods for evaluating global clustering and local cluster patterns have been developed and examined in simulation studies. However, the performance of these methods in two extreme cases (global clustering evaluation and local anomaly (outlier) detection) has not been thoroughly investigated. Methods We compare methods for global clustering evaluation, including Tango's Index, Moran's I, and Oden's I*pop, and cluster detection methods such as local Moran's I and the SaTScan elliptic version, on simulated count data that mimic global clustering patterns and outliers for cancer cases in the continental United States. We examine the power and precision of the selected methods in purely spatial analysis. We illustrate Tango's MEET and the SaTScan elliptic version on 1987-2004 HIV and 1950-1969 lung cancer mortality data in the United States. Results For simulated data with outlier patterns, Tango's MEET, Moran's I, and I*pop had powers less than 0.2, while SaTScan had powers around 0.97. For simulated data with global clustering patterns, Tango's MEET and I*pop (with 50% of the total population as the maximum search window) had powers close to 1; SaTScan had powers around 0.7-0.8, and Moran's I had powers around 0.2-0.3. In the real data examples, Tango's MEET indicated the existence of global clustering patterns in both the HIV and lung cancer mortality data. SaTScan found a large cluster for HIV mortality rates, which is consistent with the finding from Tango's MEET. SaTScan also found clusters and outliers in the lung cancer mortality data. Conclusion The SaTScan elliptic version is more efficient for outlier detection than the other methods evaluated in this article. Tango's MEET and Oden's I*pop perform best in global clustering scenarios among the selected methods. SaTScan should be used with caution for data with global clustering patterns, since it may reveal an incorrect spatial pattern even though it has enough power to reject the null hypothesis of homogeneous relative risk. Tango's method should be used instead of SaTScan for global clustering evaluation.
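
For reference, global Moran's I is the n/S0-scaled ratio of spatially weighted cross-products of deviations to their total squared deviation. The sketch below (not the authors' code) computes it with NumPy for a hypothetical adjacency matrix W and a small set of area rates; a positive value indicates spatial clustering of similar rates.

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I for values x (e.g., area incidence rates)
    and a spatial weight matrix W (e.g., binary adjacency)."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()                      # deviations from the overall mean
    s0 = W.sum()                          # sum of all spatial weights
    n = len(x)
    return (n / s0) * (z @ W @ z) / (z @ z)

# Toy example: 4 areas on a line, neighbours share an edge
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
rates = np.array([2.0, 2.5, 9.0, 8.5])   # high rates clustered on the right
print(morans_i(rates, W))                 # positive value -> spatial clustering
```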

    Evidence-based sizing of non-inferiority trials using decision models

Abstract Background There are significant challenges to the successful conduct of non-inferiority trials because they require large sample sizes to demonstrate that an alternative intervention is “not too much worse” than the standard. In this paper, we present a novel strategy for designing non-inferiority trials using an approach for determining the appropriate non-inferiority margin (δ), which explicitly balances the benefits of the interventions in the two arms of the study (e.g. lower recurrence rate or better survival) against the burden of the interventions (e.g. toxicity, pain) and their early and late-term morbidity. Methods We use a decision-analytic approach to simulate a trial with a fixed value for the trial outcome of interest (e.g. cancer incidence or recurrence) under the standard intervention (pS) while systematically varying the incidence of the outcome under the alternative intervention (pA). The non-inferiority margin, pA – pS = δ, is reached when the standard therapy's lower event rate is exactly offset by the alternative's lower morbidity burden, despite its higher event rate. We take the appropriate non-inferiority margin to be the tipping point at which the quality-adjusted life-years saved in the two arms are equal. Results Using the European Polyp Surveillance non-inferiority trial as an example, our decision-analytic approach suggests an appropriate non-inferiority margin, defined here as the difference between the two study arms in the 10-year risk of being diagnosed with colorectal cancer, of 0.42% rather than the 0.50% used to design the trial. The non-inferiority margin was smaller for a higher assumed burden of colonoscopies. Conclusions The example demonstrates that applying our proposed method appears feasible in real-world settings and offers the benefit of more explicit and rigorous quantification of the various considerations relevant to determining a non-inferiority margin and the associated trial sample size.
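
A minimal illustration of the tipping-point idea, with entirely hypothetical QALY-loss parameters (not those of the paper's decision model): hold the standard-arm risk pS fixed and solve for the alternative-arm risk pA at which the quality-adjusted life-years in the two arms are equal; the difference pA − pS is then taken as the non-inferiority margin δ.

```python
from scipy.optimize import brentq

def qalys(event_rate, n_procedures, qaly_loss_per_event=2.0,
          qaly_loss_per_procedure=0.005, horizon_qalys=10.0):
    """Hypothetical per-person QALYs over the horizon: a fixed budget minus
    losses from outcome events and from the burden of the intervention."""
    return (horizon_qalys
            - event_rate * qaly_loss_per_event
            - n_procedures * qaly_loss_per_procedure)

# Standard arm: fixed outcome risk pS but a heavier surveillance burden
# (more procedures). Alternative arm: fewer procedures, outcome risk pA varied.
p_s = 0.005
qaly_standard = qalys(p_s, n_procedures=3)

# Tipping point: the value of pA at which QALYs in the two arms are equal.
p_a_tip = brentq(lambda p_a: qalys(p_a, n_procedures=1) - qaly_standard,
                 p_s, 0.05)
delta = p_a_tip - p_s
print(f"non-inferiority margin delta = {delta:.2%}")
```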

    Combining Information from Two Surveys to Estimate County-Level Prevalence Rates of Cancer Risk Factors and Screening

    Cancer surveillance requires estimates of the prevalence of cancer risk factors and screening for small areas such as counties. Two popular data sources are the Behavioral Risk Factor Surveillance System (BRFSS), a telephone survey conducted by state agencies, and the National Health Interview Survey (NHIS), an area probability sample survey conducted through face-to-face interviews. Both data sources have advantages and disadvantages. The BRFSS is a larger survey, and almost every county is included in the survey; but it has lower response rates as is typical with telephone surveys, and it does not include subjects who live in households with no telephones. On the other hand, the NHIS is a smaller survey, with the majority of counties not included; but it includes both telephone and non-telephone households and has higher response rates. A preliminary analysis shows that the distributions of cancer screening and risk factors are different for telephone and non-telephone households. Thus, information from the two surveys may be combined to address both nonresponse and noncoverage errors. A hierarchical Bayesian approach that combines information from both surveys is used to construct county-level estimates. The proposed model incorporates potential noncoverage and nonresponse biases in the BRFSS as well as complex sample design features of both surveys. A Markov Chain Monte Carlo method is used to simulate draws from the joint posterior distribution of unknown quantities in the model based on the design-based direct estimates and county-level covariates. Yearly prevalence estimates at the county level for 49 states, as well as for the entire state of Alaska and the District of Columbia, are developed for six outcomes using BRFSS and NHIS data from the years 1997-2000. The outcomes include smoking and use of common cancer screening procedures. The NHIS/BRFSS combined county-level estimates are substantially different from those based on BRFSS alone
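
A stripped-down stand-in for the combination idea (the paper's hierarchical Bayesian model additionally handles noncoverage and nonresponse bias, complex design features, and county-level covariates via MCMC): if each survey's design-based direct estimate is treated as an independent, approximately normal measurement of the same county prevalence with known sampling variance, the two estimates can be pooled by precision weighting. The function name and all numbers below are illustrative assumptions.

```python
import numpy as np

def combine_direct_estimates(est_brfss, var_brfss, est_nhis, var_nhis):
    """Precision-weighted combination of two design-based direct estimates of
    the same county-level prevalence, assuming independent normal errors with
    known sampling variances (a simplified stand-in for the hierarchical
    Bayesian model described in the abstract)."""
    w1, w2 = 1.0 / var_brfss, 1.0 / var_nhis
    combined = (w1 * est_brfss + w2 * est_nhis) / (w1 + w2)
    combined_var = 1.0 / (w1 + w2)
    return combined, combined_var

# Hypothetical county: larger BRFSS sample (smaller variance), smaller NHIS sample
est, var = combine_direct_estimates(0.22, 0.0004, 0.19, 0.0025)
print(f"combined prevalence {est:.3f}, standard error {np.sqrt(var):.3f}")
```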

    Confidence intervals for ranks of age-adjusted rates across states or counties

Health indices provide the general public with information on the health condition of their community. They can also be used to inform government policy making, to evaluate the effect of a current policy or healthcare program, or for program planning and priority setting. It is common practice for health indices across different geographic units to be ranked, with the ranks reported as fixed values. We argue that the ranks should be viewed as random and hence should be accompanied by an indication of precision (i.e., confidence intervals). A technical difficulty in doing so is how to account for the dependence among the ranks in the construction of confidence intervals. In this paper, we propose a novel Monte Carlo method for constructing individual and simultaneous confidence intervals for the ranks of age-adjusted rates. The proposed method takes as input age-specific counts (of disease cases or deaths) and their associated populations. We further extend it to the case in which only the age-adjusted rates and their confidence intervals are available. Finally, we apply the proposed method to US age-adjusted cancer incidence rates and mortality rates for cancer and other diseases, by state and by county within a state, using a website that will be publicly available. The results show that for rare or relatively rare diseases (especially at the county level), ranks are essentially meaningless because of their large variability, while for more common diseases in larger geographic units, ranks can be used effectively.
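
A minimal sketch of the Monte Carlo idea under a simplifying assumption (each age-adjusted rate is taken as approximately normal with a known standard error, rather than simulating the underlying age-specific counts as the method does): draw many rate vectors, rank each draw, and take percentiles of each area's simulated ranks as an individual interval. All inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rank_cis(rates, ses, n_sim=10_000, alpha=0.05):
    """Monte Carlo confidence intervals for the ranks of age-adjusted rates,
    assuming each rate is approximately normal with the given standard error."""
    rates, ses = np.asarray(rates), np.asarray(ses)
    draws = rng.normal(rates, ses, size=(n_sim, len(rates)))
    # Rank each simulated set of rates (1 = lowest rate)
    ranks = draws.argsort(axis=1).argsort(axis=1) + 1
    lo = np.percentile(ranks, 100 * alpha / 2, axis=0)
    hi = np.percentile(ranks, 100 * (1 - alpha / 2), axis=0)
    return np.column_stack([lo, hi])

# Hypothetical age-adjusted rates (per 100,000) and standard errors for 5 areas
rates = [410.2, 415.8, 430.1, 433.0, 470.5]
ses   = [  9.0,  12.5,   8.0,  15.0,   6.0]
print(rank_cis(rates, ses))   # wide intervals flag ranks with little meaning
```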

    The Joinpoint-Jump and Joinpoint-Comparability Ratio Model for Trend Analysis with Applications to Coding Changes in Health Statistics

    Analysis of trends in health data collected over time can be affected by instantaneous changes in coding that cause sudden increases/decreases, or “jumps,” in data. Despite these sudden changes, the underlying continuous trends can present valuable information related to the changing risk profile of the population, the introduction of screening, new diagnostic technologies, or other causes. The joinpoint model is a well-established methodology for modeling trends over time using connected linear segments, usually on a logarithmic scale. Joinpoint models that ignore data jumps due to coding changes may produce biased estimates of trends. In this article, we introduce methods to incorporate a sudden discontinuous jump in an otherwise continuous joinpoint model. The size of the jump is either estimated directly (the Joinpoint-Jump model) or estimated using supplementary data (the Joinpoint-Comparability Ratio model). Examples using ICD-9/ICD-10 cause of death coding changes, and coding changes in the staging of cancer illustrate the use of these models
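
A simplified single-segment sketch of the jump idea (a real joinpoint fit also searches for changes in slope at estimated joinpoints): regress log rates on calendar year plus an indicator of the post-coding-change era, either estimating the jump directly or fixing it at the log of a supplied comparability ratio. All numbers below are hypothetical.

```python
import numpy as np

def fit_trend_with_jump(years, rates, jump_year, comparability_ratio=None):
    """Least-squares fit of a log-linear trend with a discontinuous jump at a
    known coding-change year. If a comparability ratio is supplied, the jump
    is fixed at log(ratio) (comparability-ratio style); otherwise it is
    estimated from the data (jump style)."""
    t = np.asarray(years, dtype=float)
    y = np.log(np.asarray(rates, dtype=float))
    jump = (t >= jump_year).astype(float)            # new coding era indicator
    if comparability_ratio is not None:
        y = y - np.log(comparability_ratio) * jump   # remove the known jump
        X = np.column_stack([np.ones_like(t), t])
        (b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
        return b0, b1, np.log(comparability_ratio)
    X = np.column_stack([np.ones_like(t), t, jump])
    (b0, b1, gamma), *_ = np.linalg.lstsq(X, y, rcond=None)
    return b0, b1, gamma

# Hypothetical rates with a smooth trend and a 10% drop at a 1999 coding change
years = np.arange(1990, 2010)
rates = 50 * np.exp(-0.01 * (years - 1990)) * np.where(years >= 1999, 0.9, 1.0)
print(fit_trend_with_jump(years, rates, jump_year=1999))   # jump ~= log(0.9)
```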

    The impact of overdiagnosis on the selection of efficient lung cancer screening strategies


    Comparative economic evaluation of data from the ACRIN national CT colonography trial with three cancer intervention and surveillance modeling network microsimulations

Purpose: To estimate the cost-effectiveness of computed tomographic (CT) colonography for colorectal cancer (CRC) screening in average-risk asymptomatic subjects in the United States aged 50 years. Materials and Methods: Enrollees in the American College of Radiology Imaging Network National CT Colonography Trial provided informed consent, and approval was obtained from the institutional review board at each site. CT colonography performance estimates from the trial were incorporated into three Cancer Intervention and Surveillance Modeling Network CRC microsimulations. Simulated survival and lifetime costs for screening 50-year-old subjects in the United States with CT colonography every 5 or 10 years were compared with those for guideline-concordant screening with colonoscopy, flexible sigmoidoscopy plus either sensitive unrehydrated fecal occult blood testing (FOBT) or fecal immunochemical testing (FIT), and no screening. Perfect and reduced screening adherence scenarios were considered. Incremental cost-effectiveness and net health benefits were estimated from the U.S. health care sector perspective, assuming a 3% discount rate. Results: CT colonography at 5- and 10-year screening intervals was more costly and less effective than FOBT plus flexible sigmoidoscopy in all three models in both 100% and 50% adherence scenarios. Colonoscopy also was more costly and less effective than FOBT plus flexible sigmoidoscopy, except in the CRC-SPIN model assuming 100% adherence (incremental cost-effectiveness ratio: $26 300 per life-year gained). CT colonography at 5- and 10-year screening intervals and colonoscopy were net beneficial compared with no screening in all model scenarios. The 5-year screening interval was net beneficial over the 10-year interval except in the MISCAN model when assuming 100% adherence and willingness to pay $50 000 per life-year gained. Conclusion: All three models predict CT colonography to be more costly and less effective than non-CT colonographic screening but net beneficial compared with no screening, given model assumptions.
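
A minimal sketch of the comparison logic only, with hypothetical costs and life-years (not outputs of the trial or the microsimulation models): incremental cost-effectiveness ratios and net health benefit at a given willingness-to-pay threshold.

```python
# Hypothetical discounted lifetime cost per person and discounted life-years
# for three illustrative strategies (names and numbers are assumptions).
strategies = {
    "no screening":         (2000.0, 18.00),
    "FOBT + sigmoidoscopy": (2600.0, 18.10),
    "CT colonography q10y": (3100.0, 18.08),
}

def net_health_benefit(cost, life_years, wtp=50_000.0):
    """Net health benefit in life-years at willingness-to-pay `wtp` per life-year."""
    return life_years - cost / wtp

def icer(strategy, comparator):
    """Incremental cost-effectiveness ratio of `strategy` versus `comparator`."""
    (c1, e1), (c0, e0) = strategies[strategy], strategies[comparator]
    return (c1 - c0) / (e1 - e0)

for name, (cost, ly) in strategies.items():
    print(f"{name}: NHB = {net_health_benefit(cost, ly):.3f} life-years")
print("ICER, CT colonography vs no screening:",
      f"${icer('CT colonography q10y', 'no screening'):,.0f} per life-year")
```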