243 research outputs found
Confidence intervals for ranks of age-adjusted rates across states or counties
Health indices provide information to the general public on the health condition of the community. They can also be used to inform government policy making, to evaluate the effect of a current policy or healthcare program, or for program planning and priority setting. It is common practice to rank health indices across different geographic units and to report the ranks as fixed values. We argue that the ranks should be viewed as random and hence should be accompanied by an indication of precision (i.e., confidence intervals). A technical difficulty in doing so is accounting for the dependence among the ranks when constructing the confidence intervals. In this paper, we propose a novel Monte Carlo method for constructing individual and simultaneous confidence intervals of ranks for age-adjusted rates. The proposed method uses as input age-specific counts (of cases of disease or deaths) and their associated populations. We further extend it to the case in which only the age-adjusted rates and their confidence intervals are available. Finally, we apply the proposed method to US age-adjusted cancer incidence rates and mortality rates for cancer and other diseases, by states and by counties within a state, using a website that will be publicly available. The results show that for rare or relatively rare diseases (especially at the county level), ranks are essentially meaningless because of their large variability, while for more common diseases in larger geographic units, ranks can be used effectively.
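A minimal sketch of the parametric Monte Carlo idea described above, in Python with made-up counts, populations, and standard-population weights. Age-specific counts are resampled from a Poisson distribution, the directly standardized rate is recomputed each replicate, and percentile intervals are taken on the resulting ranks. This yields individual rank intervals only; simultaneous intervals require an additional joint adjustment not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: 3 areas x 2 age groups (event counts and populations).
counts = np.array([[20.0, 35.0], [15.0, 40.0], [25.0, 30.0]])
pops = np.array([[50_000.0, 30_000.0], [60_000.0, 25_000.0], [40_000.0, 35_000.0]])
std_weights = np.array([0.6, 0.4])  # hypothetical standard-population weights

def age_adjusted_rate(c, p):
    # Directly standardized rate per 100,000 population.
    return (std_weights * (c / p)).sum(axis=1) * 100_000

n_sim = 10_000
ranks = np.empty((n_sim, counts.shape[0]), dtype=int)
for b in range(n_sim):
    sim = rng.poisson(counts)          # Poisson resampling of age-specific counts
    rates = age_adjusted_rate(sim, pops)
    ranks[b] = (-rates).argsort().argsort() + 1  # rank 1 = highest rate

lo, hi = np.percentile(ranks, [2.5, 97.5], axis=0)
for i in range(counts.shape[0]):
    print(f"area {i}: 95% CI for rank [{int(lo[i])}, {int(hi[i])}]")
```

Wide intervals here signal exactly the phenomenon the abstract describes: with small counts, the rank itself is highly variable.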
The Joinpoint-Jump and Joinpoint-Comparability Ratio Model for Trend Analysis with Applications to Coding Changes in Health Statistics
Analysis of trends in health data collected over time can be affected by instantaneous changes in coding that cause sudden increases/decreases, or “jumps,” in the data. Despite these sudden changes, the underlying continuous trends can present valuable information related to the changing risk profile of the population, the introduction of screening, new diagnostic technologies, or other causes. The joinpoint model is a well-established methodology for modeling trends over time using connected linear segments, usually on a logarithmic scale. Joinpoint models that ignore data jumps due to coding changes may produce biased estimates of trends. In this article, we introduce methods to incorporate a sudden discontinuous jump in an otherwise continuous joinpoint model. The size of the jump is either estimated directly (the Joinpoint-Jump model) or estimated using supplementary data (the Joinpoint-Comparability Ratio model). Examples using ICD-9/ICD-10 cause-of-death coding changes and coding changes in cancer staging illustrate the use of these models.
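The core of the Joinpoint-Jump idea can be sketched in a few lines: on the log-rate scale, the jump enters the design matrix as a step indicator alongside the segment terms, so its size is estimated jointly with the slopes. This Python example uses a simulated series with the joinpoint location and coding-change year fixed in advance (the actual methodology also searches over joinpoint locations; the Comparability Ratio variant would instead fix the jump from external data).

```python
import numpy as np

# Hypothetical series: 20 years of log-rates with a known joinpoint at year 10
# and a coding change (jump) at year 14.
years = np.arange(20.0)
joinpoint, jump_year = 10.0, 14

rng = np.random.default_rng(1)
true_lograte = 0.5 + 0.03 * years - 0.05 * np.maximum(years - joinpoint, 0.0)
true_lograte[years >= jump_year] += 0.2  # discontinuous coding-change jump
y = true_lograte + rng.normal(0.0, 0.01, years.size)

# Design matrix: intercept, slope, post-joinpoint slope change, jump indicator.
X = np.column_stack([
    np.ones_like(years),
    years,
    np.maximum(years - joinpoint, 0.0),   # hinge term: slope change at joinpoint
    (years >= jump_year).astype(float),   # step term: the jump size
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated jump size:", round(beta[3], 3))  # close to the true 0.2
```

Fitting the same data without the step column would distort the estimated slopes, which is the bias the article warns about.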
Unsupervised real-world knowledge extraction via disentangled variational autoencoders for photon diagnostics
We present real-world data processing on measured electron time-of-flight
data via neural networks. Specifically, the use of disentangled variational
autoencoders on data from a diagnostic instrument for online wavelength
monitoring at the free electron laser FLASH in Hamburg. Without a-priori
knowledge the network is able to find representations of single-shot FEL
spectra, which have a low signal-to-noise ratio. This reveals, in a directly
human-interpretable way, crucial information about the photon properties. The
central photon energy and the intensity as well as very detector-specific
features are identified. The network is also capable of data cleaning, i.e.
denoising, as well as the removal of artefacts. In the reconstruction, this
allows for identification of signatures with very low intensity which are
hardly recognisable in the raw data. In this particular case, the network
enhances the quality of the diagnostic analysis at FLASH. However, this
unsupervised method also has the potential to improve the analysis of other
similar types of spectroscopy data
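The objective underlying disentangled variational autoencoders can be illustrated with a toy computation: the β-VAE loss combines a reconstruction term with a β-weighted KL divergence between the approximate posterior and a standard normal prior, and β > 1 pushes the latent units toward independent, interpretable factors (here, hypothetical stand-ins for peak position and width of a single-shot spectrum). This is a numerical sketch of the loss only, not the instrument's actual network or training loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-shot "spectrum": a Gaussian peak plus noise.
grid = np.linspace(-5.0, 5.0, 64)
x = np.exp(-0.5 * ((grid - 1.2) / 0.4) ** 2) + rng.normal(0.0, 0.05, grid.size)

# Hypothetical encoder outputs: latent means and log-variances for 2 units.
mu = np.array([1.2, 0.4])
log_var = np.array([-4.0, -4.0])

# Reparameterization trick: sample z differentiably as mu + sigma * eps.
eps = rng.standard_normal(mu.size)
z = mu + np.exp(0.5 * log_var) * eps

# Toy decoder: latent units act as peak position and width.
x_hat = np.exp(-0.5 * ((grid - z[0]) / z[1]) ** 2)

# beta-VAE objective = reconstruction error + beta * KL(q(z|x) || N(0, I)).
recon = np.mean((x - x_hat) ** 2)
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
beta = 4.0  # beta > 1 encourages disentangled, interpretable latents
loss = recon + beta * kl
print("reconstruction:", round(recon, 4), " KL:", round(kl, 4), " loss:", round(loss, 4))
```

In a trained disentangled VAE, each latent unit ends up controlling one physical property, which is what makes the learned representation directly readable by a human.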
Postoperative Complications in the Ahmed Baerveldt Comparison Study During Five Years of Follow-up
To compare the late complications in the Ahmed Baerveldt Comparison Study during 5 years of follow-up.
Using Gini coefficient to determine optimal cluster reporting sizes for spatial scan statistics
Background: Spatial and space–time scan statistics are widely used in disease surveillance to identify geographical areas of elevated disease risk and for the early detection of disease outbreaks. With a scan statistic, a scanning window of variable location and size moves across the map to evaluate thousands of overlapping windows as potential clusters, adjusting for the multiple testing. Almost always, the method will find many very similar overlapping clusters, and it is not useful to report all of them. This paper proposes using the Gini coefficient to help select which of the many overlapping clusters to report. Methods: The Gini coefficient provides a quick and intuitive way to evaluate the degree of heterogeneity of the collection of clusters, which is useful for explaining how well the cluster collection reveals the underlying true cluster patterns. Using simulation studies and real cancer mortality data, it is compared with the traditional approach for reporting non-overlapping clusters. Results: The Gini coefficient can identify a more refined collection of non-overlapping clusters to report. For example, it is able to determine when it makes more sense to report a collection of smaller non-overlapping clusters versus a single large cluster containing all of them. It also fulfils a set of desirable theoretical properties, such as being invariant under a uniform multiplication of the population numbers by the same constant. Conclusions: The Gini coefficient can be used to determine which set of non-overlapping clusters to report. It has been implemented in the free SaTScan™ software version 9.3 (www.satscan.org).
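The Gini coefficient itself is simple to compute, which is part of its appeal for quickly comparing candidate cluster collections. Below is a generic implementation with hypothetical case counts (the paper applies it to the Lorenz curve of a cluster collection, not shown here); note that the invariance property mentioned above holds, since multiplying all values by a constant leaves the coefficient unchanged.

```python
import numpy as np

def gini(values):
    """Gini coefficient of nonnegative values: 0 = homogeneous, -> 1 = concentrated."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    # Standard formula based on the ordered values (discrete Lorenz curve).
    return 2.0 * np.sum(np.arange(1, n + 1) * v) / (n * v.sum()) - (n + 1) / n

# Hypothetical counts across a candidate collection of clusters:
print(gini([10, 10, 10, 10]))   # -> 0.0 (perfectly homogeneous collection)
print(round(gini([1, 1, 1, 97]), 3))   # -> 0.72 (highly heterogeneous)
```

A collection whose clusters differ sharply in concentration scores a higher Gini, which is the signal used to prefer, say, several small clusters over one large cluster that merges them.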
Three-year Treatment Outcomes in the Ahmed Baerveldt Comparison Study
To compare three-year outcomes and complications of the Ahmed FP7 Glaucoma Valve (AGV) and the Baerveldt 101–350 Glaucoma Implant (BGI) for the treatment of refractory glaucoma.
Tele-branding in TVIII: the network as brand and the programme as brand
In the era of TVIII, characterized by deregulation, multimedia conglomeration, expansion and increased competition, branding has emerged as a central industrial practice. Focusing on the case of HBO, a particularly successful brand in TVIII, this article argues that branding can be understood not simply as a feature of television networks, but also as a characteristic of television programmes. It begins by examining how the network as brand is constructed and conveyed to the consumer through the use of logos, slogans and programmes. The role of programmes in the construction of brand identity is then complicated by examining the sale of programmes abroad, where programmes can be seen to contribute to the brand identity of more than one network. The article then goes on to examine programme merchandising, an increasingly central strategy in TVIII. Through an analysis of different merchandising strategies the article argues that programmes have come to act as brands in their own right, and demonstrates that the academic study of branding not only reveals the development of new industrial practices, but also offers a way of understanding the television programme and its consumption by viewers in a period when the texts of television are increasingly extended across a range of media platforms
Comparative economic evaluation of data from the ACRIN national CT colonography trial with three cancer intervention and surveillance modeling network microsimulations
Purpose: To estimate the cost-effectiveness of computed tomographic (CT) colonography for colorectal cancer (CRC) screening in average-risk asymptomatic subjects in the United States aged 50 years. Materials and Methods: Enrollees in the American College of Radiology Imaging Network National CT Colonography Trial provided informed consent, and approval was obtained from the institutional review board at each site. CT colonography performance estimates from the trial were incorporated into three Cancer Intervention and Surveillance Modeling Network CRC microsimulations. Simulated survival and lifetime costs for screening 50-year-old subjects in the United States with CT colonography every 5 or 10 years were compared with those for guideline-concordant screening with colonoscopy, flexible sigmoidoscopy plus either sensitive unrehydrated fecal occult blood testing (FOBT) or fecal immunochemical testing (FIT), and no screening. Perfect and reduced screening adherence scenarios were considered. Incremental cost-effectiveness and net health benefits were estimated from the U.S. health care sector perspective, assuming a 3% discount rate. Results: CT colonography at 5- and 10-year screening intervals was more costly and less effective than FOBT plus flexible sigmoidoscopy in all three models in both 100% and 50% adherence scenarios. Colonoscopy also was more costly and less effective than FOBT plus flexible sigmoidoscopy, except in the CRC-SPIN model assuming 100% adherence (incremental cost-effectiveness ratio: $50 000 per life-year gained). Conclusion: All three models predict CT colonography to be more costly and less effective than non-CT colonographic screening but net beneficial compared with no screening, given model assumptions.
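The incremental cost-effectiveness ratio (ICER) comparisons above reduce to a simple calculation: discount each strategy's yearly cost and life-year streams to present value (the abstract uses a 3% rate), then divide the incremental cost by the incremental effectiveness. This Python sketch uses entirely made-up per-person numbers; the strategy labels are illustrative, not the trial's estimates.

```python
def discount(values, rate=0.03):
    """Present value of a yearly stream at the stated 3% discount rate."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

# Made-up per-person lifetime streams for two strategies (costs in $, life-years).
cost_a, ly_a = [500, 500, 500], [1.0, 1.0, 1.0]    # e.g. comparator strategy
cost_b, ly_b = [900, 600, 500], [1.0, 1.0, 1.05]   # e.g. screening strategy

d_cost = discount(cost_b) - discount(cost_a)   # incremental discounted cost
d_ly = discount(ly_b) - discount(ly_a)         # incremental discounted life-years
icer = d_cost / d_ly
print(f"ICER: ${icer:,.0f} per discounted life-year gained")
```

A strategy is "more costly and less effective" (dominated) when d_cost > 0 and d_ly < 0, in which case no ICER is reported, which is how most of the CT colonography comparisons above resolve.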
Five-Year Treatment Outcomes in the Ahmed Baerveldt Comparison Study
To compare the five-year outcomes of the Ahmed FP7 Glaucoma Valve (AGV) and the Baerveldt 101-350 Glaucoma Implant (BGI) for the treatment of refractory glaucoma.