28 research outputs found

    Disease surveillance using a hidden Markov model

    Abstract

    Background: Routine surveillance of disease notification data can enable the early detection of localised disease outbreaks. Although hidden Markov models (HMMs) have been recognised as an appropriate method for modelling disease surveillance data, they have rarely been applied in public health practice. We aimed to develop and evaluate a simple, flexible HMM for disease surveillance that is suitable for use with sparse small-area count data and requires little baseline data.

    Methods: A Bayesian HMM was designed to monitor routinely collected notifiable disease data aggregated by residential postcode. Semi-synthetic data were used to evaluate the algorithm and to compare outbreak detection performance with the established Early Aberration Reporting System (EARS) algorithms and a negative binomial cusum.

    Results: Algorithm performance varied according to the desired false alarm rate for surveillance. At false alarm rates around 0.05, the cusum-based algorithms provided the best overall outbreak detection performance, having similar sensitivity to the HMMs and a shorter average time to detection. At false alarm rates around 0.01, the HMM algorithms provided the best overall outbreak detection performance, having higher sensitivity than the cusum-based methods and a generally shorter time to detection for larger outbreaks. Overall, the 14-day HMM had a significantly greater area under the receiver operating characteristic curve than the EARS C3 and 7-day negative binomial cusum algorithms.

    Conclusion: Our findings suggest that the HMM provides an effective method for the surveillance of sparse small-area notifiable disease data at low false alarm rates. Further investigations are required to evaluate algorithm performance across other diseases and surveillance contexts.
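    To illustrate the general idea behind HMM-based surveillance of count data, the sketch below implements a minimal two-state HMM (endemic vs. outbreak) with Poisson emissions and uses the forward algorithm to compute the filtered probability of the outbreak state each day. All parameters (`lam`, `trans`, `init`) and the two-state Poisson structure are illustrative assumptions for exposition, not the paper's actual Bayesian model.

    ```python
    import math

    def poisson_pmf(k, lam):
        """Poisson probability mass function P(X = k) for rate lam."""
        return math.exp(-lam) * lam**k / math.factorial(k)

    def outbreak_posterior(counts,
                           lam=(1.0, 4.0),          # Poisson rates: (endemic, outbreak) -- hypothetical
                           trans=((0.95, 0.05),     # state transition matrix -- hypothetical
                                  (0.10, 0.90)),
                           init=(0.99, 0.01)):      # initial state distribution -- hypothetical
        """Forward algorithm: filtered P(outbreak | counts up to day t) for each day."""
        # Initialise with the first observation.
        alpha = [init[s] * poisson_pmf(counts[0], lam[s]) for s in (0, 1)]
        norm = sum(alpha)
        alpha = [a / norm for a in alpha]
        posteriors = [alpha[1]]
        # Recursively propagate through the transition matrix and re-weight
        # by the emission probability of each new count.
        for k in counts[1:]:
            alpha = [
                sum(alpha[r] * trans[r][s] for r in (0, 1)) * poisson_pmf(k, lam[s])
                for s in (0, 1)
            ]
            norm = sum(alpha)
            alpha = [a / norm for a in alpha]
            posteriors.append(alpha[1])
        return posteriors

    # A run of low counts followed by elevated counts: the outbreak
    # posterior stays near zero, then rises sharply.
    daily_counts = [0, 1, 0, 0, 5, 6, 7]
    posteriors = outbreak_posterior(daily_counts)
    ```

    An alarm rule would then flag any day on which the posterior exceeds a threshold chosen to achieve the desired false alarm rate, which is the trade-off the abstract's comparison with the EARS and cusum algorithms explores.
    
    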

    Vision at the limits: Absolute threshold, visual function, and outcomes in clinical trials

    The study of individual differences in perception at absolute threshold has a rich history, with much of the seminal work being driven by the need to identify those with superior abilities in times of war. Although the popularity of such testing waned in the latter half of the 20th century, interest in measures of visual function at the absolute limit of vision is increasing, partly in response to emerging treatments for retinal diseases, such as gene therapy and cellular therapies, that demand “new” functional measures to assess treatment outcomes. Conventional clinical, or clinical research, testing approaches generally assess rod sensitivity at or near absolute threshold; cone sensitivity, however, is typically assayed in the presence of adapting backgrounds. This asymmetry may artifactually favor the detection of rod abnormalities in patients with outer retinal disease. The past decade has seen the commercialization of devices capable of assessing absolute threshold and dark adaptation, including specialized perimeters and instruments capable of assessing “full-field sensitivity threshold” that seek to integrate responses over time and space in those with unstable fixation and/or limited visual fields. Finally, there has also been a recent recapitulation of tests that seek to assess the subject's ability to interpret the visual scene at or near absolute threshold. In addition to assessing vision, such tests simultaneously place cognitive and motor demands on patients in line with the activities of daily living they seek to replicate. We describe the physical and physiological basis of absolute threshold and dark adaptation. Furthermore, we discuss experimental psychophysical and electrophysiological approaches to studying vision at absolute threshold and provide a brief overview of clinical tests of vision at absolute threshold.