
    Comparing Statistical Models to Predict Dengue Fever Notifications

    Dengue fever (DF) is a serious public health problem in many parts of the world, and, in the absence of a vaccine, disease surveillance and mosquito vector eradication are important in controlling the spread of the disease. DF is primarily transmitted by the female Aedes aegypti mosquito. We compared two statistical models that can be used in the surveillance and forecasting of notifiable infectious diseases, namely the Autoregressive Integrated Moving Average (ARIMA) model and the Knorr-Held two-component (K-H) model. The Mean Absolute Percentage Error (MAPE) was used to compare the models. We developed the models using data on DF notifications in Singapore from January 2001 to December 2006 and then validated them with data from January 2007 to June 2008. The K-H model resulted in a slightly lower MAPE value of 17.21 than the ARIMA model. We conclude that the models' performances are similar, but we found the K-H model relatively more difficult to fit in terms of specifying the prior parameters, and it took longer to run.
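    The MAPE criterion used to compare the two forecasting models is straightforward to compute. A minimal sketch in Python follows; the notification counts and forecasts are hypothetical and do not come from the study.

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly dengue notification counts and two sets of forecasts.
actual  = [120, 150, 180, 160]
model_a = [110, 160, 170, 150]  # e.g. ARIMA one-step forecasts
model_b = [125, 145, 190, 165]  # e.g. K-H one-step forecasts

print(round(mape(actual, model_a), 2))  # the model with the lower MAPE wins
print(round(mape(actual, model_b), 2))
```

Because MAPE divides each error by the observed count, it weights months with few notifications more heavily than raw squared-error criteria would.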

    Automated droplet measurement (ADM): an enhanced video processing software for rapid droplet measurements

    This paper identifies and addresses the bottlenecks that prevent currently available software from performing in situ measurements on droplet-based microfluidics. A new, more universal object-based background extraction operation and automated binary threshold selection make the processing step of our video processing software, Automated Droplet Measurement (ADM), fully automated. ADM, which is built on the OpenCV image processing library, performs measurements at high processing speed using efficient code. Because the processing speed exceeds the data transfer speed from the video camera to the computer's permanent storage, we integrated the camera's software development kit (SDK) with ADM. This integration allows video transfer/streaming and video processing to run simultaneously, so the total time for droplet measurement using the new process flow is shortened significantly. ADM is validated by comparison with both manual analysis and the DMV software, and will be publicly released as a free tool. The software can also be used on video files without the camera SDK integration. (Singapore Ministry of Education, Tier 2 Grant 2011-T2-1-0-36)
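    The abstract does not name the algorithm behind the automated threshold selection, but Otsu's method is a common way to automate this step in image segmentation and illustrates the idea. The pure-Python sketch below works on a flattened synthetic grayscale frame; the real ADM operates on OpenCV video frames.

```python
def otsu_threshold(pixels):
    """Pick the binary threshold that maximises between-class variance
    (Otsu's method), one common way to automate threshold selection."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += hist[t]           # background = pixels with value <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg       # foreground = pixels with value > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic frame flattened to a pixel list: a dark background (value 30)
# with a brighter "droplet" patch (value 200).
frame = [30] * 3696 + [200] * 400
t = otsu_threshold(frame)
mask = [p > t for p in frame]  # binary droplet mask
```

With OpenCV itself, the same step is a single call to `cv2.threshold` with the `THRESH_OTSU` flag.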

    The Nonlinear Least Squares (NLS) Method for Parameter Estimation of the Wavelet Radial Basis Neural Network (WRBNN) Model

    The use of a wavelet radial basis model for forecasting nonlinear time series is introduced in this paper. The model is generated by an artificial neural network approximation under the restriction that the activation function in the hidden layers is a radial basis function. The model is developed from the multiresolution autoregressive (MAR) model by adding radial basis functions in the hidden layers. Its performance is compared with existing nonlinear models, namely the MAR model and the Generalized Autoregressive Conditional Heteroscedastic (GARCH) model. Simulation data generated from a GARCH process are used to support the aim of the research. Goodness of fit is measured by the sum of squared errors (SSE). The computational results show that the proposed model fits the heteroscedastic process as well as the GARCH model does.
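    The simulation setup described above can be sketched in a few lines: generate a GARCH(1,1) series and score a fit with the SSE criterion. All parameter values below are illustrative, not those of the paper.

```python
import math
import random

def simulate_garch11(n, omega=0.1, alpha=0.2, beta=0.7, seed=42):
    """Simulate a GARCH(1,1) series:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1},
    e_t = sigma_t * z_t with z_t ~ N(0, 1)."""
    rng = random.Random(seed)
    sigma2 = omega / (1 - alpha - beta)  # start at the unconditional variance
    series, e_prev = [], 0.0
    for _ in range(n):
        sigma2 = omega + alpha * e_prev ** 2 + beta * sigma2
        e_prev = math.sqrt(sigma2) * rng.gauss(0, 1)
        series.append(e_prev)
    return series

def sse(actual, fitted):
    """Sum of squared errors used to compare model fits."""
    return sum((a - f) ** 2 for a, f in zip(actual, fitted))

y = simulate_garch11(500)
# A naive benchmark "model" that predicts the series mean (zero) every step:
naive_sse = sse(y, [0.0] * len(y))
```

A fitted WRBNN or GARCH model would be judged better than the naive benchmark when its SSE on the simulated series is lower.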

    Confirmation of double-peaked time distribution of mortality among Asian breast cancer patients in a population-based study

    INTRODUCTION: Double-peaked time distributions of the mortality hazard function have been reported for breast cancer patients from Western populations treated with mastectomy alone. These are thought to reflect accelerated tumour growth at micrometastatic sites, mediated by angiogenesis after primary tumour removal, as well as tumour dormancy. Similar data are not available for Asian populations. We sought to investigate whether differences exist in the pattern of the mortality hazard function between Western breast cancer patients and their Asian counterparts in Singapore, which may suggest underlying differences in tumour biology between the two populations. METHODS: We performed a retrospective cohort study of female unilateral breast cancer patients diagnosed in Singapore between October 1994 and June 1999. Data regarding patient demographics, tumour characteristics and death were available. Overall survival curves were calculated using the Kaplan-Meier method. The hazard rate was calculated as the conditional probability of dying in a time interval, given that the patient was alive at the beginning of the interval. The life table method was used to calculate the yearly hazard rates. RESULTS: Of the 2,105 women identified, 956 patients (45.4%) had mastectomy alone. Demographic characteristics were as follows: 86.5% were Chinese, 45.2% were postmenopausal, 38.9% were hormone receptor positive, 54.6% were node negative and 44.1% had high histological grade. We observed a double-peaked mortality hazard pattern, with a first peak in mortality reaching its maximum between years 2 and 4 after mastectomy and a second large peak in mortality during year 9. Analyses by subgroup revealed a similar pattern regardless of T stage, nodal status or menopausal status. This pattern was also noted in high-grade tumours but not in those that were well to moderately differentiated. The double-peaked pattern observed in Singaporean women was quantitatively and qualitatively similar to those reported in Western series. CONCLUSION: Our study confirms the existence of a double-peaked process in Asian patients, giving further support to the tumour dormancy hypothesis after mastectomy.
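    The life-table hazard computation described in the methods can be sketched as follows. The counts are hypothetical, and the half-interval adjustment for censored cases is the standard actuarial convention, which the study may or may not have applied.

```python
def yearly_hazard(alive_at_start, deaths_in_year, censored_in_year):
    """Life-table (actuarial) hazard: the conditional probability of dying
    in the interval given alive at its start. Patients censored during the
    year are assumed, on average, to be at risk for half the interval."""
    effective_at_risk = alive_at_start - censored_in_year / 2.0
    return deaths_in_year / effective_at_risk

# Hypothetical follow-up counts per year, for illustration only.
alive    = [956, 900, 820, 760]
deaths   = [30, 55, 40, 25]
censored = [26, 25, 20, 15]

hazards = [yearly_hazard(a, d, c)
           for a, d, c in zip(alive, deaths, censored)]
```

Plotting `hazards` against follow-up year is what reveals the double-peaked shape discussed in the results.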

    The Journal Impact Factor: Too Much of an Impact?

    INTRODUCTION: The journal impact factor is often used to judge the scientific quality of individual research articles and individual journals. Despite numerous reviews in the literature criticising such use, in some countries the impact factor has become an outcome measure for grant applications, job applications, promotions and bonuses. The aim of this review is to highlight the major issues involved in using the journal impact factor as a measure of research quality. METHODS: A literature review of articles on journal impact factors, the science citation index and bibliometric methods was undertaken to identify relevant articles. RESULTS: The journal impact factor is a quantitative measure: the ratio of citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. Its use as a criterion for measuring the quality of research is biased. The major sources of bias include database problems at the Institute for Scientific Information and research-field effects. The journal impact factor, originally designed for purposes other than the individual evaluation of research quality, is a useful tool provided its interpretation is not extrapolated beyond its limits of validity. CONCLUSION: Research quality cannot be measured solely using the journal impact factor. The journal impact factor should be used with caution, and should not be the dominant or only factor determining research quality.
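    The ratio described in the results reduces to one line of arithmetic. A sketch with a hypothetical journal (the counts are invented for illustration):

```python
def journal_impact_factor(citations_to_prev2, citable_items_prev2):
    """Impact factor for year Y: citations received in Y to items the
    journal published in years Y-1 and Y-2, divided by the number of
    citable items it published in those two years."""
    return citations_to_prev2 / citable_items_prev2

# Hypothetical journal: 420 citations in 2024 to its 2022-2023 output,
# which comprised 150 citable items.
print(round(journal_impact_factor(420, 150), 2))  # -> 2.8
```

The review's field-effect criticism is visible here: a field that cites heavily within two years inflates the numerator regardless of article quality.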

    Strategy for randomised clinical trials in rare cancers

    Proving that a new treatment is more effective than the current treatment can be difficult for rare conditions. Data from small randomised trials could, however, be made more robust by taking other related research into account.

    Bayesian designs with frequentist and Bayesian error rate considerations

    So far, most Phase II trials have been designed and analysed under a frequentist framework, in which a trial is designed so that its overall Type I and Type II errors are controlled at desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed to stop when the posterior probability that the treatment is effective crosses certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates, and we introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, called Bayesian errors in this article because of their similarity to posterior probabilities, and show that our method can control these Bayesian-type errors as well. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
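    The frequentist operating characteristics of a two-stage design can be computed exactly from binomial probabilities. The sketch below evaluates a Simon-style design; the parameters r1/n1 = 0/9 and r/n = 2/17 and the response rates p0 = 0.05, p1 = 0.25 are purely illustrative and not taken from the article.

```python
from math import comb

def binom_pmf(k, n, p):
    """Exact binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def two_stage_reject_prob(p, n1, r1, n, r):
    """Probability that a Simon-style two-stage design declares the
    treatment promising: more than r1 responses among the first n1
    patients (continue past stage 1), then more than r responses among
    all n patients. Evaluated at p = p0 this is the Type I error; at
    p = p1 it is the power."""
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # stage-1 outcomes that continue
        tail2 = sum(binom_pmf(x2, n2, p)       # stage-2 responses needed
                    for x2 in range(max(0, r - x1 + 1), n2 + 1))
        total += binom_pmf(x1, n1, p) * tail2
    return total

alpha = two_stage_reject_prob(0.05, n1=9, r1=0, n=17, r=2)  # Type I error
power = two_stage_reject_prob(0.25, n1=9, r1=0, n=17, r=2)  # power at p1
```

A Bayesian stopping rule replaces the fixed cutoffs r1 and r with posterior-probability thresholds, but its frequentist error rates can still be evaluated by exactly this kind of enumeration, which is the article's central point.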