Evaluation of mass screening for cancer: a model-based approach
The main goal in the evaluation of screening for cancer is to assist in decision
making about a screening program: Should it be initiated at all? What screening
policies can be recommended: which age groups, and what frequency of screening?
Should special attention be paid to high-risk groups? If a screening program is
already running, should screening be continued in view of the results? Should the
present policy be changed?
In this chapter, I will describe the complexities involved in answering these
questions. These difficulties lead to the conclusion that models are indispensable
in the interpretation of observed results of screening and in the prediction of
effects and costs of different screening policies.
Estimating parameters of a microsimulation model for breast cancer screening using the score function method
In developing decision-making models for the evaluation of medical procedures, the model parameters can be estimated by fitting the model to data observed in trial studies. For complex models that are implemented by discrete-event simulation (microsimulation) of individual life histories, the Score Function (SF) method can potentially be an appropriate approach for such estimation exercises. We test this approach for a microsimulation model of screening for cancer that is fitted to data from the HIP randomized trial for early detection of breast cancer. Comparison of the parameter values estimated by the SF method and the analytical solution shows that the method performs well on this simple model. The precision of the estimated parameter values depends (as expected) on the size of the simulation (number of life histories) and on the number of parameters estimated. Using analytical representations for parts of the microsimulation model can increase the precision in the estimation of the remaining parameters. Compared to the Nelder and Mead Simplex Method, which is often used in stochastic simulation because of its ease of implementation, the SF method is clearly more efficient (in terms of the ratio of computer time to precision of estimates). The additional analytical investment needed to implement the method in an (existing) simulation model may well be worth the effort.
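The score-function idea can be illustrated on a deliberately tiny example. The model, data, and step size below are invented for illustration (a one-parameter exponential sojourn-time model fitted to a single screen-detection fraction), not the HIP model: the gradient of the simulated outcome with respect to the parameter is estimated from the same simulated histories by weighting each outcome with the score of its sampled sojourn time.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(rate, n, rng):
    """Simulate n preclinical sojourn times T ~ Exp(rate) and return,
    per life history, whether the tumour is still screen-detectable
    at a screening round held at t = 2 years, plus the score
    d/d(rate) log f(T; rate) = 1/rate - T of each sampled history."""
    t = rng.exponential(1.0 / rate, size=n)
    h = (t > 2.0).astype(float)
    score = 1.0 / rate - t
    return h, score

target = 0.25   # "observed" screen-detectable fraction (illustrative)
rate = 1.0      # initial guess for the sojourn-time rate

for _ in range(200):
    h, score = simulate(rate, 100_000, rng)
    g = h.mean() - target        # misfit between model and "data"
    grad = (h * score).mean()    # SF estimate of d E[h] / d rate
    rate -= 0.5 * (2.0 * g * grad)   # gradient step on the squared misfit

# Analytically, E[h] = exp(-2 * rate), so the fit should approach
# rate = ln(4) / 2.
```

The same simulated histories yield both the fitted quantity and its gradient, which is what makes the SF approach attractive when each simulation run is expensive.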
The reproductive lifespan of Onchocerca volvulus in West African savanna
Abstract
The epidemiological model ONCHOSIM, a model and computer simulation program for the transmission and control of onchocerciasis, has been used to determine the range of plausible values for the reproductive lifespan of Onchocerca volvulus. Model predictions based on different lifespan quantifications were compared with the results of longitudinal skin-snip surveys undertaken in 4 reference villages during 13 to 14 years of successful vector control in the Onchocerciasis Control Programme in West Africa. Good fits between predicted and observed trends in skin microfilarial loads could be obtained for all villages. It is concluded that the reproductive lifespan of the savanna strain of O. volvulus lies between 9 and 11 years, and that 95% of the parasites reach the end of reproduction before the age of 13 to 14 years.
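A stripped-down version of this kind of lifespan estimation can be sketched as follows. The survey numbers are invented placeholders, not the OCP data, and a simple linear decay stands in for ONCHOSIM: with transmission interrupted and worm ages uniform over [0, lifespan], the fraction of worms still reproductive at time t is max(0, 1 - t/lifespan).

```python
import numpy as np

# Illustrative placeholders, not the OCP survey data: relative skin
# microfilarial loads observed t years after the start of vector control.
t_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
mf_obs = np.array([1.00, 0.82, 0.60, 0.41, 0.22, 0.05, 0.00])

def predicted_mf(t, lifespan):
    """With transmission interrupted and worm ages uniform on
    [0, lifespan], the fraction of worms still reproductive at
    time t is max(0, 1 - t / lifespan)."""
    return np.clip(1.0 - t / lifespan, 0.0, None)

# Grid search over candidate reproductive lifespans (years),
# scoring each by its sum of squared errors against the surveys.
grid = np.arange(5.0, 16.0, 0.25)
sse = [np.sum((predicted_mf(t_obs, L) - mf_obs) ** 2) for L in grid]
best = grid[int(np.argmin(sse))]
```

The study's actual comparison used the full ONCHOSIM transmission model rather than a closed-form decay, but the logic of scanning lifespan values against observed load trends is the same.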
A framework for response surface methodology for simulation optimization
We develop a framework for automated optimization of stochastic simulation models using Response Surface Methodology. The framework is especially intended for simulation models where the calculation of the corresponding stochastic response function is very expensive or time-consuming. Response Surface Methodology is frequently used for the optimization of stochastic simulation models in a non-automated fashion. In scientific applications there is a clear need for a standardized algorithm based on
Response Surface Methodology. In addition, an automated algorithm is less time-consuming, since there is no need to intervene in the optimization process. In our framework for automated optimization we describe all the choices that have to be made in constructing such an algorithm.
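The core loop of such an automated algorithm can be sketched as follows, with an invented two-parameter noisy quadratic standing in for the expensive simulation response: evaluate a small factorial design around the current centre, fit a first-order polynomial by least squares, step along the estimated steepest-descent direction, and shrink the region of interest.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_response(x, rng):
    """Stand-in for an expensive stochastic simulation response."""
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2 + rng.normal(0.0, 0.1)

center = np.array([0.0, 0.0])
radius = 0.5

for _ in range(30):
    # 2^2 factorial design plus centre point in the region of interest
    design = center + radius * np.array(
        [[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0], [0.0, 0.0]]
    )
    y = np.array([noisy_response(x, rng) for x in design])
    # First-order model y ~ b0 + b1*x1 + b2*x2, fitted by least squares
    X = np.column_stack([np.ones(len(design)), design])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    g = coef[1:]
    # Step along the estimated steepest-descent direction, then
    # shrink the region of interest for the next iteration
    center = center - radius * g / (np.linalg.norm(g) + 1e-12)
    radius *= 0.9
```

The design choices an automated framework must standardize are visible even here: the design type, the region size, the step rule, and the shrinking schedule.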
Adaptive extensions of the Nelder and Mead Simplex Method for optimization of stochastic simulation models
We consider the Nelder and Mead Simplex Method for the optimization of stochastic simulation models. Existing and new adaptive extensions of the Nelder and Mead simplex method designed to improve the accuracy and consistency of the observed best point are studied. We compare
the performance of the extensions on a small microsimulation model, as well as on five test functions. We found that gradually decreasing the noise during an optimization run is the preferred approach for stochastic objective functions. The amount of computational effort needed for successful optimization is very sensitive to the timing of noise reduction and to the rate at which the noise decreases. Restarting the algorithm during the optimization run, in the sense that the algorithm applies a fresh simplex at certain iterations, had adverse effects in our tests for the microsimulation model and for most test functions.
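The "gradually decreasing the noise" strategy can be sketched with SciPy's off-the-shelf Nelder-Mead implementation; the objective and the replication schedule below are invented for illustration. Noise is reduced between stages by averaging more replications per evaluation (noise shrinks as 1/sqrt(n_reps)), and each stage warm-starts from the previous stage's best point.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

def noisy_objective(x, n_reps, rng):
    """Average n_reps noisy evaluations of a quadratic; the noise
    shrinks as 1/sqrt(n_reps), mimicking a microsimulation run
    with n_reps simulated life histories."""
    vals = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 \
        + rng.normal(0.0, 1.0, size=n_reps)
    return vals.mean()

x = np.array([5.0, 5.0])
for n_reps in (10, 100, 1_000, 10_000):   # gradually decrease the noise
    res = minimize(
        lambda z, n=n_reps: noisy_objective(z, n, rng),
        x,
        method="Nelder-Mead",
        options={"maxiter": 100},
    )
    x = res.x   # warm-start the next, less noisy, stage
```

Early stages are cheap and locate the right region; later, low-noise stages refine the estimate. The schedule (when and how fast to reduce the noise) is exactly the sensitivity the abstract reports.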
Comparison of response surface methodology and the Nelder and Mead simplex method for optimization in microsimulation models
Microsimulation models are increasingly used in the evaluation of cancer screening. Latent parameters of such models can be estimated by optimizing the goodness-of-fit. We compared the efficiency and accuracy of Response Surface Methodology and the Nelder and Mead Simplex Method for the optimization of microsimulation models. To this end, we tested several automated versions of both methods on a small microsimulation model, as well as on a standard set of test functions. With respect to accuracy, Response Surface Methodology performed better for the optimization of the microsimulation model, whereas the results for the test functions were rather variable. The Nelder and Mead Simplex Method was more efficient than Response Surface Methodology, both for the microsimulation model and for the test functions.
Disappearance of leprosy from Norway: an exploration of critical factors using an epidemiological modelling approach
BACKGROUND: By the middle of the 19th century, leprosy was a serious
public health problem in Norway. By 1920, new cases only rarely occurred.
This study aims to explain the disappearance of leprosy from Norway.
METHODS: Data from the National Leprosy Registry of Norway and population
censuses were used. The patient data include year of birth, onset of
disease, registration, hospital admission, death, and emigration. The
Norwegian data were analysed using epidemiological models of disease
transmission and control. RESULTS: The time trend in leprosy new case
detection in Norway can be reproduced adequately. The shift in new case
detection towards older ages which occurred over time is accounted for by
assuming that infected individuals may have a very long incubation period.
The decline cannot be explained fully by the Norwegian policy of isolation
of patients: an autonomous decrease in transmission, reflecting
improvements in, for instance, living conditions, must also be assumed. The
estimated contribution of the isolation policy to the decline in new case
detection depends strongly on the assumptions made about the build-up of
contagiousness during the incubation period and the waning of transmission
opportunities due to rapid transmission to close contacts. CONCLUSION: The
impact of isolation on interruption of transmission remains uncertain.
This uncertainty also applies to contemporary leprosy control that mainly
relies on chemotherapy treatment. Further research is needed to establish
the impact of leprosy interventions on transmission.
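A toy compartmental sketch of the mechanism described in this abstract, with invented parameters rather than the study's fitted model: a long mean incubation period delays clinical onsets, so new case detection peaks and then declines only slowly after transmission has fallen.

```python
import numpy as np

# Illustrative compartments: S susceptible, E infected but incubating,
# C clinical (detectable) cases; all parameters are invented.
years = np.arange(1850, 1921)
S, E, C = 1.0, 0.01, 0.005
beta0 = 0.20                 # initial transmission rate per clinical case
decline = 0.97               # autonomous yearly decline in transmission
onset_rate = 1.0 / 15.0      # mean incubation period of ~15 years
removal = 0.1                # removal of clinical cases (isolation, death)

new_cases = []
for i, year in enumerate(years):
    beta = beta0 * decline ** i        # transmission falls over time
    infections = beta * S * C
    onsets = onset_rate * E            # incubating cases becoming clinical
    S -= infections
    E += infections - onsets
    C += onsets - removal * C
    new_cases.append(onsets)

# New case detection rises, peaks, and then declines slowly: the long
# incubation period sustains onsets long after transmission has fallen.
```

In the same spirit as the study, the observed decline here is driven jointly by the autonomous fall in transmission and by removal of clinical cases, which is why the two contributions are hard to disentangle from case-detection data alone.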