Reactive strategies for containing developing outbreaks of pandemic influenza
Abstract
Background
In 2009 and the early part of 2010, the northern hemisphere had to cope with the first waves of the new influenza A (H1N1) pandemic. Despite high-profile vaccination campaigns in many countries, delays in administering vaccination programs were common, and high vaccination coverage levels were not achieved. This experience suggests the need to explore the epidemiological and economic effectiveness of additional, reactive strategies for combating pandemic influenza.
Methods
We use a stochastic model of pandemic influenza to investigate realistic strategies that can be used in reaction to developing outbreaks. The model is calibrated to documented illness attack rates and basic reproductive number (R0) estimates, and constructed to represent a typical mid-sized North American city.
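As a rough illustration of the kind of calibration described above, the sketch below simulates a stochastic chain-binomial SIR epidemic whose transmission parameter is derived from an R0 estimate. It is a minimal stand-in rather than the paper's agent-based city model; the population size, R0 = 1.4, and 4.1-day infectious period are illustrative assumptions, not the paper's calibrated values.

```python
# Minimal chain-binomial SIR sketch (not the paper's agent-based model).
# All parameter values are illustrative assumptions.
import numpy as np

def simulate_attack_rate(n=500_000, r0=1.4, infectious_days=4.1,
                         seed_infections=10, rng=None):
    """Run one stochastic SIR epidemic; return the final illness attack rate."""
    rng = rng if rng is not None else np.random.default_rng()
    gamma = 1.0 / infectious_days        # daily recovery probability
    beta = r0 * gamma                    # daily transmission rate implied by R0
    s, i, r = n - seed_infections, seed_infections, 0
    while i > 0:
        p_inf = 1.0 - np.exp(-beta * i / n)   # per-susceptible daily infection prob.
        new_inf = rng.binomial(s, p_inf)      # stochastic new infections
        new_rec = rng.binomial(i, gamma)      # stochastic recoveries
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r / n                              # cumulative illness attack rate

rates = [simulate_attack_rate(rng=np.random.default_rng(k)) for k in range(20)]
print(f"mean attack rate over 20 runs: {np.mean(rates):.1%}")
```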
Results
Our model predicts an average illness attack rate of 34.1% in the absence of intervention, with total costs associated with morbidity and mortality of US$… million. These figures can be reduced to … and US$37 million, respectively, when low-coverage reactive vaccination and limited antiviral use are combined with practical, minimally disruptive social distancing strategies, including short-term, as-needed closure of individual schools, even when vaccine supply-chain-related delays occur. Results improve with increasing vaccination coverage and higher vaccine efficacy.
Conclusions
Such combination strategies can be substantially more effective than vaccination alone from epidemiological and economic standpoints, and warrant strong consideration by public health authorities when reacting to future outbreaks of pandemic influenza.
Optimization of the Transient and Steady-State Behavior of Discrete Event Systems
We present a general framework for applying simulation to optimize the behavior of discrete event systems. Our approach involves modeling the discrete event system under study as a general state space Markov chain whose distribution depends on the decision parameters. We then show how simulation and the likelihood ratio method can be used to evaluate the performance measure of interest and its gradient, and we present conditions that guarantee that the Robbins-Monro stochastic approximation algorithm will converge almost surely to the optimal values of the decision parameters. Both transient and steady-state performance measures are considered. For steady-state performance measures, we consider both the case in which the Markov chain of interest is regenerative in the standard sense and the case in which it is Harris recurrent, and thereby regenerative in a wider sense.
Keywords: stochastic optimization, simulation, stochastic approximation, gradient estimation, Robbins-Monro algorithm, regenerative method, likelihood ratio method, Markov chains, Harris recurrence
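To make the combination of likelihood ratio gradient estimation and Robbins-Monro iteration concrete, here is a minimal sketch on a toy problem rather than a general discrete event system: it minimizes alpha(theta) = E[X^2] for X ~ N(theta, 1), whose true optimizer is theta = 0. The batch size and step sizes are illustrative assumptions.

```python
# Toy sketch: Robbins-Monro driven by a likelihood ratio gradient estimate.
# Not a general discrete event system; settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta = 3.0                                # initial value of the decision parameter
for k in range(1, 2001):
    x = rng.normal(theta, 1.0, size=50)    # batch of simulated outputs under theta
    # Likelihood ratio (score function) gradient estimator:
    # h(X) * d/dtheta log f_theta(X) = X^2 * (X - theta)
    grad = np.mean(x**2 * (x - theta))
    theta -= (1.0 / k) * grad              # Robbins-Monro step with a_k = 1/k
print(f"estimated optimizer: {theta:.3f}  (true optimizer: 0.0)")
```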
A Method for Discrete Stochastic Optimization
This paper addresses the problem of optimizing a function over a finite or countably infinite set of alternatives, in situations where this objective function cannot be evaluated exactly, but has to be estimated or measured. A special focus is on situations where simulation is used to evaluate the objective function. We present two versions of a new iterative method for solving such discrete stochastic optimization problems. In each iteration of the proposed method, a neighbor of the "current" alternative is selected, and estimates of the objective function evaluated at the current and neighboring alternatives are compared. The alternative that has a better observed function value becomes the next current alternative. We show how one version of the proposed method can be used to solve discrete optimization problems where the objective function is evaluated using transient or steady-state simulation, and we show how the other version can be applied to solve a special class of discrete stochastic optimization problems and present some numerical results. A major strength of the proposed method is that it spends most of the computational effort at local minimizers of the objective function. In fact, we show that for both versions of the proposed method, the alternative that has been visited most often in the first m iterations converges almost surely to a local optimizer of the objective function as m goes to infinity.
Keywords: simulation, random walks, nonhomogeneous Markov chains
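A hedged sketch of the comparison-based search described above, applied to a toy problem: minimizing f(x) = (x - 7)^2 over {0, ..., 20} when only noisy observations of f are available. The neighborhood structure, noise level, and iteration budget are assumptions for illustration.

```python
# Toy instance of the neighbor-comparison random search; the most frequently
# visited alternative serves as the estimate of the optimizer.
import numpy as np

rng = np.random.default_rng(1)
def noisy_f(x):                          # noisy measurement of f(x) = (x - 7)^2
    return (x - 7) ** 2 + rng.normal(0.0, 5.0)

visits = {x: 0 for x in range(21)}
current = 0
for _ in range(5000):
    step = -1 if rng.random() < 0.5 else 1
    neighbor = min(max(current + step, 0), 20)   # random neighbor on the line
    if noisy_f(neighbor) < noisy_f(current):     # compare noisy estimates
        current = neighbor                       # move to the better alternative
    visits[current] += 1

# The alternative visited most often estimates the optimizer.
print("estimated optimizer:", max(visits, key=visits.get))   # expected: 7
```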
Throughput Maximization for Tandem Lines with Two Stations and Flexible Servers
For a Markovian queueing network with two stations in tandem, finite intermediate buffer, and M flexible servers, we study how the servers should be assigned dynamically to stations in order to obtain optimal long-run average throughput. We assume that each server can work on only one job at a time, that several servers can work together on a single job, and that the travel times between stations are negligible. Under these assumptions, we completely characterize the optimal policy for systems with three servers. We also provide a conjecture for the structure of the optimal policy for systems with four or more servers that is supported by extensive numerical evidence. Finally, we develop heuristic server assignment policies for systems with three or more servers that are easy to implement, robust with respect to the server capabilities, and generally appear to yield near-optimal long-run average throughput.
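The following sketch illustrates the flavor of such heuristic policies (it is not the optimal policy characterized in the paper): a continuous-time simulation of a two-station tandem line with an intermediate buffer and three collaborative servers whose service rates add, under a simple threshold assignment rule. All rates, the buffer size, and the thresholds are illustrative assumptions.

```python
# Tandem line with flexible, collaborative servers under a threshold heuristic.
# Station 1 is assumed never starved of raw jobs (saturated input).
import numpy as np

rng = np.random.default_rng(2)
B = 5                                    # intermediate buffer capacity (assumed)
mu1 = np.array([1.0, 0.8, 0.6])          # server service rates at station 1 (assumed)
mu2 = np.array([0.9, 0.7, 0.5])          # server service rates at station 2 (assumed)

def assign(b):
    """Threshold heuristic: fill the buffer when empty, drain it when full,
    otherwise split the servers between the stations."""
    if b == 0:
        return mu1.sum(), 0.0            # all three servers at station 1
    if b == B:
        return 0.0, mu2.sum()            # all three servers at station 2
    return mu1[:2].sum(), mu2[2:].sum()  # servers 1-2 at station 1, server 3 at station 2

t, b, departures = 0.0, 0, 0
while t < 50_000.0:
    r1, r2 = assign(b)                   # collaborative rates add within a station
    t += rng.exponential(1.0 / (r1 + r2))
    if rng.random() < r1 / (r1 + r2):
        b += 1                           # service completion at station 1
    else:
        b -= 1                           # service completion at station 2
        departures += 1
print(f"estimated long-run throughput: {departures / t:.3f} jobs per unit time")
```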
A Simulated Annealing Algorithm with Constant Temperature for Discrete Stochastic Optimization
We present a modification of the simulated annealing algorithm designed for solving discrete stochastic optimization problems. Like the original simulated annealing algorithm, our method has the hill-climbing feature, so it can find global optimal solutions to discrete stochastic optimization problems with many local solutions. However, our method differs from the original simulated annealing algorithm in that it uses a constant (rather than decreasing) temperature. We consider two approaches for estimating the optimal solution. The first approach uses the number of visits the algorithm makes to the different states (divided by a normalizer) to estimate the optimal solution. The second approach uses the state that has the best average estimated objective function value as the estimate of the optimal solution. We show that both variants of our method are guaranteed to converge almost surely to the set of global optimal solutions, and discuss how our work applies in the discrete deterministic optimization setting. We also show how both variants can be applied for solving discrete optimization problems when the objective function values are estimated using either transient or steady-state simulation. Finally, we include some encouraging numerical results documenting the behavior of the two variants of our algorithm when applied for solving two versions of a particular discrete stochastic optimization problem, and compare their performance with that of other variants of the simulated annealing algorithm designed for solving discrete stochastic optimization problems.
Keywords: global optimization, discrete parameters, simulated annealing, simulation optimization
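As a concrete toy instance of the constant-temperature variant (with settings that are illustrative assumptions, not the paper's recommendations), the sketch below minimizes a noisy quadratic over {0, ..., 30} and uses the first estimator from the abstract, the most frequently visited state.

```python
# Constant-temperature simulated annealing on a noisy discrete problem.
import numpy as np

rng = np.random.default_rng(3)
def noisy_f(x):                          # noisy measurement of f(x) = (x - 12)^2
    return (x - 12) ** 2 + rng.normal(0.0, 10.0)

T = 5.0                                  # constant temperature (never decreased)
visits = np.zeros(31, dtype=int)
current = 0
for _ in range(20_000):
    neighbor = int(np.clip(current + rng.choice([-1, 1]), 0, 30))
    delta = noisy_f(neighbor) - noisy_f(current)
    # Accept improving moves always; accept worsening moves with
    # probability exp(-delta / T), which allows escapes from local minima.
    if delta <= 0 or rng.random() < np.exp(-delta / T):
        current = neighbor
    visits[current] += 1

# First estimator from the abstract: the most frequently visited state.
print("estimated optimizer:", int(np.argmax(visits)))       # expected: 12
```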