    Dynamic Health Policies for Controlling the Spread of Emerging Infections: Influenza as an Example

    The recent appearance and spread of novel infectious pathogens provide motivation for using models as tools to guide public health decision-making. Here we describe a modeling approach for developing dynamic health policies that allow for adaptive decision-making as new data become available during an epidemic. In contrast to static health policies, which have generally been selected by comparing the performance of a limited number of pre-determined sequences of interventions within simulation or mathematical models, dynamic health policies produce "real-time" recommendations for the best current intervention based on the observable state of the epidemic. Using cumulative real-time data on disease spread, coupled with current information about resource availability, these policies recommend interventions that optimally use available resources to preserve the overall health of the population. We illustrate the design and implementation of a dynamic health policy for the control of a novel strain of influenza, where we assume that two types of intervention may be available during the epidemic: (1) vaccines and antiviral drugs, and (2) transmission-reducing measures, such as social distancing or mask use, that may be turned "on" or "off" repeatedly during the course of the epidemic. In this example, the optimal dynamic health policy maximizes the overall population's health during the epidemic by specifying at any point in time, based on observable conditions, (1) the number of individuals to vaccinate if vaccines are available, and (2) whether the transmission-reducing intervention should be employed or removed.
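The state-feedback idea described in the abstract — observe the epidemic each day, then choose how many to vaccinate and whether the transmission-reducing measure is "on" — can be illustrated with a toy sketch. This is a hypothetical discrete-time SIR model, not the paper's formulation; all numbers (the 2% prevalence threshold, 50% transmission cut, daily dose capacity) are assumptions for illustration.

```python
# Illustrative sketch only: a discrete-time SIR simulation in which a
# dynamic policy picks today's interventions from the observed state.
# All parameters (threshold, efficacy, capacity) are hypothetical.

def dynamic_policy(infected, population, vaccine_stock):
    """Map the observable epidemic state to today's interventions."""
    # Transmission-reducing measure is "on" while prevalence > 2% (assumed).
    distancing_on = infected / population > 0.02
    # Vaccinate up to an assumed daily capacity of 500 doses.
    doses = min(vaccine_stock, 500)
    return distancing_on, doses

def simulate(days=120, population=100_000, beta=0.3, gamma=0.1,
             vaccine_stock=20_000):
    s, i, r = population - 10.0, 10.0, 0.0
    for _ in range(days):
        distancing_on, doses = dynamic_policy(i, population, vaccine_stock)
        eff_beta = beta * (0.5 if distancing_on else 1.0)  # assumed 50% cut
        new_inf = min(eff_beta * s * i / population, s)
        vaccinated = min(doses, s - new_inf)  # vaccinees become immune
        vaccine_stock -= vaccinated
        recoveries = gamma * i
        s -= new_inf + vaccinated
        i += new_inf - recoveries
        r += recoveries + vaccinated
    return s, i, r
```

The point of the sketch is that `dynamic_policy` is re-evaluated every step from observed quantities, rather than being a pre-committed schedule of interventions.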

    Medical decision making for patients with Parkinson disease under Average Cost Criterion

    Parkinson's disease (PD) is one of the most common disabling neurological disorders and imposes a substantial burden on patients, their families, and society as a whole in terms of increased health-resource use and reduced quality of life. For all stages of PD, medication therapy is the preferred treatment. The failure of medication regimens to prevent disease progression and long-term side effects has led to a resurgence of interest in surgical procedures. Partially observable Markov decision processes (POMDPs) are a powerful and appropriate technique for such decision making. In this paper we apply a POMDP as a supportive tool for clinical decisions in the treatment of patients with Parkinson's disease. The aim of the model is to determine the critical threshold at which to perform surgery in order to minimize total costs over a patient's lifetime (where the costs incorporate duration of life, quality of life, and monetary units). Under some reasonable conditions reflecting the practical meaning of deterioration, and based on various diagnostic observations, we find an optimal average-cost policy for patients with PD with three deterioration levels.
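The core POMDP mechanics behind such a threshold policy — maintain a belief over the three deterioration levels, update it with each diagnostic observation, and operate once the belief in severe deterioration crosses a threshold — can be sketched as follows. The matrices and the 0.6 threshold are hypothetical illustration values, not the paper's estimates.

```python
# Minimal sketch (hypothetical numbers, not the paper's model): a POMDP
# belief update over three deterioration levels, combined with a simple
# threshold rule that recommends surgery once the belief in the worst
# level is high enough.

# Assumed transition matrix under medication: the disease stays at its
# level or progresses to the next one.
T = [[0.90, 0.10, 0.00],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]

# Assumed observation likelihoods P(symptom score | true level).
O = [[0.70, 0.25, 0.05],
     [0.20, 0.60, 0.20],
     [0.05, 0.25, 0.70]]

def belief_update(belief, obs):
    """One Bayes step: predict through T, then condition on obs."""
    predicted = [sum(belief[i] * T[i][j] for i in range(3)) for j in range(3)]
    posterior = [predicted[j] * O[j][obs] for j in range(3)]
    z = sum(posterior)
    return [p / z for p in posterior]

def recommend_surgery(belief, threshold=0.6):
    """Threshold policy: operate when severe deterioration is likely."""
    return belief[2] >= threshold
```

Starting from a belief concentrated on the mildest level, a run of severe symptom scores shifts probability mass toward the worst level until the threshold rule fires; the true state itself is never observed directly, which is what distinguishes a POMDP from an ordinary Markov decision process.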

    AMPLE: an anytime planning and execution framework for dynamic and uncertain problems in robotics

    Acting in robotics is driven by reactive and deliberative reasoning, which compete through concurrent execution and planning processes. Properly balancing reactivity and deliberation remains an open question for the harmonious execution of deliberative plans in complex robotic applications. We propose a flexible algorithmic framework that allows continuous real-time planning of complex tasks in parallel with their execution. Our framework, named AMPLE, is oriented towards modular robotic architectures in the sense that it turns planning algorithms into services that must be generic, reactive, and valuable. Services deliver optimized actions at precise time points following requests from other modules, which include the states and dates at which actions are needed. To this end, our framework is divided into two concurrent processes: a planning thread, which receives planning requests and delegates action selection to embedded planning software in compliance with the queue of internal requests, and an execution thread, which orchestrates these planning requests as well as action execution and state monitoring. We show how the behavior of the execution thread can be parametrized to achieve various strategies, which may differ, for instance, in how internal planning requests are distributed over possible future execution states in anticipation of the uncertain evolution of the system, or over different underlying planners to account for several levels of reasoning. We demonstrate the flexibility and relevance of our framework on various robotic benchmarks and real experiments involving complex planning problems of different natures that could not be properly tackled by existing dedicated planning approaches relying on the standard plan-then-execute loop.
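The two-thread organization the abstract describes — a background planning thread consuming a queue of requests, and an execution side that never blocks on it — can be illustrated with a toy sketch. The class and method names here are hypothetical, not AMPLE's actual API; the key property shown is that the executor always gets *some* action immediately, falling back to a default when deliberation has not finished.

```python
# Toy sketch of plan-while-executing (hypothetical API, not AMPLE's):
# a planner thread serves planning requests in the background while
# the execution side keeps acting, using a safe default action when
# no plan is ready in time.

import queue
import threading
import time

class AnytimePlanner:
    def __init__(self):
        self.requests = queue.Queue()   # planning requests from executor
        self.plans = {}                 # state -> planned action
        self.lock = threading.Lock()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        # Planning thread: process queued requests one at a time.
        while True:
            state = self.requests.get()
            if state is None:           # shutdown sentinel
                break
            time.sleep(0.01)            # stand-in for deliberative planning
            with self.lock:
                self.plans[state] = f"planned_action_for_{state}"

    def request(self, state):
        """Ask for a plan covering an anticipated future state."""
        self.requests.put(state)

    def action_for(self, state, default="safe_default"):
        """Reactive side: never block on the planner; act with what we have."""
        with self.lock:
            return self.plans.get(state, default)

    def shutdown(self):
        self.requests.put(None)
        self.thread.join()
```

In a real system the executor would issue `request` calls for several possible future states in anticipation of the uncertain evolution mentioned in the abstract, which is exactly the parametrization the paper varies.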