
    Approximate IPA: Trading Unbiasedness for Simplicity

    When Perturbation Analysis (PA) yields unbiased sensitivity estimators for expected-value performance functions in discrete event dynamic systems, it can be used for performance optimization of those functions. However, even when PA is known to be unbiased, the complexity of its estimators often scales poorly with the system's size. The purpose of this paper is to suggest an alternative approach to optimization that balances precision against computing effort by trading off complicated, unbiased PA estimators for simple, biased approximate estimators. Furthermore, we provide guidelines for developing such estimators, largely based on the Stochastic Flow Modeling framework. We suggest that if the relative error (or bias) is not too large, then optimization algorithms such as stochastic approximation converge to a (local) minimum just as in the case where no approximation is used. We apply this approach to an example of balancing loss with buffer cost in a finite-buffer queue, and prove a crucial upper bound on the relative error. This paper presents an initial study of the proposed approach, and we believe that if the idea gains traction it may lead to a significant expansion of the scope of PA in the optimization of discrete event systems. Comment: 8 pages, 8 figures
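The trade-off described in this abstract can be illustrated with a toy stochastic approximation run (an illustrative sketch only, not the paper's SFM-based estimators): a hypothetical one-dimensional cost f(θ) = (θ − 2)² is minimized using a gradient estimator that carries a small *relative* bias, standing in for a simple, biased approximate PA estimator.

```python
import random

def biased_grad(theta, bias=0.05, noise=0.1):
    """Noisy gradient of f(theta) = (theta - 2)**2 with a small relative
    (multiplicative) bias, mimicking a simple, biased approximate estimator."""
    true_grad = 2.0 * (theta - 2.0)
    return true_grad * (1.0 + bias) + random.gauss(0.0, noise)

def stochastic_approximation(theta0=0.0, steps=5000):
    """Robbins-Monro iteration with step sizes 1/n, driven by the biased
    gradient estimator instead of an exact one."""
    random.seed(0)
    theta = theta0
    for n in range(1, steps + 1):
        theta -= (1.0 / n) * biased_grad(theta)
    return theta

theta_star = stochastic_approximation()
```

Because the bias here is multiplicative (a relative error), the root of the biased mean gradient coincides with the true minimizer θ = 2, which matches the intuition that a sufficiently small relative error leaves the convergence point essentially unchanged.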

    Forecasting using neural networks and short-trajectory data

    Forecasting the likelihood, timing, and nature of events is a major goal of modeling stochastic dynamical systems. When the event is rare in comparison with the timescales of simulation and/or measurement needed to resolve the elemental dynamics, accurate forecasting from direct observations becomes challenging. In such cases a more effective approach is to cast statistics of interest as solutions to Feynman-Kac equations (partial differential equations). Here, we develop an approach to solving Feynman-Kac equations by training neural networks on short-trajectory data. Unlike previous approaches, our method avoids assumptions about the underlying model and dynamics. This makes it applicable to complex computational models and observational data. We illustrate the advantages of our method using a low-dimensional model that facilitates visualization, and this analysis motivates an adaptive sampling strategy that allows on-the-fly identification of, and addition of data to, regions important for predicting the statistics of interest. Finally, we demonstrate that we can compute accurate statistics for a 75-dimensional model of sudden stratospheric warming. This system provides a stringent test bed for our method. Comment: 20 pages, 12 figures
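As a concrete toy analogue of the short-trajectory idea (not the paper's neural-network method), the discrete Feynman-Kac equation for the mean first-passage time of a symmetric random walk can be solved by regressing on one-step trajectory data; here a lookup table stands in for the network, and all parameter values are illustrative assumptions.

```python
import random

# The mean first-passage time u(x) of a symmetric random walk on {0,...,N}
# solves a discrete Feynman-Kac (boundary-value) equation:
#     u(x) = 1 + 0.5*u(x-1) + 0.5*u(x+1),   u(0) = u(N) = 0,
# with analytic solution u(x) = x*(N - x). We estimate u from many *short*
# (one-step) trajectories rather than full first-passage paths.

def fit_mfpt(N=10, sweeps=500, samples=400, seed=1):
    rng = random.Random(seed)
    u = [0.0] * (N + 1)                      # boundary values stay 0
    for _ in range(sweeps):
        new_u = [0.0] * (N + 1)
        for x in range(1, N):
            total = 0.0
            for _ in range(samples):
                step = rng.choice((-1, 1))   # one short trajectory from x
                total += 1.0 + u[x + step]   # regression target
            new_u[x] = total / samples
        u = new_u
    return u

u = fit_mfpt()
```

Each sweep fits the current table to targets built from short-trajectory data only, mirroring how the paper's networks are trained without access to the generator of the underlying dynamics.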

    Bayesian Kernel Methods for the Risk Analysis and Resilience Modeling of Critical Infrastructure Systems

    The protection of critical infrastructures has recently garnered attention, with an emphasis on analyzing the risk and improving the resilience of such systems. With the abundance of data, risk managers should be able to better inform preparedness and recovery decision making under uncertainty. It is important, however, to develop and utilize the necessary methodologies that bridge between data and decisions. The goal of this dissertation is to (i) predict the likelihood of risk, (ii) assess the consequences of a disruption, and (iii) inform preparedness and recovery decision making. This research presents a data-driven analysis of the risk and resilience of critical infrastructure systems. First, a new Bayesian kernel model is developed to predict the frequency of failures, and a Beta Bayesian kernel model is deployed to model resilience-based importance measures. Bayesian kernel models were originally developed for Gaussian distributions and later extended to other continuous probability distributions; this research develops a Poisson Bayesian kernel model to accommodate count data. Second, interdependency models are integrated with decision analysis and resilience quantification techniques to assess the multi-industry economic impact of critical infrastructure resilience and to inform preparedness and recovery decision making under uncertainty. Examples of critical infrastructure systems are inland waterways, which are critical elements in the nation’s civil infrastructure and the world’s supply chain. They allow for a cost-effective flow of approximately $150 billion worth of commodities annually across industries and geographic locations, which is why they are called “inland marine highways.” Aging components (i.e., locks and dams), combined with adverse weather conditions, affect the reliability and resilience of inland waterways. Frequent disruptions and lengthy recovery times threaten regional commodity flows and, more broadly, the multiple industries that rely on those commodities. While policymakers understand the increasing need for inland waterway rehabilitation and preparedness investment, resources are limited, and select projects are funded each year to improve only certain components of the network. As a result, a number of research questions arise. What is the impact of infrastructure system disruptions, and how can they be predicted? What metrics should be used to identify critical components and determine the system’s resilience? What are the best risk management strategies in terms of preparedness investment and recovery prioritization? A Poisson Bayesian kernel model is developed and deployed to predict the frequency of lock and dam closures. Economic dynamic interdependency models, along with stochastic inoperability multiobjective decision trees and resilience metrics, are used to assess the broader impact of a disruption that closes a port or a river link and affects multiple interdependent industries. Stochastic resilience-based measures are analyzed to determine the critical waterway components, more specifically locks and dams, that contribute most to overall waterway system resilience. A data-driven case study illustrates these methods by describing commodity flows along the various components of the U.S. Mississippi River Navigation System and employs them to motivate preparedness and recovery strategies.
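A minimal sketch of the Poisson Bayesian kernel idea (illustrative only; the dissertation's actual model, covariates, and hyperparameters are not reproduced here): each observed count contributes to a local Gamma-Poisson posterior in proportion to its kernel weight, and the predicted closure rate is the posterior mean.

```python
import math

def gaussian_kernel(x, xi, h=1.0):
    """Similarity between a query point x and an observation location xi."""
    return math.exp(-((x - xi) ** 2) / (2.0 * h * h))

def predict_rate(x, data, alpha=1.0, beta=1.0, h=1.0):
    """Kernel-weighted Gamma-Poisson posterior mean of the event rate at x.

    data: list of (xi, yi) pairs, where yi is an observed count (e.g. lock
    closures) at covariate value xi. alpha, beta are the Gamma prior
    hyperparameters; the kernel localizes the conjugate update so that
    nearby observations dominate the prediction.
    """
    num = alpha + sum(gaussian_kernel(x, xi, h) * yi for xi, yi in data)
    den = beta + sum(gaussian_kernel(x, xi, h) for xi, _ in data)
    return num / den
```

For example, three co-located observations of 3 closures each pull the predicted rate from the prior mean alpha/beta = 1 toward the data mean 3 (here to 10/4 = 2.5).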

    Availability Modeling of Modular Software

    Dependability evaluation is a basic component in the assessment of the quality of repairable systems. We develop here a general model, specifically designed for software systems, that allows the evaluation of different dependability metrics, in particular availability measures. The model is of the structural type, based on Markov process theory. In particular, it can be viewed as an attempt to overcome some limitations of the well-known Littlewood reliability model for modular software. We give both the mathematical results necessary for the transient analysis of this general model and the algorithms that allow it to be evaluated efficiently. More specifically, from the parameters describing (i) the evolution of the execution process when there is no failure, (ii) the failure processes together with the way they affect the execution, and (iii) the recovery process, we obtain the distribution function of the number of failures over a fixed mission period. In fact, we obtain dependability metrics that are much more informative than the usual ones given in a white-box approach. We briefly discuss the estimation procedures for the parameters of the model. Through simple examples, we illustrate the interest of such a structural view and explain how to take into account reliability growth of part of the software with the transformation approach developed by Laprie et al. Finally, the complete transient analysis of our model allows us to discuss, in our context, the Poissonian approximation reported by Littlewood for his model.
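The transient (point) availability computation can be illustrated on the smallest possible Markov model, a single alternating up/down module, evaluated by uniformization. This is a toy stand-in for the structural model of the paper, with rates lam and mu chosen arbitrarily.

```python
import math

def availability_uniformized(lam, mu, t, n_terms=200):
    """Point availability A(t) of a two-state (up/down) Markov model,
    computed by uniformization: failures occur at rate lam, repairs at
    rate mu, and the system starts in the up state."""
    q = lam + mu                      # uniformization rate (>= max exit rate)
    # DTMC matrix P = I + Q/q over the states (up, down)
    p = [[1.0 - lam / q, lam / q],
         [mu / q, 1.0 - mu / q]]
    probs = [1.0, 0.0]                # distribution after k DTMC steps
    poisson = math.exp(-q * t)        # Poisson(q*t) weight for k = 0
    avail = 0.0
    for k in range(n_terms):
        avail += poisson * probs[0]   # weight * P(up after k steps)
        probs = [probs[0] * p[0][0] + probs[1] * p[1][0],
                 probs[0] * p[0][1] + probs[1] * p[1][1]]
        poisson *= q * t / (k + 1)
    return avail
```

For this two-state case the series reproduces the closed form A(t) = mu/(lam+mu) + lam/(lam+mu)·e^(−(lam+mu)t); the same truncated-series scheme scales to the larger state spaces of a modular structural model.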

    Optimization of Surgery Scheduling in Multiple Operating Rooms with Post Anesthesia Care Unit Capacity Constraints

    Surgery schedules are subject to disruptions due to duration uncertainty in surgical activities, patient punctuality, surgery cancellations and surgical emergencies. Unavailable recovery resources, such as post-anesthesia care unit (PACU) beds, may also cause deviations from the surgical schedule. Such disruptions may result in inefficient utilization of medical resources, suboptimal patient care, and patient and staff dissatisfaction. To alleviate these adverse effects, we study three open challenges in the field of surgery scheduling. The case we study is a surgical suite with multiple operating rooms (ORs) and a shared PACU. The overall objective is to minimize the expected cost incurred from patient waiting time, OR idle time, OR blocking time, OR overtime and PACU overtime. In the first part of this work, we study surgery scheduling with PACU capacity constraints. With surgery sequences predetermined in each OR, a discrete event dynamic system (DEDS) and a DEDS-based stochastic optimization model are devised for the problem. A sample-gradient-based algorithm is proposed for the sample average approximation of our formulation. Numerical experiments suggest that the proposed method identifies near-optimal solutions and outperforms previous methods. It is also shown that considerable cost savings (11.8% on average) are possible in hospitals where PACU beds are a constraint. In the second part, we propose a two-stage solution method for stochastic surgery sequencing and scheduling with PACU capacity constraints. In the first stage, we propose a mixed-integer programming model with a surrogate objective that is much easier to solve than the original problem. The Lagrangian relaxation of the surrogate model can be decomposed by patients into network-structured subproblems, which can be solved efficiently by dynamic programming. The first-stage model is solved by the subgradient method to determine the surgery sequence in each OR. Given the surgery sequence, scheduled start times are determined in the second stage using the sample-gradient descent algorithm. Our solution method outperforms benchmark methods proposed in the literature by 11% to 43% in numerical experiments, with our sequencing method contributing 45% to 80% of the overall improvement. We also illustrate the improvement in PACU utilization after using our scheduling strategy. In the third part, we propose a proactive and reactive method for surgery scheduling under surgical disruptions. A surgical schedule considering possible disruptions is constructed prior to the day of surgery and is then adjusted dynamically in response to disruptions on the day of surgery. The proposed method is based on stochastic optimization and a sample-gradient descent algorithm, and is the first non-metaheuristic approach proposed for this problem. In addition, the to-follow scheduling policy, which is widely used in practice, is considered in this study; this differs from previous surgical scheduling studies, which assume no surgery can start before its scheduled start time. The proposed method finds near-optimal solutions and outperforms the scheduling method commonly used in practice.
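A drastically simplified version of the underlying scheduling trade-off (one OR, one slot; not the dissertation's DEDS model or its sample-gradient algorithm) shows how a sample average approximation balances idle-time and waiting-time costs: with a piecewise-linear cost, the SAA minimizer reduces to a newsvendor-style sample quantile.

```python
def saa_start_time(prev_durations, c_idle=1.0, c_wait=2.0):
    """Sample average approximation for scheduling the start time s of the
    second surgery after a first surgery of random duration d (toy model):
    if s > d the OR idles for (s - d) at cost c_idle per unit time;
    if s < d the patient waits for (d - s) at cost c_wait per unit time.
    The SAA-optimal s is (approximately) the c_wait/(c_idle + c_wait)
    quantile of the sampled durations, a newsvendor-style solution."""
    q = c_wait / (c_idle + c_wait)
    d = sorted(prev_durations)
    return d[min(len(d) - 1, int(q * len(d)))]
```

With c_wait twice c_idle, the schedule deliberately risks some idle time to keep waiting rare, e.g. picking the 2/3 sample quantile of the observed durations.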

    Diversity, variability and persistence elements for a non-equilibrium theory of eco-evolutionary dynamics

    Natural ecosystems persist in variable environments by virtue of a suite of traits that span from the individual to the community, and from ecological to evolutionary scenarios. How these internal characteristics operate to allow living beings to cope with the uncertainty present in their environments is the subject matter of quantitative theoretical ecology. Under the framework of structural realism, the present dissertation project has advocated for mathematical modeling as a strategy of abstraction. The goal is to explore whether a range of natural ecosystems display the features of complex systems, and to evaluate whether these features provide insights into how they persist in their current environments and how they might cope with changing environments in the future. A suite of inverse, linear and non-linear dynamical mathematical models, including non-equilibrium catastrophe models and structured demographic approaches, is applied to five case studies of natural systems fluctuating over the long term in diverse scenarios: phytoplankton in the global ocean, a mixotrophic plankton food web in a marine coastal environment, a wintering waterfowl community in a major Mediterranean biodiversity hot-spot, a breeding colony of a keystone avian scavenger in a mountainous environment, and the shorebird community inhabiting the UK coast. In all case studies, there is strong evidence that ecosystems are able to closely track their common environment through several strategies. For example, in global phytoplankton communities, a latitudinal gradient in the positive impact of functional diversity on community stability counteracts the increasing environmental variability with latitude. Mixotrophy, by linking several feeding strategies in a food web, internally drives community dynamics to the edge of instability while maximizing network complexity. In contrast, an externally generated major perturbation, operating through planetary climatic disruptions, induces an abrupt regime shift between alternative stable states in the wintering waterfowl community. Overall, the natural systems studied are shown to possess features of complex systems: connectivity, autonomy, emergence, non-equilibrium, non-linearity, self-organization and coevolution. In rapidly changing environments, these features are hypothesized to allow natural systems to respond robustly to stress and disturbances to a large extent. At the same time, future scenarios will probably be characterized by conditions never before experienced by the studied systems; how they will respond to them is an open question. Based on the results of this dissertation, future research directions in theoretical quantitative ecology will likely benefit from non-autonomous dynamical system approaches, where model parameters are a function of time, and from deeper exploration of global attractors and the non-equilibrium nature of dynamical systems.

    An event-driven approach to control and optimization of multi-agent systems

    This dissertation studies the application of several event-driven control schemes in multi-agent systems. First, a new cooperative receding horizon (CRH) controller is designed and applied to a class of maximum reward collection problems. Target rewards are time-varying with finite deadlines, and the environment contains uncertainties. The new methodology adopts an event-driven approach by optimizing the control over a planning horizon and updating it over a shorter action horizon. The proposed CRH controller addresses several issues, including potential instabilities and oscillations. It also improves the estimated reward-to-go, which enhances the overall performance of the controller. The other major contribution is that the originally infinite-dimensional feasible control set is reduced to a finite set at each time step, which improves the computational cost of the controller. Second, a new event-driven methodology is studied for trajectory planning in multi-agent systems. A rigorous optimal control solution is first pursued using numerical methods, which turn out to be computationally infeasible in real-time applications. The problem is then parameterized using several families of parametric trajectories. The solution to the parametric optimization relies on an unbiased estimate of the objective function's gradient obtained by the Infinitesimal Perturbation Analysis (IPA) method. The premise of event-driven methods is that the events involved are observable so as to "excite" the underlying event-driven controller. However, it is not always obvious that these events actually take place under every feasible control, in which case the controller may be useless. This issue of event excitation, which arises especially in multi-agent systems with a finite number of targets, is studied and addressed by introducing a novel performance measure which generates a potential field over the mission space. The effect of the new performance metric is demonstrated through simulation and analytical results.
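The IPA gradient estimate mentioned in this abstract can be sketched in its textbook single-server-queue form (an illustrative example, not the dissertation's trajectory-planning estimator; all rates and the service-scale parameterization are assumptions): the sample-path derivative of waiting time with respect to a service-scale parameter theta accumulates within a busy period and resets at each idle period.

```python
import random

def ipa_mean_wait_gradient(theta, n_customers=10000, seed=42):
    """IPA sketch: estimate d(mean waiting time)/d(theta) in a single-server
    queue where service times are theta * X_n with X_n ~ Exp(1) and
    interarrival times ~ Exp(0.5), via the Lindley recursion
    W_{n+1} = max(0, W_n + theta*X_n - A_n)."""
    rng = random.Random(seed)
    w, dw = 0.0, 0.0                   # waiting time and d(w)/d(theta)
    grad_sum = 0.0
    for _ in range(n_customers):
        x = rng.expovariate(1.0)       # base service requirement X_n
        a = rng.expovariate(0.5)       # interarrival time A_n (mean 2)
        if w + theta * x - a > 0.0:    # busy period continues
            dw += x                    # perturbation propagates
            w += theta * x - a
        else:                          # idle period: perturbation resets
            dw = 0.0
            w = 0.0
        grad_sum += dw
    return grad_sum / n_customers
```

Averaging dw over customers yields an unbiased gradient estimate from a single simulated sample path, which is exactly the property that makes IPA attractive inside the parametric trajectory optimization described above.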