
    Cutset Sampling for Bayesian Networks

    The paper presents a new sampling methodology for Bayesian networks that samples only a subset of the variables and applies exact inference to the rest. Cutset sampling is a network-structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph, conditioned on the observed and sampled variables, is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
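    The Rao-Blackwellisation idea behind cutset sampling can be illustrated on a toy chain network A → B → C (this example network and its probabilities are illustrative assumptions, not from the paper): sample only the "cutset" variable A, and sum out the remaining variable exactly instead of sampling it.

```python
import random

# Toy chain network A -> B -> C with binary variables (illustrative example).
# Sampling only the cutset {A} and summing out B exactly is the
# Rao-Blackwellisation idea behind cutset sampling.
p_a = 0.6                       # P(A=1)
p_b_given_a = {0: 0.2, 1: 0.7}  # P(B=1 | A=a)
p_c_given_b = {0: 0.1, 1: 0.8}  # P(C=1 | B=b)

def exact_c_given_a(a):
    """Exact P(C=1 | A=a), summing out B (the 'exact inference' step)."""
    pb = p_b_given_a[a]
    return pb * p_c_given_b[1] + (1 - pb) * p_c_given_b[0]

def cutset_estimate(n, seed=0):
    """Rao-Blackwellised estimate of P(C=1) from n cutset samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        a = 1 if rng.random() < p_a else 0  # sample only the cutset variable
        total += exact_c_given_a(a)         # exact conditional, not a sample
    return total / n

# Ground truth for comparison: P(C=1) = sum_a P(a) * P(C=1 | a)
truth = p_a * exact_c_given_a(1) + (1 - p_a) * exact_c_given_a(0)
print(truth, cutset_estimate(10_000))
```

    Because each sample contributes an exact conditional probability rather than a 0/1 draw, the per-sample variance is lower than for plain forward sampling, which is the convergence benefit the abstract refers to.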

    Dynamic demand fulfillment in spare parts networks with multiple customer classes

    We study real-time demand fulfillment for networks consisting of multiple local warehouses, where spare parts of expensive technical systems are kept on stock for customers with different service contracts. Each service contract specifies a maximum response time in case of a failure and hourly penalty costs for contract violations. Part requests can be fulfilled from multiple local warehouses via a regular delivery, or from an external source with ample capacity via an expensive emergency delivery. The objective is to minimize delivery cost and penalty cost by smartly allocating items from the available network stock to arriving part requests. We propose a dynamic allocation rule that belongs to the class of one-step lookahead policies. To approximate the optimal relative cost, we develop an iterative calculation scheme that estimates the expected total cost over an infinite time horizon, assuming that future demands are fulfilled according to a simple static allocation rule. In a series of numerical experiments, we compare our dynamic allocation rule with the optimal allocation rule and a simple but widely used static allocation rule. We show that the dynamic allocation rule has a small optimality gap and that it achieves an average cost reduction of 7.9% compared to the static allocation rule on a large test bed containing problem instances of real-life size.
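    A one-step lookahead policy of the kind described here picks, for each arriving request, the fulfillment option minimizing immediate cost plus an approximate cost-to-go. The sketch below is a minimal illustration of that structure; the option names, costs, and the crude cost-to-go approximation are invented for the example and are not the paper's model.

```python
# One-step lookahead: immediate cost + approximate future cost, take the min.

def one_step_lookahead(options, cost_to_go):
    """options: list of (name, immediate_cost, resulting_stock_state)."""
    return min(options, key=lambda o: o[1] + cost_to_go(o[2]))

def approx_cost_to_go(state):
    """Assumed stand-in for the relative cost: low stock is penalised."""
    backorder_risk = 5.0  # illustrative penalty weight
    return sum(backorder_risk / (s + 1) for s in state)

# State = remaining stock at two local warehouses after fulfillment.
options = [
    ("warehouse_1_regular", 10.0, (1, 3)),  # nearby warehouse, cheap delivery
    ("warehouse_2_regular", 14.0, (2, 2)),  # farther, but keeps stock balanced
    ("emergency_external", 40.0, (2, 3)),   # ample capacity, expensive
]
best = one_step_lookahead(options, approx_cost_to_go)
print(best[0])
```

    In the paper, the cost-to-go is instead estimated by the iterative scheme mentioned in the abstract, assuming future demand is served by a static rule; the decision structure stays the same.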

    A User's Guide to the Brave New World of Designing Simulation Experiments

    Many simulation practitioners can get more from their analyses by using the statistical theory on design of experiments (DOE) developed specifically for exploring computer models. In this paper, we discuss a toolkit of designs for simulationists with limited DOE expertise who want to select a design and an appropriate analysis for their computational experiments. Furthermore, we provide a research agenda listing problems in the design of simulation experiments (as opposed to real-world experiments) that require more investigation. We consider three types of practical problems: (1) developing a basic understanding of a particular simulation model or system; (2) finding robust decisions or policies; and (3) comparing the merits of various decisions or policies. Our discussion emphasizes aspects that are typical for simulation, such as sequential data collection. Because the same problem type may be addressed through different design types, we discuss quality attributes of designs. Furthermore, the selection of the design type depends on the metamodel (response surface) that the analysts tentatively assume; for example, more complicated metamodels require more simulation runs. For the validation of the metamodel estimated from a specific design, we present several procedures.

    A restless bandit approach for capacitated condition based maintenance scheduling

    This paper considers the maintenance scheduling problem of multiple non-identical machines deteriorating over time. The deterioration gradually decreases a machine’s performance, which results in revenue losses due to lower output quality. The maintenance cost is dependent on the degradation state, and the number of maintenance activities that can be carried out simultaneously is restricted by the number of maintenance workers. Our main goal is to propose a heuristic with low complexity that consistently produces solutions close to the optimal strategy for problems of real size. We cast the problem as a restless bandit problem and propose an index-based heuristic (Whittle’s index policy) which can be computed efficiently. We also provide a lower bound that can be computed by linear programming. We numerically compare the performance of the index heuristic with alternative policies. In addition to achieving superior performance over failure-based and threshold policies, Whittle’s policy numerically converges to our lower bound when the number of machines is moderately high and/or the maintenance workload is high.
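    Once a Whittle index has been computed per machine, applying the policy under a worker-capacity constraint is simple: maintain the machines with the highest indices. The sketch below shows only that dispatch step; the index values are assumed precomputed and illustrative, not derived from the paper's degradation model.

```python
# Index-policy dispatch under a capacity constraint: each period, maintain
# the num_workers machines with the highest (precomputed) Whittle indices.

def index_policy(indices, num_workers):
    """Return the set of machines to maintain this period."""
    ranked = sorted(indices, key=indices.get, reverse=True)
    return set(ranked[:num_workers])

# Assumed per-machine index values (higher = more urgent to maintain);
# in the paper these would come from the machine's degradation state.
whittle_index = {"m1": 0.3, "m2": 1.7, "m3": 0.9, "m4": 2.4}
print(index_policy(whittle_index, num_workers=2))
```

    The appeal of this decomposition is that each machine's index depends only on its own state, so the per-period decision scales linearly in the number of machines plus a sort.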

    Model-Based Approach to the Utilization of Heterogeneous Non-Overlapping Data in the Optimization of Complex Airport Systems

    Simulation and optimization have been widely used in air transportation, particularly when it comes to determining how flight operations might evolve. However, with regard to passengers and the services provided to them, this is not the case, in large part because the data required for such analysis is harder to collect, requiring the timely use of surveys and significant human labor. The ubiquity of always-connected, inexpensive smart devices has made it possible to continuously collect passenger information for passenger-centric solutions such as the automatic mitigation of passenger traffic. Using these devices, it is possible to capture dwell times, transit times, and delays directly from the customers. The data, however, are often sparse and heterogeneous, both spatially and temporally. For instance, the observations come at different times and have different levels of accuracy depending on the location, making it challenging to develop a precise network model of airport operations. The objective of this research is to provide online methods to sequentially correct the estimates of the dynamics of a system of queues despite noisy, quickly changing, and incomplete information. First, a sequential change point detection scheme based on a generalized likelihood ratio test is developed to detect a change in the dynamics of a single queue by using a combination of waiting times, time spent in queue, and queue-length measurements. A trade-off is made between the accuracy, speed, and cost of the tests, and the value of utilizing the observations jointly or separately. The contribution is a robust detection methodology that quickly detects a change in queue dynamics from correlated measurements. In the second part of the work, a model-based estimation tool is developed to update the service rate distribution for a single queue from sparse and noisy airport operations data. Model Reference Adaptive Sampling is used in-the-loop to update a generalized gamma distribution for the service rates within a simulation of the queue at an airport’s immigration center. The contribution is a model predictive tool to optimize the service rates based on waiting time information. The two frameworks allow for the analysis of heterogeneous passenger data sources to enable the tactical mitigation of airport passenger traffic delays.
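    The sequential change-point detection described above can be sketched in its simplest form: a generalized likelihood ratio (GLR) test for a mean shift in a stream of unit-variance Gaussian observations. This is a hedged illustration of the GLR mechanism only; the threshold, the unit-variance assumption, and the synthetic data are not calibrated to the dissertation's queueing setting.

```python
# Sequential GLR test for a mean shift: maximise, over candidate change
# points k, the log-likelihood ratio of "mean shifted after k" versus
# "no change" (for unit-variance Gaussian data this reduces to the
# squared suffix sum scaled by the suffix length).

def glr_statistic(xs):
    """max over k of (sum of xs[k:])^2 / (2 * (n - k)) for unit variance."""
    n = len(xs)
    best = 0.0
    suffix_sum = 0.0
    for k in range(n - 1, -1, -1):
        suffix_sum += xs[k]
        best = max(best, (suffix_sum ** 2) / (2.0 * (n - k)))
    return best

def detect_change(stream, threshold):
    """Return the first index at which the GLR statistic crosses threshold."""
    xs = []
    for i, x in enumerate(stream):
        xs.append(x)
        if glr_statistic(xs) > threshold:
            return i
    return None

# Zero-mean noise, then a jump to mean 2 at index 10 (synthetic example).
data = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1] + [2.0] * 10
print(detect_change(data, threshold=8.0))
```

    In the dissertation's setting the same idea is applied jointly to correlated waiting-time and queue-length measurements, which is where the stated accuracy/speed/cost trade-off arises.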