
    Weighted ensemble: Recent mathematical developments

    The weighted ensemble (WE) method, an enhanced sampling approach based on periodically replicating and pruning trajectories in a set of parallel simulations, has grown increasingly popular for computational biochemistry problems, due in part to improved hardware and the availability of modern software. Algorithmic and analytical improvements have also played an important role, and progress has accelerated in recent years. Here, we discuss and elaborate on the WE method from a mathematical perspective, highlighting recent results which have begun to yield greater computational efficiency. Notable among these innovations are variance reduction approaches that optimize trajectory management for systems of arbitrary dimensionality. Comment: 12 pages, 10 figures.

    Optimizing Liquidity Usage and Settlement Speed in Payment Systems

    The operating speed of a payment system depends on the stage of technology of the system's communication and information processing environment. Frequent intraday processing cycles and real-time processing have introduced new means of speeding up the processing and settlement of payments. In a real-time environment banks face new challenges in liquidity management. They need to plan for intraday as well as interday fluctuations in liquidity. By employing various types of hybrid settlement structures, banks may be able to even out intraday fluctuations in liquidity demand. The aim of this study is to develop a framework for analysing fluctuations in liquidity demand and assessing the efficiency of different settlement systems in terms of speed and liquidity needs. In this study we quantify the relationship between liquidity usage and settlement delay in net settlement systems, real-time gross settlement systems and hybrid systems, as well as the combined costs of liquidity and delay in these systems. We analyse ways of reducing costs via optimization features such as netting of queues, offsetting of payments and splitting of payments. We employ a payment system simulator developed at the Bank of Finland, which enables us to evaluate the impact of changes in system parameters and thus to compare the effects of alternative settlement schemes with given payment flows. The data used covers 100 days of actual payments processed in the Finnish BoF-RTGS system. Our major findings relate to risk reduction via real-time settlement, effects of optimization routines in hybrid systems, and the effects of liquidity costs on banks' choice of settlement speed. A system where settlement takes place continuously in real-time and with queuing features is more efficient from the perspective of liquidity and risks than a net settlement system with batch processing. Real-time processing enables a reduction in payment delay and risks without necessarily increasing liquidity needs. 
Participants will operate under immediate payment/settlement if liquidity costs are low enough relative to delay costs and if the liquidity arrangements are sufficiently flexible. The central bank can therefore support risk-reduction and payment-speed objectives by providing low-cost intraday liquidity as well as more flexible ways for participants to add liquidity to or withdraw it from the system. Optimizing and gridlock-solving features were found to be effective at very low levels of liquidity. The efficiency of the different optimization methods for settlement systems is affected by the actual flow of payments processed. Gains from netting schemes with multiple daily netting cycles were found to be somewhat more limited.
Keywords: payment systems; clearing/settlement; liquidity; efficiency; gridlock
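The liquidity/delay trade-off discussed above can be illustrated with a deliberately small sketch (the two-bank setup and payment data below are made up for illustration; the Bank of Finland simulator models far richer settlement structures). It compares the peak intraday liquidity a bank needs under immediate gross settlement with the smaller liquidity need, but longer delay, of a single end-of-day netting cycle:

```python
# Each payment is (time, payer, amount) between two hypothetical banks A and B.
payments = [(1, "A", 50), (2, "B", 30), (3, "A", 20), (4, "B", 60), (5, "A", 10)]

def rtgs_liquidity(payments):
    """Immediate gross settlement: each bank needs enough intraday
    liquidity to cover its worst cumulative net outflow; delay is zero."""
    bal = {"A": 0, "B": 0}
    peak = {"A": 0, "B": 0}
    for _, payer, amt in payments:
        payee = "B" if payer == "A" else "A"
        bal[payer] -= amt
        bal[payee] += amt
        for b in bal:                    # track the worst (most negative) balance
            peak[b] = min(peak[b], bal[b])
    return {b: -peak[b] for b in peak}

def netting_liquidity(payments, cycle_end):
    """Single end-of-day netting cycle: only net positions settle, so the
    liquidity need is the absolute net debit position, but every payment
    waits until the cycle closes."""
    net = {"A": 0, "B": 0}
    delay = 0
    for t, payer, amt in payments:
        payee = "B" if payer == "A" else "A"
        net[payer] -= amt
        net[payee] += amt
        delay += cycle_end - t           # each payment waits until settlement
    liquidity = {b: max(0, -net[b]) for b in net}
    return liquidity, delay / len(payments)

print(rtgs_liquidity(payments))              # → {'A': 50, 'B': 20}
print(netting_liquidity(payments, 5))        # → ({'A': 0, 'B': 10}, 2.0)
```

On this toy flow, netting cuts bank A's liquidity need from 50 to 0 at the cost of an average two-period settlement delay, which is exactly the trade-off the study quantifies on real BoF-RTGS data.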

    Stability in the weighted ensemble method

    2022 Spring. Includes bibliographical references.
In molecular dynamics, a quantity of interest is the mean first passage time, or average transition time, for a molecule to move from a region A to a different region B. Significant potential barriers often exist between A and B, making the transition a rare event, i.e., one that is highly improbable on any given trajectory. Correspondingly, the mean first passage time from A to B is immense, so using direct Markov chain Monte Carlo techniques to estimate it is computationally infeasible due to the protracted simulations required. Instead, the Markov chain modeling the underlying molecular dynamics is simulated to steady state and the steady-state flux from A into B is estimated. Through the Hill relation, the mean first passage time is then obtained as the reciprocal of the estimated steady-state flux. Estimating the steady-state flux into B is still a rare-event problem, but the difficulty has shifted from lengthy simulation times to a substantial variance on the desired estimate. Therefore, an importance sampling or importance splitting technique that emphasizes reaching B and reduces estimator variance must be used. Weighted ensemble is one importance sampling Markov chain Monte Carlo method often used to estimate mean first passage times in molecular dynamics. Broadly, weighted ensemble simulates a collection of weighted Markov chain trajectories. Periodically, certain trajectories are copied while others are removed, to encourage a transition from A to B, and the trajectory weights are adjusted accordingly. By time-averaging the weighted average of these trajectories, weighted ensemble estimates averages with respect to the Markov chain's steady-state distribution.
We focus on the use of weighted ensemble for estimating the mean first passage time from A to B, via the steady-state flux from A into B, for a Markov chain that, upon reaching B, is restarted in A according to an initial, or recycle, distribution. First, we give a mathematical description of the weighted ensemble algorithm and establish an unbiasedness property, an ergodic property, and a variance formula. The unbiasedness property states that the weighted ensemble average over many Markov chain trajectories produces an unbiased estimate under the underlying Markov chain law. The ergodic property states that the weighted ensemble estimator converges almost surely to the desired steady-state average. The variance formula gives the exact variance of the weighted ensemble estimator. Next, we analyze the impact of the initial, or recycle, distribution in A on the bias and variance of the weighted ensemble estimate and compare against adaptive multilevel splitting, an importance splitting Markov chain Monte Carlo method also used in molecular dynamics for estimating mean first passage times. Prior work has shown that adaptive multilevel splitting requires a precise importance sampling of the initial, or recycle, distribution to maintain reasonable variance bounds on its estimator. We show that the weighted ensemble estimator is less sensitive to the initial distribution: importance sampling the initial distribution frequently does not reduce the variance of the weighted ensemble estimator significantly. For a generic three-state Markov chain and one-dimensional overdamped Langevin dynamics, we develop specific conditions which must be satisfied for initial-distribution importance sampling to provide a significant variance reduction on the weighted ensemble estimator.
Finally, for bias, we develop conditions on A such that the mean first passage time from A to B is stable with respect to changes in the initial distribution; that is, a perturbation of the initial distribution produces an insignificant change in the mean first passage time. The conditions on A are verified with one-dimensional overdamped Langevin dynamics and an example is provided. Furthermore, when the mean first passage time is unstable, we develop bounds, for one-dimensional overdamped Langevin dynamics, on the change in the mean first passage time and show the tightness of the bounds with numerical examples.
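As a hedged illustration of the algorithm this abstract describes, the sketch below runs weighted ensemble on a hypothetical three-state chain (A = 0, an intermediate state 1, B = 2) with recycling into A, and recovers the mean first passage time from the steady-state flux via the Hill relation. The transition probabilities are invented, and systematic resampling within each bin stands in for the thesis's split/merge bookkeeping:

```python
import random

# Illustrative 3-state chain; probabilities are made up so that A -> B is rare.
# On hitting B, a walker's weight is counted as flux and the walker is recycled to A.
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.8), (1, 0.19), (2, 0.01)]}

def step(state):
    """Advance one walker by one Markov-chain step."""
    r, acc = random.random(), 0.0
    for s, p in P[state]:
        acc += p
        if r < acc:
            return s
    return P[state][-1][0]

def weighted_ensemble(n_steps=6000, burn_in=1500, per_bin=50, seed=1):
    random.seed(seed)
    walkers = [(0, 1.0 / per_bin) for _ in range(per_bin)]  # (state, weight)
    flux_sum, flux_steps = 0.0, 0
    for t in range(n_steps):
        advanced, hit = [], 0.0
        for s, w in walkers:
            s = step(s)
            if s == 2:                  # reached B: record weight as flux,
                hit += w                # then recycle the walker into A
                s = 0
            advanced.append((s, w))
        if t >= burn_in:                # discard the initial transient
            flux_sum += hit
            flux_steps += 1
        # Resampling: in each occupied bin (= state), replace the walkers by
        # per_bin equal-weight copies via systematic resampling, conserving
        # the bin's total weight (a stand-in for WE split/merge).
        walkers = []
        for b in (0, 1):
            group = [(s, w) for s, w in advanced if s == b]
            if not group:
                continue
            wtot = sum(w for _, w in group)
            step_w = wtot / per_bin
            u = random.random() * step_w
            cum, idx = group[0][1], 0
            for k in range(per_bin):
                pos = u + k * step_w
                while pos > cum and idx < len(group) - 1:
                    idx += 1
                    cum += group[idx][1]
                walkers.append((b, step_w))
    flux = flux_sum / flux_steps        # steady-state flux into B
    return 1.0 / flux                   # Hill relation: MFPT = 1 / flux

print(weighted_ensemble())  # exact MFPT for this chain is 910 steps
```

For this chain the first-passage equations give T_A = 10 + T_1 and 0.01 T_1 = 9, so the exact mean first passage time is 910 steps; the weighted ensemble estimate should land near that value while keeping roughly 50 walkers per state at all times.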

    Supply chain management of blood products: a literature review.

    This paper presents a review of the literature on inventory and supply chain management of blood products. First, we identify different perspectives on approaches to classifying the existing material. Each perspective is presented as a table in which the classification is displayed. The classification choices are exemplified through the citation of key references or by expounding the features of the perspective. The main contribution of this review is to facilitate the tracing of published work in relevant fields of interest, to identify trends, and to indicate which areas should be subject to future research.
Keywords: OR in health services; supply chain management; inventory; blood products; literature review

    Importance Sampling and its Optimality for Stochastic Simulation Models

    We consider the problem of estimating an expected outcome from a stochastic simulation model. Our goal is to develop a theoretical framework on importance sampling for such estimation. By investigating the variance of an importance sampling estimator, we propose a two-stage procedure that involves a regression stage and a sampling stage to construct the final estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We analyze the variance reduction rates and derive oracle properties of both methods. We evaluate the empirical performances of the methods using two numerical examples and a case study on wind turbine reliability evaluation. Comment: 37 pages, 6 figures, 2 tables. Accepted to the Electronic Journal of Statistics.
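The variance-reduction idea behind the paper can be illustrated on a textbook rare-event example (the regression stage is omitted here, and the shifted-normal proposal is a standard choice, not taken from the paper). We estimate p = P(X > 4) for X ~ N(0, 1) by sampling from the proposal N(4, 1) and reweighting each hit by the likelihood ratio phi(y)/phi(y - 4) = exp(-4y + 8):

```python
import math
import random

def crude_mc(n, seed=0):
    """Plain Monte Carlo: almost never observes the event X > 4."""
    rng = random.Random(seed)
    return sum(rng.gauss(0, 1) > 4 for _ in range(n)) / n

def importance_sampling(n, shift=4.0, seed=0):
    """Sample from N(shift, 1) so the event is common, then reweight
    by the likelihood ratio exp(-shift*y + shift**2/2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(shift, 1)
        if y > 4:
            total += math.exp(-shift * y + shift * shift / 2)
    return total / n

p_true = 0.5 * math.erfc(4 / math.sqrt(2))   # exact tail probability, ~3.17e-5
print(p_true, crude_mc(100_000), importance_sampling(100_000))
```

With 100,000 samples the crude estimator typically sees only a handful of hits, while the importance sampling estimator achieves sub-percent relative error; the paper's two-stage procedure adds a regression stage on top of this sampling stage to drive the variance down further.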

    Strategies for dynamic appointment making by container terminals

    We consider a container terminal that has to make appointments with barges dynamically, in real time, and partly automatically. The challenge for the terminal is to make appointments with only limited knowledge about future arriving barges, and in view of uncertainty and disturbances such as uncertain arrival and handling times, as well as cancellations and no-shows. We illustrate this problem using an innovative implementation project currently running in the Port of Rotterdam, which aims to align barge rotations and terminal quay schedules by means of a multi-agent system. In this paper, we take the perspective of a single terminal that will participate in this planning system and focus on the decision-making capabilities of its intelligent agent. Specifically, we ask how the terminal operator can optimize, on an operational level, the utilization of its quay resources while making reliable appointments with barges, i.e., with a guaranteed departure time. We explore two approaches: (i) an analytical approach based on the value of having certain intervals within the schedule and (ii) an approach based on sources of flexibility that are naturally available to the terminal. We use simulation to gain insight into the benefits of these approaches. We conclude that a major increase in utilization degree could be achieved only by deploying the sources of flexibility, without harming the waiting time of barges too much.