
    Monte Carlo methods for the estimation of value-at-risk and related risk measures

    Nested Monte Carlo is a computationally expensive exercise. The main contribution of this thesis is the formulation of efficient algorithms for performing nested Monte Carlo estimation of Value-at-Risk and Expected-Tail-Loss. The algorithms are designed to take advantage of multiprocessing computer architectures by performing computational tasks in parallel. Through numerical experiments we show that our algorithms can improve efficiency in the sense of reducing mean-squared error.
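    As a rough illustration of the general technique (not the thesis's parallel algorithms), the sketch below shows a plain, single-threaded nested Monte Carlo estimate of Value-at-Risk and Expected-Tail-Loss; the loss model, sample sizes, and confidence level are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_mc_risk(n_outer=2000, n_inner=200, alpha=0.99):
    """Plain nested Monte Carlo estimate of Value-at-Risk and Expected-Tail-Loss.

    Outer loop: sample risk-factor scenarios at the horizon.
    Inner loop: re-value the portfolio conditionally on each scenario.
    The loss model below is an illustrative placeholder.
    """
    losses = np.empty(n_outer)
    for i in range(n_outer):
        scenario = rng.normal()                                     # outer-stage scenario
        inner = rng.normal(loc=scenario, scale=0.5, size=n_inner)   # inner re-valuation
        losses[i] = inner.mean()                                    # conditional expected loss
    var = np.quantile(losses, alpha)        # Value-at-Risk: alpha-quantile of losses
    etl = losses[losses >= var].mean()      # Expected-Tail-Loss: mean loss beyond VaR
    return var, etl

print(nested_mc_risk())
```

    Since the inner simulations for different scenarios are independent, the outer loop is the natural place to exploit a multiprocessing architecture.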

    A new class of highly efficient exact stochastic simulation algorithms for chemical reaction networks

    We introduce an alternative formulation of the exact stochastic simulation algorithm (SSA) for sampling trajectories of the chemical master equation for a well-stirred system of coupled chemical reactions. Our formulation is based on factored-out, partial reaction propensities. This novel exact SSA, called the partial propensity direct method (PDM), is highly efficient and has a computational cost that scales at most linearly with the number of chemical species, irrespective of the degree of coupling of the reaction network. In addition, we propose a sorting variant, SPDM, which is especially efficient for multiscale reaction networks. Comment: 23 pages, 3 figures, 4 tables; accepted by J. Chem. Phys.
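    For context, the sketch below is the standard direct-method SSA (Gillespie), not the partial-propensity PDM/SPDM variants introduced in the paper; the toy two-species network and rate constants are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def gillespie_direct(x0, stoich, propensity, t_end):
    """Standard direct-method SSA: repeatedly draw the time to the next
    reaction and which reaction fires, then apply its state-change vector.

    x0         : initial copy numbers, shape (n_species,)
    stoich     : state-change vectors, shape (n_reactions, n_species)
    propensity : function mapping a state to an array of reaction propensities
    """
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensity(x)
        a0 = a.sum()
        if a0 <= 0:                         # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)      # exponential waiting time
        j = rng.choice(len(a), p=a / a0)    # reaction index, proportional to propensity
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy reversible isomerisation A <-> B with illustrative rate constants.
stoich = np.array([[-1, 1], [1, -1]])
prop = lambda x: np.array([0.5 * x[0], 0.3 * x[1]])
times, states = gillespie_direct([100, 0], stoich, prop, t_end=10.0)
```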

    A Combined Stochastic and Greedy Hybrid Estimation Capability for Concurrent Hybrid Models with Autonomous Mode Transitions

    Robotic and embedded systems have become increasingly pervasive in applications ranging from space probes and life support systems to robot assistants. In order to act robustly in the physical world, robotic systems must be able to detect changes in operational mode, such as faults, whose symptoms manifest themselves only in the continuous state. In such systems, the state is observed indirectly, and must therefore be estimated in a robust, memory-efficient manner from noisy observations. Probabilistic hybrid discrete/continuous models, such as Concurrent Probabilistic Hybrid Automata (CPHA), are convenient modeling tools for such systems. In CPHA, the hidden state is represented with discrete and continuous state variables that evolve probabilistically. In this paper, we present a novel method for estimating the hybrid state of CPHA that achieves robustness by balancing greedy and stochastic search. The key insight is that stochastic and greedy search methods, taken together, are often particularly effective in practice. To accomplish this, we first develop an efficient stochastic sampling approach for CPHA based on Rao-Blackwellised Particle Filtering. We then propose a strategy for mixing stochastic and greedy search. The resulting method is able to handle three particularly challenging aspects of real-world systems, namely that they 1) exhibit autonomous mode transitions, 2) consist of a large collection of concurrently operating components, and 3) are non-linear. Autonomous mode transitions, that is, discrete transitions that depend on the continuous state, are particularly challenging to address, since they couple the discrete and continuous state evolution tightly. In this paper we extend the class of autonomous mode transitions that can be handled to arbitrary piecewise polynomial transition distributions. We perform an empirical comparison of the greedy and stochastic approaches to hybrid estimation, and then demonstrate the robustness of the mixed method incorporated with our HME (Hybrid Mode Estimation) capability. We show that this robustness comes at only a small performance penalty.
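    A minimal sketch of the Rao-Blackwellised particle filtering idea the paper builds on, assuming a scalar two-mode linear-Gaussian system with made-up parameters; it marginalises the continuous state analytically per particle and omits resampling, autonomous mode transitions, and the greedy/stochastic mixing described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-mode hybrid system with scalar linear-Gaussian dynamics per mode.
MODES = {0: dict(a=1.0, q=0.01), 1: dict(a=0.5, q=0.10)}   # x' = a*x + N(0, q)
TRANS = np.array([[0.95, 0.05], [0.10, 0.90]])             # discrete mode transition matrix
R = 0.05                                                    # observation noise variance

def rbpf_step(particles, y):
    """One Rao-Blackwellised particle filter step: the discrete mode is sampled,
    while the continuous state is handled analytically by a per-particle Kalman
    predict/update. Each particle is a tuple (mode, mean, var, weight)."""
    updated = []
    for mode, m, v, w in particles:
        mode = rng.choice(2, p=TRANS[mode])      # sample the discrete transition
        a, q = MODES[mode]["a"], MODES[mode]["q"]
        m_pred, v_pred = a * m, a * a * v + q    # Kalman predict
        s = v_pred + R                           # innovation variance
        w *= np.exp(-0.5 * (y - m_pred) ** 2 / s) / np.sqrt(2 * np.pi * s)
        k = v_pred / s                           # Kalman gain
        updated.append((mode, m_pred + k * (y - m_pred), (1 - k) * v_pred, w))
    w_sum = sum(p[3] for p in updated)
    return [(mo, m, v, w / w_sum) for mo, m, v, w in updated]

particles = [(0, 0.0, 1.0, 1.0 / 100) for _ in range(100)]
for y in [0.10, 0.15, 0.05, 0.40]:               # made-up observations
    particles = rbpf_step(particles, y)
```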

    Adaptive global optimization algorithms

    Global optimization is concerned with finding the minimum value of a function where many local minima may exist. The development of a global optimization algorithm may involve using information about the target function (e.g., differentiability) and functions based on statistical models to improve on the worst-case time complexity and expected error of similar deterministic algorithms. Recent algorithms are investigated, new ones are proposed, and their performance is analyzed. Minimum, maximum and average case error bounds for the algorithms presented are derived. A software architecture implemented with MATLAB and Java is presented and experimental results for the algorithms are displayed. The graphical capabilities and function-rich MATLAB environment are combined with the object-oriented features of Java, hosted on the computer system described in this paper, to provide a fast, powerful test environment for generating experimental results. To do this, matlabcontrol, a third-party set of procedures that allows a Java program to call MATLAB functions, is used to access a function such as voronoi() or to produce graphical results. Additionally, the Java implementation can be called from, and return values to, the MATLAB environment. The data can then be used as input to MATLAB's graphing or other functions. The software test environment provides algorithm performance information, such as whether more iterations or replications of a proposed algorithm would be expected to provide a better result. It is anticipated that the functionality provided by the framework would be used for initial development and analysis and subsequently removed and replaced with optimized (in the computer-efficiency sense) functions for deployment.
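    As a simple point of reference (not one of the adaptive, statistical-model-based algorithms analysed here), the sketch below shows a multistart strategy whose repeated replications are the kind of experiment such a test environment evaluates; the test function and bounds are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def multistart_minimize(f, bounds, n_starts=20):
    """Run a local optimizer from random starting points and keep the best
    result; more starts (replications) generally improve the chance of
    locating the global minimum."""
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)
        res = minimize(f, x0, method="L-BFGS-B", bounds=list(zip(lo, hi)))
        if best is None or res.fun < best.fun:
            best = res
    return best

# Arbitrary multimodal test function on [-10, 10]^2.
f = lambda x: np.sin(3 * x[0]) * np.cos(3 * x[1]) + 0.1 * (x[0] ** 2 + x[1] ** 2)
print(multistart_minimize(f, bounds=[(-10, 10), (-10, 10)]).x)
```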

    Replicative Use of an External Model in Simulation Variance Reduction

    The use of control variates is a well-known variance reduction technique for discrete event simulation experiments. Currently, internal control variates are used almost exclusively by practitioners and researchers when using control variates. The primary objective of this study is to explore the variance reduction achieved by the replicative use of an external, analytical model to generate control variates. Performance of the analytical control variates is compared to the performance of typical internal and external control variates for both an open and a closed queueing network. The performance measures used are confidence interval width reduction, realized coverage, and estimated mean squared error. Results of this study indicate that analytical control variates achieve confidence interval width reduction comparable to that of internal and external control variates. However, the analytical control variates exhibit greater levels of estimated bias. Possible causes and remedies for the observed bias are discussed, and areas for future research and use of analytical control variates conclude the study.
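    A minimal sketch of the generic control-variate estimator (not the study's replicative external/analytical scheme), using a toy output correlated with a control whose mean is known exactly; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def control_variate_estimate(y, c, c_mean):
    """Classic control-variate estimator: Y_cv = Y - b * (C - E[C]), with the
    coefficient b estimated to minimise variance.

    y      : simulation outputs
    c      : control observations correlated with y
    c_mean : known (e.g. analytical) expectation of the control
    """
    cov = np.cov(y, c)
    b = cov[0, 1] / cov[1, 1]                     # variance-minimising coefficient
    adjusted = y - b * (c - c_mean)
    return adjusted.mean(), adjusted.std(ddof=1) / np.sqrt(len(y))

# Toy example: the control has known mean 1.0 and is strongly correlated with y.
c = rng.exponential(1.0, size=10_000)
y = 2.0 * c + rng.normal(0.0, 0.5, size=10_000)
naive = (y.mean(), y.std(ddof=1) / np.sqrt(len(y)))
print(naive, control_variate_estimate(y, c, c_mean=1.0))   # CV standard error is far smaller
```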

    Approximate Data Structures with Applications

    In this paper we introduce the notion of approximate data structures, in which a small amount of error is tolerated in the output. Approximate data structures trade error of approximation for faster operation, leading to theoretical and practical speedups for a wide variety of algorithms. We give approximate variants of the van Emde Boas data structure, which support the same dynamic operations as the standard van Emde Boas data structure [28, 20], except that answers to queries are approximate. The variants support all operations in constant time provided the error of approximation is 1/polylog(n), and in O(log log n) time provided the error is 1/polynomial(n), for n elements in the data structure. We consider the tolerance of prototypical algorithms to approximate data structures. We study in particular Prim's minimum spanning tree algorithm, Dijkstra's single-source shortest paths algorithm, and an on-line variant of Graham's convex hull algorithm. To obtain output which approximates the desired output with the error of approximation tending to zero, Prim's algorithm requires only linear time, Dijkstra's algorithm requires O(m log log n) time, and the on-line variant of Graham's algorithm requires constant amortized time per operation.
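    To illustrate the error-for-speed trade-off (this is not the approximate van Emde Boas structure from the paper), the toy priority queue below rounds positive keys into multiplicative buckets of width (1 + eps), so extract_min may return an item whose key exceeds the true minimum by at most a (1 + eps) factor.

```python
import math
from collections import deque

class ApproxMinQueue:
    """Toy approximate priority queue: positive keys are rounded into buckets of
    relative width (1 + eps), so extract_min returns an item whose key is within
    a (1 + eps) factor of the true minimum."""

    def __init__(self, eps=0.01):
        self.eps = eps
        self.buckets = {}                    # bucket index -> deque of (key, item)

    def _bucket(self, key):
        return math.floor(math.log(key, 1.0 + self.eps))

    def insert(self, key, item):
        self.buckets.setdefault(self._bucket(key), deque()).append((key, item))

    def extract_min(self):
        b = min(self.buckets)                # smallest non-empty bucket
        key, item = self.buckets[b].popleft()
        if not self.buckets[b]:
            del self.buckets[b]
        return key, item

q = ApproxMinQueue(eps=0.05)
for k in (3.0, 1.0, 1.02, 7.5):
    q.insert(k, f"item-{k}")
print(q.extract_min())                       # a key within ~5% of the true minimum
```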

    Parallel Weighted Random Sampling

    Data structures for efficient sampling from a set of weighted items are an important building block of many applications. However, few parallel solutions are known. We close many of these gaps both for shared-memory and distributed-memory machines. We give efficient, fast, and practicable algorithms for sampling single items, k items with or without replacement, permutations, subsets, and reservoirs. We also give improved sequential algorithms for alias table construction and for sampling with replacement. Experiments on shared-memory parallel machines with up to 158 threads show near-linear speedups both for construction and queries.
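    A sequential sketch of alias-table construction and O(1)-per-sample queries (the paper's contribution is the parallel versions, which are not reproduced here); the weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def build_alias_table(weights):
    """Walker/Vose alias table: each of the n buckets holds at most two items,
    so a sample needs one uniform bucket choice plus one biased coin flip."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    prob = w * n / w.sum()                   # scaled so the average bucket load is 1
    alias = np.zeros(n, dtype=int)
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                         # the large item tops up the small bucket
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    prob[small + large] = 1.0                # leftovers are full buckets (up to rounding)
    return prob, alias

def alias_sample(prob, alias, k):
    """Draw k indices with replacement in O(1) time per sample."""
    cols = rng.integers(len(prob), size=k)
    return np.where(rng.random(k) < prob[cols], cols, alias[cols])

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
print(np.bincount(alias_sample(prob, alias, 100_000)) / 100_000)   # approximately the input weights
```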