1,083 research outputs found

    Parallel Weighted Random Sampling

    Data structures for efficient sampling from a set of weighted items are an important building block of many applications. However, few parallel solutions are known. We close many of these gaps both for shared-memory and distributed-memory machines. We give efficient, fast, and practicable algorithms for sampling single items, k items with/without replacement, permutations, subsets, and reservoirs. We also give improved sequential algorithms for alias table construction and for sampling with replacement. Experiments on shared-memory parallel machines with up to 158 threads show near linear speedups both for construction and queries.
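    For reference, the alias tables mentioned above admit a compact sequential construction (Vose's variant of Walker's method); the Python sketch below shows only this textbook baseline, not the paper's parallel construction, and the function names are ours:

```python
import random

def build_alias_table(weights):
    """Vose's alias method: O(n) construction, O(1) sampling.
    Each bucket i holds a probability prob[i] and a fallback item alias[i]."""
    n = len(weights)
    total = sum(weights)
    scaled = [w * n / total for w in weights]  # average bucket weight is 1
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, w in enumerate(scaled) if w < 1.0]
    large = [i for i, w in enumerate(scaled) if w >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]           # move leftover mass of l
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # leftovers have weight 1 up to rounding
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """Draw one index in O(1): pick a bucket, then flip its biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
print(sample(prob, alias))  # returns 3 with probability 0.4, etc.
```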

    Building Loss Models

    This paper is intended as a guide to building insurance risk (loss) models. A typical model for insurance risk, the so-called collective risk model, treats the aggregate loss as having a compound distribution with two main components: one characterizing the arrival of claims and another describing the severity (or size) of loss resulting from the occurrence of a claim. In this paper we first present efficient simulation algorithms for several classes of claim arrival processes. Then we review a collection of loss distributions and present methods that can be used to assess the goodness-of-fit of the claim size distribution. The collective risk model is often used in health insurance and in general insurance, whenever the main risk components are the number of insurance claims and the amount of the claims. It can also be used for modeling other non-insurance product risks, such as credit and operational risk.
    Keywords: Insurance risk model; Loss distribution; Claim arrival process; Poisson process; Renewal process; Random variable generation; Goodness-of-fit testing
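    As an illustration of the collective risk model described above, here is a minimal Monte Carlo sketch in Python; the Poisson rate lam and the lognormal severity parameters mu and sigma are illustrative assumptions, and the paper covers more general claim-arrival processes and severity distributions:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's product-of-uniforms Poisson sampler (adequate for small lam).
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_aggregate_loss(lam, mu, sigma, n_sims=100_000, seed=1):
    """Collective risk model S = X_1 + ... + X_N: N ~ Poisson(lam) claims,
    X_i ~ LogNormal(mu, sigma) claim sizes. Returns n_sims samples of S."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_sims):
        n_claims = poisson(rng, lam)
        samples.append(sum(rng.lognormvariate(mu, sigma) for _ in range(n_claims)))
    return samples

losses = simulate_aggregate_loss(lam=5.0, mu=0.0, sigma=1.0)
print(sum(losses) / len(losses))  # sanity check: E[S] = lam * e^(sigma^2/2) ≈ 8.24
```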

    Dynamic Generation of Discrete Random Variates

    The original publication is available at www.springerlink.com.
    We present and analyze efficient new algorithms for generating a random variate distributed according to a dynamically changing set of N weights. The base version of each algorithm generates the discrete random variate in O(log* N) expected time and updates a weight in O(2^(log* N)) expected time in the worst case. We then show how to reduce the update time to O(log* N) amortized expected time. We finally show how to apply our techniques to a lookup-table technique in order to obtain expected constant time in the worst case for generation and update. We give parallel algorithms for parallel generation and update having optimal processor-time product. Besides the usual application in computer simulation, our method can be used to perform constant-time prediction in prefetching applications. We also apply our techniques to obtain an efficient dynamic algorithm for maintaining an approximate heap of N elements, in which each query is required to return an element whose value is within an ε multiplicative factor of the maximal element value. For ε = 1/polylog(N), each query, insertion, or deletion takes O(log log log N) time.
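    For context, a textbook baseline for the problem this abstract addresses is a Fenwick (binary indexed) tree, which gives O(log N) generation and O(log N) updates; the sketch below (class name ours) illustrates only the dynamic-weights setting, not the paper's much stronger O(log* N) algorithms:

```python
import random

class DynamicSampler:
    """Sample from a dynamically changing set of N weights via a Fenwick tree."""

    def __init__(self, weights):
        self.n = len(weights)
        self.weights = list(weights)
        self.tree = [0.0] * (self.n + 1)
        for i, w in enumerate(self.weights):
            self._add(i + 1, w)

    def _add(self, i, delta):
        # Fenwick point update: add delta at 1-based position i.
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def _prefix(self, i):
        # Sum of the weights of items 1..i (1-based).
        s = 0.0
        while i:
            s += self.tree[i]
            i -= i & -i
        return s

    def update(self, i, w):
        # Set the weight of item i (0-based) to w, in O(log N).
        self._add(i + 1, w - self.weights[i])
        self.weights[i] = w

    def sample(self):
        # Invert the CDF: descend to the item whose cumulative-weight
        # interval contains a uniform draw, in O(log N).
        target = random.random() * self._prefix(self.n)
        pos, step = 0, 1 << self.n.bit_length()
        while step:
            nxt = pos + step
            if nxt <= self.n and self.tree[nxt] <= target:
                target -= self.tree[nxt]
                pos = nxt
            step >>= 1
        return pos  # 0-based index, chosen proportionally to its weight

sampler = DynamicSampler([1.0, 2.0, 3.0])
print(sampler.sample())
sampler.update(0, 10.0)  # weights may change between samples
print(sampler.sample())
```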

    Computer generation of directional data.

    by Carl Ka-fai Wong. Thesis (M.Phil.)--Chinese University of Hong Kong, 1991. Includes bibliographical references.
    Chapter 1 --- Introduction
    §1.1 --- Directional Data and Computer Simulation
    §1.2 --- Computer Simulation Techniques
    §1.3 --- Implementation and Preliminaries
    Chapter 2 --- Generating Random Points on the N-sphere
    §2.1 --- Methods
    §2.2 --- Comparison of Methods
    Chapter 3 --- Generating Variates from Non-uniform Distributions on the Circle
    §3.1 --- Introduction
    §3.2 --- Methods for Circular Distributions
    Chapter 4 --- Generating Variates from Non-uniform Distributions on the Sphere
    §4.1 --- Introduction
    §4.2 --- Methods for Spherical Distributions
    Chapter 5 --- Generating Variates from Non-uniform Distributions on the N-sphere
    §5.1 --- Introduction
    §5.2 --- Methods for Higher Dimensional Spherical Distributions
    Chapter 6 --- Summary and Discussion
    References
    Appendix 1
    Appendix 2
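    A standard technique for the Chapter 2 task, generating uniformly distributed points on the N-sphere, is to normalize an isotropic Gaussian vector; the minimal Python sketch below is our own illustration, not code from the thesis:

```python
import math
import random

def random_point_on_sphere(dim, rng=random):
    """Uniform point on the unit sphere in R^dim: an isotropic Gaussian
    vector has a rotation-invariant direction, so normalizing it yields
    a uniformly distributed point on the sphere."""
    v = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

print(random_point_on_sphere(3))  # a uniform random direction in 3-space
```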

    Engineering Shared-Memory Parallel Shuffling to Generate Random Permutations In-Place

    Shuffling is the process of placing elements into a random order such that any permutation occurs with equal probability. It is an important building block in virtually all scientific areas. We engineer - to the best of our knowledge, for the first time - a practically fast, parallel shuffling algorithm with O(√n log n) parallel depth that requires only poly-logarithmic auxiliary memory (with high probability). In an empirical evaluation, we compare our implementations with a number of existing solutions on various computer architectures. Our algorithms consistently achieve the highest throughput on all machines. Further, we demonstrate that the runtime of our parallel algorithm is comparable to the time that other algorithms may take to acquire the memory from the operating system to copy the input.
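    The sequential baseline that such algorithms build on is the classic Fisher-Yates shuffle, sketched below in Python; the paper's contribution, a fast parallel in-place variant, is substantially more involved:

```python
import random

def fisher_yates(a, rng=random):
    """In-place Fisher-Yates shuffle: every one of the n! permutations is
    produced with equal probability, using O(1) auxiliary memory."""
    for i in range(len(a) - 1, 0, -1):
        j = rng.randrange(i + 1)  # uniform index in [0, i]
        a[i], a[j] = a[j], a[i]
    return a

print(fisher_yates(list(range(10))))
```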