
    Class library ranlip for multivariate nonuniform random variate generation

    This paper describes the generation of nonuniform random variates from Lipschitz-continuous densities using acceptance/rejection, and the class library ranlip which implements this method. It is assumed that the required distribution has a Lipschitz-continuous density, which is given either analytically or as a black box. The algorithm builds a piecewise constant upper approximation to the density (the hat function), using a large number of its values and a subdivision of the domain into hyperrectangles. The class library ranlip provides very competitive preprocessing and generation times, and yields a small rejection constant, which is a measure of the efficiency of the generation step. It exhibits good performance for up to five variables, and provides the user with a black-box nonuniform random variate generator for a large class of distributions, in particular multimodal distributions. It will be valuable for researchers who frequently face the task of sampling from unusual distributions for which specialized random variate generators are not available.
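
    As a rough illustration of the approach sketched in the abstract, the following Python snippet (not the ranlip library itself, whose interface is C++) builds a piecewise-constant hat function on a regular grid of hyperrectangles and samples by acceptance/rejection; the grid resolution, the example density, and the Lipschitz constant below are illustrative assumptions.

```python
import numpy as np

def build_hat(f, lo, hi, cells_per_dim, lipschitz_const):
    """Tabulate a piecewise-constant upper bound (hat) of f on a regular
    grid of hyperrectangles covering the box [lo, hi]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    dim = lo.size
    edges = [np.linspace(lo[d], hi[d], cells_per_dim + 1) for d in range(dim)]
    centres = [0.5 * (e[:-1] + e[1:]) for e in edges]
    widths = (hi - lo) / cells_per_dim
    halfdiag = 0.5 * np.linalg.norm(widths)   # f(centre) + L*halfdiag bounds f on a cell
    mesh = np.meshgrid(*centres, indexing="ij")
    cell_centres = np.stack([m.ravel() for m in mesh], axis=1)
    hat = np.array([f(c) for c in cell_centres]) + lipschitz_const * halfdiag
    return cell_centres, widths, hat

def draw(f, cell_centres, widths, hat, rng):
    """Draw one variate by acceptance/rejection against the hat function."""
    weights = hat / hat.sum()                 # all cells have equal volume here
    while True:
        k = rng.choice(len(hat), p=weights)   # pick a hyperrectangle
        x = cell_centres[k] + (rng.random(cell_centres.shape[1]) - 0.5) * widths
        if rng.random() * hat[k] <= f(x):     # accept with probability f(x)/hat
            return x

rng = np.random.default_rng(0)
f = lambda x: np.exp(-np.sum((x - 0.5) ** 2) / 0.02)          # unnormalised density
cell_centres, widths, hat = build_hat(f, [0, 0], [1, 1], 32, lipschitz_const=15.0)
samples = np.array([draw(f, cell_centres, widths, hat, rng) for _ in range(1000)])
```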

    SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks into more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator. Comment: 15 pages, 4 figures, accepted for publication in Astronomy and Computing
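
    The decorator idea can be illustrated with a short, self-contained sketch; the class and method names below are invented for illustration and do not reproduce SKIRT's actual C++ interface.

```python
import numpy as np

rng = np.random.default_rng(1)

class Geometry:
    """Building block: a normalised 3D density that can return random positions."""
    def density(self, r):
        raise NotImplementedError
    def random_position(self):
        raise NotImplementedError

class GaussianSphere(Geometry):
    """Analytical toy model: isotropic Gaussian density."""
    def __init__(self, sigma=1.0):
        self.sigma = sigma
    def density(self, r):
        r = np.asarray(r, float)
        norm = (2.0 * np.pi) ** 1.5 * self.sigma ** 3
        return np.exp(-np.dot(r, r) / (2.0 * self.sigma ** 2)) / norm
    def random_position(self):
        return rng.normal(0.0, self.sigma, size=3)

class OffsetDecorator(Geometry):
    """Decorator: translate any wrapped geometry without touching its code."""
    def __init__(self, base, offset):
        self.base, self.offset = base, np.asarray(offset, float)
    def density(self, r):
        return self.base.density(np.asarray(r, float) - self.offset)
    def random_position(self):
        return self.base.random_position() + self.offset

# Decorators chain freely: here, a Gaussian blob shifted along the x axis.
model = OffsetDecorator(GaussianSphere(sigma=0.5), offset=[2.0, 0.0, 0.0])
positions = np.array([model.random_position() for _ in range(1000)])
print(positions.mean(axis=0))   # roughly [2, 0, 0]
```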

    Optimal Discrete Uniform Generation from Coin Flips, and Applications

    This article introduces an algorithm to draw random discrete uniform variables within a given range of size n from a source of random bits. The algorithm aims to be simple to implement and optimal both with regard to the number of random bits consumed and from a computational perspective, allowing for faster and more efficient Monte Carlo simulations in computational physics and biology. I also provide a detailed analysis of the number of bits that are spent per variate, and offer some extensions and applications, in particular to the optimal random generation of permutations. Comment: first draft, 22 pages, 5 figures, C code implementation of algorithm
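
    A compact sketch of drawing a uniform integer in [0, n) from fair coin flips, in the spirit of the bit-recycling scheme the abstract refers to; the function names and the dice-rolling demo are illustrative, not a transcription of the paper's pseudocode.

```python
import secrets

def flip():
    """One fair random bit."""
    return secrets.randbits(1)

def uniform(n):
    """Return a uniform integer in {0, ..., n-1} using fair coin flips."""
    v, c = 1, 0                       # v: range covered so far, c: value built so far
    while True:
        v, c = 2 * v, 2 * c + flip()  # consume one random bit
        if v >= n:
            if c < n:
                return c              # accept
            v, c = v - n, c - n       # recycle the leftover randomness

counts = [0] * 6
for _ in range(60000):
    counts[uniform(6)] += 1
print(counts)                         # roughly 10000 in each bin
```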

    Random number generation with multiple streams for sequential and parallel computing

    We provide a review of the state of the art on the design and implementation of random number generators (RNGs) for simulation, in both sequential and parallel computing environments. We focus on the need for multiple streams and substreams of random numbers, explain how they can be constructed and managed, review software libraries that offer them, and illustrate their usefulness via examples. We also review the basic quality criteria for good random number generators and their theoretical and empirical testing.
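
    As a minimal illustration of multiple independent streams for parallel simulation, the sketch below uses NumPy's seed-sequence machinery rather than the specific libraries reviewed in the paper.

```python
import numpy as np

root = np.random.SeedSequence(20240101)           # one master seed for the whole experiment
children = root.spawn(4)                          # one independent stream per worker
streams = [np.random.default_rng(s) for s in children]

# Each worker draws from its own stream; the results are reproducible and do
# not depend on how work is interleaved across processes.
partial_sums = [rng.standard_normal(1_000_000).sum() for rng in streams]
print(partial_sums)
```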

    From phenomenological modelling of anomalous diffusion through continuous-time random walks and fractional calculus to correlation analysis of complex systems

    This document contains more than one topic, but they are all connected by either physical analogy, analytic/numerical resemblance, or because one is a building block of another. The topics are anomalous diffusion, modelling of stylised facts based on an empirical random-walker diffusion model, and null-hypothesis tests in time-series data analysis reusing the same diffusion model. These topics are interrupted by an introduction of new methods for the fast production of random numbers and matrices of certain types. This interruption constitutes the entire chapter on random numbers, which is purely algorithmic and was inspired by the need for fast random numbers of special types. The sequence of chapters is chronologically meaningful in the sense that fast random numbers are needed in the first topic dealing with continuous-time random walks (CTRWs) and their connection to fractional diffusion. The contents of the last four chapters were indeed produced in this sequence, but with some temporal overlap. While the fast Monte Carlo solution of the time- and space-fractional diffusion equation is a nice application that sped up hugely with our new method, we were also interested in CTRWs as a model for certain stylised facts. Without knowing it, economists [80] reinvented what physicists had subconsciously used for decades already. It is the so-called stylised fact, for which another word can be empirical truth. A simple example: the diffusion equation gives the probability of finding a certain diffusive particle in some position at a certain time, or indicates the concentration of a dye. It is debatable whether probability is physical reality. Most importantly, it does not describe the physical system completely. Instead, the equation describes only a certain expectation value of interest, where it does not matter whether it is grains, prices or people which diffuse away. Reality is coded and “averaged” in the diffusion constant. Interpreted as an abstract microscopic particle-motion model, a CTRW can solve the time- and space-fractional diffusion equation. This type of diffusion equation mimics some types of anomalous diffusion, a name usually given to effects that cannot be explained by classic stochastic models, in particular not by the classic diffusion equation. It was recognised only recently, ca. in the mid 1990s, that the random walk model used here is the abstract particle-based counterpart of the macroscopic time- and space-fractional diffusion equation, just as the “classic” random walk with regular jumps ±∆x solves the classic diffusion equation. Both equations can be solved in a Monte Carlo fashion with many realisations of walks. Interpreted as a time-series model, the CTRW can serve as a possible null-hypothesis scenario in applications with measurements that behave similarly. It may be necessary to simulate many null-hypothesis realisations of the system to give a (probabilistic) answer to what the “outcome” is under the assumption that the particles, stocks, etc. are not correlated. Another topic is (random) correlation matrices. These are partly built on the previously introduced continuous-time random walks and are important in null-hypothesis testing, data analysis and filtering. The main objects encountered in dealing with these matrices are eigenvalues and eigenvectors. The latter are carried over to the following topic of mode analysis and its application in clustering.
The presented properties of correlation matrices of correlated measurements seem to be wasted in contemporary methods of clustering with (dis-)similarity measures from time series. Most applications of spectral clustering ignore information and are not able to distinguish between certain cases. The suggested procedure is supposed to identify and separate out clusters by using additional information coded in the eigenvectors. In addition, random matrix theory can also serve to analyse microarray data for the extraction of functional genetic groups, and it also suggests an error model. Finally, the last topic on synchronisation analysis of electroencephalogram (EEG) data resurrects the eigenvalues and eigenvectors as well as the mode analysis, but this time of matrices made of synchronisation coefficients of neurological activity.
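
    A bare-bones Monte Carlo sketch of a continuous-time random walk with heavy-tailed waiting times, of the kind referred to above; the waiting-time law and the parameters are illustrative choices, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

def ctrw_positions(n_walkers, t_max, alpha=0.7):
    """Position of each walker at time t_max.

    Waiting times follow a power law with tail exponent alpha (alpha < 1
    yields subdiffusion in the scaling limit); jumps are standard normal.
    """
    x = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        n = active.sum()
        wait = (1.0 / rng.random(n)) ** (1.0 / alpha) - 1.0   # heavy-tailed waits
        t_new = t[active] + wait
        jump = rng.standard_normal(n)
        move = t_new <= t_max               # only events before t_max count
        idx = np.flatnonzero(active)
        x[idx[move]] += jump[move]
        t[idx] = t_new
        active[idx[~move]] = False          # walker waits out the remaining time
    return x

x = ctrw_positions(10_000, t_max=100.0)
print("mean squared displacement:", np.mean(x ** 2))
```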

    Automatic constraint-based synthesis of non-uniform rational B-spline surfaces

    In this dissertation, a technique is presented for the synthesis of sculptured surface models subject to several constraints based on design and manufacturability requirements. A design environment is specified as a collection of polyhedral models which represent components in the vicinity of the surface to be designed, or regions which the surface should avoid. Non-uniform rational B-splines (NURBS) are used for surface representation, and the control point locations are the design variables. For some problems the NURBS surface knots and/or weights are included as additional design variables. The primary functional constraint is a proximity metric which induces the surface to avoid a tolerance envelope around each component. Other functional constraints include: an area/arc-length constraint to counteract the expansion effect of the proximity constraint, orthogonality and parametric flow constraints (to maintain consistent surface topology and improve machinability of the surface), and local constraints on surface derivatives to exploit part symmetry. In addition, constraints based on surface curvatures may be incorporated to enhance machinability and induce the synthesis of developable surfaces. The surface synthesis problem is formulated as an optimization problem. Traditional optimization techniques, such as quasi-Newton, Nelder-Mead simplex and conjugate gradient, yield only locally good surface models. Consequently, simulated annealing (SA), a global optimization technique, is implemented. SA successfully synthesizes several highly multimodal surface models where the traditional optimization methods failed. Results indicate that this technique has potential applications as a conceptual design tool supporting concurrent product and process development methods.
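
    A generic simulated-annealing loop of the kind the abstract describes, with a placeholder objective standing in for the weighted design constraints; nothing below reproduces the dissertation's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

def anneal(objective, x0, t_start=1.0, t_end=1e-3, cooling=0.995, step=0.1):
    """Minimise `objective` over a flat vector of design variables
    (e.g. NURBS control-point coordinates) by simulated annealing."""
    x, fx = x0.copy(), objective(x0)
    best_x, best_f = x.copy(), fx
    temp = t_start
    while temp > t_end:
        cand = x + rng.normal(0.0, step, size=x.shape)      # random perturbation
        fc = objective(cand)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if fc < fx or rng.random() < np.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = cand.copy(), fc
        temp *= cooling                                      # geometric cooling schedule
    return best_x, best_f

# Toy multimodal objective standing in for the weighted constraint terms.
objective = lambda x: np.sum(x ** 2) + 3.0 * np.sum(np.sin(3.0 * x) ** 2)
x_best, f_best = anneal(objective, x0=rng.uniform(-2, 2, size=8))
print(f_best)
```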