9 research outputs found

    JAPARA - A Java parallel random number generator library for high-performance computing

    Copyright © 2004 IEEE. Random number generators are one of the most common numerical library functions used in scientific applications. The standard random number generator provided within Java is fine for most purposes; however, it does not adequately meet the needs of large-scale scientific applications, such as Monte Carlo simulations. Previous work has addressed some of these problems by extending the standard Random API in Java and providing an implementation that includes a choice of several different generator algorithms. One issue that was not addressed in this work was concurrency. Implementations of the standard Java random number generator use synchronized methods to support the use of the generator across multiple Java threads; however, this is a sequential bottleneck for parallel applications. Here we present a proposal for further extending the standard API to support parallel generation of random number streams, which we have implemented in JAPARA, a Java Parallel Random Number Generator Library for high-performance computing.
    P. D. Coddington, A. J. Newell
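
    The abstract does not show JAPARA's extended API, so as a rough sketch of the underlying idea (one independent stream per thread instead of a shared synchronized generator) the standard java.util.SplittableRandom class can stand in; the class and parameters below are illustrative assumptions, not the JAPARA interface.

        // Minimal sketch, assuming SplittableRandom as a stand-in for a parallel
        // RNG API: the root generator is split once per worker, so threads never
        // contend on a shared lock the way a synchronized java.util.Random would.
        import java.util.SplittableRandom;

        public class PerThreadStreams {
            public static void main(String[] args) throws InterruptedException {
                SplittableRandom root = new SplittableRandom(42L);
                int numThreads = 4;
                Thread[] workers = new Thread[numThreads];
                for (int t = 0; t < numThreads; t++) {
                    SplittableRandom stream = root.split();  // independent child stream
                    final int id = t;
                    workers[t] = new Thread(() -> {
                        double sum = 0.0;
                        for (int i = 0; i < 1_000_000; i++) {
                            sum += stream.nextDouble();
                        }
                        System.out.println("thread " + id + " mean = " + sum / 1_000_000);
                    });
                    workers[t].start();
                }
                for (Thread w : workers) {
                    w.join();
                }
            }
        }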

    How to Correctly Deal With Pseudorandom Numbers in Manycore Environments - Application to GPU programming with Shoverand

    Stochastic simulations are often sensitive to the source of randomness that characterizes the statistical quality of their results. Consequently, we need highly reliable Random Number Generators (RNGs) to feed such applications. Recent developments try to shrink the computation time by relying more and more on General Purpose Graphics Processing Units (GP-GPUs) to speed up stochastic simulations. Such devices bring new parallelization possibilities, but they also introduce new programming difficulties. Since RNGs are at the base of any stochastic simulation, they also need to be ported to GP-GPU. There is still a lack of well-designed implementations of quality-proven RNGs on GP-GPU platforms. In this paper, we introduce ShoveRand, a framework defining common rules to generate random numbers uniformly on GP-GPU. Our framework is designed to cope with any GPU-enabled development platform and to expose a straightforward interface to users. We also provide an existing RNG implementation with this framework to demonstrate its efficiency in both development and ease of use.
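
    As a conceptual illustration of the stateless, counter-based style of generation that suits massively parallel GPU threads (ShoveRand itself targets C++/CUDA platforms, and the hash below is a toy mixer, not a quality-proven generator), each draw can be computed purely from a stream identifier and a counter, so no state is shared between threads:

        // Sketch only: the output depends solely on (streamId, counter), which is
        // the property GPU-oriented RNG frameworks exploit to avoid shared state.
        // The mixing steps reuse the SplitMix64 finalizer constants for illustration.
        public class CounterBasedRng {

            static long next(long streamId, long counter) {
                long z = streamId * 0x9E3779B97F4A7C15L + counter * 0xC2B2AE3D27D4EB4FL;
                z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
                z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
                return z ^ (z >>> 31);
            }

            // Map the 64-bit output to a double in [0, 1).
            static double nextDouble(long streamId, long counter) {
                return (next(streamId, counter) >>> 11) * 0x1.0p-53;
            }

            public static void main(String[] args) {
                for (long stream = 0; stream < 4; stream++) {  // one "stream" per GPU thread
                    System.out.printf("stream %d: %f %f%n",
                            stream, nextDouble(stream, 0), nextDouble(stream, 1));
                }
            }
        }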

    Monte Carlo Simulation With The GATE Software Using Grid Computing

    Monte Carlo simulations needing many replicates to obtain good statistical results can be easily executed in parallel using the "Multiple Replications In Parallel" approach. However, several precautions have to be taken in the generation of the parallel streams of pseudo-random numbers. In this paper, we present the distribution of Monte Carlo simulations performed with the GATE software using local clusters and grid computing. We obtained very convincing results with this large medical application, thanks to the EGEE Grid (Enabling Grids for E-sciencE), achieving in one week computations that would have taken more than 3 years of processing on a single computer. This work has been achieved thanks to a generic object-oriented toolbox called DistMe, which we designed to automate this kind of parallelization for Monte Carlo simulations. This toolbox, written in Java, is freely available on SourceForge and helped to ensure a rigorous distribution of pseudo-random number streams. It is based on the use of a documented XML format for random number generator statuses.
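
    The DistMe XML schema is not reproduced in the abstract, so the sketch below only illustrates the "Multiple Replications In Parallel" workflow it automates: each replication is handed its own pre-assigned generator status before submission and then runs without any coordination. The file name, the plain-text status format, and the pi estimate are hypothetical placeholders; in a real experiment the statuses would come from a rigorous partitioning technique (e.g. jump-ahead), not from arbitrary seeds.

        // Sketch of the MRIP workflow (assumed file name and format, not DistMe's XML).
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;
        import java.util.Random;

        public class MripStatuses {
            public static void main(String[] args) throws IOException {
                int replications = 8;

                // Step 1 (once, before job submission): record one status per replication.
                // Arbitrary seeds are used here only to keep the sketch short.
                Path statusFile = Path.of("statuses.txt");   // hypothetical file name
                StringBuilder sb = new StringBuilder();
                for (int r = 0; r < replications; r++) {
                    sb.append(r).append(' ').append(1_000_003L * (r + 1)).append('\n');
                }
                Files.writeString(statusFile, sb.toString());

                // Step 2 (independently on each node): replication r restores its own
                // status and computes its Monte Carlo estimate.
                List<String> lines = Files.readAllLines(statusFile);
                for (String line : lines) {
                    String[] parts = line.split(" ");
                    int r = Integer.parseInt(parts[0]);
                    Random rng = new Random(Long.parseLong(parts[1]));
                    long inside = 0, total = 1_000_000;
                    for (long i = 0; i < total; i++) {
                        double x = rng.nextDouble(), y = rng.nextDouble();
                        if (x * x + y * y <= 1.0) inside++;
                    }
                    System.out.printf("replication %d: pi ~= %f%n", r, 4.0 * inside / total);
                }
            }
        }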

    Distribution of Random Streams for Simulation Practitioners

    There is an increasing interest in the distribution of parallel random number streams in the high-performance computing community, particularly with the manycore shift. Even if we have at our disposal statistically sound random number generators according to the latest and most thorough testing libraries, their parallelization can still be a delicate problem. Indeed, a set of recent publications shows it still has to be mastered by the scientific community. With the arrival of multi-core and manycore processor architectures on the scientist's desktop, modelers who are non-specialists in parallelizing stochastic simulations need help and advice in rigorously distributing their experimental plans and replications according to the state of the art in pseudo-random number parallelization techniques. In this paper, we discuss the different partitioning techniques currently in use to provide independent streams, along with their corresponding software. In addition to the classical approaches in use to parallelize stochastic simulations on regular processors, this paper also presents recent advances in pseudo-random number generation for general-purpose graphical processing units. The state of the art given in this paper is written for simulation practitioners.
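
    As a minimal sketch of two of the classical partitioning techniques the paper surveys, sequence splitting and leapfrog, the snippet below applies them to a deliberately simple 64-bit linear congruential generator (constants from Knuth's MMIX) so the partitioning is easy to follow; a production simulation would use a quality-proven generator with native jump-ahead support.

        // Sketch only: partitions one global LCG sequence into per-stream subsequences.
        public class PartitioningDemo {
            static final long A = 6364136223846793005L;  // MMIX LCG multiplier
            static final long C = 1442695040888963407L;  // MMIX LCG increment

            static long step(long state) {
                return A * state + C;
            }

            public static void main(String[] args) {
                long seed = 12345L;
                int streams = 3;

                // Leapfrog: stream k takes draws k, k + streams, k + 2*streams, ...
                System.out.println("leapfrog:");
                for (int k = 0; k < streams; k++) {
                    long s = seed;
                    for (int skip = 0; skip <= k; skip++) s = step(s);           // advance to draw k
                    System.out.print("  stream " + k + ":");
                    for (int i = 0; i < 3; i++) {
                        System.out.print(" " + Long.toUnsignedString(s));
                        for (int skip = 0; skip < streams; skip++) s = step(s);  // jump N draws ahead
                    }
                    System.out.println();
                }

                // Sequence splitting: stream k takes a contiguous block of the sequence.
                System.out.println("sequence splitting (block length 1000):");
                long blockLength = 1000;
                for (int k = 0; k < streams; k++) {
                    long s = seed;
                    for (long skip = 0; skip < k * blockLength; skip++) s = step(s);  // naive jump-ahead
                    System.out.println("  stream " + k + " starts at state " + Long.toUnsignedString(s));
                }
            }
        }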

    Intelligent Business Process Optimization for the Service Industry

    The company's sustainable competitive advantage derives from its capacity to create value for customers and to adapt its operational practices to changing situations. Business processes are at the heart of each company; therefore, process excellence has become a key issue. This book introduces a novel approach focusing on the autonomous optimization of business processes by applying sophisticated machine learning techniques such as Relational Reinforcement Learning and Particle Swarm Optimization.
