
    Algorithms for randomness in the behavioral sciences: A tutorial

    Simulations and experiments frequently demand the generation of random numbers that have specific distributions. This article describes which distributions should be used for the most common problems and gives algorithms to generate the numbers. It is also shown that a commonly used permutation algorithm (Nilsson, 1978) is deficient.
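    The deficiency the abstract flags concerns generating uniformly random permutations. The standard correct baseline is the Fisher–Yates shuffle; the sketch below is illustrative and is not the article's own code or Nilsson's algorithm:

    ```python
    import random

    def fisher_yates_shuffle(items, rng=None):
        """Return a uniformly random permutation of items.

        Each position is swapped with an index drawn only from the
        not-yet-fixed prefix, so all n! orderings are equally likely --
        unlike naive schemes that swap every element with any position.
        """
        rng = rng or random
        result = list(items)
        for i in range(len(result) - 1, 0, -1):
            j = rng.randrange(i + 1)  # 0 <= j <= i
            result[i], result[j] = result[j], result[i]
        return result
    ```

    Passing an explicit `random.Random(seed)` makes a run reproducible, which matters when simulations must be repeated exactly.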

    System And Method For Communication Using Noise

    Disclosed is a noise communication system and method. The noise communication system comprises a transmitter and a receiver. The transmitter indexes through at least two noise records, each comprising a series of randomly generated samples and divided into noise segments, to maintain a current noise segment for each noise record. The transmitter modulates a predefined base signal using the segments of the noise records to represent the symbols of the base signal. In modulating the predefined base signal, the transmitter replaces the respective symbols of the base signal with the current noise segments from the noise records, thereby generating a noise signal in which the symbols cannot be discerned. The noise signal is transmitted across a communications channel to the receiver, which demodulates the noise signal into the base signal. The demodulation employs a number of correlators equal to the number of noise records employed at the transmitter. The receiver includes logic to index through the noise records in the same manner as the transmitter to produce the current noise segments. Each correlator performs a multiplication between the current noise segment from the noise record assigned to it and the received segments of the noise signal, revealing a peak output when the segments match. The base signal is recreated by incorporating the symbol indicated by the noise record for which a match was found.
    Georgia Tech Research Corporation
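    The modulate-by-substitution and correlate-to-demodulate scheme described above can be sketched as a toy: two noise records encode binary symbols, the channel is assumed noiseless, and correlation is a plain dot product. All names and parameters here are illustrative assumptions, not the patented implementation:

    ```python
    import random

    def make_noise_record(n_segments, seg_len, rng):
        # a noise record: randomly generated samples divided into segments
        return [[rng.gauss(0, 1) for _ in range(seg_len)] for _ in range(n_segments)]

    def correlate(a, b):
        return sum(x * y for x, y in zip(a, b))

    rng = random.Random(42)
    records = [make_noise_record(8, 256, rng) for _ in range(2)]  # one record per symbol value

    symbols = [0, 1, 1, 0]
    # transmitter: replace each symbol with the current segment of the matching record
    tx = [records[s][i] for i, s in enumerate(symbols)]

    # receiver: one correlator per noise record, indexed in step with the transmitter;
    # the correlator holding the matching segment shows the peak output
    decoded = []
    for i, seg in enumerate(tx):
        scores = [correlate(seg, rec[i]) for rec in records]
        decoded.append(scores.index(max(scores)))
    ```

    A segment correlated with itself sums its squared samples (large and positive), while correlation with an independent noise segment averages near zero, which is why the peak identifies the symbol.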

    Mixing multi-core CPUs and GPUs for scientific simulation software

    Recent technological and economic developments have led to widespread availability of multi-core CPUs and specialist accelerator processors such as graphical processing units (GPUs). The accelerated computational performance possible from these devices can be very high for some application paradigms. Software languages and systems such as NVIDIA's CUDA and the Khronos consortium's open compute language (OpenCL) support a number of individual parallel application programming paradigms. To scale up the performance of some complex systems simulations, a hybrid of multi-core CPUs for coarse-grained parallelism and very-many-core GPUs for data parallelism is necessary. We describe our use of hybrid applications using threading approaches and multi-core CPUs to control independent GPU devices. We present speed-up data, discuss multi-threading software issues for the applications-level programmer, and offer some suggested areas for language development and integration between coarse-grained and fine-grained multi-thread systems. We discuss results from three common simulation algorithmic areas: partial differential equations; graph cluster metric calculations; and random number generation. We report on programming experiences and selected performance for these algorithms on single and multiple GPUs, multi-core CPUs, a CellBE, and using OpenCL. We discuss programmer usability issues and the outlook and trends in multi-core programming for scientific applications developers.
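    The coarse-grained host-side pattern the abstract describes, one CPU thread driving each independent accelerator, can be sketched as follows. This is an assumed illustration, not the authors' code: `run_kernel` stands in for a real CUDA/OpenCL launch and here just squares a partition of the data on the CPU:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def run_kernel(device_id, chunk):
        # placeholder for a device kernel launch on GPU `device_id`;
        # the "kernel" simply squares each element of its partition
        return [x * x for x in chunk]

    def hybrid_run(data, n_devices=2):
        # coarse-grained parallelism: partition the problem across host
        # threads, each thread controlling one (hypothetical) GPU device
        chunks = [data[i::n_devices] for i in range(n_devices)]
        with ThreadPoolExecutor(max_workers=n_devices) as pool:
            results = pool.map(run_kernel, range(n_devices), chunks)
        out = []
        for partial in results:
            out.extend(partial)
        return out
    ```

    In a real hybrid code each thread would hold its own device context, and the Python-level threading only coordinates kernel launches, so the GIL is not the bottleneck.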

    Parallel Computation in Econometrics: A Simplified Approach

    Parallel computation has a long history in econometric computing, but is not at all widespread. We believe that a major impediment is the labour cost of coding for parallel architectures. Moreover, programs for specific hardware often become obsolete quite quickly. Our approach is to take a popular matrix programming language (Ox) and implement a message-passing interface using MPI. Next, object-oriented programming allows us to hide the specific parallelization code, so that a program does not need to be rewritten when it is ported from the desktop to a distributed network of computers. Our focus is on so-called embarrassingly parallel computations, and we address the issue of parallel random number generation.
    Keywords: Code optimization; Econometrics; High-performance computing; Matrix-programming language; Monte Carlo; MPI; Ox; Parallel computing; Random number generation.
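    The core requirement for embarrassingly parallel Monte Carlo, giving each worker its own independently seeded random stream, can be sketched in Python rather than Ox/MPI. The seeding scheme and function names below are assumptions for illustration; in the paper's setting each stream would live on a separate MPI process, and production codes use proper stream-splitting generators rather than `base_seed + i`:

    ```python
    import random

    def worker_estimate(seed, n_draws):
        # each (simulated) worker owns a private, reproducible stream
        rng = random.Random(seed)
        # count draws landing inside the unit quarter-circle
        return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                   for _ in range(n_draws))

    def parallel_monte_carlo_pi(n_workers=4, n_draws=50_000, base_seed=1234):
        # distinct seed per worker so replications never share a stream;
        # the per-worker sums combine trivially (embarrassingly parallel)
        hits = sum(worker_estimate(base_seed + i, n_draws)
                   for i in range(n_workers))
        return 4.0 * hits / (n_workers * n_draws)
    ```

    Because every worker's stream is fixed by its seed, the estimate is bit-for-bit reproducible regardless of how many processes the work is spread over.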

    GeantV: Results from the prototype of concurrent vector particle transport simulation in HEP

    Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider (LHC). In the early 2010s, the projections were that simulation demands would scale linearly with the luminosity increase, compensated only partially by an increase of computing resources. The extension of fast simulation approaches to more use cases, covering a larger fraction of the simulation budget, is only part of the solution due to intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is out of reach using simple optimizations on the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport codes so that they benefit from fine-grained parallelism features such as vectorization, but also from increased code and data locality. This paper presents in detail the results and achievements of this R&D, as well as the conclusions and lessons learnt from the beta prototype.
    Comment: 34 pages, 26 figures, 24 tables

    FPGA-based design and implementation of spread-spectrum schemes for conducted-noise reduction in DC-DC converters

    2009 IEEE International Conference on Industrial Technology (ICIT): Churchill, Victoria, Australia, 2009.02.10-2009.02.1

    OpenMOLE, a workflow engine specifically tailored for the distributed exploration of simulation models

    Complex systems describe multiple levels of collective structure and organization. In such systems, the emergence of global behaviour from local interactions is generally studied through large-scale experiments on numerical models. This analysis generates heavy computational loads which require the use of multi-core servers, clusters, or grid computing. Dealing with such large-scale executions is especially challenging for modellers who don't possess the theoretical and methodological skills required to take advantage of high-performance computing environments. That is why we have designed a cloud approach for model experimentation. This approach has been implemented in OpenMOLE (Open MOdel Experiment) as a Domain Specific Language (DSL) that leverages the naturally parallel aspect of model experiments. The OpenMOLE DSL has been designed to explore user-supplied models. It transparently delegates their numerous executions to remote execution environments. From a user perspective, those environments are viewed as services providing computing power, so no technical detail is ever exposed. This paper presents the OpenMOLE DSL through the example of a toy model exploration and through the automated calibration of a real-world complex-system model in the field of geography.