
    A Comparative Study of Some Pseudorandom Number Generators

    We present results of an extensive test program for a group of pseudorandom number generators commonly used in physics applications, in particular in Monte Carlo simulations. The generators include public-domain programs, manufacturer-installed routines, and a random number sequence produced from physical noise. We start with traditional statistical tests, followed by detailed bit-level and visual tests. The computational speed of the various algorithms is also scrutinized. Our results allow direct comparisons between the properties of different generators, as well as an assessment of the efficiency of the various test methods. This information provides the best available criterion for choosing the most suitable generator for a given problem. However, in light of recent problems reported with some of these generators, we also discuss the importance of developing more refined physical tests to find possible correlations not revealed by the present test methods.
    Comment: University of Helsinki preprint HU-TFT-93-22 (minor changes in Tables 2 and 7, and in the text, correspondingly)
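    As a flavour of the "traditional statistical tests" the abstract mentions, here is a minimal sketch of a chi-square frequency (equidistribution) test. The generator under test, seed, and bin count are illustrative stand-ins, not choices from the paper.

```python
# Minimal sketch: chi-square equidistribution test of a uniform generator.
import random

def chi_square_uniformity(samples, bins=100):
    """Return the chi-square statistic for uniformity of samples in [0, 1)."""
    counts = [0] * bins
    for x in samples:
        counts[int(x * bins)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(12345)              # stand-in for a generator under test
stat = chi_square_uniformity([rng.random() for _ in range(100_000)])
# With k bins the statistic should be near k - 1 (here 99); a value many
# standard deviations (sqrt(2*(k-1)) ~ 14) away flags a suspect generator.
print(f"chi-square = {stat:.1f}")
```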

    Long term memories of developed and emerging markets: using the scaling analysis to characterize their stage of development

    The scaling properties encompass, in a single simple analysis, many of the volatility characteristics of financial markets, which is why we use them to probe the differing degrees of market development. We empirically study the scaling properties of daily foreign exchange rates, stock market indices, and fixed-income instruments using the generalized Hurst approach. We show that the scaling exponents are associated with characteristics of the specific markets and can be used to differentiate markets by their stage of development. The robustness of the results is tested by both Monte Carlo studies and a computation of the scaling in the frequency domain.
    Comment: 46 pages, 7 figures, accepted for publication in Journal of Banking & Finance
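    A minimal sketch of the generalized Hurst approach: estimate H(q) from the scaling of the q-th order structure function, E[|X(t+tau) - X(t)|^q] ~ tau^(q*H(q)). Function names, the lag range, and the Brownian-motion check are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: generalized Hurst exponent H(q) via structure-function scaling.
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) by regressing log K_q(tau) on log tau."""
    x = np.asarray(x, dtype=float)
    log_tau, log_kq = [], []
    for tau in taus:
        diffs = np.abs(x[tau:] - x[:-tau]) ** q   # q-th order increments
        log_tau.append(np.log(tau))
        log_kq.append(np.log(diffs.mean()))
    slope, _ = np.polyfit(log_tau, log_kq, 1)     # slope = q * H(q)
    return slope / q

# Sanity check: Brownian motion should give H(2) close to 0.5.
rng = np.random.default_rng(0)
bm = np.cumsum(rng.normal(size=50_000))
print(f"H(2) = {generalized_hurst(bm, q=2):.3f}")
```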

    Computers and Liquid State Statistical Mechanics

    The advent of electronic computers has revolutionised the application of statistical mechanics to the liquid state. Computers have permitted, for example, the calculation of the phase diagram of water and ice, the folding of proteins, the behaviour of alkanes adsorbed in zeolites, the formation of liquid crystal phases, and the process of nucleation. Computer simulations provide, on the one hand, new insights into the physical processes in action and, on the other, quantitative results of ever greater precision. Insights into physical processes facilitate the reductionist agenda of physics, whilst large-scale simulations bring out emergent features that are inherent (although far from obvious) in complex systems consisting of many bodies. It is safe to say that computer simulations are now an indispensable tool for both the theorist and the experimentalist, and their usefulness will only increase in the future. This chapter presents a selective review of some of the incredible advances in condensed matter physics that could only have been achieved with the use of computers.
    Comment: 22 pages, 2 figures. Chapter for a book
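    As a flavour of the workhorse behind many of the liquid-state studies the chapter surveys, here is a minimal Metropolis Monte Carlo sketch for a toy two-dimensional Lennard-Jones fluid. Every parameter and the system itself are illustrative stand-ins, not taken from the chapter.

```python
# Toy Metropolis Monte Carlo: a few Lennard-Jones particles in a periodic box.
import numpy as np

rng = np.random.default_rng(1)
N, L, T, steps = 16, 6.0, 1.0, 5000        # particles, box size, temperature

pos = rng.uniform(0, L, size=(N, 2))

def pair_energy(r2):
    inv6 = (1.0 / r2) ** 3
    return 4.0 * (inv6 ** 2 - inv6)        # LJ with sigma = epsilon = 1

def energy_of(i, p):
    d = pos - p
    d -= L * np.round(d / L)               # minimum-image convention
    r2 = (d ** 2).sum(axis=1)
    r2[i] = np.inf                         # exclude self-interaction
    return pair_energy(r2).sum()

for _ in range(steps):
    i = rng.integers(N)
    trial = (pos[i] + rng.uniform(-0.2, 0.2, size=2)) % L
    dE = energy_of(i, trial) - energy_of(i, pos[i])
    if dE < 0 or rng.random() < np.exp(-dE / T):
        pos[i] = trial                     # Metropolis acceptance rule
```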

    Application of Kolmogorov complexity and universal codes to identity testing and nonparametric testing of serial independence for time series

    We show that Kolmogorov complexity and its estimators, such as universal codes (i.e., data compression methods), can be applied to hypothesis testing within the framework of classical mathematical statistics. Methods for identity testing and nonparametric testing of serial independence for time series are suggested.
    Comment: submitted
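    A minimal sketch of the compression idea: a universal code (here zlib as a stand-in) estimates complexity, and serial dependence shows up as extra compressibility relative to shuffled copies of the series. The permutation-test framing below is an illustrative assumption, not the paper's exact test statistic.

```python
# Sketch: compression-based test of serial independence for a symbol series.
import random
import zlib

def compressed_size(symbols):
    return len(zlib.compress(bytes(symbols), 9))

def serial_independence_pvalue(symbols, shuffles=200, seed=0):
    """Fraction of shuffles compressing at least as well as the original."""
    rng = random.Random(seed)
    original = compressed_size(symbols)
    hits = 0
    for _ in range(shuffles):
        s = list(symbols)
        rng.shuffle(s)                  # shuffling destroys serial structure
        hits += compressed_size(s) <= original
    return hits / shuffles

# A chain that tends to repeat its last symbol is serially dependent:
rng = random.Random(42)
seq, last = [], 0
for _ in range(2000):
    last = last if rng.random() < 0.9 else rng.randrange(4)
    seq.append(last)
print(serial_independence_pvalue(seq))  # small value => reject independence
```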

    A two-factor model for electricity prices with dynamic volatility

    The wavelet transform is used to identify a biannual and an annual seasonality in the Phelix Day Peak and to separate the long-term trend from the short-term motion. The short-term/long-term model for commodity prices of Schwartz & Smith (2000) is applied, but generalised to account for weekly periodicities and time-varying volatility. We eventually find that a bivariate SARMA-CCC-GARCH model fits best. Moreover, it surpasses the goodness of fit of a univariate GARCH model, which shows that the additional effort of dealing with a two-factor model is worthwhile.
    Keywords: Wavelets, Seasonal Filter, Relative Wavelet Energy, Multivariate GARCH, Energy Price Modelling
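    A minimal sketch of the first step, the wavelet separation of the long-term trend from the short-term motion. PyWavelets is a stand-in tool, and the wavelet, decomposition level, and toy series are illustrative assumptions, not the paper's choices.

```python
# Sketch: wavelet split of a price series into long-term and short-term parts.
import numpy as np
import pywt

def wavelet_trend_split(prices, wavelet="db4", level=6):
    """Return (long_term, short_term) components of a price series."""
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    # Keep only the coarsest approximation for the long-term trend ...
    trend_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    trend = pywt.waverec(trend_coeffs, wavelet)[: len(prices)]
    # ... and attribute all detail scales to the short-term motion.
    return trend, prices - trend

# Toy series: annual seasonality plus noise, at daily resolution.
t = np.arange(730.0)
prices = (40 + 5 * np.sin(2 * np.pi * t / 365)
          + np.random.default_rng(3).normal(0, 1, t.size))
trend, short = wavelet_trend_split(prices)
```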

    Experimental imaging and Monte Carlo modeling of ultrafast pulse propagation in thin scattering slabs

    Significance: Most radiative transport problems in turbid media are associated with mm or cm scales, leading to typical time scales of hundreds of ps or more. In certain cases, however, much thinner layers are also relevant and can dramatically alter the overall transport properties of a scattering medium. Studying scattering in these thin layers requires ultrafast detection techniques and adaptations of the common Monte Carlo (MC) approach.
    Aim: We discuss a few relevant aspects of simulating light transport in thin scattering membranes and compare the numerical results with experimental measurements based on an all-optical gating technique.
    Approach: A thin membrane with controlled scattering properties, based on polymer-dispersed TiO2 nanoparticles, is fabricated for experimental validation. Transmittance measurements are compared against a custom open-source MC implementation that includes specific pulse profiles for tightly focused femtosecond laser pulses.
    Results: Experimental transmittance data of ultrafast pulses through a thin scattering sample are compared with MC simulations in the spatiotemporal domain to retrieve its scattering properties. The results show good agreement even at short distances and time scales.
    Conclusions: When simulating light transport in scattering membranes with thicknesses on the order of tens of micrometers, care must be taken in describing the temporal, spatial, and divergence profiles of the source term, as well as the possible truncation of step-length distributions that can be introduced by simple strategies for generating exponentially distributed random variables.
    (C) The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License
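    A minimal sketch of the step-length pitfall the conclusions point to: the usual inverse-transform step s = -mfp * ln(1 - xi) has a hard ceiling set by the precision of the uniform variate xi, truncating the exponential tail. The numbers and the NumPy-specific details (uniforms generated as multiples of 2^-24 in single precision and 2^-53 in double) are illustrative, not from the paper's MC code.

```python
# Sketch: precision-induced truncation of exponential step-length sampling.
import numpy as np

mfp = 1.0                                  # scattering mean free path (a.u.)
rng = np.random.default_rng(7)

# float32 uniforms are multiples of 2**-24, so 1 - xi >= 2**-24 and no
# sampled step can ever exceed ~16.6 mean free paths:
print("float32 ceiling:", -mfp * np.log(2.0 ** -24))   # ~16.64 mfp
# float64 pushes the ceiling out to ~36.7 mfp (1 - xi >= 2**-53):
print("float64 ceiling:", -mfp * np.log(2.0 ** -53))   # ~36.74 mfp

# The sampling itself: log1p(-xi) also avoids log(0) when xi can be 0.
xi = rng.random(1_000_000)
steps = -mfp * np.log1p(-xi)               # exponential step lengths
print("observed max over 1e6 samples:", steps.max())
```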

    Computing marginal posterior densities of genetic parameters of a multiple trait animal model using Laplace approximation or Gibbs sampling

    Two procedures for computing the marginal posterior density of heritabilities or genetic correlations, i.e., Laplace's method for approximating integrals and Gibbs sampling, are compared. A multiple-trait animal model is considered, with one random effect, no missing observations, and identical models for all traits. The Laplace approximation consists in computing the marginal posterior density for different values of the parameter of interest. This approximation requires the repeated evaluation of traces and determinants, which are easy to compute once the eigenvalues of a matrix of dimension equal to the number of animals have been determined. These eigenvalues can be computed efficiently by the Lanczos algorithm. The Gibbs sampler generates samples from the joint posterior density; these samples are used to estimate the marginal posterior density, which is exact up to a Monte Carlo error. Both procedures were applied to a data set with semen production traits of 1957 Normande bulls. The traits analysed were volume of the ejaculate, motility score, and spermatozoa concentration. The Laplace approximation yielded very accurate approximations of the marginal posterior density for all parameters, at much lower computing cost.
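    To make the Laplace route concrete, here is a minimal sketch in a toy setting, NOT the paper's multiple-trait animal model: the marginal posterior of a variance under flat priors, with the nuisance mean profiled out at its conditional mode and a curvature (Hessian) correction applied, log p(sigma2 | y) ~ loglik(mu_hat, sigma2) + 0.5*log(2*pi / H(mu_hat)). All names and values are illustrative.

```python
# Toy Laplace approximation of a marginal posterior density on a grid.
import numpy as np

rng = np.random.default_rng(5)
y = rng.normal(10.0, 2.0, size=200)        # toy data, true variance = 4
n, ybar = y.size, y.mean()                 # mu_hat = ybar for every sigma2

def log_marginal(sigma2):
    loglik = (-0.5 * n * np.log(2 * np.pi * sigma2)
              - 0.5 * ((y - ybar) ** 2).sum() / sigma2)
    hessian = n / sigma2                   # -d^2 loglik / d mu^2 at mu_hat
    return loglik + 0.5 * np.log(2 * np.pi / hessian)

grid = np.linspace(2.5, 6.5, 400)          # grid over parameter of interest
logd = np.array([log_marginal(s2) for s2 in grid])
dens = np.exp(logd - logd.max())           # stabilise before exponentiating
dens /= dens.sum() * (grid[1] - grid[0])   # normalise on the grid
print("posterior mode of sigma^2:", grid[dens.argmax()])
```

    In this Gaussian toy case the Laplace step happens to be exact; in the animal model the analogous curvature terms are the traces and determinants that the eigenvalues from the Lanczos algorithm make cheap to evaluate.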