
    Calculation of aggregate loss distributions

    Estimation of the operational risk capital under the Loss Distribution Approach requires evaluation of aggregate (compound) loss distributions, which is one of the classic problems in risk theory. Closed-form solutions are not available for the distributions typically used in operational risk. However, with modern computer processing power, these distributions can be calculated virtually exactly using numerical methods. This paper reviews numerical algorithms that can be successfully used to calculate aggregate loss distributions. In particular, Monte Carlo, Panjer recursion, and Fourier transformation methods are presented and compared. Several closed-form approximations based on moment matching and asymptotic results for heavy-tailed distributions are also reviewed.
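
    One of the algorithms the abstract names, the Panjer recursion, is compact enough to sketch. The example below is a toy (the Poisson frequency, the uniform discrete severity, and all parameter values are hypothetical choices, not taken from the paper): for frequency N ~ Poisson(lam) and integer-valued severity pmf f with f[0] = 0, the aggregate pmf g satisfies g(0) = exp(-lam) and g(s) = (lam/s) * sum_k k f(k) g(s-k).

```python
import math

def panjer_poisson(lam, f, smax):
    """Aggregate loss pmf for Poisson(lam) claim counts and integer severity
    pmf f (f[0] must be 0 here, so g(0) = P(N = 0) = exp(-lam))."""
    g = [math.exp(-lam)]
    for s in range(1, smax + 1):
        acc = 0.0
        for k in range(1, min(s, len(f) - 1) + 1):
            acc += k * f[k] * g[s - k]
        g.append(lam / s * acc)
    return g

# Hypothetical severity: uniform on {1, 2, 3, 4}, so E[X] = 2.5
f = [0.0, 0.25, 0.25, 0.25, 0.25]
g = panjer_poisson(lam=2.0, f=f, smax=200)
mean = sum(s * p for s, p in enumerate(g))
# Sanity check: the compound mean should equal lam * E[X] = 2.0 * 2.5 = 5.0
```

    A Monte Carlo estimate of the same distribution (simulate N, then sum N severity draws) provides an easy cross-check of the recursion.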

    Kinetic Solvers with Adaptive Mesh in Phase Space

    An Adaptive Mesh in Phase Space (AMPS) methodology has been developed for solving multi-dimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a tree-of-trees data structure. The mesh in r-space is automatically generated around embedded boundaries and dynamically adapted to local solution properties. The mesh in v-space is created on the fly for each cell in r-space. Mappings between neighboring v-space trees are implemented for the advection operator in configuration space. We have developed new algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations are used to calculate the discrete Boltzmann collision integral with a dynamically adaptive mesh in velocity space: importance sampling, a multi-point projection method, and a variance reduction method. We have also developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions in a Lorentz gas. The new AMPS technique has been demonstrated in simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS makes it possible to minimize the number of cells in phase space, reducing the computational cost and memory usage of challenging kinetic problems.
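
    The core idea of a solution-adaptive cell tree can be illustrated with a minimal sketch (this is a generic 1D binary refinement tree, not the AMPS code; the refinement indicator and the feature location 0.3 are hypothetical): cells recursively split wherever a user-supplied indicator flags strong local variation, so resolution concentrates only where it is needed.

```python
class Cell:
    """A 1D interval cell that can recursively refine into two children."""

    def __init__(self, x0, x1):
        self.x0, self.x1 = x0, x1
        self.children = []

    def refine(self, indicator, max_depth, depth=0):
        # Split only where the indicator flags the cell, up to max_depth levels.
        if depth < max_depth and indicator(self.x0, self.x1):
            mid = 0.5 * (self.x0 + self.x1)
            self.children = [Cell(self.x0, mid), Cell(mid, self.x1)]
            for c in self.children:
                c.refine(indicator, max_depth, depth + 1)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine around a hypothetical sharp feature at x = 0.3:
root = Cell(0.0, 1.0)
root.refine(lambda a, b: a <= 0.3 <= b, max_depth=6)
cells = root.leaves()
# Leaf cells are fine (width 1/64) near x = 0.3 and coarse elsewhere.
```

    AMPS extends this idea to a tree of trees: each leaf of the r-space tree carries its own v-space tree, built on the fly.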

    Quantum probabilistic sampling of multipartite 60-qubit Bell inequality violations

    We show that violation of genuine multipartite Bell inequalities can be obtained with sampled, probabilistic phase space methods. These genuine Bell violations cannot be replicated if any part of the system is described by a local hidden variable theory. The Bell violations are simulated probabilistically using quantum phase-space representations. We treat mesoscopically large Greenberger-Horne-Zeilinger (GHZ) states having up to 60 qubits, using both a multipartite SU(2) Q-representation and the positive P-representation. Surprisingly, we find that sampling with phase-space distributions can be exponentially faster than experiment. This is due to the classical parallelism inherent in the simulation of quantum measurements using phase-space methods. Our probabilistic sampling method predicts a contradiction with local realism of "Schrödinger-cat" states that can be realized as a GHZ spin state, either in ion traps or with photonic qubits. We also present a quantum simulation of the observed super-decoherence of the ion-trap "cat" state, using a phenomenological noise model.
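
    For intuition, the smallest multipartite case can be checked by brute force (this is a direct 3-qubit statevector computation, not the paper's phase-space sampling method, and only a standard textbook Bell test): the GHZ state attains the algebraic maximum 4 of the Mermin operator, while local hidden variable theories are bounded by 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# 3-qubit GHZ state (|000> + |111>) / sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

def expval(op):
    return (ghz.conj() @ op @ ghz).real

# Mermin operator M = XXX - XYY - YXY - YYX; any LHV theory gives |<M>| <= 2
M = kron3(X, X, X) - kron3(X, Y, Y) - kron3(Y, X, Y) - kron3(Y, Y, X)
m = expval(M)   # the GHZ state reaches the algebraic maximum of 4
```

    The paper's contribution is that such violations can be sampled probabilistically for systems (60 qubits) far too large for statevector methods like this one.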

    Simplified Onsager theory for isotropic-nematic phase equilibria of length polydisperse hard rods

    Polydispersity is believed to have important effects on the formation of liquid crystal phases in suspensions of rod-like particles. To understand such effects, we analyse the phase behaviour of thin hard rods with length polydispersity. Our treatment is based on a simplified Onsager theory, obtained by truncating the series expansion of the angular dependence of the excluded volume. We describe the model and give the full phase equilibrium equations; these are then solved numerically using the moment free energy method, which reduces the problem from one with an infinite number of conserved densities to one with a finite number of effective densities that are moments of the full density distribution. The method yields the onset of nematic ordering exactly. Beyond this, results are approximate, but we show that they can be made essentially arbitrarily precise by adding adaptively chosen extra moments, while still avoiding the numerical complications of a direct solution of the full phase equilibrium conditions. We investigate in detail the phase behaviour of systems with three different length distributions: a (unimodal) Schulz distribution, a bidisperse distribution, and a bimodal mixture of two Schulz distributions which interpolates between these two cases. A three-phase isotropic-nematic-nematic coexistence region is shown to exist for the bimodal and bidisperse length distributions if the ratio of long and short rod lengths is sufficiently large, but not for the unimodal one. We systematically explore the topology of the phase diagram as a function of the width of the length distribution and of the rod length ratio in the bidisperse and bimodal cases.
    Comment: 18 pages, 16 figures.
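
    The moment free energy method trades the full length distribution for a few of its moments. As a minimal illustration of the moments involved (this is not the paper's solver; the parameter values z = 5 and mean length L0 = 1 are hypothetical), recall that the Schulz distribution is a Gamma distribution with shape z + 1, whose relative width m2/m1^2 - 1 equals 1/(z + 1):

```python
import math

z, L0 = 5, 1.0                      # hypothetical Schulz parameter and mean rod length
k, theta = z + 1, L0 / (z + 1)      # equivalent Gamma shape and scale

def p(x):
    """Schulz (Gamma) length distribution density."""
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def moment(n, xmax=10.0, m=100000):
    """Numerical n-th moment by the midpoint rule."""
    h = xmax / m
    return sum(((i + 0.5) * h) ** n * p((i + 0.5) * h) for i in range(m)) * h

m0, m1, m2 = moment(0), moment(1), moment(2)
width = m2 / m1 ** 2 - 1.0          # relative width; equals 1/(z + 1) for Schulz
```

    In the moment method, a handful of such moments (plus adaptively chosen extra ones) stand in for the infinite-dimensional density distribution.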

    Ensemble Transport Adaptive Importance Sampling

    Markov chain Monte Carlo methods are a powerful and commonly used family of numerical methods for sampling from complex probability distributions. As applications of these methods increase in size and complexity, the need for efficient methods grows. In this paper, we present a particle ensemble algorithm. At each iteration, an importance sampling proposal distribution is formed using an ensemble of particles. A stratified sample is taken from this distribution and weighted under the posterior; a state-of-the-art ensemble transport resampling method is then used to create an evenly weighted sample ready for the next iteration. We demonstrate that this ensemble transport adaptive importance sampling (ETAIS) method outperforms MCMC methods with equivalent proposal distributions for low dimensional problems, and in fact shows better than linear improvements in convergence rates with respect to the number of ensemble members. We also introduce a new resampling strategy, multinomial transformation (MT), which, while not as accurate as the ensemble transport resampler, is substantially less costly for large ensemble sizes and can then be used in conjunction with ETAIS for complex problems. We also focus on how algorithmic parameters regarding the mixture proposal can be quickly tuned to optimise performance. In particular, we demonstrate this methodology's superior sampling for multimodal problems, such as those arising from inference for mixture models, and for problems with expensive likelihoods requiring the solution of a differential equation, for which speed-ups of orders of magnitude are demonstrated. Likelihood evaluations of the ensemble could be computed in a distributed manner, suggesting that this methodology is a good candidate for parallel Bayesian computations.
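
    The iteration structure can be sketched in a stripped-down form (this toy uses a hypothetical bimodal target, plain multinomial resampling in place of the ensemble transport resampler, and independent rather than stratified sampling; parameter values are illustrative): build a mixture proposal from the current ensemble, importance-weight the proposals under the target, and resample to an evenly weighted ensemble.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):
    # Hypothetical bimodal posterior (unnormalised): mixture of N(-3,1) and N(3,1)
    return np.logaddexp(-0.5 * (x + 3.0) ** 2, -0.5 * (x - 3.0) ** 2)

n, sigma, iters = 500, 1.0, 30
ens = rng.normal(0.0, 5.0, n)                      # initial ensemble
for _ in range(iters):
    # Mixture proposal: one Gaussian kernel per current ensemble member
    centres = ens[rng.integers(0, n, n)]
    prop = centres + sigma * rng.normal(size=n)
    # Proposal log-density under the full mixture (constants cancel in the weights)
    d = prop[:, None] - ens[None, :]
    log_q = np.logaddexp.reduce(-0.5 * (d / sigma) ** 2, axis=1) - np.log(n)
    log_w = log_target(prop) - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Plain multinomial resampling to an evenly weighted ensemble
    ens = prop[rng.choice(n, size=n, p=w)]
```

    Because the proposal adapts to the ensemble, both modes remain populated, which is exactly the multimodal behaviour the abstract highlights.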

    Monte Carlo techniques for real-time quantum dynamics

    The stochastic-gauge representation is a method of mapping the equation of motion for the quantum mechanical density operator onto a set of equivalent stochastic differential equations. One of the stochastic variables is termed the "weight", and its magnitude is related to the importance of the stochastic trajectory. We investigate the use of Monte Carlo algorithms to improve the sampling of the weighted trajectories and thus reduce sampling error in a simulation of quantum dynamics. The method can be applied to calculations in real time, as well as in imaginary time, for which Monte Carlo algorithms are more commonly used. The method is applicable when the weight is guaranteed to be real, and we demonstrate how to ensure this is the case. Examples are given for the anharmonic oscillator, where large improvements over stochastic sampling are observed.
    Comment: 28 pages, submitted to J. Comp. Phys.
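
    The generic difficulty with weighted trajectories, and the standard Monte Carlo cure, can be shown on a much simpler problem than the paper's (this is a toy imaginary-time diffusion Monte Carlo for the 1D harmonic oscillator, not the stochastic-gauge method; all parameters are illustrative): each walker accumulates a weight, and periodic resampling proportional to the weights stops the weight spread from growing without bound.

```python
import math
import random

random.seed(3)

# 1D harmonic oscillator, V(x) = x^2 / 2; exact ground-state energy is 0.5
# in units hbar = m = omega = 1.
n, dt, steps, e_ref = 1000, 0.01, 1000, 0.5
walkers = [random.gauss(0.0, 1.0) for _ in range(n)]
estimates = []
for step in range(steps):
    # Diffuse each trajectory, then weight it by exp(-dt * (V(x) - e_ref))
    walkers = [x + math.sqrt(dt) * random.gauss(0.0, 1.0) for x in walkers]
    weights = [math.exp(-dt * (0.5 * x * x - e_ref)) for x in walkers]
    wbar = sum(weights) / n
    estimates.append(e_ref - math.log(wbar) / dt)   # growth estimator of E0
    # Resampling proportional to weight keeps the ensemble evenly weighted
    walkers = random.choices(walkers, weights=weights, k=n)
e0 = sum(estimates[steps // 2:]) / (steps - steps // 2)
```

    Without the resampling step, a few trajectories would dominate the weighted averages and the sampling error would grow with time, which is the effect the paper's Monte Carlo treatment targets.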

    A new, efficient algorithm for the Forest Fire Model

    The Drossel-Schwabl Forest Fire Model is one of the best-studied models of non-conservative self-organised criticality. However, using a new algorithm, which allows us to study the model on large statistical and spatial scales, it has been shown to lack simple scaling. We thereby show that the considered model is not critical. This paper presents the algorithm and its parallel implementation in detail, together with large-scale numerical results for several observables. The algorithm can easily be adapted to related problems such as percolation.
    Comment: 38 pages, 28 figures, REVTeX 4, RMP style; v2 adds clarifications, corrections, and updated references.
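
    The dynamics that the paper's algorithm accelerates can be sketched naively (a toy-scale simulation with hypothetical parameter values, not the paper's efficient cluster-tracking or parallel implementation): empty sites grow trees with probability p per sweep, lightning strikes a tree with probability f, and a strike instantly burns the whole connected cluster.

```python
import random
from collections import deque

random.seed(0)

L, p, f, steps = 32, 0.05, 0.01, 300   # lattice size, growth, lightning, sweeps
grid = [[False] * L for _ in range(L)]  # True = tree

def burn(i, j):
    """Remove the connected cluster containing (i, j); return its size."""
    q, size = deque([(i, j)]), 0
    grid[i][j] = False
    while q:
        a, b = q.popleft()
        size += 1
        for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            x, y = (a + da) % L, (b + db) % L   # periodic boundaries
            if grid[x][y]:
                grid[x][y] = False
                q.append((x, y))
    return size

sizes = []                              # burned-cluster sizes (the key observable)
for _ in range(steps):
    for i in range(L):
        for j in range(L):
            if not grid[i][j] and random.random() < p:
                grid[i][j] = True
            elif grid[i][j] and random.random() < f:
                sizes.append(burn(i, j))
```

    The naive cluster search above is what dominates the cost at scale; the paper's algorithm avoids it by maintaining cluster information incrementally.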

    Analytical Solutions to the Mass-Anisotropy Degeneracy with Higher Order Jeans Analysis: A General Method

    The Jeans analysis is often used to infer the total density of a system by relating the velocity moments of an observable tracer population to the underlying gravitational potential. This technique has recently been applied in the search for Dark Matter in objects such as dwarf spheroidal galaxies, where the presence of Dark Matter is inferred via stellar velocities. A precise account of the density is needed to constrain the expected gamma ray flux from DM self-annihilation and to distinguish between cold and warm dark matter models. Unfortunately, the traditional method of fitting the second-order Jeans equation to the tracer dispersion suffers from an unbreakable degeneracy of solutions due to the unknown velocity anisotropy of the projected system. To tackle this degeneracy, one can appeal to higher moments of the Jeans equation. By introducing an analogue of the Binney anisotropy parameter at fourth order, beta', we create a framework that encompasses all solutions to the fourth-order Jeans equations, rather than only those in the literature that impose unnecessary correlations between the anisotropies of the second- and fourth-order moments. The condition beta' = f(beta) ensures that the degeneracy is lifted, and we interpret the separable augmented density system as the order-independent case beta' = beta. For a generic choice of beta' we present the line-of-sight projection of the fourth moment and show how it could be incorporated into a joint likelihood analysis of the dispersion and kurtosis. Having presented the mathematical framework, we then use it to develop a statistical method for the purpose of placing constraints on dark matter density parameters from discrete velocity data. The method is tested on simulated dwarf spheroidal data sets, leading to results which motivate the study of real dwarf spheroidal data sets.
    Comment: 21 pages, 15 figures. Accepted by MNRAS. Typo corrected in eq. 3.
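
    The second-order machinery the abstract builds on can be illustrated on a case with a known answer (this is a standard isotropic Jeans integration for a self-consistent Plummer sphere in G = M = b = 1 units, purely illustrative, not the paper's fourth-order framework): integrating rho * G * M(s) / s^2 inward from infinity recovers the analytic dispersion sigma^2 = GM / (6 sqrt(r^2 + b^2)).

```python
import numpy as np

G, M, b = 1.0, 1.0, 1.0   # illustrative units

def rho(r):
    """Plummer density profile."""
    return 3.0 * M / (4.0 * np.pi * b ** 3) * (1.0 + (r / b) ** 2) ** -2.5

def mass(r):
    """Enclosed mass of the Plummer sphere."""
    return M * r ** 3 / (r ** 2 + b ** 2) ** 1.5

def sigma2(r, rmax=200.0, n=200001):
    # Isotropic second-order Jeans equation, integrated inward from infinity:
    # rho(r) sigma^2(r) = \int_r^\infty rho(s) G mass(s) / s^2 ds
    s = np.linspace(r, rmax, n)
    integrand = rho(s) * G * mass(s) / s ** 2
    integral = np.sum((integrand[:-1] + integrand[1:]) * 0.5 * (s[1] - s[0]))
    return integral / rho(r)

r = 1.0
num = sigma2(r)
exact = G * M / (6.0 * np.sqrt(r ** 2 + b ** 2))   # known Plummer result
```

    The anisotropy degeneracy arises because an observed line-of-sight dispersion admits many (rho, beta) pairs; the paper's fourth-order moments supply the extra constraint.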

    Benchmarking of Gaussian boson sampling using two-point correlators

    Gaussian boson sampling is a promising scheme for demonstrating a quantum computational advantage using photonic states that are accessible in a laboratory and, thus, offer scalable sources of quantum light. In this contribution, we study two-point photon-number correlation functions to gain insight into the interference of Gaussian states in optical networks. We investigate the characteristic features of statistical signatures which enable us to distinguish classical from quantum interference. In contrast to the typical implementation of boson sampling, we find additional contributions to the correlators under study which stem from the phase dependence of Gaussian states and which are not observable when Fock states interfere. Using the first three moments, we formulate the tools required to experimentally observe signatures of quantum interference of Gaussian states using two outputs only. By considering the current architectural limitations in realistic experiments, we further show that a statistically significant discrimination between quantum and classical interference is possible even in the presence of loss, noise, and a finite photon-number resolution. Therefore, we formulate and apply a theoretical framework to benchmark the quantum features of Gaussian boson sampling under realistic conditions.
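
    A two-point photon-number correlator is easy to evaluate for the simplest nonclassical Gaussian state (this closed-form check of a two-mode squeezed vacuum, with a hypothetical squeezing value r = 0.8, is purely illustrative and not the paper's benchmarking framework): photon numbers in the two modes are perfectly correlated, with joint distribution P(n1 = n2 = n) = nbar^n / (1 + nbar)^(n+1) and nbar = sinh^2(r), so the covariance <n1 n2> - <n1><n2> equals nbar^2 + nbar.

```python
import math

r = 0.8                        # hypothetical squeezing parameter
nbar = math.sinh(r) ** 2       # mean photon number per mode

def p(n):
    """Joint probability that both modes hold exactly n photons (TMSV)."""
    return nbar ** n / (1 + nbar) ** (n + 1)

N = 200                        # truncation; the geometric tail is negligible here
mean = sum(n * p(n) for n in range(N))
second = sum(n * n * p(n) for n in range(N))
cov = second - mean ** 2       # two-point correlator <n1 n2> - <n1><n2>
# For a two-mode squeezed vacuum this equals nbar^2 + nbar, exceeding the
# nbar^2 achievable with classically correlated thermal beams.
```

    Correlators of this kind, evaluated across output pairs of the interferometer, are the statistical signatures the paper uses to discriminate quantum from classical interference.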