Evolutionary prisoner's dilemma game on a square lattice
A simplified prisoner's dilemma game is studied on a square lattice, where the
players interacting with their neighbors can follow only one of two strategies:
unconditional cooperation (C) or unconditional defection (D). The players,
updated in a random sequence, have a chance to adopt one of the neighboring
strategies with a probability
depending on the payoff difference. Using Monte Carlo simulations and dynamical
cluster techniques we study the density of cooperators in the stationary
state. This system exhibits a continuous transition between the two absorbing
states when the value of the temptation to defect is varied. In the limits
and 1 we have observed critical transitions belonging to the universality class
of directed percolation.Comment: 6 pages including 6 figures
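The update rule described above (random sequential updates, with adoption probability set by the payoff difference) can be sketched in a few lines. A minimal Python sketch, assuming the simplified payoffs R=1, T=b, S=P=0 and a Fermi adoption rule with noise parameter K; all parameter values are illustrative:

```python
import numpy as np

def cooperator_density(L=50, b=1.05, K=0.1, steps=200, seed=0):
    """Monte Carlo sketch of the lattice prisoner's dilemma.

    Strategies: 1 = cooperate (C), 0 = defect (D).
    Simplified payoffs: C meeting C earns 1, D meeting C earns the
    temptation b, every other pairing earns 0.  A randomly chosen player
    adopts a random neighbour's strategy with the Fermi probability
    1 / (1 + exp((P_player - P_neighbour) / K)).
    """
    rng = np.random.default_rng(seed)
    s = rng.integers(0, 2, size=(L, L))        # random initial C/D pattern
    neighbours = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def payoff(i, j):
        total = 0.0
        for di, dj in neighbours:
            nb = s[(i + di) % L, (j + dj) % L]  # periodic boundaries
            if s[i, j] == 1 and nb == 1:
                total += 1.0                    # mutual cooperation
            elif s[i, j] == 0 and nb == 1:
                total += b                      # defector exploits cooperator
        return total

    for _ in range(steps * L * L):              # one MC step = L*L attempts
        i, j = rng.integers(0, L, size=2)
        di, dj = neighbours[rng.integers(0, 4)]
        ni, nj = (i + di) % L, (j + dj) % L
        w = 1.0 / (1.0 + np.exp((payoff(i, j) - payoff(ni, nj)) / K))
        if rng.random() < w:
            s[i, j] = s[ni, nj]                 # adopt neighbour's strategy

    return float(s.mean())                      # stationary density of C
```

Averaging the returned density over late Monte Carlo steps and several seeds, and scanning b, reproduces the qualitative picture of a cooperator density that vanishes continuously at a critical temptation.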
Kolmogorov Complexity and Solovay Functions
Solovay proved that there exists a computable upper bound f of the
prefix-free Kolmogorov complexity function K such that f(x) = K(x) for
infinitely many x. In this paper, we consider the class of computable functions
f such that K(x) <= f(x) + O(1) for all x and f(x) <= K(x) + O(1) for
infinitely many x, which we call Solovay functions. We show that Solovay
functions present interesting connections with randomness notions such as
Martin-Löf randomness and K-triviality.
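K itself is uncomputable, but any lossless compressor yields a computable upper bound on K up to an additive constant. A minimal Python sketch using zlib illustrates that half of the definition (a generic compressor is, of course, not known to be a Solovay function, since its bound need not match K infinitely often):

```python
import os
import zlib

def complexity_upper_bound(x: bytes) -> int:
    """Length in bits of a zlib-compressed description of x: a computable
    upper bound on the prefix-free Kolmogorov complexity K(x), up to an
    additive constant (the size of a fixed decompressor)."""
    return 8 * len(zlib.compress(x, level=9))

highly_regular = b"ab" * 500        # compresses well: small upper bound
random_looking = os.urandom(1000)   # incompressible with high probability
```

Comparing the two bounds shows the expected gap: the periodic string gets a bound far below its length in bits, while the random bytes do not.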
Optimal linear reconstruction of dark matter from halo catalogs
We derive the weight function w(M) to apply to dark-matter halos that
minimizes the stochasticity between the weighted halo distribution and its
underlying mass density field. The optimal w(M) depends on the range of masses
being used in the estimator. In N-body simulations, the Poisson estimator is up
to 15 times noisier than the optimal. Implementation of the optimal weight
yields significantly lower stochasticity than weighting halos by their mass,
by bias, or equally. Optimal weighting could make cosmological tests based on the
matter power spectrum or cross-correlations much more powerful and/or
cost-effective. A volume-limited measurement of the mass power spectrum at
k=0.2h/Mpc over the entire z<1 universe could ideally be done using only 6
million redshifts of halos with mass M>6\times10^{13}h^{-1}M_\odot
(1\times10^{13}) at z=0 (z=1); this is 5 times fewer than the Poisson model
predicts. Using halo occupancy distributions (HOD) we find that
uniformly-weighted catalogs of luminous red galaxies require >3 times more
redshifts than an optimally-weighted halo catalog to reconstruct the mass to
the same accuracy. While the mean HODs of galaxies above a threshold luminosity
are similar to the optimal w(M), the stochasticity of the halo occupation
degrades the mass estimator. Blue or emission-line galaxies are about 100 times
less efficient at reconstructing mass than an optimal weighting scheme. This
suggests an efficient observational approach of identifying and weighting halos
with a deep photo-z survey before conducting a spectroscopic survey. The
optimal w(M) and mass-estimator stochasticity predicted by the standard halo
model for M>10^{12}h^{-1}M_\odot are in reasonable agreement with our
measurements, with the important exception that the halos must be assumed to
be linearly biased samples of a "halo field" that is distinct from the mass
field. (Abridged)Comment: Added Figure 3 to show the scatter between the weighted halo field versus
the mass field. Accepted for publication in MNRAS
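The structure behind such an estimator can be illustrated with a toy least-squares sketch: the weights that minimize the mean squared residual between a weighted sum of binned halo fields and the mass field solve the normal equations built from the bin-bin covariance and the bin-mass cross-covariance. A numpy sketch on synthetic data (the bias and noise values are invented; this is not the paper's w(M)):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy synthetic catalog (all numbers illustrative): three halo mass bins
# tracing one underlying mass field with different linear bias and noise.
n_cells = 100_000
delta_m = rng.normal(size=n_cells)            # mass overdensity field
bias = np.array([1.2, 1.6, 2.4])              # per-bin linear bias
noise = np.array([1.5, 0.8, 0.4])             # per-bin stochastic noise
delta_h = bias[:, None] * delta_m \
    + noise[:, None] * rng.normal(size=(3, n_cells))

# Minimize <|delta_m - w . delta_h|^2>: normal equations C w = r, where
# C is the bin-bin covariance and r the bin-mass cross-covariance.
C = delta_h @ delta_h.T / n_cells
r = delta_h @ delta_m / n_cells
w_opt = np.linalg.solve(C, r)

resid_opt = np.mean((delta_m - w_opt @ delta_h) ** 2)

# Naive comparison: weight every bin equally (crudely normalized by bias).
w_uniform = np.ones(3) / bias.sum()
resid_uniform = np.mean((delta_m - w_uniform @ delta_h) ** 2)
```

Because the uniform weights lie inside the space searched by the least-squares solution, `resid_opt` is never larger than `resid_uniform`; the gap grows with the spread in per-bin noise, mirroring the paper's gain over Poisson weighting.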
The effects of heterogeneity on stochastic cycles in epidemics
Models of biological processes are often subject to different sources of
noise. Developing an understanding of the combined effects of different types
of uncertainty is an open challenge. In this paper, we study a variant of the
susceptible-infective-recovered model of epidemic spread, which combines both
agent-to-agent heterogeneity and intrinsic noise. We focus on epidemic cycles,
driven by the stochasticity of infection and recovery events, and study in
detail how heterogeneity in susceptibilities and propensities to pass on the
disease affects these quasi-cycles. While the system can only be described by a
large hierarchical set of equations in the transient regime, we derive a
reduced closed set of equations for population-level quantities in the
stationary regime. We analytically obtain the spectra of quasi-cycles in the
linear-noise approximation. We find that the characteristic frequency of these
cycles is typically determined by population averages of susceptibilities and
infectivities, but that their amplitude depends on higher-order moments of the
heterogeneity. We also investigate the synchronisation properties and phase lag
between different groups of susceptible and infected individuals.Comment: Main text 16 pages, 9 figures. Supplement 5 pages
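The intrinsic-noise mechanism behind such quasi-cycles can be reproduced with a bare-bones event-driven simulation. A minimal Python sketch using the Gillespie algorithm for a homogeneous SIR model with demographic turnover (the paper's agent-to-agent heterogeneity is deliberately omitted here, and all rates are illustrative):

```python
import random

def gillespie_sir(beta=1.0, gamma=0.2, mu=0.02, N=2000, t_max=300.0, seed=0):
    """Stochastic SIR with births and deaths at rate mu. The deterministic
    limit spirals into a fixed point; the stochastic model instead sustains
    noise-driven quasi-cycles around it. Homogeneous sketch only."""
    rng = random.Random(seed)
    S, I, R = N - 20, 20, 0
    t, trace = 0.0, []
    while t < t_max and I > 0:
        rates = [
            beta * S * I / N,   # infection:    S -> I
            gamma * I,          # recovery:     I -> R
            mu * N,             # birth:        S + 1
            mu * S,             # death of S
            mu * I,             # death of I
            mu * R,             # death of R
        ]
        total = sum(rates)
        t += rng.expovariate(total)       # exponential waiting time
        u, acc = rng.random() * total, 0.0
        for k, rk in enumerate(rates):    # pick event proportionally to rate
            acc += rk
            if u < acc:
                break
        if k == 0:
            S -= 1; I += 1
        elif k == 1:
            I -= 1; R += 1
        elif k == 2:
            S += 1
        elif k == 3:
            S -= 1
        elif k == 4:
            I -= 1
        else:
            R -= 1
        trace.append((t, I))
    return trace
```

Taking the power spectrum of the infected time series from many runs yields the peaked spectra characteristic of quasi-cycles, the homogeneous analogue of the heterogeneous spectra derived in the paper.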
Enhanced goal-oriented error assessment and computational strategies in adaptive reduced basis solver for stochastic problems
This work focuses on providing accurate low-cost approximations of stochastic finite element simulations in the framework of linear elasticity. In a previous work, an adaptive strategy was introduced as an improved Monte Carlo method for multi-dimensional large stochastic problems. We provide here a complete analysis of the method, including a new enhanced goal-oriented error estimator and estimates of the CPU (central processing unit) cost gain. Technical insights into these two topics are presented in detail, and numerical examples show the interest of these new developments.
Economic Theory as Successive Approximations of Statistical Moments
This paper highlights the links between the descriptions of macroeconomic
variables and statistical moments of market trade, price, and return. We
consider economic transactions during the averaging time interval {\Delta} as
the exclusive matter that determines the change of any economic variable. We
regard the stochasticity of market trade values and volumes during {\Delta} as
the only root of the random properties of price and return. We describe how the
market-based n-th statistical moments of price and return during {\Delta}
depend on the n-th statistical moments of trade values and volumes or equally
on sums during {\Delta} of the n-th power of market trade values and volumes.
We introduce the secondary averaging procedure that defines statistical moments
of trade, price, and return during the averaging interval {\Delta}2>>{\Delta}.
As well, the secondary averaging during {\Delta}2>>{\Delta} introduces
statistical moments of macroeconomic variables, which were determined as sums
of economic transactions during {\Delta}. In the coming years, predictions of
the market-based probabilities of price and return will be limited by
Gaussian-type distributions determined by the first two statistical moments. We
discuss the roots of the internal weakness of the conventional hedging tool,
Value-at-Risk, which cannot be resolved and thus remains a source of
additional risks and losses. One should consider economic theory as a set of
successive approximations, each of which describes the next array of the n-th
statistical moments of market transactions and macroeconomic variables, which
are repeatedly averaged during the sequence of increasing time intervals.Comment: 20 pages
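The market-based moments described above have a compact computational form. A Python sketch, assuming the convention that trade value c_i = p_i u_i, so that the n-th price moment over Δ is the ratio of the sums of n-th powers of trade values and volumes (numbers illustrative; note that each n applies a different, volume^n, weighting):

```python
import numpy as np

def price_moment(values, volumes, n):
    """Market-based n-th statistical moment of price over one averaging
    interval: sum(c^n) / sum(u^n).  With c_i = p_i * u_i this equals a
    volume^n-weighted average of p_i^n."""
    c = np.asarray(values, dtype=float)
    u = np.asarray(volumes, dtype=float)
    return np.sum(c ** n) / np.sum(u ** n)

# Trades during one interval Delta (illustrative):
p = np.array([10.0, 10.5, 9.8, 10.2])       # trade prices
u = np.array([100.0, 50.0, 200.0, 150.0])   # trade volumes
c = p * u                                   # trade values

vwap = price_moment(c, u, 1)     # first moment: volume-weighted average price
second = price_moment(c, u, 2)   # second moment, weighted by u^2
```

For n = 1 this reduces to the familiar volume-weighted average price; higher moments weight large trades progressively more heavily, which is why successive approximations of the theory add information that the first two moments alone cannot carry.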