A New Relative Skill Measure for Games with Chance Elements
An interesting aspect of games is the relative extent to which a player can positively influence his results by making appropriate strategic choices. This question is closely related to the issue of how to distinguish between games of skill and games of chance, a distinction that is clearly relevant from a juridical point of view. Borm and Van der Genugten (2001) presented a method to measure the skill level of a game; in principle, their measure can serve as a juridical tool for the classification of games with respect to skill. In this paper we present a modification of the measure. The main difference is that the new definition does not automatically classify incomplete-information games without chance moves as games of skill. We use a coin game and a simplified version of standard draw poker as illustrations.
Keywords: games of skill; games of chance
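The idea behind a relative skill measure of this kind can be illustrated with a toy simulation: compare a random (beginner) player, a player who exploits the game's statistics, and a fictive player who observes the chance moves in advance. The biased-coin game, the bias value, and the function names below are hypothetical choices for illustration, not the games analysed in the paper.

```python
import random

def play(strategy, p_heads=0.7, rounds=20_000, seed=0):
    """Average payoff (1 per correct call) of a guessing strategy against a
    biased coin. The game and the bias p_heads are hypothetical choices."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(rounds):
        flip = "H" if rng.random() < p_heads else "T"
        wins += (strategy(flip, rng) == flip)
    return wins / rounds

beginner    = lambda flip, rng: rng.choice(["H", "T"])  # uninformed random play
optimal     = lambda flip, rng: "H"                     # best response to the known bias
clairvoyant = lambda flip, rng: flip                    # fictive player: sees the chance move

b, o, c = play(beginner), play(optimal), play(clairvoyant)
# Relative skill in the spirit of Borm and Van der Genugten: the learning
# effect (optimal minus beginner) scaled by the gap to a player for whom
# all chance elements are resolved in advance.
skill = (o - b) / (c - b)
print(f"beginner={b:.3f}  optimal={o:.3f}  clairvoyant={c:.3f}  skill={skill:.2f}")
```

A pure chance game gives a skill value near 0 (no strategy beats random play), a pure skill game gives a value near 1 (seeing the chance moves adds nothing); the biased coin lands in between.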
Exact transmission moments in one-dimensional weak localization and single-parameter scaling
We obtain for the first time the expressions for the mean and the variance of
the transmission coefficient for an Anderson chain in the weak localization
regime, using exact expansions of the complex transmission and reflection
coefficients to fourth order in the weakly disordered site energies. These
results confirm the validity of single-parameter scaling theory in a domain
where the higher transmission cumulants may be neglected. We compare our
results with earlier results for transmission cumulants in the weak
localization domain based on the phase randomization hypothesis.
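For a numerical point of comparison, the transmission coefficient of a disordered chain can be computed directly with the standard transfer-matrix method. This is a generic sketch (tight-binding model with hopping set to 1, uniform site-energy disorder of our own choosing), not the paper's fourth-order analytical expansion.

```python
import numpy as np

def transmission(eps, E=1.0):
    """Transmission and reflection coefficients of a tight-binding chain with
    site energies `eps` (hopping 1), via 2x2 transfer matrices."""
    k = np.arccos(E / 2.0)                 # lead dispersion E = 2 cos k
    M = np.eye(2, dtype=complex)
    for e in eps:                          # M = T_N ... T_1
        M = np.array([[E - e, -1.0], [1.0, 0.0]]) @ M
    n = len(eps)
    # Match psi = e^{ikn} + r e^{-ikn} on the left to psi = t e^{ikn} on the right.
    A = np.array([
        [np.exp(1j * k * (n + 1)), -(M[0, 0] * np.exp(-1j * k) + M[0, 1])],
        [np.exp(1j * k * n),       -(M[1, 0] * np.exp(-1j * k) + M[1, 1])],
    ])
    rhs = np.array([M[0, 0] * np.exp(1j * k) + M[0, 1],
                    M[1, 0] * np.exp(1j * k) + M[1, 1]])
    t, r = np.linalg.solve(A, rhs)
    return abs(t) ** 2, abs(r) ** 2

rng = np.random.default_rng(0)
T_clean, _ = transmission(np.zeros(200))   # ordered chain: perfect transmission
Ts = [transmission(rng.uniform(-0.1, 0.1, 200))[0] for _ in range(200)]
print(f"clean T = {T_clean:.6f}, weak-disorder mean T = {np.mean(Ts):.3f}")
```

With this disorder strength the chain is much shorter than the localization length, so the sample mean of T stays close to (but below) 1, which is the weak localization regime the abstract refers to.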
Agnostic notes on regression adjustments to experimental data: Reexamining Freedman's critique
Freedman [Adv. in Appl. Math. 40 (2008) 180-193; Ann. Appl. Stat. 2 (2008)
176-196] critiqued ordinary least squares regression adjustment of estimated
treatment effects in randomized experiments, using Neyman's model for
randomization inference. Contrary to conventional wisdom, he argued that
adjustment can lead to worsened asymptotic precision, invalid measures of
precision, and small-sample bias. This paper shows that in sufficiently large
samples, those problems are either minor or easily fixed. OLS adjustment cannot
hurt asymptotic precision when a full set of treatment-covariate interactions
is included. Asymptotically valid confidence intervals can be constructed with
the Huber-White sandwich standard error estimator. Checks on the asymptotic
approximations are illustrated with data from Angrist, Lang, and Oreopoulos's
[Am. Econ. J.: Appl. Econ. 1:1 (2009) 136--163] evaluation of strategies to
improve college students' achievement. The strongest reasons to support
Freedman's preference for unadjusted estimates are transparency and the dangers
of specification search.
Comment: Published at http://dx.doi.org/10.1214/12-AOAS583 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
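The adjusted estimator described above (OLS with a full set of treatment-covariate interactions, paired with a Huber-White sandwich standard error) can be sketched in a few lines. The data below are synthetic, standing in for a real experiment, and the simple HC0 form of the sandwich is used; both are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
x = rng.normal(size=n)                         # a pre-treatment covariate
t = rng.binomial(1, 0.5, size=n)               # randomized assignment
y = 1.0 + 2.0 * t + 1.5 * x + 0.8 * t * x + rng.normal(size=n)

# Center the covariate so the coefficient on t estimates the average
# treatment effect, then include the full treatment-covariate interaction.
xc = x - x.mean()
X = np.column_stack([np.ones(n), t, xc, t * xc])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Huber-White (HC0) sandwich variance estimator.
resid = y - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
V = bread @ meat @ bread
se_ate = np.sqrt(V[1, 1])
print(f"ATE = {beta[1]:.3f}, sandwich SE = {se_ate:.3f}")  # true ATE is 2.0
```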
On Integration Methods Based on Scrambled Nets of Arbitrary Size
We consider the problem of evaluating $I(f)=\int_{[0,1)^s} f(u)\,\mathrm{d}u$ for a function $f\in L^2[0,1)^s$. In situations where $I(f)$ can be approximated by an estimate of the form $N^{-1}\sum_{n=0}^{N-1} f(u^n)$, with $\{u^n\}_{n=0}^{N-1}$ a point set in $[0,1)^s$, it is now well known that the $O_P(N^{-1/2})$ Monte Carlo convergence rate can be improved by taking for $\{u^n\}_{n=0}^{N-1}$ the first $N=\lambda b^m$ points, $\lambda\in\{1,\dots,b-1\}$, of a scrambled $(t,s)$-sequence in base $b\geq 2$. In this paper we derive a bound for the variance of scrambled net quadrature rules which is of order $o(N^{-1})$ without any restriction on $N$. As a corollary, this bound allows us to provide simple conditions to get, for any pattern of $N$, an integration error of size $o_P(N^{-1/2})$ for functions that depend on the quadrature size $N$. Notably, we establish that sequential quasi-Monte Carlo (M. Gerber and N. Chopin, 2015, \emph{J. R. Statist. Soc. B}, to appear) reaches the $o_P(N^{-1/2})$ convergence rate for any values of $N$. In a numerical study, we show that for scrambled net quadrature rules we can relax the constraint on $N$ without any loss of efficiency when the integrand is a discontinuous function, while for sequential quasi-Monte Carlo, taking $N=\lambda b^m$ may only provide moderate gains.
Comment: 27 pages, 2 figures (final version, to appear in The Journal of Complexity).
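The practical message, that scrambled nets remain useful at an arbitrary sample size, is easy to probe with off-the-shelf tools. A minimal sketch using SciPy's scrambled Sobol generator, with a smooth product integrand and a sample size that is deliberately not a power of 2 (both chosen here for illustration):

```python
import numpy as np
from scipy.stats import qmc

def integrand(u):
    # Smooth product function on [0,1)^s whose integral is exactly 1.
    return np.prod(1.0 + 0.5 * (u - 0.5), axis=1)

s, N, reps = 5, 1000, 50          # N deliberately NOT a power of 2
rng = np.random.default_rng(2)
mc_sqerr, qmc_sqerr = [], []
for i in range(reps):
    u_mc = rng.random((N, s))
    # SciPy warns that balance properties need N = 2^m; an arbitrary N
    # is exactly the regime studied in the paper.
    u_qmc = qmc.Sobol(d=s, scramble=True, seed=i).random(N)
    mc_sqerr.append((integrand(u_mc).mean() - 1.0) ** 2)
    qmc_sqerr.append((integrand(u_qmc).mean() - 1.0) ** 2)

mc_rmse, qmc_rmse = np.sqrt(np.mean(mc_sqerr)), np.sqrt(np.mean(qmc_sqerr))
print(f"RMSE  plain MC: {mc_rmse:.2e}   scrambled Sobol: {qmc_rmse:.2e}")
```

Over independent scramblings, the Sobol estimate's root-mean-square error is well below the plain Monte Carlo error even at this non-dyadic sample size.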
Alternate Sampling Methods for Estimating Multivariate Normal Probabilities
We study the performance of alternative sampling methods for estimating multivariate normal probabilities through the GHK simulator. The sampling methods are randomized versions of some quasi-Monte Carlo samples (Halton, Niederreiter, Niederreiter-Xing sequences and lattice points) and some samples based on orthogonal arrays (Latin hypercube, orthogonal array and orthogonal array based Latin hypercube samples). In general, these samples turn out to have a better performance than Monte Carlo and antithetic Monte Carlo samples. Improvements over these are large for low-dimensional (4 and 10) cases and still significant for dimensions as large as 50.
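A minimal GHK sketch shows how the sampling method enters only through the array of uniforms that drives the simulator, so Monte Carlo and randomized quasi-Monte Carlo points can be swapped freely. The equicorrelated example below (where the orthant probability is exactly 1/4) is our own choice, not one of the paper's test cases.

```python
import numpy as np
from scipy.stats import norm, qmc

def ghk(a, Sigma, u):
    """GHK estimate of P(X < a) for X ~ N(0, Sigma), driven by an (R, d)
    array of uniforms `u` (plain Monte Carlo or randomized QMC points)."""
    L = np.linalg.cholesky(Sigma)
    R, d = u.shape
    eta = np.zeros((R, d))                 # truncated standard-normal draws
    w = np.ones(R)                         # running product of conditional probs
    for j in range(d):
        ub = (a[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        p = norm.cdf(ub)                   # conditional prob of staying below a[j]
        w *= p
        eta[:, j] = norm.ppf(u[:, j] * p)  # inverse-CDF truncated-normal draw
    return w.mean()

Sigma = np.array([[1.0, 0.5, 0.5],
                  [0.5, 1.0, 0.5],
                  [0.5, 0.5, 1.0]])
a = np.zeros(3)                            # true P(X < 0) here is exactly 1/4
R = 4096
est_mc = ghk(a, Sigma, np.random.default_rng(3).random((R, 3)))
est_qmc = ghk(a, Sigma, qmc.Halton(d=3, scramble=True, seed=3).random(R))
print(f"GHK with MC uniforms: {est_mc:.4f}   with scrambled Halton: {est_qmc:.4f}")
```

Replacing the uniform array is all it takes to run the GHK simulator on any of the randomized point sets compared in the paper.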
Generating ambiguity in the laboratory
This article develops a method for drawing samples from which it is impossible to infer any quantile or moment of the underlying distribution. The method provides researchers with a way to give subjects the experience of ambiguity. In any experiment, learning the distribution from experience is impossible for the subjects, essentially because it is impossible for the experimenter. We describe our method mathematically, illustrate it in simulations, and then test it in a laboratory experiment. Our technique does not withhold sampling information, does not assume that the subject is incapable of making statistical inferences, is replicable across experiments, and requires no special apparatus. We compare our method to the techniques used in related experiments that attempt to produce an ambiguous experience for the subjects.
Keywords: ambiguity; Ellsberg; Knightian uncertainty; laboratory experiments; ignorance; vagueness
JEL Classifications: C90; C91; C92; D80; D81