Markov Chain Monte Carlo Based on Deterministic Transformations
In this article we propose a novel MCMC method based on deterministic
transformations T: X x D --> X where X is the state-space and D is some set
which may or may not be a subset of X. We refer to our new methodology as
Transformation-based Markov chain Monte Carlo (TMCMC). One of the remarkable
advantages of our proposal is that even if the underlying target distribution
is very high-dimensional, deterministic transformation of a one-dimensional
random variable is sufficient to generate an appropriate Markov chain that is
guaranteed to converge to the high-dimensional target distribution. Apart from
clearly leading to massive computational savings, this idea of
deterministically transforming a single random variable very generally leads to
excellent acceptance rates, even though all the random variables associated
with the high-dimensional target distribution are updated in a single block.
Since it is well-known that joint updating of many random variables using
Metropolis-Hastings (MH) algorithm generally leads to poor acceptance rates,
TMCMC, in this regard, seems to provide a significant advance. We validate our
proposal theoretically, establishing the convergence properties. Furthermore,
we show that TMCMC can be very effectively adopted for simulating from doubly
intractable distributions.
TMCMC is compared with MH using the well-known Challenger data, demonstrating
the effectiveness of the former in the case of highly correlated variables.
Moreover, we apply our methodology to a challenging posterior simulation
problem associated with the geostatistical model of Diggle et al. (1998),
updating 160 unknown parameters jointly, using a deterministic transformation
of a one-dimensional random variable. Remarkable computational savings as well
as good convergence properties and acceptance rates are the results.
Comment: 28 pages, 3 figures; longer abstract inside article
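The single-draw block update described above can be sketched concretely. The following is a minimal illustration, assuming the additive variant of TMCMC in which every coordinate of the d-dimensional state is moved by plus or minus a scaled copy of one one-dimensional draw; the function names, the half-normal choice for the draw, and the per-coordinate scales are our own illustrative choices, not taken from the paper. For this additive transformation the Jacobian is 1, so the acceptance ratio reduces to pi(x')/pi(x).

```python
import numpy as np

def additive_tmcmc(log_target, x0, n_iter=5000, scales=None, rng=None):
    """Sketch of additive TMCMC: all coordinates are updated in one
    block using a single one-dimensional random draw eps, via
    x_i -> x_i +/- scales[i] * eps with independent random signs.
    The additive move has unit Jacobian, so the Metropolis-style
    acceptance probability is min(1, pi(x') / pi(x))."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    d = x.size
    scales = np.ones(d) if scales is None else np.asarray(scales, dtype=float)
    chain = np.empty((n_iter, d))
    lp = log_target(x)
    accepts = 0
    for t in range(n_iter):
        eps = abs(rng.standard_normal())          # one 1-D draw for all d coordinates
        signs = rng.choice([-1.0, 1.0], size=d)   # independent +/- per coordinate
        prop = x + signs * scales * eps
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, pi'/pi)
            x, lp = prop, lp_prop
            accepts += 1
        chain[t] = x
    return chain, accepts / n_iter

# Toy usage: a strongly correlated 2-D Gaussian target.
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov)
log_target = lambda x: -0.5 * x @ prec @ x
chain, acc = additive_tmcmc(log_target, np.zeros(2), n_iter=2000, rng=0)
```

Note the contrast with block Metropolis-Hastings: only one scalar random variable drives the proposal regardless of the dimension d, which is the source of the computational savings claimed in the abstract.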
Link projections and flypes
Let \Pi be a link projection in S^2. John Conway and later Francis Bonahon
and Larry Siebenmann undertook to split \Pi into canonical pieces. These
pieces received different names: basic or polyhedral diagrams on one hand;
rational, algebraic, bretzel, arborescent diagrams on the other hand. This
paper proposes a thorough presentation of the theory, known to a happy few. We
apply the existence and uniqueness theorem for the canonical decomposition to
the classification of Haseman circles and to the localisation of the flypes.
Evolution of Theories of Mind
This paper studies the evolution of people's models of how other people think -- their theories of mind. First, this is formalized within the level-k model, which postulates a hierarchy of types, such that type k plays a k times iterated best response to the uniform distribution. It is found that, under plausible conditions, lower types co-exist with higher types. The results are extended to a model of learning, in which type k plays a k times iterated best response to the average of past play. The results are also extended to the cognitive hierarchy model, and to the introduction of a type that plays a Nash equilibrium.
Keywords: Theory of Mind; Evolution; Learning; Level-k; Fictitious Play; Cognitive Hierarchy
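The level-k hierarchy can be illustrated with a standard example. The sketch below uses the 2/3-beauty contest, in which level-0 plays uniformly on [0, 100] (mean 50) and each best-response iteration multiplies by 2/3; the specific game and numbers are our illustrative choice, since the abstract states the model abstractly.

```python
def level_k_action(k, target_fraction=2/3, level0_mean=50.0):
    """Action of a level-k player in a beauty contest: apply the
    best response (multiply by target_fraction) k times to the
    level-0 benchmark. k = 0 returns the level-0 mean itself."""
    action = level0_mean
    for _ in range(k):
        action *= target_fraction
    return action

actions = [level_k_action(k) for k in range(4)]
# level-0: 50.0, level-1: 33.33..., level-2: 22.22..., level-3: 14.81...
```

Each additional level of iteration shrinks the action geometrically, which makes the co-existence of low and high types in the population a substantive finding rather than a degenerate one.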
A Unique "Nonnegative" Solution to an Underdetermined System: from Vectors to Matrices
This paper investigates the uniqueness of a nonnegative vector solution and
the uniqueness of a positive semidefinite matrix solution to underdetermined
linear systems. A vector solution is the unique solution to an underdetermined
linear system only if the measurement matrix has a row-span intersecting the
positive orthant. Focusing on two types of binary measurement matrices,
Bernoulli 0-1 matrices and adjacency matrices of general expander graphs, we
show that, in both cases, the support size of a unique nonnegative solution can
grow linearly, namely O(n), with the problem dimension n. We also provide
closed-form characterizations of the ratio of this support size to the signal
dimension. For the matrix case, we show that under a necessary and sufficient
condition for the linear compressed observations operator, there will be a
unique positive semidefinite matrix solution to the compressed linear
observations. We further show that a randomly generated Gaussian linear
compressed observations operator will satisfy this condition with
overwhelmingly high probability.
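The vector case can be checked numerically. Below is a small sketch, assuming a random Bernoulli 0-1 measurement matrix and a sparse nonnegative signal; it recovers a nonnegative solution via nonnegative least squares (`scipy.optimize.nnls`), which is our illustrative solver, not the paper's proof technique. When the nonnegative solution is unique, any zero-residual nonnegative solution must coincide with the true signal; the dimensions chosen here are arbitrary examples.

```python
import numpy as np
from scipy.optimize import nnls  # nonnegative least squares solver

rng = np.random.default_rng(1)
m, n, k = 40, 80, 8                                 # underdetermined: m < n, support size k
A = rng.integers(0, 2, size=(m, n)).astype(float)   # Bernoulli 0-1 measurement matrix

x_true = np.zeros(n)                                # sparse nonnegative signal
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, size=k)
b = A @ x_true                                      # compressed observations

# Find a nonnegative vector fitting the observations; if the
# nonnegative solution is unique, this is the true signal.
x_hat, _ = nnls(A, b)
recovered = np.allclose(x_hat, x_true, atol=1e-6)
```

If the row span of A did not intersect the positive orthant, uniqueness would fail and the computed `x_hat` could be a different nonnegative vector with the same measurements.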
Inferring Algebraic Effects
We present a complete polymorphic effect inference algorithm for an ML-style
language with handlers of not only exceptions, but of any other algebraic
effect such as input & output, mutable references and many others. Our main aim
is to offer the programmer a useful insight into the effectful behaviour of
programs. Handlers help here by cutting down possible effects and the resulting
lengthy output that often plagues precise effect systems. Additionally, we
present a set of methods that further simplify the displayed types, some even
by deliberately hiding inferred information from the programmer.