Independence, Relative Randomness, and PA Degrees
We study pairs of reals that are mutually Martin-L\"{o}f random with respect
to a common, not necessarily computable probability measure. We show that a
generalized version of van Lambalgen's Theorem holds for non-computable
probability measures, too. We study, for a given real $x$, the
\emph{independence spectrum} of $x$, the set of all reals $y$ such that there
exists a probability measure $\mu$ so that $\mu(\{x,y\}) = 0$ and $(x,y)$ is
$\mu \times \mu$-random. We prove that if $x$ is r.e., then no $\Delta^0_2$ set
is in the independence spectrum of $x$. We obtain applications of this fact to
PA degrees. In particular, we show that if $x$ is r.e.\ and $P$ is of PA degree
so that $P \not\geq_T x$, then $x \oplus P \geq_T \emptyset'$.
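For reference, the classical form of van Lambalgen's Theorem (for the computable uniform measure), which the paper generalizes to non-computable measures, can be stated as:

```latex
% Van Lambalgen's Theorem (classical form, uniform measure):
% a join is random iff each half is random relative to the other.
x \oplus y \text{ is Martin-L\"of random}
\iff
x \text{ is Martin-L\"of random and }
y \text{ is Martin-L\"of random relative to } x.
```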
Relaxed Bell inequalities and Kochen-Specker theorems
The combination of various physically plausible properties, such as no
signaling, determinism, and experimental free will, is known to be incompatible
with quantum correlations. Hence, these properties must be individually or
jointly relaxed in any model of such correlations. The necessary degrees of
relaxation are quantified here, via natural distance and information-theoretic
measures. This allows quantitative comparisons between different models in
terms of the resources, such as the number of bits of randomness,
communication, and/or correlation, that they require. For example, measurement
dependence is a relatively strong resource for modeling singlet state
correlations, with only 1/15 of one bit of correlation required between
measurement settings and the underlying variable. It is shown how various
'relaxed' Bell inequalities may be obtained, which precisely specify the
complementary degrees of relaxation required to model any given violation of a
standard Bell inequality. The robustness of a class of Kochen-Specker theorems,
to relaxation of measurement independence, is also investigated. It is shown
that a theorem of Mermin remains valid unless measurement independence is
relaxed by 1/3. The Conway-Kochen 'free will' theorem and a result of Hardy are
less robust, failing if measurement independence is relaxed by only 6.5% and
4.5%, respectively. An appendix shows that the existence of an outcome
independent model is equivalent to the existence of a deterministic model.
Comment: 19 pages (including 3 appendices); v3: minor clarifications, to appear in PR
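As a concrete instance of the "given violation of a standard Bell inequality" that such relaxed models must account for, the quantum singlet correlations E(a,b) = -cos(a-b) violate the CHSH bound of 2, reaching Tsirelson's bound 2*sqrt(2). A minimal sketch (the angle choices are the textbook optimal ones, not values taken from this paper):

```python
import math

def singlet_corr(a, b):
    """Quantum correlation for spin measurements at angles a, b on the singlet state."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')."""
    return (singlet_corr(a, b) + singlet_corr(a, b2)
            + singlet_corr(a2, b) - singlet_corr(a2, b2))

# Angles achieving the Tsirelson bound 2*sqrt(2) for the singlet state.
S = chsh(0.0, math.pi / 2, math.pi / 4, -math.pi / 4)
print(abs(S))  # ≈ 2.828..., exceeding the local-hidden-variable bound of 2
```

Any local deterministic model with full measurement independence is limited to |S| <= 2, which is why some combination of the listed assumptions must be relaxed.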
Consciousness as a State of Matter
We examine the hypothesis that consciousness can be understood as a state of
matter, "perceptronium", with distinctive information processing abilities. We
explore five basic principles that may distinguish conscious matter from other
physical systems such as solids, liquids and gases: the information,
integration, independence, dynamics and utility principles. If such principles
can identify conscious entities, then they can help solve the quantum
factorization problem: why do conscious observers like us perceive the
particular Hilbert space factorization corresponding to classical space (rather
than Fourier space, say), and more generally, why do we perceive the world
around us as a dynamic hierarchy of objects that are strongly integrated and
relatively independent? Tensor factorization of matrices is found to play a
central role, and our technical results include a theorem about Hamiltonian
separability (defined using Hilbert-Schmidt superoperators) being maximized in
the energy eigenbasis. Our approach generalizes Giulio Tononi's integrated
information framework for neural-network-based consciousness to arbitrary
quantum systems, and we find interesting links to error-correcting codes,
condensed matter criticality, and the Quantum Darwinism program, as well as an
interesting connection between the emergence of consciousness and the emergence
of time.
Comment: Replaced to match accepted CSF version; discussion improved, typos corrected. 36 pages, 15 figures
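The Hamiltonian-separability idea can be illustrated with the standard Hilbert-Schmidt decomposition of a two-qubit Hamiltonian into local terms plus an interaction term, whose norm measures how far the system is from factorizing. This is a toy numpy sketch under that standard decomposition, not the paper's own code or full superoperator formalism:

```python
import numpy as np

def interaction_part(H):
    """Split a two-qubit Hamiltonian as H = H1⊗I + I⊗I·c + I⊗H2 + H_int via
    Hilbert-Schmidt orthogonal projection, and return the interaction term H_int."""
    Ht = H.reshape(2, 2, 2, 2)  # indices: (row1, row2, col1, col2)
    I2 = np.eye(2)
    tr = np.trace(H)
    # Traceless local parts obtained from partial traces over the other qubit
    H1 = np.einsum('ijkj->ik', Ht) / 2 - (tr / 4) * I2
    H2 = np.einsum('ijil->jl', Ht) / 2 - (tr / 4) * I2
    H_sep = np.kron(H1, I2) + np.kron(I2, H2) + (tr / 4) * np.eye(4)
    return H - H_sep

# Non-interacting Hamiltonian: the interaction part vanishes
sz = np.diag([1.0, -1.0])
H0 = np.kron(sz, np.eye(2)) + np.kron(np.eye(2), sz)
print(np.linalg.norm(interaction_part(H0)))  # → 0.0 (up to rounding)

# An Ising coupling g·σx⊗σx is recovered as pure interaction
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = H0 + 0.5 * np.kron(sx, sx)
print(np.linalg.norm(interaction_part(H) - 0.5 * np.kron(sx, sx)))  # → 0.0
```

In this language, the paper's separability question asks in which basis (e.g. the energy eigenbasis) the norm of `H_int` is minimized relative to the local parts.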
Review of High-Quality Random Number Generators
This is a review of pseudorandom number generators (RNG's) of the highest
quality, suitable for use in the most demanding Monte Carlo calculations. All
the RNG's we recommend here are based on the Kolmogorov-Anosov theory of mixing
in classical mechanical systems, which guarantees under certain conditions and
in certain asymptotic limits, that points on the trajectories of these systems
can be used to produce random number sequences of exceptional quality. We
outline this theory of mixing and establish criteria for deciding which RNG's
are sufficiently good approximations to the ideal mathematical systems that
guarantee highest quality. The well-known RANLUX (at highest luxury level) and
its recent variant RANLUX++ are seen to meet our criteria, and some of the
proposed versions of MIXMAX can be modified easily to meet the same criteria.
Comment: 21 pages, 4 figures
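The mixing mechanism behind this family of generators can be illustrated, very loosely, with the simplest chaotic area-preserving map of this kind: Arnold's cat map on the unit torus. The sketch below is purely pedagogical and emphatically not a usable generator (floating-point rounding destroys the dynamics, and real RNGs such as RANLUX and MIXMAX use carefully chosen high-dimensional integer analogues with proven mixing properties):

```python
# Toy illustration of Kolmogorov-Anosov mixing: iterate a chaotic
# area-preserving map and read off coordinates as "random" values.
def cat_map_stream(x=0.12345, y=0.67891, n=10):
    """Return n floats from iterates of the cat map (x, y) -> (2x+y, x+y) mod 1."""
    out = []
    for _ in range(n):
        x, y = (2 * x + y) % 1.0, (x + y) % 1.0
        out.append(x)
    return out

print(cat_map_stream(n=5))  # five values in [0, 1)
```

The map stretches and folds the torus at each step, so nearby trajectories separate exponentially; the asymptotic decay of correlations under such dynamics is what the theory uses to certify output quality.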
From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation
In statistical prediction, classical approaches for model selection and model
evaluation based on covariance penalties are still widely used. Most of the
literature on this topic is based on what we call the "Fixed-X" assumption,
where covariate values are assumed to be nonrandom. By contrast, it is often
more reasonable to take a "Random-X" view, where the covariate values are
independently drawn for both training and prediction. To study the
applicability of covariance penalties in this setting, we propose a
decomposition of Random-X prediction error in which the randomness in the
covariates contributes to both the bias and variance components. This
decomposition is general, but we concentrate on the fundamental case of least
squares regression. We prove that in this setting the move from Fixed-X to
Random-X prediction results in an increase in both bias and variance. When the
covariates are normally distributed and the linear model is unbiased, all terms
in this decomposition are explicitly computable, which yields an extension of
Mallows' Cp that we call $RCp$. $RCp$ also holds asymptotically for certain
classes of nonnormal covariates. When the noise variance is unknown, plugging
in the usual unbiased estimate leads to an approach that we call $\widehat{RCp}$,
which is closely related to Sp (Tukey 1967), and GCV (Craven and Wahba 1978).
For excess bias, we propose an estimate based on the "shortcut-formula" for
ordinary cross-validation (OCV), resulting in an approach we call $RCp^+$.
Theoretical arguments and numerical simulations suggest that $RCp^+$ is
typically superior to OCV, though the difference is small. We further examine
the Random-X error of other popular estimators. The surprising result we get
for ridge regression is that, in the heavily-regularized regime, Random-X
variance is smaller than Fixed-X variance, which can lead to smaller overall
Random-X error.
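The "shortcut-formula" for OCV mentioned above is the standard leverage-based identity OCV = (1/n) Σᵢ (rᵢ / (1 − hᵢᵢ))², where rᵢ are the least-squares residuals and hᵢᵢ the diagonal of the hat matrix. A minimal numpy check that it matches brute-force leave-one-out (the data and dimensions here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

# Shortcut formula: OCV = mean( (residual_i / (1 - h_ii))^2 ),
# with h_ii the leverages from the hat matrix H = X (X'X)^{-1} X'.
H = X @ np.linalg.solve(X.T @ X, X.T)
resid = y - H @ y
ocv_shortcut = np.mean((resid / (1 - np.diag(H))) ** 2)

# Brute-force leave-one-out cross-validation for comparison
errs = []
for i in range(n):
    mask = np.arange(n) != i
    beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    errs.append((y[i] - X[i] @ beta) ** 2)
ocv_brute = np.mean(errs)

print(abs(ocv_shortcut - ocv_brute))  # agrees to machine precision
```

The shortcut needs only one fit of the full model, which is what makes the excess-bias estimate built on it cheap to compute.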