10,373 research outputs found
A Cancellation Law for Probabilistic Processes
We show a cancellation property for probabilistic choice. If the distributions $\mu + \rho$ and $\nu + \rho$ are branching probabilistic bisimilar, then the distributions $\mu$ and $\nu$ are also branching probabilistic bisimilar. We do this in the setting of a basic process language involving non-deterministic and probabilistic choice, and define branching probabilistic bisimilarity on distributions. Although the cancellation property is elegant and concise, we were unable to find a short and natural combinatorial proof. Instead, we provide a proof using metric topology. Our major lemma is that every distribution can be unfolded into an equivalent stable distribution, where the topological arguments are required to deal with uncountable branching.
Comment: In Proceedings EXPRESS/SOS2023, arXiv:2309.0578
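As a toy illustration of the flavor of this cancellation law, the sketch below treats the degenerate case where distributions are finite maps from states to probabilities, "+" is read as an even convex combination, and bisimilarity is plain equality of distributions. The names `mix` and `cancel` are illustrative assumptions, not the paper's process calculus, in which the result is far from arithmetic and needs the topological argument described above.

```python
# Toy analogue of the cancellation law: distributions as dicts,
# mu + rho read as an even convex combination, bisimilarity as equality.
# Illustrative only -- not the paper's branching bisimilarity.

def mix(p, q, w=0.5):
    """Convex combination w*p + (1-w)*q of two finite distributions."""
    states = set(p) | set(q)
    return {s: w * p.get(s, 0.0) + (1 - w) * q.get(s, 0.0) for s in states}

def cancel(m, rho, w=0.5):
    """Recover p from m = mix(p, rho): the arithmetic form of cancellation."""
    states = set(m) | set(rho)
    return {s: (m.get(s, 0.0) - (1 - w) * rho.get(s, 0.0)) / w for s in states}

mu  = {'a': 0.7, 'b': 0.3}
nu  = {'a': 0.7, 'b': 0.3}
rho = {'b': 1.0}

# mu + rho equals nu + rho, and mu can be recovered from the mixture.
assert mix(mu, rho) == mix(nu, rho)
recovered = cancel(mix(mu, rho), rho)
assert all(abs(recovered[s] - mu.get(s, 0.0)) < 1e-12 for s in recovered)
```

In the degenerate equality case cancellation is simple algebra; the content of the paper is that the analogous statement survives when equality is weakened to branching probabilistic bisimilarity.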
Local tomography and the Jordan structure of quantum theory
Using a result of H. Hanche-Olsen, we show that (subject to fairly natural
constraints on what constitutes a system, and on what constitutes a composite
system), orthodox finite-dimensional complex quantum mechanics with
superselection rules is the only non-signaling probabilistic theory in which
(i) individual systems are Jordan algebras (equivalently, their cones of
unnormalized states are homogeneous and self-dual), (ii) composites are locally
tomographic (meaning that states are determined by the joint probabilities they
assign to measurement outcomes on the component systems) and (iii) at least one
system has the structure of a qubit. Using this result, we also characterize
finite-dimensional quantum theory among probabilistic theories having the
structure of a dagger-monoidal category.
Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet
Various optimality properties of universal sequence predictors based on
Bayes-mixtures in general, and Solomonoff's prediction scheme in particular,
will be studied. The probability of observing $x_t$ at time $t$, given past
observations $x_1...x_{t-1}$, can be computed with the chain rule if the true
generating distribution $\mu$ of the sequences is known. If $\mu$
is unknown, but known to belong to a countable or continuous class $\mathcal{M}$,
one can base one's prediction on the Bayes-mixture $\xi$ defined as a
$w_\nu$-weighted sum or integral of the distributions $\nu\in\mathcal{M}$. The cumulative
expected loss of the Bayes-optimal universal prediction scheme based on $\xi$
is shown to be close to the loss of the Bayes-optimal, but infeasible,
prediction scheme based on $\mu$. We show that the bounds are tight and that no
other predictor can lead to significantly smaller bounds. Furthermore, for
various performance measures, we show Pareto-optimality of $\xi$ and give an
Occam's razor argument that the choice $w_\nu \sim 2^{-K(\nu)}$ for the weights
is optimal, where $K(\nu)$ is the length of the shortest program describing
$\nu$. The results are applied to games of chance, defined as a sequence of
bets, observations, and rewards. The prediction schemes (and bounds) are
compared to the popular predictors based on expert advice. Extensions to
infinite alphabets, partial, delayed and probabilistic prediction,
classification, and more active systems are briefly discussed.
Comment: 34 pages
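The Bayes-mixture construction above can be sketched for a tiny countable class of Bernoulli sources. The class `thetas`, the dyadic proxy prior standing in for the weights $w_\nu \sim 2^{-K(\nu)}$, and all function names below are assumptions for illustration, not the paper's exact construction.

```python
import math

# Sketch: Bayes-mixture predictor xi over a small class M of Bernoulli
# sources, with a dyadic proxy prior in place of 2^{-K(nu)} weights.

thetas  = [0.5, 0.25, 0.75, 0.125]                 # toy class M
weights = [2.0 ** -(k + 1) for k in range(len(thetas))]
weights = [w / sum(weights) for w in weights]      # normalize the prior

def bernoulli_seq_prob(theta, xs):
    """nu(x_1..x_n) under an i.i.d. Bernoulli(theta) source."""
    p = 1.0
    for x in xs:
        p *= theta if x == 1 else 1.0 - theta
    return p

def xi(xs):
    """Bayes-mixture xi(x_1..x_n) = sum_nu w_nu * nu(x_1..x_n)."""
    return sum(w * bernoulli_seq_prob(t, xs)
               for w, t in zip(weights, thetas))

def predict_next(xs):
    """Predictive probability xi(x_{n+1} = 1 | x_1..x_n), via the chain rule."""
    return xi(list(xs) + [1]) / xi(xs)

# After a 1-heavy sample, the mixture concentrates near theta = 0.75.
data = [1] * 20 + [0] * 5
p1 = predict_next(data)
assert 0.70 < p1 < 0.76
```

With no data, `predict_next([])` returns the prior-weighted mean of the class; as observations accumulate, the mixture's predictions converge to those of the best-matching source, which is the behaviour the loss bounds in the abstract quantify.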
Regularization with Approximated Maximum Entropy Method
We tackle the inverse problem of reconstructing an unknown finite measure $\mu$
from a noisy observation of a generalized moment of $\mu$, defined as the
integral of a continuous and bounded operator $\Phi$ with respect to $\mu$.
When only a quadratic approximation $\Phi_m$ of the operator is known, we
introduce the approximate maximum entropy solution as a minimizer of a
convex functional subject to a sequence of convex constraints. Under several
assumptions on the convex functional, the convergence of the approximate
solution is established and rates of convergence are provided.
Comment: 16 pages
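A minimal sketch of the maximum-entropy idea behind this abstract, under the strong simplifying assumption that the generalized moment is a plain mean constraint on a finite grid (the paper's operator setting, with a noisy and only approximately known $\Phi$, is far more general). The grid `xs`, the observed moment `y`, and all names are hypothetical.

```python
import math

# Maximum-entropy reconstruction on a finite grid, simplified to a
# single mean constraint sum_i p_i * x_i = y. The maximizer is an
# exponential family p_i ~ exp(lam * x_i); we solve for lam by
# bisection, since the mean is increasing in lam.

xs = [0.0, 1.0, 2.0, 3.0]   # support of the unknown measure (assumed)
y  = 1.2                    # observed generalized moment (assumed)

def moment(lam):
    """Mean of the Gibbs distribution p_i proportional to exp(lam * x_i)."""
    ws = [math.exp(lam * x) for x in xs]
    z = sum(ws)
    return sum(w * x for w, x in zip(ws, xs)) / z

lo, hi = -50.0, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    if moment(mid) < y:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

ws = [math.exp(lam * x) for x in xs]
z = sum(ws)
p = [w / z for w in ws]     # the max-entropy measure on the grid

# The reconstruction matches the observed moment.
assert abs(sum(pi * x for pi, x in zip(p, xs)) - y) < 1e-6
```

The paper's contribution concerns what happens when the constraint itself is only approximately known: the approximate maximum-entropy solution replaces the exact constraint with a sequence of convex constraints built from $\Phi_m$, and the results give convergence rates for the resulting minimizers.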