
    A Cancellation Law for Probabilistic Processes

    We show a cancellation property for probabilistic choice. If the distributions $\mu + \rho$ and $\nu + \rho$ are branching probabilistic bisimilar, then the distributions $\mu$ and $\nu$ are also branching probabilistic bisimilar. We do this in the setting of a basic process language involving non-deterministic and probabilistic choice, and define branching probabilistic bisimilarity on distributions. Although the cancellation property is elegant and concise, we did not find a short and natural combinatorial proof. Instead we provide a proof using metric topology. Our main lemma is that every distribution can be unfolded into an equivalent stable distribution, where the topological arguments are required to deal with uncountable branching.
    Comment: In Proceedings EXPRESS/SOS2023, arXiv:2309.0578
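
    Stated compactly (a sketch using assumed notation: $\simeq_b$ for branching probabilistic bisimilarity on distributions and $+$ for probabilistic choice, with the weights left implicit), the cancellation law reads:

    % Cancellation law (notation assumed for illustration only)
    \[
      \mu + \rho \;\simeq_b\; \nu + \rho
      \quad\Longrightarrow\quad
      \mu \;\simeq_b\; \nu .
    \]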

    Local tomography and the Jordan structure of quantum theory

    Using a result of H. Hanche-Olsen, we show that (subject to fairly natural constraints on what constitutes a system, and on what constitutes a composite system) orthodox finite-dimensional complex quantum mechanics with superselection rules is the only non-signaling probabilistic theory in which (i) individual systems are Jordan algebras (equivalently, their cones of unnormalized states are homogeneous and self-dual), (ii) composites are locally tomographic (meaning that states are determined by the joint probabilities they assign to measurement outcomes on the component systems), and (iii) at least one system has the structure of a qubit. Using this result, we also characterize finite-dimensional quantum theory among probabilistic theories having the structure of a dagger-monoidal category.
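
    As a minimal illustration of condition (ii), in the standard generalized-probabilistic-theories formulation (the notation below is an assumption, not taken from the abstract): local tomography requires a bipartite state to be determined by the joint probabilities of local measurement outcomes, which for finite-dimensional state spaces amounts to multiplicativity of dimensions.

    % Local tomography, GPT-style notation (assumed, for illustration):
    % a joint state is fixed by its values on products of local effects a, b,
    \[
      \omega_{AB}(a \otimes b) \;=\; \omega'_{AB}(a \otimes b)
      \ \text{ for all local effects } a,\, b
      \quad\Longrightarrow\quad
      \omega_{AB} = \omega'_{AB},
    \]
    % which, for finite-dimensional systems, is equivalent to
    \[
      \dim V_{AB} \;=\; \dim V_A \cdot \dim V_B .
    \]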

    Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet

    Various optimality properties of universal sequence predictors based on Bayes-mixtures in general, and Solomonoff's prediction scheme in particular, will be studied. The probability of observing $x_t$ at time $t$, given past observations $x_1 \ldots x_{t-1}$, can be computed with the chain rule if the true generating distribution $\mu$ of the sequences $x_1 x_2 x_3 \ldots$ is known. If $\mu$ is unknown, but known to belong to a countable or continuous class $\mathcal{M}$, one can base one's prediction on the Bayes-mixture $\xi$, defined as a $w_\nu$-weighted sum or integral of the distributions $\nu \in \mathcal{M}$. The cumulative expected loss of the Bayes-optimal universal prediction scheme based on $\xi$ is shown to be close to the loss of the Bayes-optimal, but infeasible, prediction scheme based on $\mu$. We show that the bounds are tight and that no other predictor can lead to significantly smaller bounds. Furthermore, for various performance measures, we show Pareto-optimality of $\xi$ and give an Occam's razor argument that the choice $w_\nu \sim 2^{-K(\nu)}$ for the weights is optimal, where $K(\nu)$ is the length of the shortest program describing $\nu$. The results are applied to games of chance, defined as a sequence of bets, observations, and rewards. The prediction schemes (and bounds) are compared to the popular predictors based on expert advice. Extensions to infinite alphabets, partial, delayed and probabilistic prediction, classification, and more active systems are briefly discussed.
    Comment: 34 pages
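
    A hedged, toy sketch of the Bayes-mixture idea (not the paper's construction): the snippet below mixes a small finite class of Bernoulli models with prior weights $w_\nu$, computes the predictive probability $\xi(x_t \mid x_{<t})$ via the chain rule, and updates the weights to posteriors after each observation. The model class, the uniform prior (standing in for the uncomputable choice $w_\nu \sim 2^{-K(\nu)}$), and the log-loss are all placeholder choices.

```python
import math

# Toy Bayes-mixture predictor xi over a finite class of Bernoulli models.
# Illustrative only: the class and the uniform prior are assumptions, standing
# in for the abstract's class M and the weights w_nu ~ 2^{-K(nu)}.
class BayesMixture:
    def __init__(self, thetas):
        self.thetas = list(thetas)                    # Bernoulli parameter of each model nu
        self.weights = [1.0 / len(self.thetas)] * len(self.thetas)  # prior w_nu

    def predict(self, bit):
        """Mixture predictive probability xi(x_t = bit | x_{<t}) via the chain rule."""
        return sum(w * (th if bit == 1 else 1.0 - th)
                   for w, th in zip(self.weights, self.thetas))

    def update(self, bit):
        """Replace the weights by their Bayesian posterior after observing one bit."""
        self.weights = [w * (th if bit == 1 else 1.0 - th)
                        for w, th in zip(self.weights, self.thetas)]
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]

# Cumulative log-loss of xi on a short hand-picked binary sequence.
mixture = BayesMixture(thetas=[0.1, 0.5, 0.9])
loss = 0.0
for x in [1, 1, 0, 1, 1, 1, 0, 1]:
    loss += -math.log(mixture.predict(x))
    mixture.update(x)
print(f"cumulative log-loss of the mixture: {loss:.3f}")
```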

    Regularization with Approximated $L^2$ Maximum Entropy Method

    We tackle the inverse problem of reconstructing an unknown finite measure $\mu$ from a noisy observation of a generalized moment of $\mu$, defined as the integral of a continuous and bounded operator $\Phi$ with respect to $\mu$. When only a quadratic approximation $\Phi_m$ of the operator is known, we introduce the $L^2$ approximate maximum entropy solution as a minimizer of a convex functional subject to a sequence of convex constraints. Under several assumptions on the convex functional, the convergence of the approximate solution is established and rates of convergence are provided.
    Comment: 16 pages
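
    A hedged, discretized sketch of the kind of problem the abstract describes (not the paper's construction): the weights of a measure supported on a fixed grid are recovered by minimizing a convex functional (here negative entropy, as a placeholder) subject to a convex constraint requiring the approximated generalized moment to fit the noisy observation. The grid, the matrix standing in for $\Phi_m$, the noise level, and the tolerance are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Discretized toy version of approximate maximum entropy reconstruction.
# Illustrative sketch only: A is an invented stand-in for the approximated
# operator Phi_m evaluated on a grid, and the measure is normalized to mass 1.
rng = np.random.default_rng(0)
grid_size, n_moments = 30, 5
A = rng.normal(size=(n_moments, grid_size))          # stand-in for Phi_m on the grid
p_true = np.exp(-np.linspace(0.0, 3.0, grid_size))
p_true /= p_true.sum()
y = A @ p_true + 0.01 * rng.normal(size=n_moments)   # noisy generalized moment

def neg_entropy(p):
    # Convex objective: negative Shannon entropy (floor avoids log(0)).
    return float(np.sum(p * np.log(np.maximum(p, 1e-12))))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},                       # total mass
    {"type": "ineq", "fun": lambda p: 0.05 - np.linalg.norm(A @ p - y)},  # moment fit
]
result = minimize(neg_entropy, x0=np.full(grid_size, 1.0 / grid_size),
                  bounds=[(0.0, 1.0)] * grid_size,
                  constraints=constraints, method="SLSQP")
print("reconstruction error:", np.linalg.norm(result.x - p_true))
```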