Improving randomness characterization through Bayesian model selection
Nowadays random number generation plays an essential role in technology with
important applications in areas ranging from cryptography, which lies at the
core of current communication protocols, to Monte Carlo methods, and other
probabilistic algorithms. In this context, a crucial scientific endeavour is to
develop effective methods that allow the characterization of random number
generators. However, commonly employed methods either lack formality (e.g. the
NIST test suite), or are inapplicable in principle (e.g. the characterization
derived from the Algorithmic Theory of Information (ATI)). In this letter we
present a novel method based on Bayesian model selection, which is both
rigorous and effective, for characterizing randomness in a bit sequence. We
derive analytic expressions for each model's likelihood, which are then used to
compute its posterior probability distribution. Our method proves to be more
rigorous than NIST's suite and the Borel-Normality criterion and its
implementation is straightforward. We have applied our method to an
experimental device based on the process of spontaneous parametric
downconversion, implemented in our laboratory, to confirm that it behaves as a
genuine quantum random number generator (QRNG). As our approach relies on
Bayesian inference, which entails model generalizability, our scheme transcends
individual sequence analysis, leading to a characterization of the source of
the random sequences itself. Comment: 25 pages
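The abstract's core mechanism (an analytic marginal likelihood per model, turned into a posterior over models) can be sketched for the simplest case. This is an illustration only, not the paper's actual model family: the two candidate models (a fair coin vs. an unknown bias with a uniform Beta(1,1) prior) and all function names are assumptions of this sketch.

```python
from math import comb

def marginal_likelihood_fair(n):
    # Fair-coin model: every length-n bit sequence has probability 0.5^n.
    return 0.5 ** n

def marginal_likelihood_biased(n, k):
    # Unknown-bias model with a uniform Beta(1,1) prior on the bias p:
    # integral of p^k (1-p)^(n-k) dp over [0,1] = 1 / ((n+1) * C(n,k)).
    return 1.0 / ((n + 1) * comb(n, k))

def posterior_fair(bits):
    # Posterior probability of the fair-coin model under equal prior odds.
    n, k = len(bits), sum(bits)
    m0 = marginal_likelihood_fair(n)
    m1 = marginal_likelihood_biased(n, k)
    return m0 / (m0 + m1)
```

A balanced sequence favors the fair-coin model, while a heavily biased one drives its posterior toward zero; the paper's contribution is doing this rigorously for richer model classes of bit sequences.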
Reconceptualizing the Burden of Proof
The preponderance standard is conventionally described as an absolute probability threshold of 0.5. This Essay argues that this absolute characterization of the burden of proof is wrong. Rather than focusing on an absolute threshold, the Essay reconceptualizes the preponderance standard as a probability ratio and shows how doing so eliminates many of the classical problems associated with probabilistic theories of evidence. Using probability ratios eliminates the so-called Conjunction Paradox, and developing the ratio tests under a Bayesian perspective further explains the Blue Bus problem and other puzzles surrounding statistical evidence. By harmonizing probabilistic theories of proof with recent critiques advocating for abductive models (inference to the best explanation), the Essay bridges a contentious rift in current evidence scholarship.
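The ratio view can be made concrete with Bayes' rule in odds form: posterior odds equal the likelihood ratio times the prior odds, and the preponderance test asks whether the posterior odds exceed 1. A minimal sketch, with function names and the equal-prior-odds default chosen here for illustration (the Essay's own formalization may differ):

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' rule in odds form: O(H | E) = LR * O(H).
    return prior_odds * likelihood_ratio

def preponderance_ratio(p_e_given_claim, p_e_given_denial, prior_odds=1.0):
    # Ratio test: the claim prevails when the posterior odds
    # P(claim | E) / P(denial | E) exceed 1.
    lr = p_e_given_claim / p_e_given_denial
    return posterior_odds(prior_odds, lr) > 1.0
```

Evidence four times more likely under the claim than under its denial satisfies the ratio test at even prior odds, without ever stating an absolute probability threshold.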
Efficient algorithms for conditional independence inference
The topic of the paper is computer testing of (probabilistic) conditional independence (CI) implications by an algebraic method of structural imsets. The basic idea is to transform (sets of) CI statements into certain integral vectors and to verify by a computer the corresponding algebraic relation between the vectors, called the independence implication. We interpret the previous methods for computer testing of this implication from the point of view of polyhedral geometry. However, the main contribution of the paper is a new method, based on linear programming (LP). The new method overcomes the limitation of former methods to the number of involved variables. We recall/describe the theoretical basis for all four methods involved in our computational experiments, whose aim was to compare the efficiency of the algorithms. The experiments show that the LP method is clearly the fastest one. As an example of possible application of such algorithms we show that testing inclusion of Bayesian network structures or whether a CI statement is encoded in an acyclic directed graph can be done by the algebraic method.
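The geometric core of an LP approach of this kind is a cone-membership test: is a target integral vector a nonnegative combination of a given set of generating vectors? The sketch below shows that feasibility test with generic toy vectors; the actual method works with structural imsets over a variable set, and the generators used here are placeholders, not elementary imsets.

```python
import numpy as np
from scipy.optimize import linprog

def in_cone(generators, target):
    """Check whether `target` is a nonnegative combination of the
    columns of `generators`, phrased as an LP feasibility problem:
    find lambda >= 0 with  generators @ lambda == target."""
    A = np.asarray(generators, dtype=float)   # rows: coordinates, cols: generators
    b = np.asarray(target, dtype=float)
    n_gen = A.shape[1]
    res = linprog(c=np.zeros(n_gen),          # constant objective: pure feasibility
                  A_eq=A, b_eq=b,
                  bounds=[(0, None)] * n_gen,
                  method="highs")
    return res.status == 0                    # status 0 means a feasible point was found
```

For example, with generators (1,0,0) and (0,1,0) as columns, (2,3,0) lies in the cone while (0,0,1) does not; the paper's point is that this single LP scales to many variables where enumeration-based methods do not.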
On compound and iterated conditionals
We illustrate the notions of compound and iterated conditionals introduced, in recent papers, as suitable conditional random quantities, in the framework of coherence. We motivate our definitions by examining some concrete examples. Our logical operations among conditional events satisfy the basic probabilistic properties valid for unconditional events. We show that some, intuitively acceptable, compound sentences on conditionals can be analyzed in a rigorous way in terms of suitable iterated conditionals. We discuss the Import-Export principle, which is not valid in our approach, by also examining the inference from a material conditional to the associated conditional event. Then, we illustrate the characterization, in terms of iterated conditionals, of some well known p-valid and non p-valid inference rules.
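The gap between a material conditional and the associated conditional event can be checked numerically on a toy distribution. This sketch (the distribution below is an arbitrary example, not from the paper) computes P(B|A) = P(A∧B)/P(A) and P(A ⊃ B) = P(¬A ∨ B) and confirms that the material conditional is at least as probable, which is why inferring the conditional event from it is not sound in general:

```python
from fractions import Fraction

# A toy distribution over the four truth assignments to events A and B:
# keys are (A, B) truth values, values are probability masses summing to 1.
P = {(True, True): Fraction(1, 10),
     (True, False): Fraction(4, 10),
     (False, True): Fraction(2, 10),
     (False, False): Fraction(3, 10)}

def prob(pred):
    # Probability of the event described by a predicate on (A, B).
    return sum(p for world, p in P.items() if pred(*world))

p_A = prob(lambda a, b: a)                     # P(A) = 1/2
p_cond = prob(lambda a, b: a and b) / p_A      # P(B|A) = P(A and B) / P(A)
p_material = prob(lambda a, b: (not a) or b)   # P(A -> B) = P(not A or B)
```

Here P(B|A) = 1/5 while P(A ⊃ B) = 3/5: the material conditional can be probable even when the conditional event is not.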
A Complete Characterization of Projectivity for Statistical Relational Models
A generative probabilistic model for relational data consists of a family of
probability distributions for relational structures over domains of different
sizes. In most existing statistical relational learning (SRL) frameworks, these
models are not projective in the sense that the marginal of the distribution
for size-n structures on induced sub-structures of size k < n is equal to the
given distribution for size-k structures. Projectivity is very beneficial in
that it directly enables lifted inference and statistically consistent learning
from sub-sampled relational structures. In earlier work some simple fragments
of SRL languages have been identified that represent projective models.
However, no complete characterization of, and representation framework for
projective models has been given. In this paper we fill this gap: exploiting
representation theorems for infinite exchangeable arrays we introduce a class
of directed graphical latent variable models that precisely correspond to the
class of projective relational models. As a by-product we also obtain a
characterization for when a given distribution over size-k structures is the
statistical frequency distribution of size-k sub-structures in much larger
size-n structures. These results shed new light onto the old open problem of
how to apply Halpern et al.'s "random worlds approach" for probabilistic
inference to general relational signatures. Comment: Extended version (with proof appendix) of the paper to appear in
Proceedings of IJCAI 202
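Projectivity can be verified by brute force for a tiny model. The sketch below uses what is arguably the simplest projective relational model, graphs with i.i.d. edges (an assumption of this sketch, not the paper's latent-variable construction): marginalizing the size-3 distribution onto an induced two-node substructure recovers the size-2 distribution exactly.

```python
from itertools import combinations, product
from fractions import Fraction

P_EDGE = Fraction(1, 3)  # illustrative edge probability

def graph_dist(n):
    """Distribution over undirected graphs on nodes 0..n-1 with i.i.d.
    edges: maps a frozenset of node pairs to its probability."""
    pairs = list(combinations(range(n), 2))
    dist = {}
    for present in product([False, True], repeat=len(pairs)):
        p, edges = Fraction(1), set()
        for pair, on in zip(pairs, present):
            if on:
                edges.add(pair)
                p *= P_EDGE
            else:
                p *= 1 - P_EDGE
        dist[frozenset(edges)] = p
    return dist

def marginal(dist, keep):
    """Marginal of `dist` on the substructure induced by node set `keep`."""
    keep, out = set(keep), {}
    for edges, p in dist.items():
        sub = frozenset(e for e in edges if set(e) <= keep)
        out[sub] = out.get(sub, Fraction(0)) + p
    return out

# Projectivity: the size-3 model marginalized onto nodes {0, 1}
# coincides with the size-2 model.
assert marginal(graph_dist(3), {0, 1}) == graph_dist(2)
```

Non-projective SRL models fail exactly this check: their size-n marginals depend on n, which is what blocks lifted inference and consistent learning from sub-samples.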