Computable randomness is about more than probabilities
We introduce a notion of computable randomness for infinite sequences that
generalises the classical version in two important ways. First, our definition
of computable randomness is associated with imprecise probability models, in
the sense that we consider lower expectations (or sets of probabilities)
instead of classical 'precise' probabilities. Secondly, instead of binary
sequences, we consider sequences whose elements take values in some finite
sample space. Interestingly, we find that every sequence is computably random
with respect to at least one lower expectation, and that lower expectations
that are more informative have fewer computably random sequences. This leads to
the intriguing question whether every sequence is computably random with
respect to a unique most informative lower expectation. We study this question
in some detail and provide a partial answer.
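One consequence of the abstract's ordering claim can be illustrated with a small sketch. Genuine computable randomness is defined via computable betting strategies (supermartingales); the check below tests only one necessary symptom, namely that running relative frequencies stay consistent with the probability interval [l, u] that the lower expectation assigns to the outcome 1. The interval models, burn-in length and test sequence are assumptions for illustration, but the sketch does show the effect described above: the vacuous model accepts every sequence, while more informative (tighter) models accept fewer.

```python
# Illustrative sketch only -- frequency consistency is a necessary
# consequence of computable randomness w.r.t. an interval model [l, u],
# not its definition.  burn_in and the sequence are arbitrary choices.

def freq_consistent(seq, l, u, burn_in=500):
    """Check that running frequencies of 1s stay in [l, u] after burn_in."""
    count = 0
    for n, bit in enumerate(seq, start=1):
        count += bit
        if n > burn_in and not (l <= count / n <= u):
            return False
    return True

import random
random.seed(0)
seq = [random.random() < 0.5 for _ in range(10_000)]  # i.i.d. fair bits

print(freq_consistent(seq, 0.0, 1.0))   # vacuous model: every sequence passes
print(freq_consistent(seq, 0.3, 0.7))   # tighter model, still compatible
print(freq_consistent(seq, 0.7, 0.9))   # too informative for this sequence
```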
Jensen's and Cantelli's Inequalities with Imprecise Previsions
We investigate how basic probability inequalities can be extended to an
imprecise framework, where (precise) probabilities and expectations are
replaced by imprecise probabilities and lower/upper previsions. We focus on
inequalities giving information on a single bounded random variable X,
considering either convex/concave functions of X (Jensen's inequalities) or
one-sided bounds such as (X >= c) or (X <= c) (Markov's and Cantelli's
inequalities). As for the consistency of the relevant imprecise uncertainty
measures, our analysis considers coherence as well as weaker requirements,
notably 2-coherence, which often proves to be sufficient. Jensen-like
inequalities are introduced, as well as a generalisation of a recent
improvement to Jensen's inequality. Some of their applications are proposed:
extensions of Lyapunov's inequality and inferential problems. After discussing
upper and lower Markov's inequalities, Cantelli-like inequalities are proven
with different degrees of consistency for the related lower/upper previsions.
In the case of coherent imprecise previsions, the corresponding Cantelli's
inequalities make use of Walley's lower and upper variances, generally ensuring
better bounds.
Comment: Published in Fuzzy Sets and Systems -
https://dx.doi.org/10.1016/j.fss.2022.06.02
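One provable Jensen-like fact in this setting can be checked numerically: for a finite credal set, the lower prevision of a convex function phi(X) dominates the minimum of phi over the interval of attainable expectations (apply classical Jensen under each pmf, then take minima). This is a simpler statement than the inequalities of the paper; the credal set and phi below are arbitrary assumptions for illustration.

```python
# Numerical sanity check, not the paper's exact theorems: for each pmf p,
# E_p[phi(X)] >= phi(E_p[X]), and E_p[X] lies in [mu_lo, mu_hi]; minimising
# over p therefore bounds the lower prevision of phi(X) from below by the
# minimum of phi on [mu_lo, mu_hi].

x = [0.0, 1.0, 2.0, 3.0]                       # values of the bounded variable X
credal_set = [                                  # a small finite credal set of pmfs
    [0.1, 0.2, 0.3, 0.4],
    [0.4, 0.3, 0.2, 0.1],
    [0.25, 0.25, 0.25, 0.25],
]
phi = lambda t: (t - 1.5) ** 2                  # a convex function

def expectation(p, f):
    return sum(pi * f(xi) for pi, xi in zip(p, x))

lower_prev_phi = min(expectation(p, phi) for p in credal_set)
mu_lo = min(expectation(p, lambda t: t) for p in credal_set)
mu_hi = max(expectation(p, lambda t: t) for p in credal_set)

# minimise phi over a fine grid of [mu_lo, mu_hi]
grid = [mu_lo + (mu_hi - mu_lo) * k / 1000 for k in range(1001)]
jensen_bound = min(phi(m) for m in grid)

print(lower_prev_phi >= jensen_bound)           # the Jensen-like bound holds
```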
Consonant Random Sets: Structure and Properties
In this paper, we investigate consonant random sets from the point of view of lattice theory. We introduce a new definition of consonancy and study its relationship with possibility measures as upper probabilities. This allows us to improve a number of results from the literature. Finally, we study the suitability of consonant random sets as models of the imprecise observation of random variables.
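The link between consonancy and possibility measures as upper probabilities can be seen in a minimal finite example: when the focal sets of a random set are nested, its plausibility function is maxitive, i.e. a possibility measure. The universe and masses below are arbitrary assumptions, and this is the textbook Dempster-Shafer picture rather than the paper's lattice-theoretic construction.

```python
# Minimal illustration: a consonant (nested-focal-set) mass function has a
# maxitive plausibility, Pl(A) = max over x in A of Pl({x}).

from itertools import chain, combinations

universe = {"a", "b", "c", "d"}
mass = {                                    # nested focal sets: consonant
    frozenset({"a"}): 0.5,
    frozenset({"a", "b"}): 0.3,
    frozenset({"a", "b", "c"}): 0.2,
}

def plausibility(event):
    """Total mass of focal sets that intersect the event."""
    return sum(m for focal, m in mass.items() if focal & event)

# contour function (possibility distribution): pi(x) = Pl({x})
pi = {x: plausibility({x}) for x in universe}

# check maxitivity on every nonempty event A
events = chain.from_iterable(combinations(sorted(universe), r)
                             for r in range(1, len(universe) + 1))
consonant = all(abs(plausibility(set(A)) - max(pi[x] for x in A)) < 1e-12
                for A in events)
print(consonant)  # nested focal sets yield a maxitive plausibility
```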
A unified view of some representations of imprecise probabilities
Several methods for the practical representation of imprecise probabilities exist, such as Ferson's p-boxes, possibility distributions, Neumaier's clouds, and random sets. In this paper some relationships existing between the four kinds of representations are discussed. A cloud as well as a p-box can be modelled as a pair of possibility distributions. We show that a generalized form of p-box is a special kind of belief function and also a special kind of cloud.
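One of the relationships mentioned above can be sketched directly: a possibility distribution on an ordered finite domain induces a p-box, with the possibility and necessity of the events {y <= x} playing the roles of upper and lower CDF. The finite domain and distribution below are arbitrary assumptions for illustration.

```python
# Sketch: p-box induced by a (normalised) possibility distribution pi.
# F_up(x) = Pi({y <= x}) and F_lo(x) = N({y <= x}) = 1 - Pi({y > x}).

domain = [1, 2, 3, 4, 5]
pi = {1: 0.2, 2: 0.6, 3: 1.0, 4: 0.5, 5: 0.1}   # max is 1.0: normalised

def F_upper(x):
    """Possibility of the event {y <= x}."""
    vals = [pi[y] for y in domain if y <= x]
    return max(vals) if vals else 0.0

def F_lower(x):
    """Necessity of the event {y <= x} = 1 - possibility of its complement."""
    vals = [pi[y] for y in domain if y > x]
    return 1.0 - (max(vals) if vals else 0.0)

pbox = [(x, F_lower(x), F_upper(x)) for x in domain]
for x, lo, up in pbox:
    print(x, lo, up)   # lower CDF never exceeds upper CDF
```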
Other uncertainty theories based on capacities
The two main uncertainty representations in the literature that tolerate imprecision are possibility distributions and random disjunctive sets. This chapter devotes special attention to the theories that have emerged from them. The first part of the chapter discusses epistemic logic and derives the need for capturing imprecision in information representations. It bridges the gap between uncertainty theories and epistemic logic, showing that imprecise probabilities subsume modalities of possibility and necessity as much as probability. The second part presents possibility and evidence theories, their origins, assumptions and semantics, and discusses the connections between them and the general framework of imprecise probability. Finally, the chapter points out the remaining discrepancies between the different theories regarding various basic notions, such as conditioning, independence or information fusion, and the existing bridges between them.
On Sharp Identification Regions for Regression Under Interval Data
The reliable analysis of interval data (coarsened data) is one of the
most promising applications of imprecise probabilities in statistics. If one
refrains from making untestable, and often materially unjustified, strong
assumptions on the coarsening process, then the empirical distribution
of the data is imprecise, and statistical models are, in Manski’s terms,
partially identified. We first elaborate some subtle differences between
two natural ways of handling interval data in the dependent variable of
regression models, distinguishing between two different types of identification
regions, called Sharp Marrow Region (SMR) and Sharp Collection
Region (SCR) here. Focusing on the case of linear regression analysis, we
then derive some fundamental geometrical properties of SMR and SCR,
allowing a comparison of the regions and providing some guidelines for
their canonical construction.
Relying on the algebraic framework of adjunctions of two mappings between
partially ordered sets, we characterize SMR as a right adjoint and
as the monotone kernel of a criterion function based mapping, while SCR
is indeed interpretable as the corresponding monotone hull. Finally we
sketch some ideas on a compromise between SMR and SCR based on a
set-domained loss function.
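The collection-region idea can be sketched in the simplest case: with interval-valued responses, the OLS slope is linear in the response vector y, so the set of OLS fits attainable over all compatible y is the image of a box under a linear map, and its extremes occur at corners of the box. The toy enumeration below only conveys the flavour of a collection region, not the paper's SCR construction; the data are assumptions for illustration.

```python
# Toy sketch: interval of OLS slopes attainable when each response y_i may
# lie anywhere in [y_lo_i, y_hi_i].  The slope is linear in y, so its
# extremes over the box of compatible responses occur at box corners.

from itertools import product

xs = [0.0, 1.0, 2.0, 3.0]                    # single covariate
y_lo = [0.0, 0.8, 1.9, 2.7]                  # interval-valued responses
y_hi = [0.4, 1.2, 2.3, 3.3]

n = len(xs)
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)

def ols_slope(ys):
    ybar = sum(ys) / n
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx

# enumerate all 2^n corners of the interval box for y
slopes = [ols_slope(corner) for corner in product(*zip(y_lo, y_hi))]
print(min(slopes), max(slopes))   # interval of attainable OLS slopes
```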
This paper is an extended version of a shorter paper with the same title,
which is conditionally accepted for publication in the Proceedings of
the Eighth International Symposium on Imprecise Probability: Theories
and Applications. In the present paper we have added proofs and a seventh
chapter with a small Monte Carlo illustration, which would have made the
original paper too long.