Outer Approximations of Coherent Lower Probabilities Using Belief Functions
We investigate the problem of outer approximating a coherent lower probability with a more tractable model. In particular, we focus on outer approximations by belief functions, and show that they can be obtained by solving a linear programming problem. In addition, we consider the subfamily of necessity measures, and show that in that case all the undominated outer approximations can be determined in a simple manner.
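As an illustrative sketch (not taken from the paper), the linear-programming idea can be set up with `scipy.optimize.linprog` on a three-element space: the decision variables are Dempster-Shafer masses, the constraints force the induced belief function to stay below the given coherent lower probability, and the objective here (maximizing total belief, one simple choice among several) pushes toward a tight outer approximation:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

omega = (0, 1, 2)
subsets = [frozenset(c) for r in range(1, len(omega) + 1)
           for c in combinations(omega, r)]          # all nonempty events

# A coherent lower probability: the lower envelope of two probability measures.
p1, p2 = {0: 0.5, 1: 0.3, 2: 0.2}, {0: 0.2, 1: 0.5, 2: 0.3}
def low(A):
    return min(sum(p[x] for x in A) for p in (p1, p2))

# LP variables: one Dempster-Shafer mass m(B) >= 0 per nonempty focal set B.
# Outer approximation: bel(A) = sum_{B subset of A} m(B) <= low(A) for every A.
events = [A for A in subsets if A != frozenset(omega)]
A_ub = np.array([[1.0 if B <= A else 0.0 for B in subsets] for A in events])
b_ub = np.array([low(A) for A in events])
c = -A_ub.sum(axis=0)                                # maximize sum_A bel(A)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=np.ones((1, len(subsets))), b_eq=[1.0])

bel = {A: sum(res.x[i] for i, B in enumerate(subsets) if B <= A) for A in subsets}
```

Putting all mass on the full space is always feasible (the vacuous belief function), so the LP is never infeasible; the objective then trades imprecision away where the constraints allow.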
UNIFYING PRACTICAL UNCERTAINTY REPRESENTATIONS: I. GENERALIZED P-BOXES
There exist several simple representations of uncertainty that are easier to handle than more general ones. Among them are random sets, possibility distributions, probability intervals, and, more recently, Ferson's p-boxes and Neumaier's clouds. For both theoretical and practical reasons, it is very useful to know whether one representation is equivalent to, or can be approximated by, another. In this paper, we define a generalized form of the usual p-boxes. These generalized p-boxes have interesting connections with other previously known representations. In particular, we show that they are equivalent to pairs of possibility distributions, and that they are special kinds of random sets. They are also the missing link between p-boxes and clouds, which are the topic of the second part of this study.
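For readers unfamiliar with p-boxes, here is a minimal illustration using the standard (non-generalized) construction, with illustrative CDF values: a pair of lower/upper cumulative distribution functions induces probability bounds on interval events.

```python
# Lower/upper CDFs of a p-box on the points 0..4 (illustrative values only).
xs = [0, 1, 2, 3, 4]
F_low  = [0.0, 0.1, 0.3, 0.6, 1.0]
F_high = [0.2, 0.4, 0.7, 0.9, 1.0]

def interval_bounds(a, b):
    """Probability bounds for the event (a, b] induced by the p-box."""
    i, j = xs.index(a), xs.index(b)
    lower = max(0.0, F_low[j] - F_high[i])
    upper = min(1.0, F_high[j] - F_low[i])
    return lower, upper

lo, hi = interval_bounds(1, 3)   # [max(0, 0.6 - 0.4), 0.9 - 0.1] = [0.2, 0.8]
```

The generalized p-boxes of the paper replace the ordering of the real line by an arbitrary preorder on the space, which is what links them to pairs of possibility distributions.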
Evaluating uncertainty with Vertical Barrier Models
Vertical Barrier Models (VBMs) are a family of imprecise probability models that generalise a number of well-known distortion/neighbourhood models (such as the Pari-Mutuel Model, the Linear-Vacuous Model, and others) while still being relatively simple. Several of their properties were established in previous works; in this paper we explore, in a finite framework, further facets of these models: their interpretation as neighbourhood models; the structure of their credal set in terms of the maximum number of its extreme points; the result of merging operations with VBMs; the properties of their mass function; the conditions under which VBMs are belief functions or maxitive measures; and the approximation of other models by VBMs.
Addressing ambiguity in randomized reinsurance stop-loss treaties using belief functions
The aim of this paper is to model ambiguity in a randomized reinsurance stop-loss treaty. To this end, we consider the lower envelope of the set of bivariate joint probability distributions having a precise discrete marginal and an ambiguous Bernoulli marginal. Under an independence assumption, since the lower envelope fails 2-monotonicity, inner/outer Dempster-Shafer approximations are considered, so as to select the optimal retention level by maximizing the insurer's lower expected annual profit under reinsurance. We show that the inner approximation is not suitable in the reinsurance problem, while the outer approximation preserves the given marginal information, weakens the independence assumption, and does not introduce spurious information into the retention level selection problem. Finally, we provide a characterization of the optimal retention level.
Discrete time models for bid-ask pricing under Dempster-Shafer uncertainty
As is well known, real financial markets depart from the simplifying hypotheses of classical no-arbitrage pricing theory. In particular, they show the presence of frictions in the form of a bid-ask spread. For this reason, the aim of the thesis is to provide a model able to manage these situations, relying on a non-linear pricing rule defined as a (discounted) Choquet integral with respect to a belief function. Under the partially resolving uncertainty principle, we generalize the first fundamental theorem of asset pricing in the context of belief functions. Furthermore, we show that a generalized arbitrage-free lower pricing rule can be characterized as a (discounted) Choquet expectation with respect to an equivalent inner approximating (one-step) Choquet martingale belief function. Then, we generalize the Choquet pricing rule dynamically: we characterize a reference belief function such that a multiplicative binomial process satisfies a suitable version of the time-homogeneity and Markov properties, and we derive the induced conditional Choquet expectation operator. In a multi-period market with a risky asset admitting a bid-ask spread, we assume that its lower price process is modeled by the proposed time-homogeneous Markov multiplicative binomial process. Here, we generalize the theorem of change of measure, proving the existence of an equivalent one-step Choquet martingale belief function. Then, we prove that the (discounted) lower price process of a European derivative is a one-step Choquet martingale and a k-step Choquet super-martingale, for k ≥ 2.
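The pricing rule above rests on the discrete Choquet integral with respect to a belief function, which can be sketched as follows (the mass function and payoffs are illustrative, not taken from the thesis):

```python
def bel(mass, A):
    """Belief of event A from a Dempster-Shafer mass function (dict: frozenset -> mass)."""
    A = frozenset(A)
    return sum(m for B, m in mass.items() if B <= A)

def choquet(f, capacity):
    """Discrete Choquet integral of f >= 0 (dict: element -> value) w.r.t. a capacity."""
    xs = sorted(f, key=f.get)               # elements in ascending order of f
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        upper = frozenset(xs[i:])           # upper level set {y : f(y) >= f(x)}
        total += (f[x] - prev) * capacity(upper)
        prev = f[x]
    return total

# Illustrative two-state payoff priced against a belief function:
mass = {frozenset({'u'}): 0.5, frozenset({'u', 'd'}): 0.5}
payoff = {'u': 2.0, 'd': 1.0}
price = choquet(payoff, lambda A: bel(mass, A))   # 1*1.0 + 1*0.5 = 1.5
```

Because belief functions are 2-monotone, the resulting functional is super-additive, which is what produces a bid price strictly below the corresponding ask price.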
Special Cases
This chapter reviews special cases of lower previsions that are instrumental in practical applications. We emphasize their various advantages and drawbacks, as well as the kinds of problems in which they can be most useful.
A geometric and game-theoretic study of the conjunction of possibility measures
In this paper, we study the conjunction of possibility measures when they are interpreted as coherent upper probabilities, that is, as upper bounds for some set of probability measures. We identify conditions under which the minimum of two possibility measures remains a possibility measure. We provide a graphical way to check these conditions, by means of a zero-sum game formulation of the problem. This also gives us a convenient way to adjust the initial possibility measures so that their minimum is guaranteed to be a possibility measure. Finally, we identify conditions under which the minimum of two possibility measures is a coherent upper probability, or in other words, conditions under which it is an exact upper bound for the intersection of the credal sets of those two possibility measures.
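A minimal numerical check of the phenomenon the paper studies (the example distributions are illustrative): the pointwise minimum of two possibility measures need not be maxitive, i.e., need not be a possibility measure itself.

```python
from itertools import combinations

def poss(pi, A):
    """Possibility measure induced by distribution pi: Pi(A) = max_{x in A} pi(x)."""
    return max((pi[x] for x in A), default=0.0)

def is_maxitive(nu, omega):
    """A capacity is a possibility measure iff nu(A) equals the max of nu on singletons."""
    events = [set(c) for r in range(1, len(omega) + 1)
              for c in combinations(omega, r)]
    return all(abs(nu(A) - max(nu({x}) for x in A)) < 1e-12 for A in events)

pi1 = {'a': 1.0, 'b': 0.5}
pi2 = {'a': 0.5, 'b': 1.0}
conj = lambda A: min(poss(pi1, A), poss(pi2, A))   # conjunction as pointwise minimum
# conj({'a','b'}) = 1.0 but both singletons give 0.5, so maxitivity fails here.
```

In this example the minimum is still a coherent upper probability; the paper's conditions separate exactly these two situations.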
Valid and efficient imprecise-probabilistic inference with partial priors, II. General framework
Bayesian inference requires specification of a single, precise prior
distribution, whereas frequentist inference only accommodates a vacuous prior.
Since virtually every real-world application falls somewhere in between these
two extremes, a new approach is needed. This series of papers develops a new
framework that provides valid and efficient statistical inference, prediction,
etc., while accommodating partial prior information and imprecisely-specified
models more generally. This paper fleshes out a general inferential model
construction that not only yields tests, confidence intervals, etc.~with
desirable error rate control guarantees, but also facilitates valid
probabilistic reasoning with de~Finetti-style no-sure-loss guarantees. The key
technical novelty here is a so-called outer consonant approximation of a
general imprecise probability which returns a data- and partial prior-dependent
possibility measure to be used for inference and prediction. Despite some
potentially unfamiliar imprecise-probabilistic concepts in the development, the
result is an intuitive, likelihood-driven framework that will, as expected,
agree with the familiar Bayesian and frequentist solutions in the respective
extreme cases. More importantly, the proposed framework accommodates partial
prior information where available and, therefore, leads to new solutions that
were previously out of reach for both Bayesians and frequentists. Details are
presented here for a wide range of examples, with more practical details to
come in later installments.Comment: Follow-up to arXiv:2203.06703. Feedback welcome at
https://researchers.one/articles/22.11.0000
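As a simple analogue of the outer consonant approximation (shown here only for a precise probability, via the classical probability-possibility transform; the paper's construction for general imprecise probabilities and partial priors is considerably more involved):

```python
def outer_consonant(p):
    """Most specific possibility distribution dominating a precise probability p:
    pi(x) = sum of all probability masses not exceeding p(x)."""
    return {x: sum(q for q in p.values() if q <= p[x]) for x in p}

p = {'a': 0.5, 'b': 0.3, 'c': 0.2}
pi = outer_consonant(p)   # -> approximately {'a': 1.0, 'b': 0.5, 'c': 0.2}
```

The induced possibility measure dominates p on every event, so inference based on it can only be more cautious, never misleadingly precise; this is the consonance/validity trade-off the paper exploits.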