16 research outputs found

    Updating beliefs with incomplete observations

    Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete. This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grunwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and expectations, as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. We apply the new approach to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule. In the special case of Bayesian networks constructed using expert knowledge, we provide an exact algorithm for classification based on our updating rule, which has linear-time complexity for a class of networks wider than polytrees. This result is then extended to the more general framework of credal networks, where computations are often much harder than with Bayesian nets. Using an example, we show that our rule appears to provide a solid basis for reliable updating with incomplete observations, when no strong assumptions about the incompleteness mechanism are justified.
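    The conservative updating rule can be sketched numerically in its simplest form: with a vacuous model of the incompleteness mechanism, the posterior probability of a class given an incomplete observation is bounded below and above by the minimum and maximum of the ordinary posteriors over all completions of the missing attributes. A minimal Python sketch; the joint distribution and variable names are hypothetical, chosen only for illustration:

```python
from itertools import product

# Conservative updating, simplest form: the lower/upper posterior of a
# class given an incomplete observation are the min/max of the ordinary
# posteriors over all completions of the missing attributes.
# The joint distribution below is hypothetical, for illustration only.

joint = {  # P(c, a1, a2) over three binary variables
    (0, 0, 0): 0.10, (1, 0, 0): 0.15,
    (0, 0, 1): 0.05, (1, 0, 1): 0.20,
    (0, 1, 0): 0.20, (1, 1, 0): 0.05,
    (0, 1, 1): 0.10, (1, 1, 1): 0.15,
}

def posterior(c, a1, a2):
    """Ordinary Bayesian posterior P(c | a1, a2)."""
    return joint[(c, a1, a2)] / sum(joint[(k, a1, a2)] for k in (0, 1))

def conservative_update(c, observed):
    """Lower/upper posterior of class c given a partial observation.

    `observed` maps attribute position (0 for a1, 1 for a2) to its
    observed value; unobserved attributes range over all completions.
    """
    completions = [comp for comp in product((0, 1), repeat=2)
                   if all(comp[i] == v for i, v in observed.items())]
    probs = [posterior(c, *comp) for comp in completions]
    return min(probs), max(probs)

lo, hi = conservative_update(1, {0: 0})   # a1 = 0 seen, a2 missing
# yields the interval [0.6, 0.8] instead of a single posterior number
```

    When the observation is complete, the set of completions is a singleton and the interval collapses to the usual Bayesian posterior; the width of the interval measures exactly the ignorance about the incompleteness mechanism.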

    Risk measures within the theory of imprecise probabilities

    In mathematical finance, the search for methods and the development of theoretical models for assessing the risks attached to financial positions have recently attracted growing interest. The notion of coherent risk measure has thus become prominent; it was introduced by P. Artzner, F. Delbaen, S. Eber and D. Heath in a series of papers [1, 2, 5] in which these authors identified a set of requirements that, in their view, any risk measure should reasonably satisfy. In this work, after recalling this notion and illustrating its main features in Section 2, we highlight in Section 3 its close connection with the theory of imprecise previsions, following the line introduced in [14]. We then discuss some problems relevant to the theory of coherent risk measures, among them the generalization of the notion of coherence to spaces of bounded random numbers without any structure. Moreover, when a measure is not coherent, the need may arise to determine a "correction" of it, that is, to find a coherent risk measure that is in some sense "close" to it. Analogously, it may be necessary to determine an extension of a coherent risk measure defined on an insufficiently large set of random numbers. These problems, and the corresponding notion of natural extension, are addressed in Section 4. Section 5 illustrates the notion of convex risk measure, a generalization of the concept of coherent risk measure that also allows the so-called liquidity risk to be taken into account, and for which results similar to those obtained for coherent measures are proved with reference to the theory of imprecise previsions.
Finally, Section 6 gives some indications on further developments and on some specific models in which the theory of imprecise previsions is employed in risk measurement. Pelessoni, Renato; Vicig, Paolo
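    The connection with imprecise previsions rests on the fact that a coherent risk measure can be represented as a worst-case expected loss over a set of probability scenarios, an upper prevision of -X in imprecise-probability terms. A minimal numerical sketch; the scenario set and positions below are hypothetical:

```python
# A coherent risk measure admits a representation as the worst-case
# expected loss over a set of probability scenarios -- equivalently, an
# upper prevision of -X. Scenarios and positions are hypothetical.

scenarios = [
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
    [0.10, 0.20, 0.30, 0.40],
]

def rho(x):
    """Coherent risk measure: max over scenarios of E_P[-X]."""
    return max(sum(-p_i * x_i for p_i, x_i in zip(p, x)) for p in scenarios)

x = [-2.0, -1.0, 1.0, 3.0]   # net worths on four states of the world
y = [1.0, 0.5, -0.5, -1.0]

# The coherence axioms hold by construction; check them on this example.
xy = [a + b for a, b in zip(x, y)]
assert rho(xy) <= rho(x) + rho(y) + 1e-12                        # subadditivity
assert abs(rho([a + 1.0 for a in x]) - (rho(x) - 1.0)) < 1e-12   # translation
assert abs(rho([2.0 * a for a in x]) - 2.0 * rho(x)) < 1e-12     # homogeneity
```

    Dropping positive homogeneity and weakening subadditivity to convexity yields the convex risk measures of Section 5, which keep the same envelope flavor but penalize each scenario.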

    Learning from samples using coherent lower previsions

    This thesis's main subject is deriving, proposing, and studying predictive and parametric inference models based on the theory of coherent lower previsions. One important side subject also appears: obtaining and discussing extreme lower probabilities. In the chapter 'Modeling uncertainty', I give an introductory overview of the theory of coherent lower previsions, also called the theory of imprecise probabilities, and its underlying ideas. This theory allows us to give a more expressive, and more cautious, description of uncertainty. This overview is original in the sense that, more than other introductions, it is based on the intuitive theory of coherent sets of desirable gambles. In the chapter 'Extreme lower probabilities', I show how to obtain the most extreme forms of uncertainty that can be modeled using lower probabilities. Every other state of uncertainty describable by lower probabilities can be formulated in terms of these extreme ones. The importance of the results obtained and extensively discussed in this area is currently mostly theoretical. The chapter 'Inference models' treats learning from samples from a finite, categorical space. My most basic assumption about the sampling process is that it is exchangeable, for which I give a novel definition in terms of desirable gambles. My investigation of the consequences of this assumption leads to some important representation theorems: uncertainty about (in)finite sample sequences can be modeled entirely in terms of category counts (frequencies). I build on this to give an elucidating derivation from first principles for two popular inference models for categorical data, the predictive imprecise Dirichlet-multinomial model and the parametric imprecise Dirichlet model; I apply these models to game theory and to learning Markov chains. In the last chapter, 'Inference models for exponential families', I enlarge the scope to exponential-family sampling models; examples are normal sampling and Poisson sampling. I first thoroughly investigate exponential families and the related conjugate parametric and predictive previsions used in classical Bayesian inference models based on conjugate updating. These previsions serve as a basis for the new imprecise-probabilistic inference models I propose. Compared to the classical Bayesian approach, mine allows us to be much more cautious when expressing what we know about the sampling model; this caution is reflected in the behavior (conclusions drawn, predictions made, decisions taken) based on these models. Lastly, I show how the proposed inference models can be used for classification with the naive credal classifier.
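    The predictive side of the imprecise Dirichlet-multinomial model admits simple closed-form bounds: with category counts n_k, sample size N, and hyperparameter s, the lower and upper predictive probabilities that the next observation falls in category k are n_k/(N+s) and (n_k+s)/(N+s). A small sketch with hypothetical counts:

```python
# Imprecise Dirichlet(-multinomial) predictive bounds: with category
# counts n_k, total N, and hyperparameter s (s = 1 and s = 2 are common
# choices), the next observation falls in category k with lower
# probability n_k / (N + s) and upper probability (n_k + s) / (N + s).
# The counts below are hypothetical.

def idm_predictive(counts, s=2.0):
    """Lower/upper predictive probabilities under the imprecise Dirichlet model."""
    N = sum(counts.values())
    lower = {k: n / (N + s) for k, n in counts.items()}
    upper = {k: (n + s) / (N + s) for k, n in counts.items()}
    return lower, upper

counts = {"a": 6, "b": 3, "c": 1}
lower, upper = idm_predictive(counts)
# for "a": [6/12, 8/12]; the interval narrows as more data arrive
```

    Before any data (N = 0) every category gets the vacuous interval [0, 1], which is exactly the cautious prior-ignorance behavior the thesis argues for.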

    An operational approach to graphical uncertainty modelling


    Discrete time models for bid-ask pricing under Dempster-Shafer uncertainty

    As is well known, real financial markets depart from the simplifying hypotheses of classical no-arbitrage pricing theory. In particular, they show the presence of frictions in the form of a bid-ask spread. For this reason, the aim of the thesis is to provide a model able to manage these situations, relying on a non-linear pricing rule defined as a (discounted) Choquet integral with respect to a belief function. Under the partially resolving uncertainty principle, we generalize the first fundamental theorem of asset pricing in the context of belief functions. Furthermore, we show that a generalized arbitrage-free lower pricing rule can be characterized as a (discounted) Choquet expectation with respect to an equivalent inner approximating (one-step) Choquet martingale belief function. Then, we generalize the Choquet pricing rule dynamically: we characterize a reference belief function such that a multiplicative binomial process satisfies a suitable version of the time-homogeneity and Markov properties, and we derive the induced conditional Choquet expectation operator. In a multi-period market with a risky asset admitting a bid-ask spread, we assume that its lower price process is modeled by the proposed time-homogeneous Markov multiplicative binomial process. Here, we generalize the theorem of change of measure, proving the existence of an equivalent one-step Choquet martingale belief function. Then, we prove that the (discounted) lower price process of a European derivative is a one-step Choquet martingale and a k-step Choquet super-martingale, for k ≥ 2.
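    The non-linear pricing rule at the heart of the thesis, a discounted Choquet integral with respect to a belief function, can be illustrated on a finite state space. A minimal sketch; the state space, basic probability assignment, payoff, and risk-free rate are all hypothetical:

```python
# Lower (bid) price of a payoff as a discounted Choquet integral with
# respect to a belief function on a finite state space. The mass
# assignment, payoff, and risk-free rate are hypothetical.

states = ("u", "m", "d")
mass = {  # basic probability assignment: focal sets -> mass
    frozenset({"u"}): 0.3,
    frozenset({"d"}): 0.2,
    frozenset({"u", "m", "d"}): 0.5,
}

def bel(event):
    """Belief of an event: total mass of focal sets contained in it."""
    ev = frozenset(event)
    return sum(m for focal, m in mass.items() if focal <= ev)

def choquet(payoff):
    """Choquet integral of `payoff` with respect to `bel`."""
    total, prev, upper_set = 0.0, 0.0, set()
    for s in sorted(states, key=payoff.get, reverse=True):
        upper_set.add(s)
        b = bel(upper_set)
        total += payoff[s] * (b - prev)
        prev = b
    return total

payoff = {"u": 3.0, "m": 2.0, "d": 0.0}   # derivative payoff per state
r = 0.05                                  # one-period risk-free rate
lower_price = choquet(payoff) / (1.0 + r)
```

    Because a belief function is 2-monotone, this Choquet integral coincides with the lower expectation over all probability measures dominating `bel`, which is what makes it a natural bid-side pricing rule.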

    Addressing ambiguity in randomized reinsurance stop-loss treaties using belief functions

    The aim of the paper is to model ambiguity in a randomized reinsurance stop-loss treaty. For this, we consider the lower envelope of the set of bivariate joint probability distributions having a precise discrete marginal and an ambiguous Bernoulli marginal. Under an independence assumption, since the lower envelope fails 2-monotonicity, inner/outer Dempster-Shafer approximations are considered, so as to select the optimal retention level by maximizing the insurer's lower expected annual profit under reinsurance. We show that the inner approximation is not suitable in the reinsurance problem, while the outer approximation preserves the given marginal information, weakens the independence assumption, and does not introduce spurious information into the retention level selection problem. Finally, we provide a characterization of the optimal retention level.
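    The selection criterion, maximizing the lower expected annual profit over the ambiguous Bernoulli marginal, can be sketched in a deliberately simplified setting. This is not the paper's model with Dempster-Shafer approximations; the claim distribution, premiums, loading, and probability interval below are hypothetical:

```python
# Simplified sketch of retention selection: aggregate claims S have a
# precise discrete distribution, the stop-loss treaty is triggered by a
# Bernoulli variable whose success probability is only known to lie in an
# interval, and the retention level d maximizes the lower expected annual
# profit. All numbers are hypothetical.

claims = {0.0: 0.5, 50.0: 0.3, 150.0: 0.15, 400.0: 0.05}  # P(S = s)
premium_income = 60.0
theta = 0.4                  # reinsurer's safety loading
p_lo, p_hi = 0.6, 0.9        # ambiguous Bernoulli trigger probability

def expect(f):
    return sum(prob * f(s) for s, prob in claims.items())

def lower_expected_profit(d):
    stop_loss = expect(lambda s: max(s - d, 0.0))   # E[(S - d)+]
    retained = expect(lambda s: min(s, d))          # E[min(S, d)]
    total = expect(lambda s: s)                     # E[S]
    reins_premium = (1.0 + theta) * p_hi * stop_loss
    # Expected payout p*retained + (1 - p)*total is linear in p, so the
    # lower envelope over the interval is attained at an endpoint.
    profits = [premium_income - reins_premium
               - (p * retained + (1.0 - p) * total)
               for p in (p_lo, p_hi)]
    return min(profits)

candidates = [0.0, 25.0, 50.0, 100.0, 150.0, 200.0]
best = max(candidates, key=lower_expected_profit)
```

    In this toy setting the lower envelope reduces to a minimum over the two endpoints of the interval; the paper's point is precisely that replacing this crude envelope by an outer Dempster-Shafer approximation keeps the marginal information while restoring 2-monotonicity.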