
    The Goodman-Nguyen Relation within Imprecise Probability Theory

    The Goodman-Nguyen relation is a partial order generalising the implication (inclusion) relation to conditional events. As such, with precise probabilities it both induces an agreeing probability ordering and is a key tool in a certain common extension problem. Most previous work involving this relation is concerned with either conditional event algebras or precise probabilities. We investigate here its role within imprecise probability theory, first in the framework of conditional events and then by proposing a generalisation of the Goodman-Nguyen relation to conditional gambles. It turns out that this relation induces an agreeing ordering on coherent or C-convex conditional imprecise previsions. In a standard inferential problem with conditional events, it lets us determine the natural extension, as well as an upper extension. With conditional gambles, it is useful in deriving a number of inferential inequalities. Comment: Published version: http://www.sciencedirect.com/science/article/pii/S0888613X1400101
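
    For orientation, the Goodman-Nguyen relation is usually defined by A|H ≤_GN B|K if and only if AH ⊆ BK and B^c K ⊆ A^c H. The snippet below is a minimal finite-space sketch of that definition (events represented as sets of atoms); it is an illustration only, not code from the paper, and the example events are made up.

        # Minimal sketch: the Goodman-Nguyen relation on conditional events over a
        # finite possibility space, with events encoded as frozensets of atoms.
        # With H = K = omega it reduces to ordinary inclusion A <= B.

        def gn_leq(A, H, B, K, omega):
            """A|H <=_GN B|K  iff  A∩H ⊆ B∩K  and  (omega \\ B)∩K ⊆ (omega \\ A)∩H."""
            return (A & H) <= (B & K) and ((omega - B) & K) <= ((omega - A) & H)

        omega = frozenset(range(6))
        A, H = frozenset({0, 1}), frozenset({0, 1, 2})
        B, K = frozenset({0, 1, 3}), frozenset({0, 1, 2, 3})
        print(gn_leq(A, H, B, K, omega))   # True for these example events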

    2-coherent and 2-convex Conditional Lower Previsions

    In this paper we explore relaxations of (Williams) coherent and convex conditional previsions that form the families of n-coherent and n-convex conditional previsions, as n varies. We investigate which such previsions are the most general one may reasonably consider, suggesting (centered) 2-convex or, if positive homogeneity and conjugacy are needed, 2-coherent lower previsions. Basic properties of these previsions are studied. In particular, we prove that they satisfy the Generalized Bayes Rule and always have a 2-convex or, respectively, 2-coherent natural extension. The role of these extensions is analogous to that of the natural extension for coherent lower previsions. On the contrary, n-convex and n-coherent previsions with n ≥ 3 either are convex or coherent themselves or have no extension of the same type on large enough sets. Among the uncertainty concepts that can be modelled by 2-convexity, we discuss generalizations of capacities and niveloids to a conditional framework and show that the well-known risk measure Value-at-Risk is only guaranteed to be centered 2-convex. In the final part, we determine the rationality requirements of 2-convexity and 2-coherence from a desirability perspective, emphasising how they weaken those of (Williams) coherence. Comment: This is the authors' version of a work that was accepted for publication in the International Journal of Approximate Reasoning, vol. 77, October 2016, pages 66-86, doi:10.1016/j.ijar.2016.06.003, http://www.sciencedirect.com/science/article/pii/S0888613X1630079
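
    The remark that Value-at-Risk is only guaranteed to be centered 2-convex can be connected to the well-known fact that VaR fails subadditivity (and hence coherence). The numbers and the discrete-quantile convention in the sketch below are assumptions made purely for illustration; it is not an example from the paper.

        # Hypothetical illustration: Value-at-Risk is not subadditive.
        # Losses are discrete distributions given as {loss: probability}.

        def value_at_risk(dist, alpha):
            """Smallest loss x with P(L <= x) >= alpha."""
            total = 0.0
            for loss in sorted(dist):
                total += dist[loss]
                if total >= alpha:
                    return loss
            return max(dist)

        # Two independent positions, each losing 100 with probability 0.04.
        l1 = {0: 0.96, 100: 0.04}
        l2 = {0: 0.96, 100: 0.04}
        # Loss distribution of the aggregated position (independence assumed).
        l12 = {0: 0.96 * 0.96, 100: 2 * 0.96 * 0.04, 200: 0.04 * 0.04}

        alpha = 0.95
        print(value_at_risk(l1, alpha), value_at_risk(l2, alpha))  # 0 0
        print(value_at_risk(l12, alpha))  # 100: exceeds the sum of the individual VaRs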

    Precise Propagation of Upper and Lower Probability Bounds in System P

    In this paper we consider the inference rules of System P in the framework of coherent imprecise probabilistic assessments. Exploiting our algorithms, we propagate the lower and upper probability bounds associated with the conditional assertions of a given knowledge base, automatically obtaining the precise probability bounds for the derived conclusions of the inference rules. This allows a more flexible and realistic use of System P in default reasoning and provides an exact illustration of the degradation of the inference rules when interpreted in probabilistic terms. We also examine the disjunctive Weak Rational Monotony of System P+ proposed by Adams in his extended probability logic. Comment: 8 pages - 8th Intl. Workshop on Non-Monotonic Reasoning NMR'2000, April 9-11, Breckenridge, Colorado
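
    As a concrete instance of the kind of propagation described above, the And rule of System P admits exact probability bounds for its conclusion: from P(B|A) = x and P(C|A) = y, the coherent values of P(B ∧ C|A) range over [max(0, x+y-1), min(x, y)]. The helper below is only an illustrative sketch of this single rule under that standard bound, not the authors' algorithms.

        # Illustrative sketch (not the authors' algorithms): exact bounds for the
        # conclusion of the And rule given point-valued premises on the same
        # antecedent A.

        def and_rule_bounds(x, y):
            lower = max(0.0, x + y - 1.0)   # Lukasiewicz t-norm
            upper = min(x, y)               # minimum t-norm
            return lower, upper

        print(and_rule_bounds(0.9, 0.8))    # (0.7, 0.8): the premises' bounds degrade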

    Weak consistency for imprecise conditional previsions

    In this paper we explore relaxations of (Williams) coherent and convex conditional previsions that form the families of n-coherent and n-convex conditional previsions, as n varies. We investigate which such previsions are the most general one may reasonably consider, suggesting (centered) 2-convex or, if positive homogeneity and conjugacy are needed, 2-coherent lower previsions. Basic properties of these previsions are studied. In particular, centered 2-convex previsions satisfy the Generalized Bayes Rule and always have a 2-convex natural extension. We then discuss the rationality requirements of 2-convexity and 2-coherence from a desirability perspective. Among the uncertainty concepts that can be modelled by 2-convexity, we mention generalizations of capacities and niveloids to a conditional framework.

    A Generalized Notion of Conjunction for Two Conditional Events

    Traditionally the conjunction of conditional events has been defined as a three-valued object. However, in this way classical logical and probabilistic properties are not preserved. In recent literature, a notion of conjunction of two conditional events as a five-valued object satisfying classical probabilistic properties has been studied in depth in the setting of coherence. In this framework the conjunction (A|H) \wedge (B|K) is defined as a conditional random quantity with set of possible values {1,0,x,y,z}, where x=P(A|H), y=P(B|K), and z is the prevision of (A|H) \wedge (B|K). In this paper we propose a generalization of this object, denoted by (A|H) \wedge_{a,b} (B|K), where the values x and y are replaced by two arbitrary values a,b in [0,1]. Then, by means of a geometrical approach, we compute the set of all coherent assessments on the family {A|H, B|K, (A|H) \wedge_{a,b} (B|K)}, showing that in the general case the Fréchet-Hoeffding bounds for the conjunction are not satisfied. We also analyze some particular cases. Finally, we study coherence in the imprecise case of an interval-valued probability assessment and we consider further aspects of (A|H) \wedge_{a,b} (B|K).
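
    For reference, with a = x and b = y the five-valued conjunction recalled above can be spelled out constituent by constituent. The case analysis below follows the usual presentation in the coherence-based literature and is included only as an illustrative sketch; z stands for the prevision of the conjunction itself.

        # Illustrative case analysis of the five-valued conjunction (A|H) /\ (B|K),
        # with x = P(A|H), y = P(B|K), z = prevision of the conjunction.

        def conjunction_value(A, B, H, K, x, y, z):
            if H and K:
                return 1 if (A and B) else 0   # both conditioning events true
            if H and not K:
                return y if A else 0           # A|H true, K false -> y = P(B|K)
            if K and not H:
                return x if B else 0           # B|K true, H false -> x = P(A|H)
            return z                           # both conditioning events false

        # The set of attainable values is {1, 0, x, y, z}:
        print({conjunction_value(A, B, H, K, 0.5, 0.4, 0.3)
               for A in (0, 1) for B in (0, 1) for H in (0, 1) for K in (0, 1)})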

    Quasi Conjunction, Quasi Disjunction, T-norms and T-conorms: Probabilistic Aspects

    We make a probabilistic analysis related to some inference rules which play an important role in nonmonotonic reasoning. In a coherence-based setting, we study the extensions of a probability assessment defined on n conditional events to their quasi conjunction, and by exploiting duality, to their quasi disjunction. The lower and upper bounds coincide with some well known t-norms and t-conorms: minimum, product, Lukasiewicz, and Hamacher t-norms and their dual t-conorms. On this basis we obtain Quasi And and Quasi Or rules. These are rules for which any finite family of conditional events p-entails the associated quasi conjunction and quasi disjunction. We examine some cases of logical dependencies, and we study the relations among coherence, inclusion for conditional events, and p-entailment. We also consider the Or rule, where quasi conjunction and quasi disjunction of premises coincide with the conclusion. We analyze further aspects of quasi conjunction and quasi disjunction, by computing probabilistic bounds on premises from bounds on conclusions. Finally, we consider biconditional events, and we introduce the notion of an n-conditional event. Then we give a probabilistic interpretation for a generalized Loop rule. In an appendix we provide explicit expressions for the Hamacher t-norm and t-conorm in the unit hypercube.
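
    The t-norms and t-conorms named above are standard; for convenience, the sketch below writes out the four pairs (minimum, product, Lukasiewicz, and Hamacher, the latter taken here with parameter 0). These are the textbook formulas, reproduced as a quick reference rather than as material from the paper.

        # Standard t-norms and their dual t-conorms (Hamacher with parameter 0).

        def t_minimum(x, y):      return min(x, y)
        def s_maximum(x, y):      return max(x, y)
        def t_product(x, y):      return x * y
        def s_probsum(x, y):      return x + y - x * y
        def t_lukasiewicz(x, y):  return max(0.0, x + y - 1.0)
        def s_lukasiewicz(x, y):  return min(1.0, x + y)
        def t_hamacher0(x, y):    return 0.0 if x == y == 0 else x * y / (x + y - x * y)
        def s_hamacher0(x, y):    return 1.0 if x * y == 1 else (x + y - 2 * x * y) / (1 - x * y)

        print(t_lukasiewicz(0.7, 0.6), t_hamacher0(0.7, 0.6))  # 0.3 and roughly 0.477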

    Contributions to reasoning on imprecise data

    This thesis contains four contributions which advocate cautious statistical modelling and inference. They achieve this by taking sets of models into account, either directly or indirectly by looking at compatible data situations. Special care is taken to avoid assumptions which are technically convenient, but reduce the uncertainty involved in an unjustified manner. The thesis provides methods for cautious statistical modelling and inference which are able to exhaust the potential of precise and vague data, motivated by different fields of application, ranging from political science to official statistics. First, the inherently imprecise Nonparametric Predictive Inference model is employed in the cautious selection of splitting variables in the construction of imprecise classification trees, which are able to describe a structure and allow for a reasonably high predictive power. Depending on the interpretation of vagueness, different strategies for vague data are then discussed in terms of finite random closed sets: On the one hand, the data to be analysed are regarded as set-valued answers of an item in a questionnaire, where each possible answer corresponding to a subset of the sample space is interpreted as a separate entity. In this way the finite random set is reduced to an (ordinary) random variable on a transformed sample space. The context of application is the analysis of voting intentions, where it is shown that the presented approach is able to characterise the undecided in a more detailed way than common approaches can. Although the presented analysis, regarded as a first step, is carried out on set-valued data that are suitably self-constructed with respect to the scientific research question, it clearly demonstrates that the full potential of this quite general framework is not yet exhausted, and that it is capable of dealing with more complex applications. On the other hand, the vague data are produced by set-valued single imputation (imprecise imputation), where the finite random sets are interpreted as the result of some (unspecified) coarsening. The approach is presented within the context of statistical matching, which is used to gain joint knowledge on features that were not jointly collected in the initial data production. This is especially relevant in data production, e.g. in official statistics, as it allows the information of already accessible data sets to be fused into a new one, without requiring actual data collection in the field. Finally, in order to be shared, data need to be suitably anonymised. For microaggregation, a specific class of anonymisation techniques, the ability to infer on generalised linear regression models from the anonymised data is evaluated. To this end, the microaggregated data are regarded as a set of compatible, unobserved underlying data situations. Two strategies are proposed. First, a maximax-like optimisation strategy is pursued, in which the underlying unobserved data are incorporated into the regression model as nuisance parameters, providing a tight yet over-optimistic estimate of the regression coefficients. Secondly, an approach in terms of partial identification, which is inherently more cautious than the previous one, is applied to estimate the set of all regression coefficients that are obtained by performing the estimation on each compatible data situation.
Vague data are deemed preferable to precise data as they additionally encompass the uncertainty of the individual observation, and therefore have a higher informational value. However, to the present day there are few (credible) statistical models that are able to deal with vague or set-valued data. For this reason, the collection of such data is neglected in data production, preventing such models from exhausting their full potential. This in turn prevents a thorough evaluation, negatively affecting the (further) development of such models. This situation is a variant of the chicken-or-egg dilemma. The ambition of this thesis is to break this cycle by providing concrete methods for dealing with vague data in practically relevant situations, in order to stimulate the required data production.
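
    To make the last two strategies concrete: a toy sketch, entirely illustrative, with made-up numbers and a plain least-squares line instead of the generalised linear models treated in the thesis. Each compatible data situation is one admissible reconstruction of the values hidden by microaggregation; the regression is fitted on each of them, and one either keeps the whole set of coefficients (partial identification) or only the best-fitting one (maximax-like choice).

        # Toy illustration (made-up data): fit a regression on every compatible data
        # situation behind an anonymised release, then compare the two strategies.

        def fit_line(xs, ys):
            """Ordinary least squares for y = a + b*x; returns (a, b, residual sum of squares)."""
            n = len(xs)
            mx, my = sum(xs) / n, sum(ys) / n
            b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
            a = my - b * mx
            rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
            return a, b, rss

        ys = [1.0, 2.1, 2.9, 4.2]              # observed responses
        compatible_xs = [                      # admissible reconstructions of the covariate
            [1.0, 2.0, 3.0, 4.0],
            [1.2, 1.8, 3.1, 3.9],
            [0.9, 2.2, 2.8, 4.1],
        ]

        fits = [fit_line(xs, ys) for xs in compatible_xs]
        coefficient_set = [(a, b) for a, b, _ in fits]    # partial identification
        best_fit = min(fits, key=lambda f: f[2])[:2]      # maximax-like choice
        print(coefficient_set)
        print(best_fit)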

    Probabilistic entailment and iterated conditionals

    In this paper we exploit the notions of conjoined and iterated conditionals, which are defined in the setting of coherence by means of suitable conditional random quantities with values in the interval [0,1]. We examine the iterated conditional (B|K)|(A|H), by showing that A|H p-entails B|K if and only if (B|K)|(A|H) = 1. Then, we show that a p-consistent family \mathcal{F}=\{E_1|H_1, E_2|H_2\} p-entails a conditional event E_3|H_3 if and only if E_3|H_3 = 1, or (E_3|H_3)|QC(\mathcal{S}) = 1 for some nonempty subset \mathcal{S} of \mathcal{F}, where QC(\mathcal{S}) is the quasi conjunction of the conditional events in \mathcal{S}. Then, we examine the inference rules And, Cut, Cautious Monotonicity, and Or of System P and other well known inference rules (Modus Ponens, Modus Tollens, Bayes). We also show that QC(\mathcal{F})|\mathcal{C}(\mathcal{F}) = 1, where \mathcal{C}(\mathcal{F}) is the conjunction of the conditional events in \mathcal{F}. We characterize p-entailment by showing that \mathcal{F} p-entails E_3|H_3 if and only if (E_3|H_3)|\mathcal{C}(\mathcal{F}) = 1. Finally, we examine Denial of the antecedent and Affirmation of the consequent, where the p-entailment of E_3|H_3 from \mathcal{F} does not hold, by showing that (E_3|H_3)|\mathcal{C}(\mathcal{F}) \neq 1.

    Function Approximation Using Probabilistic Fuzzy Systems

    We consider function approximation by fuzzy systems. Fuzzy systems are typically used for approximating deterministic functions, in which the stochastic uncertainty is ignored. We propose probabilistic fuzzy systems i

    Subjective probability, trivalent logics and compound conditionals

    In this work we first illustrate the subjective theory of de Finetti. We recall the notion of coherence for both the betting scheme and the penalty criterion, considering the unconditional and conditional cases. We show the equivalence of the two criteria by giving the geometrical interpretation of coherence. We also consider the notion of coherence based on proper scoring rules. We discuss conditional events in the trivalent logic of de Finetti and the numerical representation of truth-values. We check the validity of selected basic logical and probabilistic properties for some trivalent logics: Kleene-Lukasiewicz-Heyting-de Finetti; Lukasiewicz; Bochvar-Kleene; Sobocinski. We verify that none of these logics satisfies all the properties. Then, we consider our approach to conjunction and disjunction of conditional events in the setting of conditional random quantities. We verify that all the basic logical and probabilistic properties (including the Fréchet-Hoeffding bounds) are preserved in our approach. We also recall the characterization of p-consistency and p-entailment by our notion of conjunction.
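
    As a pointer to what the trivalent comparison involves, the sketch below encodes de Finetti's three-valued reading of a conditional event (true, false, or void) together with two of the conjunction tables compared above: the Kleene one (the minimum with respect to false < void < true) and the Sobocinski one, in which the void value is ignored. The encoding itself is an assumption made for illustration, not the paper's notation.

        # Illustrative encoding of trivalent truth values: 'T' true, 'F' false, 'V' void.

        def definetti(a, h):
            """Truth value of the conditional event A|H from the truth values of A and H."""
            if not h:
                return 'V'                 # conditioning event false: A|H is void
            return 'T' if a else 'F'

        _RANK = {'F': 0, 'V': 1, 'T': 2}

        def kleene_and(u, v):
            """Kleene conjunction: minimum with respect to F < V < T."""
            return u if _RANK[u] <= _RANK[v] else v

        def sobocinski_and(u, v):
            """Sobocinski conjunction: the void value acts as a neutral element."""
            if u == 'V':
                return v
            if v == 'V':
                return u
            return kleene_and(u, v)

        # A|H void and B|K true: the two conjunctions already disagree.
        print(kleene_and('V', 'T'), sobocinski_and('V', 'T'))  # V T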