
    On the semantics of fuzzy logic

    This paper presents a formal characterization of the major concepts and constructs of fuzzy logic in terms of notions of distance, closeness, and similarity between pairs of possible worlds. The formalism is a direct extension (by recognition of multiple degrees of accessibility, conceivability, or reachability) of the major modal-logic concepts of possible and necessary truth.

    Given a function that maps pairs of possible worlds to a number between 0 and 1, generalizing the conventional concept of an equivalence relation, the major constructs of fuzzy logic (conditional and unconditioned possibility distributions) are defined in terms of this similarity relation using familiar concepts from the mathematical theory of metric spaces. This interpretation differs in nature and character from the typical, chance-oriented meanings associated with probabilistic concepts, which are grounded in the mathematical notion of set measure. The similarity structure defines a topological notion of continuity in the space of possible worlds (and in that of its subsets, i.e., propositions) that allows a form of logical “extrapolation” between possible worlds.

    This logical extrapolation operation corresponds to the major deductive rule of fuzzy logic — the compositional rule of inference, or generalized modus ponens, of Zadeh — an inferential operation that generalizes its classical counterpart by virtue of its ability to be applied when the propositions representing available evidence match the antecedents of conditional propositions only approximately. The relations between the similarity-based interpretation of the role of conditional possibility distributions and the approximate inferential procedures of Baldwin are also discussed.

    A straightforward extension of the theory to the case where the similarity scale is symbolic rather than numeric is described. The problem of generating similarity functions from a given set of possibility distributions, with the latter interpreted as defining a number of (graded) discernibility relations and the former as the result of combining them into a joint measure of distinguishability between possible worlds, is briefly discussed.
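The generalized modus ponens mentioned above can be sketched as a sup-min composition. The universes, the fuzzy relation, and the membership values below are illustrative assumptions, not taken from the paper:

```python
# A minimal sketch of Zadeh's compositional rule of inference
# (generalized modus ponens): B'(y) = sup_x min(A'(x), R(x, y)).
# The rule, relation R, and evidence A' below are made-up examples.

def compositional_rule(a_prime, relation):
    """Sup-min composition of a fuzzy set A' with a fuzzy relation R."""
    ys = {y for (_, y) in relation}
    return {
        y: max(min(a_prime.get(x, 0.0), r)
               for (x, yy), r in relation.items() if yy == y)
        for y in ys
    }

# Fuzzy rule "if x is Tall then y is Heavy", encoded as a relation R.
R = {("tall", "heavy"): 1.0, ("tall", "light"): 0.2,
     ("short", "heavy"): 0.3, ("short", "light"): 1.0}

# Evidence that only approximately matches the antecedent "Tall":
A_prime = {"tall": 0.8, "short": 0.1}

B_prime = compositional_rule(A_prime, R)
print(B_prime)  # {'heavy': 0.8, 'light': 0.2} (key order may vary)
```

Because the evidence matches the antecedent only to degree 0.8, the conclusion "Heavy" is inferred only to that same degree, which is exactly the approximate-matching behaviour the abstract describes.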

    Nonparametric regression analysis of uncertain and imprecise data using belief functions

    This paper introduces a new approach to regression analysis based on a fuzzy extension of belief function theory. For a given input vector x, the method provides a prediction regarding the value of the output variable y, in the form of a fuzzy belief assignment (FBA), defined as a collection of fuzzy sets of values with associated masses of belief. The output FBA is computed using a nonparametric, instance-based approach: training samples in the neighborhood of x are considered as sources of partial information on the response variable; the pieces of evidence are discounted as a function of their distance to x, and pooled using Dempster’s rule of combination. The method can cope with heterogeneous training data, including numbers, intervals, fuzzy numbers, and, more generally, fuzzy belief assignments, a convenient formalism for modeling unreliable and imprecise information provided by experts or multi-sensor systems. The performance of the method is compared to that of standard regression techniques on several simulated data sets.
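The discount-then-combine pooling step can be sketched as follows. The three-element frame for y and the hand-picked discount rates (standing in for the paper's distance-based weighting) are illustrative assumptions:

```python
# Hedged sketch of evidence pooling for instance-based regression:
# each neighbor contributes a mass function on a coarse frame of
# y-values, is discounted by its reliability (here a stand-in for
# distance to the query x), and the results are fused with
# Dempster's rule. All numbers below are made up for illustration.

FRAME = frozenset({"low", "mid", "high"})

def discount(mass, alpha):
    """Shafer discounting: keep alpha of each mass, move the rest to the frame."""
    out = {focal: alpha * m for focal, m in mass.items()}
    out[FRAME] = out.get(FRAME, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule: conjunctive combination, then conflict normalization."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    k = 1.0 - conflict  # assumes the sources are not totally conflicting
    return {focal: m / k for focal, m in combined.items()}

# A close neighbor strongly supports "mid"; a far one weakly supports "high".
m_near = discount({frozenset({"mid"}): 1.0}, alpha=0.9)
m_far  = discount({frozenset({"high"}): 1.0}, alpha=0.4)

pooled = dempster(m_near, m_far)
```

The closer neighbor dominates the pooled assignment, while residual mass on the whole frame records what remains unknown.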

    An introduction to DSmT

    The management and combination of uncertain, imprecise, fuzzy, and even paradoxical or highly conflicting sources of information has always been, and remains today, of primary importance for the development of reliable modern information systems involving artificial reasoning. In this introduction, we present a survey of our recent theory of plausible and paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed for dealing with imprecise, uncertain, and conflicting sources of information. We focus our presentation on the foundations of DSmT and on its most important rules of combination, rather than on surveying the specific applications of DSmT available in the literature. Several simple examples are given throughout to show the efficiency and generality of this new approach.
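The contrast with Dempster's rule can be illustrated by the classical DSm rule on the free DSm model: conflicting mass is not normalized away but assigned to paradoxical intersections such as θ1 ∩ θ2. The encoding of a conjunction of hypotheses as a frozenset of their names, and all masses below, are assumptions of this sketch:

```python
# Illustrative sketch of the classical DSm rule of combination on the
# free DSm model: m(C) = sum over A ∩ B = C of m1(A)·m2(B), where
# intersections are retained as elements of the hyper-power set rather
# than discarded as empty. A conjunction of hypotheses is encoded here
# as the frozenset of their names, so set union implements ∩.

def dsm_classic(m1, m2):
    """Classical DSm combination: no normalization, conflict kept as mass."""
    out = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a | b  # conjunction θ_a ∩ θ_b, encoded as union of name sets
            out[c] = out.get(c, 0.0) + ma * mb
    return out

# Two sources over a two-hypothesis frame (made-up numbers):
m1 = {frozenset({"th1"}): 0.6, frozenset({"th2"}): 0.4}
m2 = {frozenset({"th1"}): 0.5, frozenset({"th2"}): 0.5}

m12 = dsm_classic(m1, m2)
# 0.3 on th1, 0.2 on th2, and 0.5 on the paradoxical element th1 ∩ th2.
```

Where Dempster's rule would redistribute the 0.5 of conflicting mass, DSmT keeps it on th1 ∩ th2, making the degree of conflict between the sources explicit.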

    How to Treat Expert Judgment? With certainty it contains uncertainty!

    To be acceptably safe, one must identify the risks one is exposed to. Whether a threat will actually materialize is uncertain, and determining the size and probability of the risk is itself full of uncertainty. When performing an analysis and preparing for decision making under uncertainty, failure rate data, information on consequence severity, or a probability value, indeed even on whether an event can occur at all, is quite frequently lacking. In those cases, the only way to proceed is to revert to expert judgment. Even when historical data are available but one would like to know whether they still hold in the current situation, an expert can be asked about their reliability. In any case, expert elicitation comes with an uncertainty that depends on the expert’s reliability, which becomes very visible when two or more experts give different or even conflicting answers. This is not a new problem, and very bright minds have thought about how to tackle it. So far, however, the topic has not been given much attention in process safety and risk assessment. This paper has a review character and presents various approaches with detailed explanation and examples.
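One classical family of approaches the review alludes to is mathematical aggregation of conflicting expert assessments. As a hedged illustration (the experts, weights, and probabilities below are invented), the linear opinion pool averages expert distributions weighted by reliability scores, in the spirit of performance-based weighting schemes:

```python
# A minimal sketch of the linear opinion pool: each expert's
# probability assessment is weighted by a reliability score, and the
# weighted average is taken event by event. All numbers are made up.

def linear_pool(assessments, weights):
    """Reliability-weighted average of expert probability distributions."""
    total = sum(weights)
    events = assessments[0].keys()
    return {e: sum(w * a[e] for a, w in zip(assessments, weights)) / total
            for e in events}

# Two experts give conflicting failure-probability assessments;
# expert A is judged twice as reliable as expert B.
expert_a = {"failure": 0.10, "no_failure": 0.90}
expert_b = {"failure": 0.30, "no_failure": 0.70}

pooled = linear_pool([expert_a, expert_b], weights=[2.0, 1.0])
# failure ≈ 0.167, no_failure ≈ 0.833
```

The pooled estimate sits between the conflicting answers, pulled toward the more reliable expert, which is the basic behaviour any aggregation scheme for elicited judgments must exhibit.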

    Complementary Lipschitz continuity results for the distribution of intersections or unions of independent random sets in finite discrete spaces

    We prove that intersections and unions of independent random sets in finite spaces achieve a form of Lipschitz continuity. More precisely, given the distribution of a random set Ξ, the function mapping any random set distribution to the distribution of its intersection with Ξ (under an independence assumption) is Lipschitz continuous with unit Lipschitz constant if the space of random set distributions is endowed with a metric defined as the L_k norm distance between inclusion functionals, also known as commonalities. Moreover, the function mapping any random set distribution to the distribution of its union with Ξ (under an independence assumption) is Lipschitz continuous with unit Lipschitz constant if the space of random set distributions is endowed with a metric defined as the L_k norm distance between hitting functionals, also known as plausibilities. Using the epistemic random set interpretation of belief functions, we also discuss the ability of these distances to yield conflict measures. All the proofs in this paper are derived in the framework of Dempster-Shafer belief functions. Apart from the discussion of conflict measures, it is straightforward to transcribe the proofs into the general (not necessarily epistemic) random set terminology.
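The intersection result rests on a standard factorization: for independent random sets the commonality of the intersection is the pointwise product of the commonalities, so since every commonality is at most 1, |Q1·Q − Q2·Q| ≤ |Q1 − Q2| and the Lipschitz constant is 1. A numerical sketch of the factorization on a two-element frame (the frame and mass functions are illustrative assumptions):

```python
# Numerical check of Q_{X∩Y}(A) = Q_X(A)·Q_Y(A) for independent random
# sets on a small finite space, the key property behind the unit
# Lipschitz constant for the commonality metric. Masses are made up.

from itertools import chain, combinations

FRAME = frozenset({"a", "b"})

def subsets(s):
    xs = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(len(xs) + 1))]

def commonality(m):
    """Inclusion functional Q(A) = sum of m(B) over all B containing A."""
    return {A: sum(v for B, v in m.items() if A <= B) for A in subsets(FRAME)}

def conjunctive(m1, m2):
    """Distribution of the intersection of two independent random sets."""
    out = {}
    for A, ma in m1.items():
        for B, mb in m2.items():
            out[A & B] = out.get(A & B, 0.0) + ma * mb
    return out

m1 = {frozenset({"a"}): 0.3, FRAME: 0.7}
m2 = {frozenset({"b"}): 0.5, FRAME: 0.5}

q1, q2 = commonality(m1), commonality(m2)
q12 = commonality(conjunctive(m1, m2))
assert all(abs(q12[A] - q1[A] * q2[A]) < 1e-12 for A in subsets(FRAME))
```

The union result is the mirror image, with hitting functionals (plausibilities) multiplying under independent union instead.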

    Epistemic Uncertainty Quantification in Scientific Models

    In the field of uncertainty quantification (UQ), epistemic uncertainty often refers to the kind of uncertainty whose complete probabilistic description is not available, largely due to our lack of knowledge about the uncertainty. Quantifying the impact of epistemic uncertainty is naturally difficult, because most existing stochastic tools rely on the specification of probability distributions and thus do not readily apply to epistemic uncertainty, and few studies and methods address it. A recent contribution is [J. Jakeman, M. Eldred, D. Xiu, Numerical approach for quantification of epistemic uncertainty, J. Comput. Phys. 229 (2010) 4648-4663], which proposed a framework for the numerical treatment of epistemic uncertainty. In this paper, we first present a new method, similar to that of Jakeman et al. but significantly extending its capabilities. Most notably, the new method (1) does not require the encapsulation problem to be posed in a bounded domain such as a hypercube; and (2) does not require the solution of the encapsulation problem to converge point-wise. In the current formulation, the encapsulation problem may reside in an unbounded domain and, more importantly, its numerical approximation may be sought in the Lp norm. These features make the new approach more flexible and amenable to practical implementation. Both the mathematical framework and numerical analysis are presented to demonstrate the effectiveness of the new approach. We then apply the method to one of the more restrictive uncertainty models, fuzzy logic, where the p-distance and the weighted expected value and variance are defined to assess the accuracy of the solutions. Finally, we give a brief introduction to our future work: epistemic uncertainty quantification using evidence theory.
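The encapsulation idea can be conveyed with a toy example. This is a loose sketch, not the paper's formulation: the model u(z) and the interval are invented, and only the basic pattern is shown, namely solving over a set that contains every admissible value of the epistemic parameter so that any later probabilistic description can be post-processed from those solutions:

```python
# Loose sketch of an encapsulation-style treatment of an epistemic
# parameter z for which only a bounding interval is known. The toy
# model and interval are illustrative assumptions.

def model(z):
    return z ** 2 + 1.0  # toy response u(z)

# Epistemic parameter: only the bound z in [-1, 2] is known.
grid = [-1.0 + i * 0.01 for i in range(301)]

# "Solve the encapsulation problem": evaluate u over the whole set,
# so the solution covers every admissible value of z.
u = [model(z) for z in grid]

# With no distribution on z, only response bounds are justified:
u_min, u_max = min(u), max(u)  # here 1.0 and 5.0
```

If a distribution on z later becomes available, statistics of u can be computed by reweighting the stored solutions instead of re-solving the model.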

    Incorporating knowledge uncertainty into species distribution modelling

    Monitoring progress towards global goals and biodiversity targets requires reliable descriptions of species distributions over time and space. Current gaps in accessible information on species distributions underscore the need to integrate all available data and knowledge sources and to intensify cooperation to more effectively support global environmental governance. For many areas and species groups, experts can constitute a valuable source of information to fill the gaps by offering their knowledge on species-environment interactions. However, expert knowledge is always subject to uncertainty, and incorporating it into species distribution mapping poses a challenge. We propose the use of the Dempster-Shafer theory of evidence (DST) as a novel approach in this field to extract expert knowledge, to incorporate the associated uncertainty into the procedure, and to produce reliable species distribution maps. We applied DST to model the distribution of two species of eagle in Spain. We invited experts to fill in an online questionnaire and express their beliefs about the habitat of the species by assigning probability values for given environmental variables, along with their confidence in expressing those beliefs. We then calculated evidential functions and combined them using Dempster’s rule of combination to map the species distribution based on the experts’ knowledge. We evaluated the performance of our proposed approach using the atlas of Spanish breeding birds as an independent test dataset, and further compared the results with the outcome of an ensemble of conventional species distribution models (SDMs). Purely on the basis of expert knowledge, the DST approach yielded results similar to those of the data-driven SDM ensemble. Our proposed approach offers a strong and practical alternative for species distribution modelling when species occurrence data are not accessible or reliable, or both. Its particular strengths are that it explicitly accounts for and aggregates knowledge uncertainty, and that it capitalizes on the range of data sources usually considered by an expert.
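The combination step for a single map cell can be sketched as follows. The two-element frame {present, absent} and all mass values are illustrative assumptions; in the paper, the evidential functions come from the questionnaire, with each expert's confidence carried as mass on the whole frame:

```python
# Hedged sketch of fusing two experts' evidence about one map cell
# with Dempster's rule, then summarizing the remaining knowledge
# uncertainty as a belief/plausibility interval for "present".
# Frame and numbers are made up for illustration.

P, A = frozenset({"present"}), frozenset({"absent"})
FRAME = P | A

def dempster(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    out, conflict = {}, 0.0
    for x, mx in m1.items():
        for y, my in m2.items():
            z = x & y
            if z:
                out[z] = out.get(z, 0.0) + mx * my
            else:
                conflict += mx * my
    return {z: m / (1.0 - conflict) for z, m in out.items()}

expert1 = {P: 0.6, A: 0.1, FRAME: 0.3}   # fairly confident: present
expert2 = {P: 0.4, A: 0.2, FRAME: 0.4}   # less confident

m = dempster(expert1, expert2)
belief_present = m.get(P, 0.0)                       # lower bound
plaus_present = sum(v for z, v in m.items() if z & P)  # upper bound
```

The gap between belief and plausibility is the cell's residual knowledge uncertainty, which is exactly what the approach propagates into the final map instead of discarding.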