3 research outputs found

    Modeling Uncertain and Vague Knowledge in Possibility and Evidence Theories

    No full text
    This chapter discusses the usefulness of new theories of uncertainty for modeling some facets of uncertain knowledge, especially vagueness, in artificial intelligence. It can be viewed as a partial reply to Cheeseman's defense of probability. In spite of the growing body of work on deviant models of uncertainty in artificial intelligence, there has been a strong reaction from proponents of classical probability, who claim that the new uncertainty theories are at best unnecessary and at worst misleading. Interestingly enough, however, the trend to go beyond probabilistic models of subjective uncertainty is emerging even in the orthodox field of decision theory, in order to account for the systematic deviations of human behavior from the expected utility model. The chapter presents the point of view of probability theory alongside those of two presently popular alternative settings: possibility theory and the theory of evidence. It discusses why probability measures cannot account for all the facets of uncertainty, especially partial ignorance, imprecision, and vagueness, and how the other theories can do the job without rejecting the laws of probability where they apply.
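    As a brief illustration of the contrast drawn in the abstract (these are the standard textbook definitions, not formulas quoted from the chapter itself): possibility theory replaces the additivity axiom of probability with a maxitivity axiom and pairs each possibility measure with a necessity measure, while evidence theory brackets probability between belief and plausibility functions induced by a mass assignment m. In particular, total ignorance has a faithful representation that a single probability distribution cannot provide.

    \[
    \Pi(A \cup B) = \max\bigl(\Pi(A), \Pi(B)\bigr), \qquad N(A) = 1 - \Pi(\bar{A}),
    \]
    \[
    \mathrm{Bel}(A) = \sum_{\emptyset \neq B \subseteq A} m(B), \qquad \mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B).
    \]

    Total ignorance corresponds to \(\Pi(A) = 1\) for every non-empty \(A\) (equivalently \(m(\Omega) = 1\), so \(\mathrm{Bel}(A) = 0\) and \(\mathrm{Pl}(A) = 1\) for every proper non-empty \(A\)), whereas a probability model must commit to some particular distribution, such as the uniform one, which cannot be distinguished from genuine knowledge of equiprobability.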

    Modeling uncertain and vague knowledge in possibility and evidence theories

    No full text
    This paper advocates the usefulness of new theories of uncertainty for modeling some facets of uncertain knowledge, especially vagueness, in AI. It can be viewed as a partial reply to Cheeseman's (among others) defense of probability.