An introduction to DSmT
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been,
and still remains today, of primary importance for the development of
reliable modern information systems involving artificial reasoning. In this
introduction, we present a survey of our recent theory of plausible and
paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT), developed
for dealing with imprecise, uncertain and conflicting sources of
information. We focus our presentation on the foundations of DSmT and on
its most important rules of combination, rather than on surveying the
specific applications of DSmT available in the literature. Several simple
examples are given throughout this presentation to show the efficiency and
the generality of this new approach.
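One of DSmT's best-known combination rules redistributes conflicting mass proportionally to the sources that generated it (the PCR5 rule). The following is a minimal sketch of PCR5-style combination of two basic belief assignments, under the simplifying assumption that all focal elements are singleton hypotheses (the full rule also handles unions and intersections of hypotheses):

```python
def pcr5_combine(m1, m2):
    """Combine two basic belief assignments over singleton hypotheses,
    redistributing each pairwise conflict proportionally back to the two
    hypotheses involved (PCR5-style; a sketch for singletons only)."""
    hyps = set(m1) | set(m2)
    # conjunctive consensus: product of the masses that agree
    m = {x: m1.get(x, 0.0) * m2.get(x, 0.0) for x in hyps}
    # each conflicting product m1(x) * m2(y), x != y, is split between
    # x and y in proportion to m1(x) and m2(y)
    for x in hyps:
        for y in hyps:
            if x == y:
                continue
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                m[x] += a * a * b / (a + b)
                m[y] += a * b * b / (a + b)
    return m
```

Because the conflict is redistributed rather than discarded, the combined masses still sum to one even when the two sources strongly disagree.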
Fusion of imprecise qualitative information
In this paper, we present a new 2-tuple linguistic representation model, i.e. the Distribution Function Model (DFM), for combining imprecise qualitative information using fusion rules drawn from the Dezert-Smarandache Theory (DSmT) framework. This new approach preserves the precision and efficiency of the combination of linguistic information for both equidistant and unbalanced label models. Some basic operators on precise 2-tuple labels are presented, together with their extensions to imprecise 2-tuple labels. We also give simple examples to show how precise and imprecise qualitative information can be combined for reasoning under uncertainty. It is concluded that DSmT can deal efficiently with both precise and imprecise quantitative and qualitative beliefs, which extends the scope of this theory.
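The 2-tuple representation referred to here encodes a linguistic value as a pair (label, symbolic translation), where the translation in [-0.5, 0.5) records how far a numeric aggregation result lies from the nearest label index. A minimal sketch of the standard conversion functions (label set and example values are illustrative assumptions, not taken from the paper):

```python
import math

def to_2tuple(beta, labels):
    """Convert a numeric value beta in [0, len(labels) - 1] to a 2-tuple
    (label, alpha), where alpha in [-0.5, 0.5) is the symbolic translation
    from the nearest label index."""
    i = int(math.floor(beta + 0.5))          # round half up
    i = min(max(i, 0), len(labels) - 1)      # clamp to the label scale
    return labels[i], beta - i

def from_2tuple(label, alpha, labels):
    """Recover the underlying numeric value from a 2-tuple."""
    return labels.index(label) + alpha
```

For example, averaging the indices of "good" (2) and "very good" (3) on a five-label scale gives 2.5, which the 2-tuple model represents without loss as ("very good", -0.5) instead of forcing a rounded label.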
Probabilistic Opinion Pooling with Imprecise Probabilities
The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution (Madansky [44]; Lehrer and Wagner [34]; McConway, Journal of the American Statistical Association, 76(374), 410--414 [45]; Bordley, Management Science, 28(10), 1137--1148 [5]; Genest et al., The Annals of Statistics, 487--501 [21]; Genest and Zidek, Statistical Science, 114--135 [23]; Mongin, Journal of Economic Theory, 66(2), 313--351 [46]; Clemen and Winkler, Risk Analysis, 19(2), 187--203 [7]; Dietrich and List [14]; Herzberg, Theory and Decision, 1--19 [28]). We argue that this assumption is not always in order. We show how to extend the canonical mathematical framework for pooling to cover pooling with imprecise probabilities (IP) by employing set-valued pooling functions and generalizing common pooling axioms accordingly. As a proof of concept, we then show that one IP construction satisfies a number of central pooling axioms that are not jointly satisfied by any of the standard pooling recipes on pain of triviality. Following Levi (Synthese, 62(1), 3--11 [39]), we also argue that IP models admit of a much better philosophical motivation as a model of rational consensus.
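A set-valued pooling function returns a set of probability values rather than a single number. As a deliberately simple illustration (not necessarily the construction the paper defends), suppose each agent reports an imprecise probability for one proposition as an interval; one natural set-valued pool is the convex hull of the reported intervals:

```python
def ip_pool(intervals):
    """Pool imprecise probability intervals for a single proposition by
    taking the convex hull of the individual intervals -- one simple
    set-valued pooling rule, shown only as a sketch."""
    lows, highs = zip(*intervals)
    return (min(lows), max(highs))
```

Unlike a linear pool, which must collapse disagreement into one number, this rule keeps the group opinion imprecise exactly when the individual opinions disagree, which is the kind of behavior the IP framework makes expressible.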
SPOCC: Scalable POssibilistic Classifier Combination -- toward robust aggregation of classifiers
We investigate a problem in which each member of a group of learners is
trained separately to solve the same classification task. Each learner has
access to a training dataset (possibly with overlap across learners), and
each trained classifier can be evaluated on a validation dataset. We
propose a new approach to aggregating the learner predictions in the
possibility theory framework. For each classifier prediction, we build a
possibility distribution assessing how likely it is that the prediction is
correct, using frequentist probabilities estimated on the validation set.
The possibility distributions are aggregated using an adaptive t-norm that
can accommodate dependency and poor accuracy of the classifier predictions.
We prove that the proposed approach possesses a number of desirable
classifier combination robustness properties.
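The combination scheme can be sketched as follows. This is a simplified stand-in for the paper's method: it uses a plain product t-norm rather than the adaptive t-norm, and it collapses the validation-set statistics to a single per-classifier accuracy:

```python
def combine_possibilistic(predictions, accuracies, classes):
    """Aggregate classifier predictions as possibility distributions
    combined with a product t-norm (a sketch; the paper's method uses an
    adaptive t-norm and richer validation-set statistics)."""
    pooled = {y: 1.0 for y in classes}
    for pred, acc in zip(predictions, accuracies):
        for y in classes:
            # full possibility for the predicted class; residual
            # possibility 1 - acc that the classifier erred
            pi = 1.0 if y == pred else 1.0 - acc
            pooled[y] *= pi
    # decide in favor of the class with maximal pooled possibility
    return max(pooled, key=pooled.get)
```

A low-accuracy classifier leaves high residual possibility on the classes it did not predict, so its vote barely constrains the pooled distribution; a high-accuracy classifier nearly vetoes the classes it rules out.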
Decision-Making in the Context of Imprecise Probabilistic Beliefs
Coherent imprecise probabilistic beliefs are modelled as incomplete comparative likelihood relations admitting a multiple-prior representation. Under a structural assumption of Equidivisibility, we provide an axiomatization of such relations and show uniqueness of the representation. In the second part of the paper, we formulate a behaviorally general axiom relating preferences and probabilistic beliefs which implies that preferences over unambiguous acts are probabilistically sophisticated and which entails representability of preferences over Savage acts in an Anscombe-Aumann-style framework. The motivation for an explicit and separate axiomatization of beliefs for the study of decision-making under ambiguity is discussed in some detail.
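Under a multiple-prior representation, "A is at least as likely as B" holds exactly when every prior in the representing set assigns A at least as much probability as B; pairs on which the priors disagree are left incomparable, which is what makes the relation incomplete. A sketch over a finite state space (states and priors below are illustrative assumptions):

```python
def at_least_as_likely(event_a, event_b, priors):
    """Comparative likelihood under a multiple-prior representation:
    A >= B iff P(A) >= P(B) for every prior P in the set (a sketch)."""
    def prob(event, p):
        return sum(p[w] for w in event)
    return all(prob(event_a, p) >= prob(event_b, p) for p in priors)
```

With priors P1(w1) = 0.6 and P2(w1) = 0.3 over two states, the events {w1} and {w2} are incomparable: neither dominates the other across the whole prior set, so the relation stays silent, mirroring genuine ambiguity in beliefs.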
The induced 2-tuple linguistic generalized OWA operator and its application in linguistic decision making
We present the induced 2-tuple linguistic generalized ordered weighted averaging (2-TILGOWA) operator. This new aggregation operator extends previous approaches by using generalized means, order-inducing variables in the reordering of the arguments, and linguistic information represented with the 2-tuple linguistic approach. Its main advantage is that it includes a wide range of linguistic aggregation operators. Thus, its analyses can be seen from different perspectives, we obtain a much more complete picture of the situation considered, and we are able to select the alternative that best fits our interests or beliefs. We further generalize the operator by using quasi-arithmetic means, obtaining the Quasi-2-TILOWA operator. We conclude the paper by analysing the applicability of this new approach to a decision-making problem concerning product management.
Keywords: linguistic decision making, linguistic generalized mean, 2-tuple linguistic OWA operator, 2-tuple linguistic aggregation operator
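The two ingredients that distinguish this family of operators can be sketched compactly: the arguments are reordered by separate order-inducing variables (not by their own magnitude, as in plain OWA), and the reordered values are combined with a weighted power (generalized) mean. The sketch below operates directly on the numeric positions underlying 2-tuples (label index plus symbolic translation); the parameter name `lam` and the example values are illustrative assumptions:

```python
def induced_gowa(values, inducers, weights, lam=1.0):
    """Induced generalized OWA (a sketch): reorder the arguments by
    decreasing order-inducing variable, then take a weighted power mean
    with parameter lam (lam = 1 recovers the induced OWA operator)."""
    ordered = [v for _, v in
               sorted(zip(inducers, values), key=lambda t: -t[0])]
    return sum(w * v ** lam for w, v in zip(weights, ordered)) ** (1.0 / lam)
```

Varying `lam` sweeps through familiar special cases (harmonic, arithmetic, quadratic means), which is the sense in which the operator "includes a wide range of linguistic aggregation operators".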