Unifying Practical Uncertainty Representations: II. Clouds
There exist many simple tools for jointly capturing variability and
incomplete information by means of uncertainty representations. Among them are
random sets, possibility distributions, probability intervals, and the more
recent Ferson's p-boxes and Neumaier's clouds, both defined by pairs of
possibility distributions. In the companion paper, we have extensively studied
a generalized form of p-box and situated it with respect to other models. This
paper focuses on the links between clouds and other representations.
Generalized p-boxes are shown to be clouds with comonotonic distributions. In
general, clouds cannot always be represented by random sets, in fact not even
by 2-monotone (convex) capacities.
Comment: 30 pages, 7 figures. Pre-print of a journal paper to be published in the International Journal of Approximate Reasoning (with an expanded section concerning clouds and probability intervals).
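The abstract's key structural result, that a cloud is a generalized p-box exactly when its two distributions are comonotonic, can be illustrated with a minimal sketch. The universe, the distributions, and the pairwise comonotonicity test below are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: a cloud is a pair (delta, pi) of [0,1]-valued mappings
# with delta <= pi pointwise. Two mappings on a finite universe are
# comonotonic when they never order a pair of elements in opposite
# directions; per the abstract, such a cloud is a generalized p-box.

def comonotonic(f, g, universe):
    """True iff f and g induce no opposite strict orderings on any pair."""
    return not any(
        (f[x] - f[y]) * (g[x] - g[y]) < 0
        for x in universe for y in universe
    )

universe = ["a", "b", "c", "d"]
# Lower and upper distributions of a hypothetical cloud (delta <= pi).
delta = {"a": 0.0, "b": 0.2, "c": 0.5, "d": 1.0}
pi    = {"a": 0.3, "b": 0.6, "c": 0.8, "d": 1.0}

assert all(delta[x] <= pi[x] for x in universe)      # cloud condition
print(comonotonic(delta, pi, universe))  # True: a generalized p-box
```

Swapping, say, `pi["a"]` and `pi["c"]` would break comonotonicity and leave a cloud that is not a generalized p-box.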
A kernel-based framework for learning graded relations from data
Driven by a large number of potential applications in areas like
bioinformatics, information retrieval and social network analysis, the problem
setting of inferring relations between pairs of data objects has recently been
investigated quite intensively in the machine learning community. To this end,
current approaches typically consider datasets containing crisp relations, so
that standard classification methods can be adopted. However, relations between
objects like similarities and preferences are often expressed in a graded
manner in real-world applications. A general kernel-based framework for
learning relations from data is introduced here. It extends existing approaches
by considering both crisp and graded relations, and unifies them by allowing
different types of graded relations, including symmetric and reciprocal
relations, to be modeled. This framework establishes
important links between recent developments in fuzzy set theory and machine
learning. Its usefulness is demonstrated through various experiments on
synthetic and real-world data.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
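A common way to make a kernel method handle relations between pairs of objects, in the spirit of the framework described above, is to lift an object kernel to pairs and symmetrize it so that every learned relation is automatically symmetric. The RBF kernel and the toy vectors below are illustrative choices, not details from the paper:

```python
# Hedged sketch: lifting an object kernel k to pairs of objects via a
# symmetrized Kronecker product kernel. Any kernel-machine model built
# on pair_kernel_sym is guaranteed to represent a symmetric relation.
import math

def k(u, v, gamma=1.0):
    """RBF kernel between two feature vectors (illustrative choice)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def pair_kernel_sym(u, v, up, vp):
    """Symmetrized pairwise kernel on object pairs (u, v) and (up, vp)."""
    return k(u, up) * k(v, vp) + k(u, vp) * k(v, up)

u, v = (0.0, 1.0), (1.0, 0.5)
up, vp = (0.3, 0.3), (0.9, 0.1)

# Swapping the objects inside either pair leaves the kernel unchanged,
# so the induced hypothesis space contains only symmetric relations.
assert abs(pair_kernel_sym(u, v, up, vp) - pair_kernel_sym(v, u, up, vp)) < 1e-12
```

Dropping the second summand recovers the plain Kronecker lift, which imposes no symmetry and suits arbitrary (e.g. reciprocal) relations when combined with suitable constraints.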
Other uncertainty theories based on capacities
The two main uncertainty representations in the literature that tolerate imprecision are possibility distributions and random disjunctive sets. This chapter devotes special attention to the theories that have emerged from them. The first part of the chapter discusses epistemic logic and derives the need for capturing imprecision in information representations. It bridges the gap between uncertainty theories and epistemic logic, showing that imprecise probabilities subsume the modalities of possibility and necessity as much as probability. The second part presents possibility and evidence theories, their origins, assumptions and semantics, and discusses the connections between them and the general framework of imprecise probability. Finally, the chapter points out the remaining discrepancies between the different theories regarding various basic notions, such as conditioning, independence or information fusion, and the existing bridges between them.
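The possibility and necessity modalities the chapter discusses have a concrete finite-universe form: a possibility distribution pi induces the measures Pi(A) = max over A of pi, and N(A) = 1 - Pi(complement of A). A minimal sketch, with an illustrative distribution of my own choosing:

```python
# Hedged sketch: possibility and necessity measures induced by a
# possibility distribution pi on a finite universe. The weather
# universe and the values of pi are illustrative, not from the chapter.

def possibility(pi, A):
    """Pi(A) = max_{x in A} pi(x)."""
    return max((pi[x] for x in A), default=0.0)

def necessity(pi, A):
    """N(A) = 1 - Pi(complement of A)."""
    complement = set(pi) - set(A)
    return 1.0 - possibility(pi, complement)

pi = {"rain": 1.0, "snow": 0.4, "sun": 0.2}
A = {"rain", "snow"}
print(possibility(pi, A), necessity(pi, A))  # 1.0 0.8
```

The pair (N, Pi) brackets any probability consistent with pi, which is one face of the "imprecise probabilities subsume possibility and necessity" point made above.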
A Simple Axiomatization of Iterated Choquet Objectives
A set of axioms characterizing a preference representable by the iterated Choquet expected utility is presented. This objective function is attractive since it possesses the feature of dynamical consistency. Furthermore, we show that under the same axioms the conditional preference is represented by the Choquet expected utility with respect to the capacity updated according to the Dempster-Shafer rule. We do this by weakening Schmeidler's axiom of comonotonic independence to our axiom of constrained comonotonic independence and by adding the axiom of dynamical consistency.
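The building block of the objective above is the one-step discrete Choquet integral with respect to a capacity. A minimal sketch, with an illustrative two-element universe and made-up capacity values (this is the standard single-stage integral, not the paper's iterated construction):

```python
# Hedged sketch: discrete Choquet integral of f: X -> R with respect to
# a capacity mu: 2^X -> [0,1], given as a dict over frozensets. Uses the
# standard formula sum_i (f(x_(i)) - f(x_(i-1))) * mu({x_(i), ..., x_(n)})
# with the arguments sorted in ascending order of f.

def choquet(f, mu):
    """Choquet integral of f w.r.t. capacity mu."""
    xs = sorted(f, key=f.get)               # ascending by value of f
    total, prev = 0.0, 0.0
    for i, x in enumerate(xs):
        upper = frozenset(xs[i:])           # level set {x_(i), ..., x_(n)}
        total += (f[x] - prev) * mu[upper]
        prev = f[x]
    return total

f = {"a": 1.0, "b": 3.0}
mu = {frozenset({"a", "b"}): 1.0, frozenset({"a"}): 0.7,
      frozenset({"b"}): 0.4, frozenset(): 0.0}

# 1.0 * mu({a,b}) + (3.0 - 1.0) * mu({b}) = 1.0 + 0.8
print(choquet(f, mu))  # 1.8
```

Iterating this integral over stages of a decision tree, with the capacity updated by the Dempster-Shafer rule between stages, yields the dynamically consistent objective the axiomatization characterizes.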
Fitting aggregation operators to data
Theoretical advances in modelling aggregation of information have produced a wide range of aggregation operators, applicable to almost every practical problem. The most important classes of aggregation operators include triangular norms, uninorms, generalised means and OWA operators. With such a variety, an important practical problem has emerged: how to fit the parameters/weights of these families of aggregation operators to observed data? How can one estimate quantitatively whether a given class of operators is suitable as a model in a given practical setting? Aggregation operators are rather special classes of functions, and thus they require specialised regression techniques that enforce important theoretical properties, such as commutativity or associativity. My presentation will address this issue in detail and will discuss various regression methods applicable specifically to t-norms, uninorms and generalised means. I will also demonstrate software implementing these regression techniques, which allows practitioners to paste their data and obtain optimal parameters of the chosen family of operators.
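The fitting problem described above has a simple closed-form instance: a two-argument OWA operator has a single free weight w on the larger argument, and least squares plus projection onto [0, 1] keeps the fitted function a valid OWA operator. The data below are made up, and this is only a sketch of the constrained-regression idea, not the talk's actual methods:

```python
# Hedged sketch: least-squares fit of a two-argument OWA operator
# OWA(x1, x2) = w * max + (1 - w) * min, with the closed-form solution
# projected onto [0, 1] so the result stays a valid OWA weight vector.

def owa2(w, x1, x2):
    return w * max(x1, x2) + (1 - w) * min(x1, x2)

def fit_owa2(samples):
    """samples: list of ((x1, x2), y). Returns the fitted weight w."""
    num = den = 0.0
    for (x1, x2), y in samples:
        hi, lo = max(x1, x2), min(x1, x2)
        d = hi - lo
        num += d * (y - lo)                 # normal-equation terms for
        den += d * d                        # min_w sum (w*d + lo - y)^2
    w = num / den if den else 0.5
    return min(1.0, max(0.0, w))            # enforce w in [0, 1]

# Data generated (noiselessly) by an OWA operator with w = 0.7.
data = [((0.2, 0.8), owa2(0.7, 0.2, 0.8)),
        ((0.9, 0.1), owa2(0.7, 0.9, 0.1)),
        ((0.5, 0.3), owa2(0.7, 0.5, 0.3))]
print(round(fit_owa2(data), 3))  # 0.7
```

For n arguments, or for t-norms and uninorms, the same idea becomes a constrained (often nonnegative) least-squares problem, which is the kind of specialised regression the talk refers to.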