101 research outputs found

    Choquet Integrals With Respect to Non-Monotonic Set Functions

    This paper introduces the signed Choquet integral, i.e., a nonmonotonic generalization of the Choquet integral. Applications to welfare theory, multi-period optimization, and asset pricing are described.
    Keywords: Choquet integral; comonotonicity; arbitrage; time preference
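    To fix notation, here is a minimal sketch of the discrete Choquet integral with respect to a set function that need not be monotonic, the object the abstract calls the signed Choquet integral; the helper name and the toy set function are illustrative, not taken from the paper.

        def choquet_integral(x, nu):
            """Discrete Choquet integral of x = {source: value} with respect to the
            set function nu, given as a dict from frozensets of sources to reals
            with nu[frozenset()] == 0; nu need not be monotonic (the signed case)."""
            order = sorted(x, key=x.get, reverse=True)   # x_(1) >= x_(2) >= ...
            total, prev, chain = 0.0, 0.0, set()
            for s in order:
                chain.add(s)
                weight = nu[frozenset(chain)]
                total += x[s] * (weight - prev)
                prev = weight
            return total

        # Toy example with a deliberately non-monotonic set function.
        nu = {frozenset(): 0.0, frozenset({"a"}): 0.7,
              frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.4}
        print(choquet_integral({"a": 3.0, "b": 1.0}, nu))   # 3.0*0.7 + 1.0*(0.4 - 0.7) = 1.8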

    On non-monotonic Choquet integrals as aggregation functions

    This paper deals with the non-monotonic Choquet integral, a generalization of the regular Choquet integral. The discrete non-monotonic Choquet integral is considered from the viewpoint of aggregation. In particular, we give an axiomatic characterization of the class of non-monotonic Choquet integrals. We show how the Shapley index, in contrast with the monotonic case, can assume positive values if the criterion is on average a benefit, depending on its effect in all the possible coalitions, and negative values in the opposite case of a cost criterion.
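    For reference, the Shapley index mentioned above is, in its standard form (assumed here to match the paper's normalization), for a game v on the criteria set N with n = |N|:

    \[
    \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl[v(S \cup \{i\}) - v(S)\bigr].
    \]

    When v is non-monotonic the marginal contributions v(S ∪ {i}) − v(S) may be negative, so the index can be negative for a cost criterion and positive for a benefit criterion, as the abstract describes.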

    Modelling fraud detection by attack trees and Choquet integral

    Modelling an attack tree is basically a matter of associating a logical "and" and a logical "or", but in most real-world applications related to fraud management the "and/or" logic is not adequate to effectively represent the relationship between a parent node and its children, especially when information about attributes is associated with the nodes and the main problem to solve is how to propagate attribute values up the tree through recursive aggregation operations occurring at the "and/or" nodes. OWA-based aggregations have been introduced to generalize the "and" and "or" operators, starting from the observation that in between the extremes "for all" (and) and "for any" (or), terms (quantifiers) like "several", "most", "few", "some", etc. can be introduced to represent the different weights associated with the nodes in the aggregation. The aggregation process taking place at an OWA node depends on the ordered position of the child nodes, but it does not take care of the possible interactions between the nodes. In this paper, we propose to overcome this drawback by introducing the Choquet integral, whose distinguishing feature is the ability to take into account the interaction between nodes. At first, the attack tree is valuated recursively through a bottom-up algorithm whose complexity is linear in the number of nodes and exponential at each node. Then, the algorithm is extended assuming that the attribute values in the leaves are unimodal LR fuzzy numbers, and the calculation of the Choquet integral is carried out using the alpha-cuts.
    Keywords: fraud detection; attack tree; ordered weighted averaging (OWA) operator; Choquet integral; fuzzy numbers.
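    To make the bottom-up valuation concrete, here is a minimal sketch assuming each internal node carries its own capacity over its children; the node structure and names are illustrative, not the paper's. The exponential cost per node reflects the up-to-2^k capacity values a node with k children may store.

        class AttackNode:
            def __init__(self, value=None, children=(), capacity=None):
                self.value = value            # attribute value, if the node is a leaf
                self.children = list(children)
                self.capacity = capacity      # dict: frozenset of child indices -> weight

        def evaluate(node, aggregate):
            """Bottom-up valuation: leaves return their attribute value; internal
            nodes combine their children's values with the supplied aggregation,
            e.g. a Choquet integral taken w.r.t. the node's own capacity."""
            if not node.children:
                return node.value
            child_values = [evaluate(child, aggregate) for child in node.children]
            return aggregate(child_values, node.capacity)

    In the fuzzy extension described in the abstract, the same recursion applies, with the Choquet integral evaluated on the alpha-cuts of the leaf LR fuzzy numbers instead of on crisp values.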

    Autocontinuity and convergence theorems for the Choquet integral

    Our aim is to provide some convergence theorems for the Choquet integral with respect to various notions of convergence. For instance, the dominated convergence theorem for almost uniform convergence is related to autocontinuous set functions. Autocontinuity can also be related to convergence in measure, strict convergence, or mean convergence. The monotone convergence theorem for almost uniform convergence, in turn, is related to monotone autocontinuity, a weaker notion than autocontinuity.
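    For reference, the standard notion of autocontinuity from above referred to here (in the usual fuzzy-measure-theoretic form; the paper's precise variants may differ) reads:

    \[
    \mu \text{ is autocontinuous from above if } \ \lim_{n} \mu(B_n) = 0 \ \Longrightarrow \ \lim_{n} \mu(A \cup B_n) = \mu(A) \quad \text{for every measurable } A .
    \]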

    A Radon-Nikodym derivative for almost subadditive set functions

    In classical measure theory, the Radon-Nikodym theorem states, under a concise condition, namely domination, how a measure can be factorized by another (bounded) measure through a density function. Several approaches have been undertaken to determine under which conditions an exact factorization can be obtained for set functions that are not σ-additive (for instance, finitely additive set functions or submeasures). We provide a Radon-Nikodym type theorem with respect to a measure for almost subadditive set functions of bounded sum. The necessary and sufficient condition that guarantees a one-sided Radon-Nikodym derivative remains the standard domination condition for measures.
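    For comparison, the classical statement being generalized is: for sigma-finite measures mu and nu on the same measurable space,

    \[
    \nu \ll \mu \ \Longrightarrow\ \exists\, f \ge 0 \ \text{measurable such that}\ \ \nu(A) = \int_A f \, d\mu \ \text{ for every measurable } A .
    \]

    The abstract's result keeps the domination hypothesis but replaces sigma-additivity by almost subadditivity with bounded sum, at the price of obtaining only a one-sided derivative.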

    ON NON-ADDITIVE MODELS IN THE THEORY OF INTERTEMPORAL CHOICE AND DECISION UNDER UNCERTAINTY

    Savage's (1954) model constitutes a major achievement in decision theory under uncertainty. It provides an axiomatization of subjective expected utility: a decision maker who fulfills Savage's axioms chooses between acts according to their expected utility. Following this axiomatic method, an additive representation can be obtained in different settings (Anscombe-Aumann (1963), Wakker (1990)). Despite the normative character of subjective expected utility models, empirical refutations arose quickly, for instance Ellsberg's paradox (1961). Choquet expected utility models provide one response: the decision maker no longer holds a subjective probability but a subjective capacity (Choquet (1953)), a monotonic set function which is not necessarily additive. An integration theory with respect to capacities, introduced by Choquet (1953) and rediscovered and developed by Schmeidler (1986, 1989), then generalizes the expected utility criterion. The axiomatics of Choquet expected utility models elaborated in an uncertainty framework can also be adapted to a temporal one: the evaluation of a stream of incomes can be made in a non-additive way and embody variations between successive periods (Gilboa (1989), De Waegenaere and Wakker (2001)).

    The first chapter deals with the integral representation of functionals that are comonotonic additive and sequentially continuous from below or from above. This representation through Choquet (1953) integrals rests on sequential continuity, a natural condition in measure theory, rather than on monotonicity as in Schmeidler (1986). Consequently, the games we consider are not necessarily monotonic but are continuous from below or from above, properties equivalent to sigma-additivity for additive games. Finally, we provide representation theorems for preferences that are non-monotonic but sequentially continuous from above or from below.

    The second chapter provides an axiomatization of certain preferences in a temporal setting, an approach that originates in Gilboa (1989) and was carried on by Shalev (1997) in an Anscombe-Aumann (1963) setting. We adopt here De Waegenaere and Wakker's (2001) method. Our aim is to take into account complementarities between successive periods. To this end we introduce a variation-aversion axiom, which keeps additivity on income streams having the property of sequential comonotonicity. The extension to the infinite case is achieved through a behavioral axiom, myopia. Finally, we present a generalization to the non-additive case of discounted expected utility, axiomatized by Koopmans (1972).

    In the third chapter, we establish a Yosida-Hewitt (1952) decomposition theorem for totally monotone games on N, whereby any such game is the sum of a sigma-continuous game and a pure game. This decomposition is obtained from an integral representation theorem on the set of belief functions; hence the Choquet (1953) integral of any bounded function with respect to a totally monotone game admits an integral representation. Finally, to every totally monotone sigma-continuous game is associated a unique Möbius inverse on N, so any Choquet integral of a bounded function on N with respect to a totally monotone sigma-continuous game is obtained as the sum of an absolutely convergent series.

    The last chapter deals with the modelling of patience for countable streams of income. First, we consider preferences that exhibit patience in the additive case.
    These preferences admit an integral representation with respect to pure probabilities, which coincide with Banach limits (Banach 1987). Then we strengthen patience into time invariance. Lastly we consider naive patience, which leads to an impossibility theorem. We therefore extend the preceding results to a non-additive framework: we introduce a non-smooth additivity axiom which allows preferences to be represented through a Choquet integral with a convex capacity. In this case, patience translates into pure convex capacities. Likewise, time invariance expresses naturally in terms of invariant convex capacities. Finally, naive patience admits the limit inferior functional as its unique representation.
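    As a pointer to the machinery behind the third chapter, recall the Möbius representation of a totally monotone game v on a finite set (the chapter's contribution is the extension of this picture to sigma-continuous games on N):

    \[
    v(A) \;=\; \sum_{B \subseteq A} m(B), \qquad m(B) \;=\; \sum_{C \subseteq B} (-1)^{|B \setminus C|}\, v(C),
    \]

    where, on a finite set, total monotonicity of v is equivalent to non-negativity of the Möbius inverse m.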

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data---e.g., measurements---arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, the daily advancement of engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, is raising both the demand for and the complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources.

    The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, this work focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches to data fusion have focused on combining inputs under the assumption of independence between them. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, we require a large amount of training data to avoid the problem of over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate the issue of over-fitting while training with limited data and also enable the user to consciously push the learned methods toward a predefined, or perhaps known, structure. Also, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling the applicability of ChI-based data fusion to higher dimensional data sets.

    In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence process(es) were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
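    The online training idea described above can be sketched as follows, assuming a squared-error loss and a ChI parameterized by one weight per source subset; the variable names, the learning rate, and the omission of the monotonicity projection are illustrative simplifications, not the dissertation's exact algorithm.

        def sgd_step(x, y, g, lr=0.01):
            """One online update of a ChI fusion model on a single observation (x, y).
            Using the decumulative form ChI(x) = sum_k (x_(k) - x_(k+1)) * g(A_k),
            where A_k is the set of the k largest inputs, the output is linear in the
            g-values of the nested sets, so the squared-error gradient is immediate."""
            order = sorted(range(len(x)), key=lambda i: x[i], reverse=True)
            chain, terms, pred = set(), [], 0.0
            for k, i in enumerate(order):
                chain.add(i)
                nxt = x[order[k + 1]] if k + 1 < len(order) else 0.0
                terms.append((frozenset(chain), x[i] - nxt))
                pred += (x[i] - nxt) * g[frozenset(chain)]
            err = pred - y
            for subset, diff in terms:
                g[subset] -= lr * err * diff   # gradient of 0.5*err**2 w.r.t. g(A_k)
            # A complete implementation would also re-project g onto the monotonicity
            # (fuzzy measure) constraints and apply regularization; both are omitted here.
            return err

    Processing one observation at a time this way avoids building the large quadratic program, which is what makes higher-dimensional fusion feasible, at the cost of the usual stochastic-gradient tuning.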

    Bipolar aggregation method for fuzzy nominal classification using Weighted Cardinal Fuzzy Measure (WCFM)

    The issue considered in this paper is that of designing a procedure to assign objects (candidates, projects, decisions, options, etc.) characterized by multiple attributes or criteria to predefined classes characterized by fuzzily defined multiple features, conditions or constraints. Such assignment problems are known in the literature as nominal or non-ordered classification problems, as opposed to ordinal classification, in which classes are ordered according to some desires of the decision maker(s). Because of the importance of these problems in many domains (social, economic, medical, engineering, management, etc.), there is a need to design sound and appropriate evaluation algorithms and methods to deal with them. In this paper we consider an approach based on an evaluation strategy that consists in aggregating separately the elements that act in the same sense (either contributing to the exclusion of a class from assignment or to its consideration for inclusion, given an object), which we refer to as bipolar analysis. Then, relying on the fact that the elements to aggregate have synergetic relationships (they are complementary), we propose to use the Choquet integral as the appropriate aggregation operator, with a proposed fuzzy measure or capacity known as the weighted cardinal fuzzy measure (WCFM), whose tractability permits overcoming difficulties that dissuade the use of the Choquet integral in practice. Furthermore, the bipolar property results in an evaluation by two degrees: a classifiability measure, which measures to what extent an object can be considered for inclusion in a class, and a rejectability measure, which measures the extent to which one must avoid including an object in a class, rendering the final choice flexible since many classes may qualify for inclusion of an object. Application of this approach to a real-world problem in the domain of banking has shown real potential.
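    Purely as an illustration of the bipolar evaluation: the abstract does not give the exact WCFM definition, so the cardinality-based capacity below is an assumption standing in for it; the two degrees can then be sketched as follows.

        def choquet_cardinality(values, q=2.0):
            """Choquet integral of degrees in [0, 1] w.r.t. the illustrative capacity
            mu(A) = (|A| / n) ** q, which depends only on cardinality and so needs no
            2**n table; this is only a stand-in for the paper's WCFM."""
            n = len(values)
            out, prev = 0.0, 0.0
            for k, v in enumerate(sorted(values, reverse=True), start=1):
                w = (k / n) ** q
                out += v * (w - prev)
                prev = w
            return out

        def bipolar_scores(support, oppose, q=2.0):
            """Classifiability from the degrees supporting inclusion of an object in a
            class, rejectability from the degrees supporting its exclusion, each
            aggregated separately with a Choquet integral."""
            classifiability = choquet_cardinality(support, q) if support else 0.0
            rejectability = choquet_cardinality(oppose, q) if oppose else 0.0
            return classifiability, rejectability

        # Example: three features argue for assigning the object to the class, two against.
        print(bipolar_scores([0.9, 0.7, 0.4], [0.3, 0.1]))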