
    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made the numerical evaluation of sophisticated statistical models possible, these models are still designed \emph{by humans} because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to \emph{think} as \emph{humans} do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity. Comment: 37 pages.
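    Since the framework draws on Wald's statistical decision theory, a minimal sketch of the minimax notion of an "optimal" estimator may help fix ideas; the notation below (loss L, risk R, decision rule \delta, parameter \theta) is the standard textbook one and is assumed here rather than taken from the paper.

        % Wald-style minimax criterion (standard form, not the paper's exact formulation):
        % among admissible rules \delta, choose one minimizing the worst-case risk
        % over the admissible class of models \theta.
        \[
          \delta^{\star} \in \operatorname*{arg\,min}_{\delta \in \mathcal{D}}
          \;\sup_{\theta \in \Theta} R(\theta, \delta),
          \qquad
          R(\theta, \delta) = \mathbb{E}_{\theta}\!\left[ L\big(\theta, \delta(X)\big) \right].
        \]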

    Evaluation of Adaptive FRIFS Method through Several Classification Comparisons

    An iterative method for selecting suitable features in a pattern recognition context, FRIFS, has previously been proposed. It combines a global feature selection method based on the Choquet integral with a fuzzy linguistic rule classifier. In this paper, enhancements of this method are presented: an automatic step has been added to make it adaptive when processing numerous features. The experimental study, carried out in a wood defect recognition context, is based on the analysis of the results of several classifiers, which show the relevance of the retained set of selected features. Recognition rates are also considered for each class separately, showing the good behavior of the proposed method.
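    The FRIFS algorithm itself is not detailed in the abstract; as a point of reference, the discrete Choquet integral its selection criterion relies on can be computed as in the sketch below. The fuzzy measure `mu` and the example feature scores are illustrative placeholders, not the paper's data.

        def choquet_integral(scores, mu):
            """Discrete Choquet integral of `scores` (criterion -> value in [0, 1])
            with respect to a fuzzy measure `mu` (frozenset of criteria -> [0, 1]).

            Values are visited in increasing order; each increment is weighted by the
            measure of the set of criteria whose score is at least that level.
            """
            items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
            result, previous = 0.0, 0.0
            remaining = set(scores)                               # criteria still >= current level
            for criterion, value in items:
                result += (value - previous) * mu[frozenset(remaining)]
                previous = value
                remaining.remove(criterion)
            return result

        # Illustrative (made-up) fuzzy measure over two features f1, f2 with a positive
        # interaction: mu({f1, f2}) = 1.0 > mu({f1}) + mu({f2}) = 0.7.
        mu = {frozenset(): 0.0, frozenset({"f1"}): 0.3,
              frozenset({"f2"}): 0.4, frozenset({"f1", "f2"}): 1.0}
        print(choquet_integral({"f1": 0.6, "f2": 0.8}, mu))  # -> 0.68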

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any single source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data, e.g., measurements, arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long applied fusion at different levels (signal, feature, decision, etc.). Furthermore, daily advances in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, are raising both the demand for and the complexity of fusion. There is a great need for new theories to combine and analyze heterogeneous data arising from one or more sources.

    The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, it focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches to data fusion combine inputs under the assumption of independence between them; however, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies, which alleviate over-fitting when training with limited data and also enable the user to consciously push the learned models toward a predefined, or perhaps known, structure.

    In addition, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion to six or fewer input sources. To address this space complexity, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling the application of ChI-based data fusion to higher-dimensional data sets.

    In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include information on what was learned, the worth of individual sources, why a decision was reached, what evidence processes were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
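    The abstract does not give the update rule of the online training algorithm; the sketch below only illustrates, under an assumed squared-error loss and without the monotonicity and boundary constraints a real implementation would enforce, how a per-observation gradient step on the 2^m fuzzy-measure variables of a ChI might look. The function names and the toy target are hypothetical.

        import random

        def sorted_coefficients(x):
            """For one observation x (source -> value), return (subset, coefficient) pairs
            such that ChI(x; g) = sum(coeff * g[subset]). Only m of the 2^m subsets get a
            non-zero coefficient for any single observation."""
            items = sorted(x.items(), key=lambda kv: kv[1])      # ascending by value
            coeffs, previous, remaining = [], 0.0, set(x)
            for source, value in items:
                coeffs.append((frozenset(remaining), value - previous))
                previous = value
                remaining.remove(source)
            return coeffs

        def online_chi_step(g, x, y, lr=0.05):
            """One stochastic gradient step on the fuzzy-measure variables g for a single
            (inputs x, target y) pair under squared error. Monotonicity constraints on g
            are deliberately omitted in this sketch."""
            coeffs = sorted_coefficients(x)
            prediction = sum(c * g.get(subset, 0.0) for subset, c in coeffs)
            error = prediction - y
            for subset, c in coeffs:                             # only m variables touched
                g[subset] = g.get(subset, 0.0) - lr * error * c
            return prediction

        # Illustrative run on made-up data for three sources; the toy target is the max
        # aggregation, which a ChI can represent exactly (g(A) = 1 for all non-empty A).
        g = {}
        for _ in range(1000):
            x = {s: random.random() for s in ("s1", "s2", "s3")}
            online_chi_step(g, x, y=max(x.values()))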

    Innovation and stock market performance : A model with ambiguity-averse agents

    Empirical evidence on stock prices shows that firms investing successfully in radical innovation experience higher stock returns. This paper provides a model that sheds light on the relationship between the degree of firm innovativeness and stock returns, whose movements capture expectations about the firm's profitability and growth. The model is grounded in Neo-Schumpeterian growth models and relies on the crucial assumption that radical innovation is characterized by "ambiguity", or Knightian uncertainty: due to its uniqueness and originality, no probability distribution can reasonably be associated with the success or failure of a radical innovation. Different preferences (α-maxmin, Choquet) are compared. The results show that the assumption of ambiguity aversion is crucial in determining higher returns in the presence of radical innovation, and that the specific definition of expected utility shapes the extent of those returns. This result also holds in the case of endogenous innovation; risk attitude plays no role.
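    For reference, the two preference criteria the abstract compares are usually stated as follows; the notation (set of priors \mathcal{P}, capacity \nu, utility u, act f) is the standard one from the ambiguity literature, not necessarily the exact formulation used in the paper.

        % alpha-maxmin expected utility over a set of priors P (standard form):
        \[
          V_{\alpha}(f) \;=\; \alpha \min_{p \in \mathcal{P}} \mathbb{E}_{p}[u(f)]
          \;+\; (1-\alpha) \max_{p \in \mathcal{P}} \mathbb{E}_{p}[u(f)] ,
        \]
        % Choquet expected utility with respect to a non-additive capacity \nu:
        \[
          V_{\nu}(f) \;=\; \int_{0}^{\infty} \nu\big(u(f) \ge t\big)\, \mathrm{d}t
          \;+\; \int_{-\infty}^{0} \Big[\nu\big(u(f) \ge t\big) - 1\Big]\, \mathrm{d}t .
        \]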

    Possibilistic decision theory: from theoretical foundations to influence diagrams methodology

    The field of decision making is multidisciplinary, related to several areas such as economics, operations research, etc. Expected utility theory was proposed to model and solve decision problems, but it has been called into question by several paradoxes (Allais, Ellsberg) that showed the limits of its applicability. Moreover, the probabilistic framework used in these theories is not appropriate in certain situations (total ignorance, qualitative uncertainty). To overcome these limitations, several studies have been developed based, on the one hand, on the use of Choquet and Sugeno integrals as decision criteria and, on the other hand, on the use of an uncertainty theory other than probability theory to model uncertainty. Our main idea is to combine these two lines of research in order to develop, within the framework of sequential decision making, decision models that rely on Choquet integrals as decision criteria and on possibility theory to represent uncertainty. Our goal is to develop graphical decision models that provide compact and simple models for decision making in a possibilistic setting. We are particularly interested in possibilistic decision trees and influence diagrams and in their evaluation algorithms.
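    As background for the decision criteria mentioned above, the discrete Choquet integral evaluated with respect to the possibility and necessity measures induced by a possibility distribution π can be written as follows; this is the standard textbook form, not necessarily the exact formulation adopted in the thesis.

        % Discrete Choquet integral of a utility profile u with respect to a capacity \nu,
        % with utility levels ranked so that u_{(1)} \le \dots \le u_{(n)} and u_{(0)} := 0:
        \[
          \mathrm{Ch}_{\nu}(u) \;=\; \sum_{i=1}^{n} \big(u_{(i)} - u_{(i-1)}\big)\,
          \nu\big(\{ s : u(s) \ge u_{(i)} \}\big),
        \]
        % where, in the possibilistic setting,
        \[
          \Pi(A) = \max_{s \in A} \pi(s), \qquad N(A) = 1 - \Pi(\bar{A}),
        \]
        % so that \mathrm{Ch}_{N} gives a pessimistic criterion and \mathrm{Ch}_{\Pi} an optimistic one.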