
    The Shapley Value of Inconsistency Measures for Functional Dependencies

    Quantifying the inconsistency of a database is motivated by various goals, including reliability estimation for new datasets and progress indication in data cleaning. Another goal is to attribute to individual tuples a level of responsibility for the overall inconsistency, and thereby prioritize tuples in the explanation or inspection of dirt. Therefore, inconsistency quantification and attribution have been a subject of much research in Knowledge Representation and, more recently, in Databases. As in many other fields, a conventional responsibility-sharing mechanism is the Shapley value from cooperative game theory. In this paper, we carry out a systematic investigation of the complexity of the Shapley value in common inconsistency measures for functional-dependency (FD) violations. For several measures we establish a full classification of the FD sets into tractable and intractable classes with respect to Shapley-value computation. We also study the complexity of approximation in intractable cases.
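Responsibility attribution via the Shapley value can be made concrete with a toy sketch. The following is an illustrative example, not the paper's algorithm: it computes exact Shapley values of tuples with respect to a simple inconsistency measure (the number of tuple pairs violating a hypothetical FD city → zipcode) by brute-force enumeration of permutations; the relation and the measure are invented for illustration.

```python
# Illustrative sketch (not the paper's algorithm): exact Shapley values of
# tuples for a simple inconsistency measure -- here, the number of tuple
# pairs that jointly violate the hypothetical FD  city -> zipcode.
from itertools import permutations
from math import factorial

table = [
    ("Paris", "75001"),   # t0
    ("Paris", "75002"),   # t1: conflicts with t0
    ("Lyon",  "69001"),   # t2
]

def inconsistency(subset):
    """Number of tuple pairs in `subset` violating city -> zipcode."""
    ts = [table[i] for i in subset]
    return sum(1 for a in ts for b in ts
               if a < b and a[0] == b[0] and a[1] != b[1])

def shapley(i, n=len(table)):
    """Average marginal contribution of tuple i over all orderings."""
    total = 0.0
    for perm in permutations(range(n)):
        pos = perm.index(i)
        before = frozenset(perm[:pos])
        total += inconsistency(before | {i}) - inconsistency(before)
    return total / factorial(n)

values = [shapley(i) for i in range(len(table))]
print(values)  # the two conflicting tuples t0, t1 share the blame equally
```

By efficiency of the Shapley value, the attributions sum to the total inconsistency of the table (here one violating pair, split 0.5/0.5 between t0 and t1, with 0 for t2). The factorial blow-up is exactly why the paper's tractability classification matters.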

    Measuring inconsistency in a network intrusion detection rule set based on Snort

    In this preliminary study, we investigate how inconsistency in a network intrusion detection rule set can be measured. To achieve this, we first examine the structure of these rules, which are based on Snort and incorporate regular expression (Regex) pattern matching. We then identify primitive elements in these rules in order to translate the rules into their (equivalent) logical forms and to establish connections between them. Additional rules from background knowledge are also introduced to make the correlations among rules more explicit. We measure the degree of inconsistency in formulae of such a rule set (using the Scoring function, Shapley inconsistency values and Blame measure for prioritized knowledge) and compare the results.
    *This is a revised and significantly extended version of [1].

    A decade of application of the Choquet and Sugeno integrals in multi-criteria decision aid

    The main advances regarding the use of the Choquet and Sugeno integrals in multi-criteria decision aid over the last decade are reviewed. They mainly concern a bipolar extension of both the Choquet integral and the Sugeno integral, interesting particular submodels, new learning techniques, a better interpretation of the models and a better use of the Choquet integral in multi-criteria decision aid. In parallel to these theoretical works, the Choquet integral has been applied to many new fields, and several software packages and libraries dedicated to this model have been developed.
    Keywords: Choquet integral, Sugeno integral, capacity, bipolarity, preferences
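The discrete Choquet integral at the heart of this line of work is short enough to sketch. The capacity below is a made-up example for two interacting criteria; only the standard definition of the integral itself is assumed.

```python
# Minimal sketch of the discrete Choquet integral of a score vector x with
# respect to a capacity mu: a set function with mu({}) = 0, mu(N) = 1,
# monotone w.r.t. set inclusion.  The capacity values here are invented.
def choquet(x, mu):
    """C(x) = sum over sorted scores of (x_(k) - x_(k-1)) * mu(A_(k)),
    where A_(k) is the set of criteria whose score is >= x_(k)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        coalition = frozenset(order[k:])   # criteria still at or above x_(i)
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

# Example capacity: criteria 0 and 1 are complementary (superadditive pair).
mu = {frozenset(): 0.0, frozenset({0}): 0.3, frozenset({1}): 0.5,
      frozenset({0, 1}): 1.0}
print(choquet([0.4, 0.9], mu))  # ≈ 0.65
```

When the capacity is additive, the Choquet integral reduces to an ordinary weighted sum, which is what makes it a strict generalization of weighted-average aggregation.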

    Data Obsolescence Detection in the Light of Newly Acquired Valid Observations

    The information describing the conditions of a system or a person is constantly evolving and may become obsolete and contradict other information. A database, therefore, must be consistently updated upon the acquisition of new valid observations that contradict obsolete ones contained in the database. In this paper, we propose a novel approach for dealing with the information obsolescence problem. Our approach aims to detect, in real time, contradictions between observations and then identify the obsolete ones, given a representation model. Since we work within an uncertain environment characterized by the lack of information, we choose to use a Bayesian network as our representation model and propose a new approximate concept, ε-Contradiction. The new concept is parameterised by a confidence level of having a contradiction in a set of observations. We propose a polynomial-time algorithm for detecting obsolete information. We show that the resulting obsolete information is better represented by an AND-OR tree than a simple set of observations. Finally, we demonstrate the effectiveness of our approach on a real elderly fall-prevention database and showcase how this tree can be used to give reliable recommendations to doctors. Our experiments systematically give very good results.

    Brittleness of Bayesian inference and new Selberg formulas

    The incorporation of priors in the Optimal Uncertainty Quantification (OUQ) framework \cite{OSSMO:2011} reveals brittleness in Bayesian inference; a model may share an arbitrarily large number of finite-dimensional marginals with, or be arbitrarily close (in Prokhorov or total variation metrics) to, the data-generating distribution and still make the largest possible prediction error after conditioning on an arbitrarily large number of samples. The initial purpose of this paper is to unwrap this brittleness mechanism by providing (i) a quantitative version of the Brittleness Theorem of \cite{BayesOUQ} and (ii) a detailed and comprehensive analysis of its application to the revealing example of estimating the mean of a random variable on the unit interval [0,1] using priors that exactly capture the distribution of an arbitrarily large number of Hausdorff moments. However, in doing so, we discovered that the free parameter associated with Markov and Kreĭn's canonical representations of truncated Hausdorff moments generates reproducing kernel identities corresponding to reproducing kernel Hilbert spaces of polynomials. Furthermore, these reproducing identities lead to biorthogonal systems of Selberg integral formulas. This process of discovery appears to be generic: whereas Karlin and Shapley used Selberg's integral formula to first compute the volume of the Hausdorff moment space (the polytope defined by the first n moments of a probability measure on the interval [0,1]), we observe that the computation of that volume, along with higher-order moments of the uniform measure on the moment space, using different finite-dimensional representations of subsets of the infinite-dimensional set of probability measures on [0,1] representing the first n moments, leads to families of equalities corresponding to classical and new Selberg identities.
    Comment: 73 pages.
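For reference, the classical Selberg integral alluded to above (the closed form Karlin and Shapley exploited to compute the volume of the moment space) is the following standard identity; the specific identities derived in the paper are generalizations of it.

```latex
% Selberg's classical integral formula:
\[
  S_n(\alpha,\beta,\gamma)
  = \int_{[0,1]^n} \prod_{i=1}^{n} t_i^{\alpha-1}(1-t_i)^{\beta-1}
    \prod_{1 \le i < j \le n} |t_i - t_j|^{2\gamma}\, dt_1 \cdots dt_n
  = \prod_{j=0}^{n-1}
    \frac{\Gamma(\alpha+j\gamma)\,\Gamma(\beta+j\gamma)\,\Gamma(1+(j+1)\gamma)}
         {\Gamma(\alpha+\beta+(n+j-1)\gamma)\,\Gamma(1+\gamma)}
\]
```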
Keywords: Bayesian inference, misspecification, robustness, uncertainty quantification, optimal uncertainty quantification, reproducing kernel Hilbert spaces (RKHS), Selberg integral formula

    Beyond Condorcet: Optimal Aggregation Rules Using Voting Records

    The difficulty of optimal decision making in uncertain dichotomous choice settings is that it requires information on the expertise of the decision makers (voters). This paper presents a method of optimally weighting voters even without testing them against questions with known right answers. The method is based on the realization that if we can see how voters vote on a variety of questions, it is possible to gauge their respective degrees of expertise by comparing their votes in a suitable fashion, even without knowing the right answers.
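As background for the weighting step, the classical result for independent voters with *known* competences p_i (due to Nitzan and Paroush) is that the optimal dichotomous rule is weighted majority with log-odds weights w_i = log(p_i / (1 - p_i)). The paper's contribution is estimating the p_i from voting records alone; in this sketch the competences are simply given as hypothetical values.

```python
# Background sketch: optimal weighted majority with log-odds weights,
# assuming independent voters whose competences p_i are known.
# The competence values below are hypothetical.
from math import log

def decide(votes, competences):
    """votes[i] in {+1, -1}; returns the weighted-majority decision."""
    score = sum(v * log(p / (1 - p)) for v, p in zip(votes, competences))
    return 1 if score > 0 else -1

# One strong expert (0.9) can outvote two mediocre voters (0.6 each):
print(decide([+1, -1, -1], [0.9, 0.6, 0.6]))  # +1
```

The example shows why simple majority (Condorcet's setting) is suboptimal under heterogeneous expertise: log(0.9/0.1) ≈ 2.20 outweighs two votes of log(0.6/0.4) ≈ 0.41 each.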

    Multi-source heterogeneous intelligence fusion


    Contextual and Possibilistic Reasoning for Coalition Formation

    In multiagent systems, agents often have to rely on other agents to reach their goals, for example when they lack a needed resource or do not have the capability to perform a required action. Agents therefore need to cooperate. Some of the questions raised are then: Which agent(s) should one cooperate with? What are the potential coalitions in which agents can achieve their goals? As the number of possibilities is potentially quite large, how can the process be automated? And how should the most appropriate coalition be selected, taking into account the uncertainty in the agents' abilities to carry out certain tasks? In this article, we address the question of how to find and evaluate coalitions among agents in multiagent systems using Multi-Context Systems (MCS) tools, while taking into consideration the uncertainty around the agents' actions. Our methodology is the following: we first compute the solution space for the formation of coalitions using a contextual reasoning approach. Second, we model agents as contexts in an MCS, and dependence relations among agents seeking to achieve their goals as bridge rules. Third, we systematically compute all potential coalitions using algorithms for MCS equilibria and, given a set of functional and non-functional requirements, we propose ways to select the best solutions. Finally, in order to handle the uncertainty in the agents' actions, we extend our approach with features of possibilistic reasoning. We illustrate our approach with an example from robotics.
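The coalition-search and possibilistic-ranking steps can be caricatured in a few lines. This is a toy sketch, not the MCS machinery itself: each agent needs capabilities that coalition members must jointly provide, and candidates are ranked by the possibility degree of their weakest member (min-combination, in the possibilistic spirit). Agents, capabilities, and possibility degrees are all invented for illustration.

```python
# Toy sketch of coalition search (not the MCS equilibrium algorithms):
# a coalition is feasible when its members jointly cover every member's
# needs; feasible coalitions are ranked by their weakest member's
# possibility degree.  All values below are hypothetical.
from itertools import combinations

provides = {"r1": {"grasp"}, "r2": {"move"}, "r3": {"grasp", "move"}}
needs    = {"r1": {"move"},  "r2": {"grasp"}, "r3": set()}
# Possibility that each agent actually carries out its actions:
poss     = {"r1": 0.9, "r2": 0.8, "r3": 0.6}

def feasible(coalition):
    offered = set().union(*(provides[a] for a in coalition))
    return all(needs[a] <= offered for a in coalition)

coalitions = [set(c)
              for r in range(1, len(provides) + 1)
              for c in combinations(provides, r)
              if feasible(set(c))]
# Min-combination: a coalition is only as plausible as its weakest member.
best = max(coalitions, key=lambda c: min(poss[a] for a in c))
print(sorted(best))  # ['r1', 'r2']
```

In the real approach, feasibility is not a set-cover check but an MCS equilibrium computation over bridge rules; the sketch only illustrates why enumeration plus a possibilistic ranking criterion selects among the (potentially many) candidate coalitions.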