    A First-Order Logic for Reasoning about Knowledge and Probability

    We present a first-order probabilistic epistemic logic that allows combining operators of knowledge and probability within a group of possibly infinitely many agents. The proposed framework is the first-order extension of the logic of Fagin and Halpern (J. ACM 41:340-367, 1994). We define its syntax and semantics, and prove strong completeness of the corresponding axiomatic system.
    Comment: 29 pages. This paper is a revised and extended version of the conference paper presented at the Thirteenth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU 2015), in which we introduced the propositional variant of the logic presented here, using a similar axiomatization technique.
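
    To fix intuitions: in the propositional logic of Fagin and Halpern, knowledge operators coexist with linear inequalities over probability terms, and a first-order extension adds quantifiers on top. A plausible shape of such formulas, in our own notation rather than the paper's exact syntax:

        K_i \varphi                                              % agent i knows \varphi
        a_1 w_i(\varphi_1) + \cdots + a_k w_i(\varphi_k) \ge c   % linear constraint on agent i's probabilities
        \forall x \, K_i \big( w_i(\varphi(x)) \ge 1/2 \big)     % a quantifier crossing both modalities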

    Automatic White-Box Testing of First-Order Logic Ontologies

    Formal ontologies are axiomatizations in a logic-based formalism. The development of formal ontologies, and their important role in the Semantic Web area, is generating considerable research on the use of automated reasoning techniques and tools to help in ontology engineering. One of the main aims is to refine and improve axiomatizations so that automated reasoning tools can efficiently infer reliable information. Defects in the axiomatization can not only cause wrong inferences; they can also hinder the inference of expected information, either by increasing its computational cost or by preventing it altogether. In this paper, we introduce a novel, fully automatic white-box testing framework for first-order logic ontologies. Our methodology is based on the detection of inference-based redundancies in the given axiomatization. The application of the proposed testing method is fully automatic since (a) the automated generation of tests is guided only by the syntax of axioms and (b) the evaluation of tests is performed by automated theorem provers. Our proposal enables the detection of defects and serves to certify the suitability, for reasoning purposes, of every axiom. We formally define the set of tests that are generated from any axiom and prove that every test is logically related to redundancies in the axiom from which it was generated. We have implemented our method and used this implementation to automatically detect several non-trivial defects that were hidden in various first-order logic ontologies. Throughout the paper we provide illustrative examples of these defects, explain how they were found, and show how each proof, given by an automated theorem prover, provides useful hints on the nature of the defect. Additionally, by correcting all the detected defects, we have obtained an improved version of one of the tested ontologies: Adimen-SUMO.
    Comment: 38 pages, 5 tables.
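
    The redundancy-detection idea can be made concrete: an axiom that is already entailed by the rest of the ontology carries no new information, and a proof of that entailment points at the defect. The sketch below is our generic reading, not the paper's actual test-generation rules (which are finer-grained, being driven by the syntax of each individual axiom), and it takes the theorem-prover interface as a parameter:

        # Minimal sketch of redundancy-based white-box testing, assuming a
        # caller-supplied prove(premises, conjecture) -> bool that wraps an
        # automated theorem prover (e.g., via TPTP files and a subprocess).
        def redundancy_tests(axioms, prove):
            """Yield (axiom, verdict) pairs; an axiom entailed by the other
            axioms is redundant, which may signal a modelling defect."""
            for i, axiom in enumerate(axioms):
                rest = axioms[:i] + axioms[i + 1:]
                if prove(rest, axiom):
                    yield axiom, "redundant: entailed by the remaining axioms"
                else:
                    yield axiom, "contributes information"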

    Economic Theory in the Mathematical Mode

    Lecture to the memory of Alfred Nobel, December 8, 1983.
    Keywords: general equilibrium

    A Theory of Rational Choice under Complete Ignorance

    This paper contributes to a theory of rational choice under uncertainty for decision-makers whose preferences are exhaustively described by partial orders representing "limited information." Specifically, we consider the limiting case of "complete ignorance" decision problems, characterized by maximally incomplete preferences and important primarily as reduced forms of general decision problems under uncertainty. "Rationality" is conceptualized in terms of a "Principle of Preference-Basedness," according to which rational choice should be isomorphic to asserted preference. The main result axiomatically characterizes a new choice rule called "Simultaneous Expected Utility Maximization," which in particular satisfies a choice-functional independence condition and a context-dependent choice-consistency condition; it can be interpreted as the fair agreement in a bargaining game (Kalai-Smorodinsky solution) whose players correspond to the different possible states (respectively, extremal priors in the general case).
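
    For reference, the Kalai-Smorodinsky solution mentioned above picks, for a feasible set S with disagreement point d, the maximal point of S on the segment joining d to the "ideal point" a(S), whose coordinates are each player's best attainable payoff:

        a_i(S) = \max \{\, x_i : x \in S,\ x \ge d \,\}    % ideal payoff of player i
        KS(S, d) = \text{the maximal point of } S \text{ on the segment from } d \text{ to } a(S)

    In the paper's interpretation the "players" are the possible states, so the rule balances the utilities achievable under each state in Kalai-Smorodinsky fashion.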

    Related families-based attribute reduction of dynamic covering information systems with variations of object sets

    In practice, there are many dynamic covering decision information systems, and knowledge reduction of such systems is a significant challenge for covering-based rough sets. In this paper, we first study mechanisms for constructing attribute reducts of consistent covering decision information systems when adding objects, using related families, and employ examples to illustrate the construction. We then investigate the corresponding mechanisms when deleting objects, again with illustrative examples. Finally, experimental results show that the related-family-based methods are effective for attribute reduction of dynamic covering decision information systems whose object sets vary with time.
    Comment: arXiv admin note: substantial text overlap with arXiv:1711.0732
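
    The related-family construction itself is specific to covering rough sets, but the final step, turning each object's family of relevant coverings into attribute reducts, is essentially a minimal-hitting-set computation. A brute-force sketch under that reading (the data layout and the enumeration are ours, not the paper's algorithm):

        from itertools import combinations

        def minimal_hitting_sets(coverings, related_family):
            """Each entry of related_family is the set of coverings relevant to
            one object; a reduct must intersect every entry. Exponential-time
            enumeration, for illustration only."""
            reducts = []
            for k in range(1, len(coverings) + 1):
                for combo in combinations(coverings, k):
                    s = set(combo)
                    if all(s & related for related in related_family):
                        if not any(r <= s for r in reducts):
                            reducts.append(s)
            return reducts

        # Example: four coverings; three objects with their relevant coverings.
        family = [{"C1", "C2"}, {"C2", "C3"}, {"C4"}]
        print(minimal_hitting_sets(["C1", "C2", "C3", "C4"], family))  # [{'C2', 'C4'}]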

    A Novel Approach for Mining Similarity Profiled Temporal Association Patterns

    The problem of frequent pattern mining from non-temporal databases has been studied extensively by researchers in data mining, temporal databases, and information retrieval. However, conventional frequent pattern algorithms are not suitable for finding similar temporal association patterns in temporal databases. A temporal database is a database that can store past, present, and future information. The objective of this research is a novel approach for finding temporal association patterns that are similar, with respect to a user-specified threshold, to a given reference support time sequence, using the concept of Venn diagrams. For this, we maintain two types of supports, called positive and negative support values, to find similar temporal association patterns of user interest. The main advantage of our method is that it performs only a single scan of the temporal database to find temporal association patterns similar to the specified reference support sequence. This single-scan approach eliminates the large overhead incurred when the database is scanned multiple times. The present approach also eliminates the need to compute and maintain true support values of all subsets of the temporal patterns of previous stages when computing the temporal patterns of the next stage.
    Comment: Technical Journal of the Faculty of Engineering, 14 pages.
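
    The central object is a support time sequence: for each time slot, the fraction of that slot's transactions that contain the pattern. A pattern is retained when its sequence is close enough to the reference sequence. A single-pass sketch (our simplification; it omits the paper's positive/negative-support pruning and the Venn-diagram reasoning):

        from collections import defaultdict

        def support_sequences(transactions, candidates):
            """One scan over (time_slot, itemset) records; candidates are
            frozensets. Returns each candidate's support per time slot."""
            counts = defaultdict(lambda: defaultdict(int))
            totals = defaultdict(int)
            for slot, itemset in transactions:
                totals[slot] += 1
                for cand in candidates:
                    if cand <= itemset:
                        counts[cand][slot] += 1
            slots = sorted(totals)
            return {c: [counts[c][t] / totals[t] for t in slots] for c in candidates}

        def similar_patterns(sequences, reference, threshold):
            """Keep patterns whose support sequence lies within `threshold`
            (Euclidean distance, our choice of metric) of the reference."""
            dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            return {p: s for p, s in sequences.items() if dist(s, reference) <= threshold}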

    The many Shapley values for model explanation

    The Shapley value has become a popular method for attributing the prediction of a machine-learning model on an input to its base features. Its use is justified by citing [16], which shows that it is the unique method satisfying certain good properties (axioms). There is, however, a multiplicity of ways in which the Shapley value can be operationalized for the attribution problem. These operationalizations differ in how they reference the model, the training data, and the explanation context; they give very different results, rendering the uniqueness result meaningless. Furthermore, we find that previously proposed approaches can produce counterintuitive attributions in theory and in practice: for instance, they can assign non-zero attributions to features that are not even referenced by the model. In this paper, we use the axiomatic approach to study the differences between some of the many operationalizations of the Shapley value for attribution, and propose a technique called Baseline Shapley (BShap) that is backed by a proper uniqueness result. We also contrast BShap with Integrated Gradients, another extension of the Shapley value to the continuous setting.
    Comment: 9 pages.
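
    Baseline Shapley admits a direct, if exponential, reading: fix a baseline input; a feature is "present" when it takes its value from the explained input x and "absent" when it takes its value from the baseline, and feature i's attribution is its Shapley value in that game. A minimal exact-enumeration sketch (the brute-force loop is ours; the paper's development is more general):

        from itertools import combinations
        from math import factorial

        def bshap(f, x, baseline):
            """Exact Baseline Shapley via subset enumeration: O(2^n) calls to f."""
            n = len(x)

            def blend(present):
                # features in `present` come from x, the rest from the baseline
                return [x[j] if j in present else baseline[j] for j in range(n)]

            attributions = [0.0] * n
            for i in range(n):
                others = [j for j in range(n) if j != i]
                for k in range(len(others) + 1):
                    w = factorial(k) * factorial(n - k - 1) / factorial(n)
                    for subset in combinations(others, k):
                        s = set(subset)
                        attributions[i] += w * (f(blend(s | {i})) - f(blend(s)))
            return attributions

        # f(v) = v0*v1 + v2 with a zero baseline: the interaction is split evenly.
        print(bshap(lambda v: v[0] * v[1] + v[2], [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))
        # -> [1.0, 1.0, 3.0]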

    Axiomatic foundations of nonrelativistic quantum mechanics: a realistic approach

    A realistic axiomatic formulation of nonrelativistic quantum mechanics for a single microsystem with spin is presented, from which the most important theorems of the theory can be deduced. In comparison with previous formulations, the formal aspect has been improved by the use of certain mathematical theories, such as the theory of equipped spaces, and of group theory. The standard formalism is naturally obtained from the latter, starting from a central primitive concept: the Galilei group.
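
    One concrete piece of "the standard formalism from the Galilei group": in the centrally extended (projective) representations that quantum mechanics requires, the mass m enters as a central charge tying the boost generators K_i to the momenta P_j,

        [K_i, P_j] = i \hbar \, m \, \delta_{ij} \, I ,

    which underlies Bargmann's mass superselection rule. This is standard representation-theoretic background, not a claim about the paper's particular axioms.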

    Distances between Data Sets Based on Summary Statistics

    The concepts of similarity and distance are crucial in data mining. We consider the problem of defining the distance between two data sets by comparing summary statistics computed from them. Our initial definition of the distance is based on geometric notions about certain sets of distributions. We show that this distance can be computed in cubic time and that it has several intuitive properties. We also show that it is the unique Mahalanobis distance satisfying certain assumptions. For binary data sets, we demonstrate that the distance can be represented naturally by certain parity functions and evaluated in linear time. Our empirical tests with real-world data show that the distance works well.
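
    To make "a Mahalanobis distance on summary statistics" concrete: summarize each data set by the empirical means of fixed feature functions, then measure the gap between the two summary vectors in the metric induced by a covariance estimate. A minimal sketch with NumPy (the pooled-covariance choice is our assumption, not necessarily the paper's construction):

        import numpy as np

        def summaries(data, feature_fns):
            """Empirical mean of each feature function over one data set."""
            return np.array([[fn(row) for fn in feature_fns] for row in data]).mean(axis=0)

        def summary_distance(data_a, data_b, feature_fns):
            """Mahalanobis-style distance between the two summary vectors,
            with the feature covariance estimated from the pooled data."""
            pooled = np.array([[fn(row) for fn in feature_fns]
                               for row in list(data_a) + list(data_b)])
            cov = np.atleast_2d(np.cov(pooled, rowvar=False))
            diff = summaries(data_a, feature_fns) - summaries(data_b, feature_fns)
            return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))

        # Example: compare first and second moments of two 1-d samples.
        fns = [lambda v: v, lambda v: v ** 2]
        print(summary_distance([0.1, 0.4, 0.5], [0.9, 1.1, 1.3], fns))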