1,355 research outputs found

    ProofPeer - A Cloud-based Interactive Theorem Proving System

    ProofPeer strives to be a system for cloud-based interactive theorem proving. After illustrating why such a system is needed, the paper presents some of the design challenges that ProofPeer needs to meet to succeed. Contexts are presented as a solution to the problem of sharing proof state among the users of ProofPeer. Chronicles are introduced as a way to organize and version contexts
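    The abstract names contexts and chronicles but does not spell out their structure; purely as a hypothetical sketch of the idea that a chronicle organizes and versions contexts, one might picture something like the following (class and field names are invented for illustration, not ProofPeer's actual data model).

```python
# Loose, hypothetical sketch: a context holds shared proof state, and a
# chronicle is an append-only, versioned history of contexts.

class Chronicle:
    def __init__(self):
        self._versions = []                 # each entry is one immutable context snapshot

    def commit(self, context):
        """Record a new version of the shared proof state."""
        self._versions.append(dict(context))
        return len(self._versions) - 1      # version number of the new snapshot

    def context_at(self, version):
        """Look up the proof state as it was at an earlier version."""
        return dict(self._versions[version])

chronicle = Chronicle()
v0 = chronicle.commit({"theorem": "add_comm", "status": "open"})
v1 = chronicle.commit({"theorem": "add_comm", "status": "proved"})
print(chronicle.context_at(v0)["status"])   # "open"
```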

    Incorrect Responses in First-Order False-Belief Tests: A Hybrid-Logical Formalization

    In the paper (Braüner, 2014) we were concerned with logical formalizations of the reasoning involved in giving correct responses to the psychological tests called the Sally-Anne test and the Smarties test, which test children's ability to ascribe false beliefs to others. A key feature of the formal proofs given in that paper is that they explicitly formalize the perspective shift to another person that is required for figuring out the correct answers: you have to put yourself in another person's shoes, so to speak, to give the correct answer. In the present paper we are concerned with what happens when incorrect answers are given. The typical incorrect answers indicate that children failing false-belief tests have problems shifting to a perspective different from their own; more precisely, they simply reason from their own perspective. Based on this hypothesis, we give logical formalizations that model the typical incorrect answers in a systematic way. The remarkable fact that the incorrect answers can be derived using logically correct rules indicates that the origin of the mistakes lies not in the children's logical reasoning, but in a wrong interpretation of the task
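    As a rough, hypothetical illustration of the contrast the abstract describes (not a reproduction of Braüner's actual proof system), the hybrid-logical satisfaction operator @_a lets a statement be evaluated from agent a's point of view; a correct answer involves a shift to Sally's perspective, while the typical incorrect answer stays at the child's own perspective. The propositions and agent names below are invented.

```latex
% Schematic only: invented propositions, not the paper's rules or derivations.
% @_{a}\varphi reads "\varphi holds as seen from perspective a".
\begin{align*}
\text{Correct response (with perspective shift):}\quad
  & @_{\mathrm{Sally}}\,\mathit{saw\_marble\_in\_basket}
    \;\vdash\; @_{\mathrm{Sally}}\,\mathit{believes\_marble\_in\_basket}\\
\text{Typical incorrect response (no shift):}\quad
  & @_{\mathrm{child}}\,\mathit{marble\_in\_box}
    \;\vdash\; @_{\mathrm{child}}\,\mathit{answer\_box}
\end{align*}
```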

    Temporal Data Modeling and Reasoning for Information Systems

    Temporal knowledge representation and reasoning is a major research field in Artificial Intelligence, in Database Systems, and in Web and Semantic Web research. The ability to model and process time and calendar data is essential for many applications like appointment scheduling, planning, Web services, temporal and active database systems, adaptive Web applications, and mobile computing applications. This article aims at three complementary goals. First, to provide a general background in temporal data modeling and reasoning approaches. Second, to serve as an orientation guide for further specific reading. Third, to point to new application fields and research perspectives on temporal knowledge representation and reasoning in the Web and Semantic Web
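    The survey does not commit to a single formalism here; as one small, hypothetical illustration of the kind of calendar reasoning it covers (e.g. for appointment scheduling), the sketch below checks whether two appointments, modeled as half-open time intervals, overlap. The appointment data is made up.

```python
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    """Two half-open intervals [start, end) overlap iff each starts before the other ends."""
    return start_a < end_b and start_b < end_a

# Hypothetical appointments for an appointment-scheduling scenario.
review = (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 10, 0))
standup = (datetime(2024, 5, 2, 9, 45), datetime(2024, 5, 2, 10, 15))

print(overlaps(*review, *standup))  # True: the two meetings clash
```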

    The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning

    The nascent field of fair machine learning aims to ensure that decisions guided by algorithms are equitable. Over the last several years, three formal definitions of fairness have gained prominence: (1) anti-classification, meaning that protected attributes (like race, gender, and their proxies) are not explicitly used to make decisions; (2) classification parity, meaning that common measures of predictive performance (e.g., false positive and false negative rates) are equal across groups defined by the protected attributes; and (3) calibration, meaning that conditional on risk estimates, outcomes are independent of protected attributes. Here we show that all three of these fairness definitions suffer from significant statistical limitations. Requiring anti-classification or classification parity can, perversely, harm the very groups they were designed to protect; and calibration, though generally desirable, provides little guarantee that decisions are equitable. In contrast to these formal fairness criteria, we argue that it is often preferable to treat similarly risky people similarly, based on the most statistically accurate estimates of risk that one can produce. Such a strategy, while not universally applicable, often aligns well with policy objectives; notably, this strategy will typically violate both anti-classification and classification parity. In practice, it requires significant effort to construct suitable risk estimates. One must carefully define and measure the targets of prediction to avoid retrenching biases in the data. But, importantly, one cannot generally address these difficulties by requiring that algorithms satisfy popular mathematical formalizations of fairness. By highlighting these challenges in the foundation of fair machine learning, we hope to help researchers and practitioners productively advance the area
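    A minimal sketch, with invented toy data rather than anything from the paper, of how the second and third definitions can be checked in practice: classification parity compares error rates (here false positive rates) across groups, and calibration compares outcome rates across groups among people with similar risk estimates.

```python
import numpy as np

# Toy data (purely illustrative): a binary outcome y, a model's risk
# estimates, and a protected group label for each person.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # hypothetical protected attribute
risk = rng.uniform(0, 1, size=1000)        # hypothetical risk estimates
y = rng.binomial(1, risk)                  # outcomes drawn so the scores are calibrated

decision = risk >= 0.5                     # decisions based on a single risk threshold

# Classification parity: compare false positive rates across groups.
for g in (0, 1):
    neg = (group == g) & (y == 0)
    print(f"group {g}: FPR = {decision[neg].mean():.3f}")

# Calibration: among people with similar risk estimates, outcome rates
# should not depend on group membership.
bin_mask = (risk >= 0.4) & (risk < 0.6)
for g in (0, 1):
    sel = bin_mask & (group == g)
    print(f"group {g}: P(y=1 | risk in [0.4, 0.6)) = {y[sel].mean():.3f}")
```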

    Towards MKM in the Large: Modular Representation and Scalable Software Architecture

    MKM has been defined as the quest for technologies to manage mathematical knowledge. MKM "in the small" is well-studied, so the real problem is to scale up to large, highly interconnected corpora: "MKM in the large". We contend that advances in two areas are needed to reach this goal. We need representation languages that support incremental processing of all primitive MKM operations, and we need software architectures and implementations that implement these operations scalably on large knowledge bases. We present instances of both in this paper: the MMT framework for modular theory-graphs that integrates meta-logical foundations, which forms the base of the next OMDoc version; and TNTBase, a versioned storage system for XML-based document formats. TNTBase becomes an MMT database by instantiating it with special MKM operations for MMT.
    Comment: To appear in The 9th International Conference on Mathematical Knowledge Management: MKM 201
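    The abstract does not show MMT's syntax or TNTBase's API; purely as a hypothetical sketch of what a modular theory graph is, the fragment below represents theories that extend one another through inclusion and computes the flattened view of a theory. The theory names and structure are invented for illustration.

```python
# Hypothetical sketch of a modular theory graph (not MMT's real syntax or API):
# each theory lists its own constants plus the theories it includes, and the
# flattened view is computed by following inclusions transitively.

theories = {
    "Magma":  {"includes": [],         "constants": ["op"]},
    "Monoid": {"includes": ["Magma"],  "constants": ["unit"]},
    "Group":  {"includes": ["Monoid"], "constants": ["inverse"]},
}

def flatten(name, graph=theories):
    """Collect all constants visible in a theory, following includes."""
    seen = []
    for parent in graph[name]["includes"]:
        seen += flatten(parent, graph)
    return seen + graph[name]["constants"]

print(flatten("Group"))  # ['op', 'unit', 'inverse']
```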