
    A proof-theoretic analysis of the classical propositional matrix method

    The matrix method, due to Bibel and Andrews, is a proof procedure designed for automated theorem-proving. We show that underlying this method is a fully structured combinatorial model of conventional classical proof theory. © The Author, 2012. Published by Oxford University Press.
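
    For readers less familiar with the method, the validity test it is built on can be stated in a few lines: a propositional formula in disjunctive normal form, read as a matrix whose columns are its clauses, is valid exactly when every path through the matrix (one literal per column) contains a complementary pair of literals, a "connection". The Python sketch below only illustrates that test; it is not the procedure analysed in the paper, and the (name, polarity) encoding of literals is our own.

```python
from itertools import product

def valid_dnf(matrix):
    """Matrix-method style validity test for a formula in DNF.

    `matrix` is a list of columns; each column is a list of literals,
    a literal being a (name, polarity) pair.  The formula is valid iff
    every path -- one literal chosen from each column -- contains a
    complementary pair (a "connection").
    """
    for path in product(*matrix):
        literals = set(path)
        # An open path (no connection) yields a falsifying assignment.
        if not any((name, not pol) in literals for (name, pol) in literals):
            return False
    return True

# (p and q) or (not p) or (not q) is valid: every path is closed by a connection.
print(valid_dnf([[("p", True), ("q", True)], [("p", False)], [("q", False)]]))
```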

    An Introduction to Mechanized Reasoning

    Mechanized reasoning uses computers to verify proofs and to help discover new theorems. Computer scientists have applied mechanized reasoning to economic problems but, to date, this work has not been properly presented in economics journals. We introduce mechanized reasoning to economists in three ways. First, we introduce mechanized reasoning in general, describing both the techniques and their successful applications. Second, we explain how mechanized reasoning has been applied to economic problems, concentrating on the two domains that have attracted the most attention: social choice theory and auction theory. Finally, we present a detailed example of mechanized reasoning in practice by means of a proof of Vickrey's familiar theorem on second-price auctions.
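
    As a toy illustration of the kind of statement that gets mechanized, the sketch below brute-forces the content of Vickrey's theorem on a small grid: in a sealed-bid second-price auction, bidding one's true value weakly dominates any other bid. This is a finite sanity check written by us, not the paper's machine-checked proof, and the tie-breaking rule (ties lose) is an assumption made purely for simplicity.

```python
def utility(value, bid, best_other):
    """Bidder's payoff in a sealed-bid second-price auction
    (ties broken against the bidder, purely for simplicity)."""
    return value - best_other if bid > best_other else 0.0

# Finite check, not a proof: on the grid, bidding the true value v never
# does worse than any alternative bid b, whatever the highest opposing
# bid o turns out to be.
grid = [x / 4 for x in range(9)]            # 0.0, 0.25, ..., 2.0
assert all(utility(v, v, o) >= utility(v, b, o)
           for v in grid for b in grid for o in grid)
print("truthful bidding is weakly dominant on the grid")
```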

    Geometric Aspects of Multiagent Systems

    Recent advances in Multiagent Systems (MAS) and Epistemic Logic within Distributed Systems Theory have used various combinatorial structures that model both the geometry of the systems and the Kripke model structure of models for the logic. Examining one of the simpler versions of these models, interpreted systems, and the related Kripke semantics of the logic S5_n (an epistemic logic with n agents), the similarities with the geometric/homotopy-theoretic structure of groupoid atlases are striking. These latter objects arise in problems within algebraic K-theory, an area of algebra linked to the study of decomposition and normal form theorems in linear algebra. They have a natural, well-structured notion of path and constructions of path objects, etc., that yield a rich homotopy theory. Comment: 14 pages, 1 eps figure, prepared for GETCO200
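
    To make the interpreted-systems reading concrete, the sketch below encodes a two-agent toy system: a global state is a tuple of local states, agent i cannot distinguish two global states that agree on its own component, and K_i is evaluated over that equivalence class, which is what yields an S5_n frame. The local state names and sample propositions are hypothetical, and the paper's groupoid-atlas perspective is not modelled here.

```python
from itertools import product

# Hypothetical two-agent interpreted system: each agent has two local states.
local_states = [("a0", "a1"), ("b0", "b1")]
global_states = list(product(*local_states))

def knows(i, prop, state):
    """K_i prop holds at `state` iff prop holds at every global state
    agent i considers possible (same i-th local component)."""
    return all(prop(s) for s in global_states if s[i] == state[i])

# Agent 0 knows its own local state but not agent 1's.
print(knows(0, lambda s: s[0] == "a0", ("a0", "b0")))   # True
print(knows(0, lambda s: s[1] == "b0", ("a0", "b0")))   # False
```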

    Possible Patterns

    “There are no gaps in logical space,” David Lewis writes, giving voice to a sentiment shared by many philosophers. But different natural ways of trying to make this sentiment precise turn out to conflict with one another. One is a *pattern* idea: “Any pattern of instantiation is metaphysically possible.” Another is a *cut and paste* idea: “For any objects in any worlds, there exists a world that contains any number of duplicates of all of those objects.” We use resources from model theory to show the inconsistency of certain packages of combinatorial principles and the consistency of others.
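
    One possible way to render the two quoted principles symbolically, to see why they can pull apart; the notation is ours and only schematic, while the paper's results concern precisely such packages of principles:

```latex
% Schematic renderings (notation ours, not the authors').
\begin{align*}
\text{(Pattern)}\quad
  & \text{every pattern } P \text{ of instantiation is realised: } \exists w\,(w \models P).\\
\text{(Cut and paste)}\quad
  & \text{for any objects } x_1, x_2, \dots \text{ in any worlds and any cardinal } \kappa,\\
  & \exists w \text{ containing } \kappa \text{ duplicates of each } x_i.
\end{align*}
```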

    Changing a semantics: opportunism or courage?

    The generalized models for higher-order logics introduced by Leon Henkin, and their multiple offspring over the years, have become a standard tool in many areas of logic. Even so, discussion has persisted about their technical status, and perhaps even their conceptual legitimacy. This paper gives a systematic view of generalized model techniques, discusses what they mean in mathematical and philosophical terms, and presents a few technical themes and results about their role in algebraic representation, calibrating provability, lowering complexity, understanding fixed-point logics, and achieving set-theoretic absoluteness. We also show how thinking about Henkin's approach to semantics of logical systems in this generality can yield new results, dispelling the impression of ad-hocness. This paper is dedicated to Leon Henkin, a deep logician who has changed the way we all work, while also being an always open, modest, and encouraging colleague and friend. Comment: 27 pages. To appear in: The life and work of Leon Henkin: Essays on his contributions (Studies in Universal Logic), eds: Manzano, M., Sain, I. and Alonso, E., 201
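
    For orientation, the contrast between standard and Henkin-style generalized models can be put in one line; this is a standard textbook formulation, not a quotation from the paper:

```latex
% Function domains of a generalized model need not be full, only closed
% under what the comprehension axioms require.
\[
\text{standard: } D_{\sigma \to \tau} = D_\tau^{\,D_\sigma}
\qquad\qquad
\text{generalized: } D_{\sigma \to \tau} \subseteq D_\tau^{\,D_\sigma}
\ \text{(closed under comprehension)}
\]
```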

    Normalisation Control in Deep Inference via Atomic Flows

    We introduce `atomic flows': they are graphs obtained from derivations by tracing atom occurrences and forgetting the logical structure. We study simple manipulations of atomic flows that correspond to complex reductions on derivations. This allows us to prove, for propositional logic, a new and very general normalisation theorem, which contains cut elimination as a special case. We operate in deep inference, which is more general than other syntactic paradigms, and where normalisation is more difficult to control. We argue that atomic flows are a significant technical advance for normalisation theory, because 1) the technique they support is largely independent of syntax; 2) indeed, it is largely independent of logical inference rules; 3) they constitute a powerful geometric formalism, which is more intuitive than syntax.
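
    A rough picture of the underlying data structure, not the authors' formalism: an atomic flow can be stored as a labelled directed graph whose vertices are atomic structural steps (identity, cut, contraction, weakening and their duals) and whose edges trace atom occurrences; the reductions studied in the paper are then local rewrites on such graphs. In the sketch below, `None` marks an edge that reaches the derivation's premiss or conclusion; the example flow is our own.

```python
from dataclasses import dataclass, field

@dataclass
class AtomicFlow:
    """Illustrative encoding: vertices are atomic structural steps,
    edges trace atom occurrences (None = premiss/conclusion)."""
    vertices: dict = field(default_factory=dict)   # vertex id -> label
    edges: list = field(default_factory=list)      # (source id, target id or None)

    def add(self, vid, label):
        self.vertices[vid] = label

    def connect(self, src, dst=None):
        self.edges.append((src, dst))

# A tiny example: two identities introduce a / not-a, a contraction merges
# the two positive occurrences, a cut consumes the result together with one
# negative occurrence, and the leftover negative occurrence exits below.
flow = AtomicFlow()
for vid, label in [(1, "identity"), (2, "identity"), (3, "contraction"), (4, "cut")]:
    flow.add(vid, label)
flow.connect(1, 3); flow.connect(2, 3)   # positive occurrences into the contraction
flow.connect(3, 4); flow.connect(1, 4)   # contracted atom and its dual meet in the cut
flow.connect(2)                          # remaining negative occurrence reaches the conclusion
print(f"{len(flow.vertices)} vertices, {len(flow.edges)} atom edges")
```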