4,527 research outputs found

    Designing Normative Theories for Ethical and Legal Reasoning: LogiKEy Framework, Methodology, and Tool Support

    A framework and methodology, termed LogiKEy, for the design and engineering of ethical reasoners, normative theories and deontic logics is presented. The overall motivation is the development of suitable means for the control and governance of intelligent autonomous systems. LogiKEy's unifying formal framework is based on semantical embeddings of deontic logics, logic combinations and ethico-legal domain theories in expressive classical higher-order logic (HOL). This meta-logical approach enables powerful tool support in LogiKEy: off-the-shelf theorem provers and model finders for HOL assist the LogiKEy designer of ethical intelligent agents in flexibly experimenting with underlying logics and their combinations, with ethico-legal domain theories, and with concrete examples, all at the same time. Continuous improvements of these off-the-shelf provers directly leverage the reasoning performance in LogiKEy. Case studies, in which the LogiKEy framework and methodology have been applied and tested, give evidence that HOL's undecidability often does not hinder efficient experimentation.
    Comment: 50 pages; 10 figures
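
    To see what a "semantical embedding" in HOL amounts to here, a minimal sketch in standard notation may help; this shows the general Kripke-style embedding technique used in this line of work, not a quotation of LogiKEy's Isabelle/HOL sources. Formulas are lifted to predicates on possible worlds (type \(i \to o\)), and \(R\) denotes the deontic accessibility relation:

        \[
        \begin{aligned}
        \mathbf{O}\,\varphi &:= \lambda w.\ \forall v.\ (R\,w\,v \rightarrow \varphi\,v)
          && \text{(obligatory: } \varphi \text{ holds in all ideal successor worlds)} \\
        \models \varphi &:= \forall w.\ \varphi\,w
          && \text{(validity: truth at every world)}
        \end{aligned}
        \]

    A HOL prover can then verify the K-scheme \(\mathbf{O}(\varphi \rightarrow \psi) \rightarrow (\mathbf{O}\varphi \rightarrow \mathbf{O}\psi)\) outright, whereas the D-scheme \(\mathbf{O}\varphi \rightarrow \neg\mathbf{O}\neg\varphi\) only becomes provable once seriality of \(R\) is assumed; toggling such frame conditions is exactly the kind of logic-level experimentation the abstract describes.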

    Computing with Coloured Tangles

    We suggest a diagrammatic model of computation based on an axiom of distributivity. A diagram of a decorated coloured tangle, similar to those that appear in low-dimensional topology, plays the role of a circuit diagram. Equivalent diagrams represent bisimilar computations. We prove that our model of computation is Turing complete and that, with bounded resources, it can moreover decide any language in the complexity class IP, sometimes with better performance parameters than the corresponding classical protocols.
    Comment: 36 pages; Introduction entirely rewritten, Section 4.3 added

    Modeling Life as Cognitive Info-Computation

    This article presents a naturalist approach to cognition, understood as a network of info-computational, autopoietic processes in living systems. It provides a conceptual framework for a unified view of cognition as it evolved from the simplest to the most complex organisms, based on new empirical and theoretical results. It addresses three fundamental questions: what cognition is, how cognition works, and what cognition does at different levels of complexity of living organisms. By explicating the info-computational character of cognition, its evolution, agent-dependency and generative mechanisms, we can better understand its life-sustaining and life-propagating role. The info-computational approach contributes to rethinking cognition as a process of natural computation in living beings, an understanding that can then be applied to cognitive computation in artificial systems.
    Comment: Manuscript submitted to Computability in Europe CiE 201

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory, the field that studies the resources (such as time, space, and randomness) needed to solve computational problems, leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.
    Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added

    Computer supported mathematics with Ωmega

    Classical automated theorem proving of today is based on ingenious search techniques to find a proof for a given theorem in very large search spaces, often in the range of several billion clauses. But in spite of many successful attempts to prove even open mathematical problems automatically, their use in everyday mathematical practice is still limited. The shift from search-based methods to more abstract planning techniques, however, opened up a new paradigm for mathematical reasoning on a computer, and several systems of that kind now employ a mix of interactive, search-based, and proof-planning techniques. The Ωmega system is at the core of several related and well-integrated research projects of the Ωmega research group, whose aim is to develop system support for a working mathematician as well as for a software engineer employing formal methods for quality assurance. In particular, Ωmega supports proof development at a human-oriented abstract level of proof granularity. It is a modular system with a central proof data structure and several supplementary subsystems, including automated deduction and computer algebra systems. Ωmega has many characteristics in common with systems like NuPrL, CoQ, Hol, Pvs, and Isabelle. However, it differs from these systems in its focus on proof planning, and in that respect it is more similar to the proof-planning systems Clam and λClam at Edinburgh.
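
    The contrast the abstract draws between clause-level search and proof planning can be made concrete with a toy sketch. The Haskell below is a generic, hypothetical illustration of the proof-planning idea, in which methods check whether they apply to a goal and reduce it to subgoals, and a plan is the sequence of methods used; all names and rules here are invented for the sketch and are not Ωmega's actual methods or API.

        import Control.Monad (msum)

        data Goal = Atom String | And Goal Goal | Implies Goal Goal
          deriving (Eq, Show)

        -- A method: a name, plus an applicability test that either
        -- rejects the goal (Nothing) or reduces it to subgoals.
        data Method = Method
          { methodName :: String
          , applyTo    :: Goal -> Maybe [Goal]
          }

        -- Prove a conjunction by proving both conjuncts.
        andIntro :: Method
        andIntro = Method "andIntro" $ \g -> case g of
          And a b -> Just [a, b]
          _       -> Nothing

        -- Toy rule: assume the hypothesis, prove the conclusion.
        impIntro :: Method
        impIntro = Method "impIntro" $ \g -> case g of
          Implies _ c -> Just [c]
          _           -> Nothing

        -- Close atomic goals that appear in a fixed knowledge base.
        axiomIn :: [String] -> Method
        axiomIn kb = Method "axiom" $ \g -> case g of
          Atom a | a `elem` kb -> Just []
          _                    -> Nothing

        -- Planning: try each method in order, recurse on its subgoals,
        -- and record the sequence of method names that closes everything.
        plan :: [Method] -> Goal -> Maybe [String]
        plan methods = go
          where
            go goal = msum
              [ (methodName m :) . concat <$> (applyTo m goal >>= traverse go)
              | m <- methods ]

        main :: IO ()
        main = print $ plan [andIntro, impIntro, axiomIn ["p", "q"]]
                            (And (Atom "p") (Implies (Atom "r") (Atom "q")))

    Running main prints Just ["andIntro","axiom","impIntro","axiom"]: the planner reasons about which abstract step applies next rather than searching a clause space, which is the shift in paradigm the abstract refers to.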