A machine that knows its own code
We construct a machine that knows its own code, at the price of not knowing
its own factivity.
Comment: 7 pages
A Machine Checked Model of Idempotent MGU Axioms For Lists of Equational Constraints
We present formalized proofs verifying that the first-order unification
algorithm defined over lists of satisfiable constraints generates a most
general unifier (MGU), which also happens to be idempotent. All of our proofs
have been formalized in the Coq theorem prover. Our proofs show that finite
maps produced by the unification algorithm provide a model of the axioms
characterizing idempotent MGUs of lists of constraints. The axioms that serve
as the basis for our verification are derived from a standard set by extending
them to lists of constraints. For us, constraints are equalities between terms
in the language of simple types. Substitutions are formally modeled as finite
maps using the Coq library Coq.FSets.FMapInterface. Coq's method of functional
induction is the main proof technique used in proving many of the axioms.
Comment: In Proceedings UNIF 2010, arXiv:1012.455
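The algorithm formalized above can be caricatured in a few lines. The following is a minimal Python sketch of first-order unification over a list of equational constraints, not the paper's Coq development; the term encoding (strings as variables, tuples as applications) is an assumption made for this sketch.

```python
# Minimal first-order unification over a list of equational constraints.
# Illustrative only: strings are variables, tuples ("f", t1, ..., tn) are
# applications; this is not the paper's Coq encoding.

def substitute(sigma, t):
    """Apply the finite-map substitution sigma to term t."""
    if isinstance(t, str):                      # t is a variable
        return sigma.get(t, t)
    return (t[0],) + tuple(substitute(sigma, a) for a in t[1:])

def occurs_in(x, t):
    """Occurs check: does variable x appear in term t?"""
    if isinstance(t, str):
        return x == t
    return any(occurs_in(x, a) for a in t[1:])

def unify(constraints):
    """Return an idempotent MGU as a dict, or None if unsatisfiable."""
    sigma = {}
    work = list(constraints)
    while work:
        s, t = work.pop()
        s, t = substitute(sigma, s), substitute(sigma, t)
        if s == t:
            continue
        if isinstance(s, str):                  # variable elimination
            if occurs_in(s, t):
                return None                     # occurs check fails
            # Compose so sigma stays idempotent: no bound variable
            # remains free in any right-hand side.
            sigma = {v: substitute({s: t}, u) for v, u in sigma.items()}
            sigma[s] = t
        elif isinstance(t, str):
            work.append((t, s))                 # orient the equation
        elif s[0] == t[0] and len(s) == len(t): # decompose f(...) = f(...)
            work.extend(zip(s[1:], t[1:]))
        else:
            return None                         # function symbol clash
    return sigma
```

For example, unifying f(x, b) with f(a, y) yields the substitution {x ↦ a, y ↦ b}, and applying that substitution twice gives the same result as applying it once, i.e. it is idempotent in the sense the axioms characterize.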
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs,
especially for completion, which aims to infer new triples based on existing
ones. Both rules and embeddings can be used for knowledge graph reasoning and
each has its own advantages and difficulties. Rule-based reasoning is
accurate and explainable, but rule learning via search over the graph suffers
from poor efficiency due to the huge search space. Embedding-based reasoning
is more scalable and efficient, as reasoning is carried out through
computation over embeddings, but it struggles to learn good representations
for sparse entities because good embeddings rely heavily on data richness.
Based on this observation, in this paper we explore how embedding learning and
rule learning can be combined so that each offsets the other's difficulties
with its advantages. We propose IterE, a novel framework that iteratively
learns embeddings and rules: rules are learned from embeddings with a pruning
strategy, and embeddings are learned from existing triples and new triples
inferred by rules. Evaluations of IterE's embedding quality show that rules
help improve the quality of sparse entity embeddings and their link prediction
results. We also evaluate the efficiency of rule learning and the quality of
rules from IterE against AMIE+, showing that IterE generates high-quality
rules more efficiently. Experiments show that iteratively learning embeddings
and rules lets them benefit each other during learning and prediction.
Comment: This paper is accepted by WWW'1
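The rule-inference half of the iteration described above can be sketched in a few lines: composition rules fire over the current triple set, and the inferred triples feed back as extra training data for the next embedding step. The rule format and all names below are assumptions for this sketch, not IterE's actual rule language, pruning strategy, or embedding model.

```python
# Toy sketch of one forward-inference round: a composition rule
# r1(x, y) ∧ r2(y, z) → r3(x, z) is applied to (head, relation, tail)
# triples, augmenting the training set for the next embedding step.
# Illustrative only; not IterE's actual rule language or pruning strategy.

def apply_rules(triples, rules):
    """One round of forward inference over (head, relation, tail) triples."""
    inferred = set(triples)
    for (r1, r2, r3) in rules:                  # body r1, r2; head r3
        for (x, ra, y) in triples:
            if ra != r1:
                continue
            for (y2, rb, z) in triples:
                if rb == r2 and y2 == y:
                    inferred.add((x, r3, z))
    return inferred

# Example: a learned composition rule infers a new triple that the sparse
# entity "cara" would otherwise lack as training signal.
kg = {("anna", "mother_of", "ben"), ("ben", "father_of", "cara")}
rules = [("mother_of", "father_of", "grandparent_of")]
augmented = apply_rules(kg, rules)
```

Here `augmented` contains the new triple (anna, grandparent_of, cara) alongside the original two, which is the kind of rule-inferred triple the framework feeds back into embedding training.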
Cut-Simulation and Impredicativity
We investigate cut-elimination and cut-simulation in impredicative
(higher-order) logics. We illustrate that adding simple axioms such as Leibniz
equations to a calculus for an impredicative logic -- in our case a sequent
calculus for classical type theory -- is like adding cut. The phenomenon
equally applies to prominent axioms like Boolean- and functional
extensionality, induction, choice, and description. This calls for the
development of calculi where these principles are built-in instead of being
treated axiomatically.
Comment: 21 pages
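Leibniz equality, the simplest of the axioms mentioned, is easy to state directly. The following Lean sketch (notation assumed for illustration, not taken from the paper) gives the definition and the instantiation that recovers ordinary equality:

```lean
-- Leibniz equality: a equals b iff every predicate true of a is true of b.
def Leib {α : Type} (a b : α) : Prop :=
  ∀ P : α → Prop, P a → P b

-- Instantiating P with the predicate (a = ·) recovers ordinary equality.
-- It is this freedom to instantiate P with arbitrary, impredicatively
-- quantified predicates that gives such axioms their cut-like power.
theorem leib_to_eq {α : Type} {a b : α} (h : Leib a b) : a = b :=
  h (fun x => a = x) rfl
```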
Some applications of logic to feasibility in higher types
In this paper we demonstrate that the class of basic feasible functionals has
recursion theoretic properties which naturally generalize the corresponding
properties of the class of feasible functions. We also improve the Kapron-Cook
result on machine representation of basic feasible functionals. Our proofs
rest on essential applications of logic. We introduce a weak fragment of
second-order arithmetic, with second-order variables ranging over functions
from N into N, which suitably characterizes the basic feasible functionals,
and show that it is a useful tool for investigating their properties. In
particular, we provide an example of how one can extract feasible "programs"
from mathematical proofs that use non-feasible functionals (such as
second-order polynomials).
Automated verification of refinement laws
Demonic refinement algebras are variants of Kleene algebras. Introduced by von Wright as a lightweight variant of the refinement calculus, their intended semantics are positively disjunctive predicate transformers, and their calculus lies entirely within first-order equational logic. So, for the first time, off-the-shelf automated theorem proving (ATP) becomes available for refinement proofs. We used ATP to verify a toolkit of basic refinement laws. Based on this toolkit, we then verified two classical complex refinement laws for action systems by ATP: a data refinement law and Back's atomicity refinement law. We also present a refinement law for infinite loops that was discovered through automated analysis. Our proof experiments not only demonstrate that refinement can effectively be automated; they also compare eleven different ATP systems and suggest that program verification with variants of Kleene algebras yields interesting theorem-proving benchmarks. Finally, we apply hypothesis learning techniques that seem indispensable for automating more complex proofs.
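Because the calculus is purely first-order equational, its laws can also be sanity-checked in a concrete model. The sketch below works in plain Kleene algebra over binary relations, of which the demonic refinement algebras above are variants; the particular law and all names are illustrative, not the paper's refinement laws.

```python
# Sanity-checking an equational law in a concrete Kleene algebra model:
# binary relations over a small universe, with union as +, composition
# as ;, and reflexive-transitive closure as star. Any law provable in the
# calculus must hold here, so a failed check would signal a mis-stated law.
# Illustrative only; not the paper's refinement laws.

U = range(3)                                    # small universe

def compose(r, s):
    """Relational composition r ; s."""
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def star(r):
    """Reflexive-transitive closure: the Kleene star in this model."""
    result = {(x, x) for x in U}
    while True:
        bigger = result | compose(result, r)
        if bigger == result:
            return result
        result = bigger

def denesting_holds(a, b):
    """The classical Kleene algebra law (a + b)* = a* ; (b ; a*)*."""
    return star(a | b) == compose(star(a), star(compose(b, star(a))))
```

For instance, `denesting_holds({(0, 1)}, {(1, 2)})` evaluates to True; checking candidate laws against such finite models is a cheap complement to the ATP proofs the abstract describes.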