14,010 research outputs found
Extending the Calculus of Constructions with Tarski's fix-point theorem
We propose to use Tarski's least fixpoint theorem as a basis to define
recursive functions in the calculus of inductive constructions. This widens the
class of functions that can be modeled in type-theory-based theorem proving
tools to include potentially non-terminating functions. This is only possible if we
extend the logical framework by adding the axioms that correspond to classical
logic. We claim that the extended framework makes it possible to reason about
both terminating and non-terminating computations, and we show that common
facilities of the calculus of inductive constructions, like program extraction,
can be extended to also handle the new functions.
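The idea can be previewed concretely: read a recursive definition as a functional on partial functions and approach its least fixed point by iterating from the everywhere-undefined function, in the spirit of Kleene/Tarski fixpoint iteration. The sketch below is our own illustration, not the paper's development in the calculus of inductive constructions; the names `functional` and `lfp` and the use of the factorial equation are assumptions.

```python
# Minimal sketch: a recursive function as the least fixed point of a
# functional over partial functions (represented as dicts), obtained by
# iterating from bottom, the everywhere-undefined partial function.

def functional(f):
    """One unfolding of fact(n) = 1 if n == 0 else n * fact(n - 1),
    lifted to partial functions: defined wherever the body only needs
    values on which f is already defined."""
    g = {}
    for n in range(100):          # restrict to a finite domain for the demo
        if n == 0:
            g[n] = 1
        elif n - 1 in f:
            g[n] = n * f[n - 1]
    return g

def lfp(func, steps):
    """Iterate the functional `steps` times starting from bottom."""
    f = {}                        # bottom: defined nowhere
    for _ in range(steps):
        f = func(f)
    return f

fact = lfp(functional, 6)
# after 6 iterations the approximation is defined on 0..5
```

Each iteration extends the domain of definedness by one point, which is exactly how a potentially non-terminating function is captured: inputs on which the recursion diverges simply never enter the domain of any finite approximation.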
Abstract Canonical Inference
An abstract framework of canonical inference is used to explore how different
proof orderings induce different variants of saturation and completeness.
Notions like completion, paramodulation, saturation, redundancy elimination,
and rewrite-system reduction are connected to proof orderings. Fairness of
deductive mechanisms is defined in terms of proof orderings, distinguishing
between (ordinary) "fairness," which yields completeness, and "uniform
fairness," which yields saturation.
Comment: 28 pages, no figures, to appear in ACM Trans. on Computational Logic
Intensional and Extensional Semantics of Bounded and Unbounded Nondeterminism
We give extensional and intensional characterizations of nondeterministic
functional programs: as structure preserving functions between biorders, and as
nondeterministic sequential algorithms on ordered concrete data structures
which compute them. A fundamental result establishes that the extensional and
intensional representations of nondeterministic programs are equivalent, by
showing how to construct a unique sequential algorithm which computes a given
monotone and stable function, and describing the conditions on sequential
algorithms which correspond to continuity with respect to each order.
We illustrate by defining may and must-testing denotational semantics for a
sequential functional language with bounded and unbounded choice operators. We
prove that these are computationally adequate, despite the non-continuity of
the must-testing semantics of unbounded nondeterminism. In the bounded case, we
prove that our continuous models are fully abstract with respect to may and
must-testing by identifying a simple universal type, which may also form the
basis for models of the untyped lambda-calculus. In the unbounded case we
observe that our model contains computable functions which are not denoted by
terms, by identifying a further "weak continuity" property of the definable
elements, and use this to establish that it is not fully abstract.
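May- and must-testing can be previewed with a far simpler encoding than the paper's biorder semantics: take a program to denote the set of its possible outcomes, with `None` marking divergence; may-testing then quantifies existentially over outcomes and must-testing universally. The names `choice`, `may_pass`, and `must_pass` below are our own illustrative assumptions, not the paper's constructions.

```python
# Toy outcome-set model of bounded nondeterminism. A program denotes a set
# of outcomes; None stands for a divergent branch.

def choice(*branches):
    """Bounded nondeterministic choice: union of the branches' outcome sets."""
    out = set()
    for b in branches:
        out |= b
    return out

def may_pass(outcomes, test):
    """May-testing: some convergent outcome passes the test."""
    return any(o is not None and test(o) for o in outcomes)

def must_pass(outcomes, test):
    """Must-testing: every branch converges and passes the test."""
    return all(o is not None and test(o) for o in outcomes)

prog = choice({1}, {2}, {None})   # may return 1 or 2, or diverge
```

The asymmetry the abstract points to is visible even here: adding a divergent branch never changes may-behaviour but destroys every must-guarantee.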
A Categorical View on Algebraic Lattices in Formal Concept Analysis
Formal concept analysis has grown from a new branch of the mathematical field
of lattice theory to a widely recognized tool in Computer Science and
elsewhere. In order to fully benefit from this theory, we believe that it can
be enriched with notions such as approximation by computation or
representability. The latter are commonly studied in denotational semantics and
domain theory and captured most prominently by the notion of algebraicity, e.g.
of lattices. In this paper, we explore the notion of algebraicity in formal
concept analysis from a category-theoretical perspective. To this end, we
equip the notion of approximable concept with a suitable category and show
that the latter is equivalent to the category of algebraic lattices. At the
same time, the paper provides a relatively comprehensive account of the
representation theory of algebraic lattices in the framework of Stone duality,
relating well-known structures such as Scott information systems with further
formalisms from logic, topology, domains and lattice theory.
Comment: 36 pages
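The derivation operators of formal concept analysis, whose fixed points are the formal concepts, are easy to state on a toy context. The context data and helper names below are our own assumptions, a minimal sketch rather than the paper's categorical machinery.

```python
# A tiny formal context: each object is mapped to its set of attributes.
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"b"},
}

def intent(objects):
    """Attributes shared by all of the given objects."""
    if not objects:
        return set().union(*context.values())
    return set.intersection(*(context[o] for o in objects))

def extent(attrs):
    """Objects that have all of the given attributes."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B,
# i.e. a fixed point of the composed derivation operators.
A = extent({"b"})
B = intent(A)
```

Approximable concepts, in the paper's sense, relax this fixed-point condition to a directed family of finite approximations, which is what connects the construction to algebraic lattices.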
HoCHC: A Refutationally Complete and Semantically Invariant System of Higher-order Logic Modulo Theories
We present a simple resolution proof system for higher-order constrained Horn
clauses (HoCHC) - a system of higher-order logic modulo theories - and prove
its soundness and refutational completeness w.r.t. the standard semantics. As
corollaries, we obtain the compactness theorem and semi-decidability of HoCHC
for semi-decidable background theories, and we prove that HoCHC satisfies a
canonical model property. Moreover, a variant of the well-known translation from
higher-order to first-order logic is shown to be sound and complete for HoCHC in
standard semantics. We illustrate how to transfer decidability results for
(fragments of) first-order logic modulo theories to our higher-order setting,
using as an example the Bernays-Schönfinkel-Ramsey fragment of HoCHC modulo a
restricted form of Linear Integer Arithmetic.
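At the propositional level, refutation-oriented reasoning for Horn clauses degenerates to forward chaining on atoms. The sketch below is our own drastic simplification (no higher-order features, no background theory), intended only to show the shape of such a semi-decision procedure; the names `derivable` and the clause encoding are assumptions.

```python
# Propositional Horn clauses as (body, head) pairs, where body is a set of
# atoms and head a single atom; facts are clauses with an empty body.
# Forward chaining saturates the set of derivable atoms.

def derivable(clauses, goal):
    """Return True iff `goal` follows from the Horn clauses."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if body <= facts and head not in facts:
                facts.add(head)       # clause fires: derive its head
                changed = True
    return goal in facts

clauses = [(set(), "p"), ({"p"}, "q"), ({"q", "p"}, "r")]
```

For constrained Horn clauses over a semi-decidable background theory, as in HoCHC, the analogous procedure only semi-decides unsatisfiability, matching the abstract's semi-decidability corollary.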
Changing a semantics: opportunism or courage?
The generalized models for higher-order logics introduced by Leon Henkin, and
their multiple offspring over the years, have become a standard tool in many
areas of logic. Even so, discussion has persisted about their technical status,
and perhaps even their conceptual legitimacy. This paper gives a systematic
view of generalized model techniques, discusses what they mean in mathematical
and philosophical terms, and presents a few technical themes and results about
their role in algebraic representation, calibrating provability, lowering
complexity, understanding fixed-point logics, and achieving set-theoretic
absoluteness. We also show how thinking about Henkin's approach to semantics of
logical systems in this generality can yield new results, dispelling the
impression of adhocness. This paper is dedicated to Leon Henkin, a deep
logician who has changed the way we all work, while also being an always open,
modest, and encouraging colleague and friend.
Comment: 27 pages. To appear in: The Life and Work of Leon Henkin: Essays on His
Contributions (Studies in Universal Logic), eds. Manzano, M., Sain, I. and
Alonso, E., 201