A Comparative Study of Coq and HOL
This paper illustrates the differences between the styles of theory mechanisation of Coq and of HOL. This comparative study is based on the mechanisation of fragments of the theory of computation in these systems. Examples from these implementations are given to support some of the arguments discussed in this paper. The mechanisms for specifying definitions and for theorem proving are discussed separately, building in parallel two pictures of the different approaches to mechanisation taken by these systems.
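To give a flavour of the two mechanisms the study compares on the Coq side, the following is a minimal sketch (not taken from the paper; the names are invented for the example) of an inductive definition and a small theorem proved with tactics.

```coq
(* Minimal sketch, not from the paper: an inductive definition and a
   tactic-based proof, the two mechanisms the comparison focuses on. *)
Inductive even : nat -> Prop :=
| even_O  : even 0
| even_SS : forall n, even n -> even (S (S n)).

Lemma even_4 : even 4.
Proof.
  apply even_SS. apply even_SS. apply even_O.
Qed.
```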
The Tactician (extended version): A Seamless, Interactive Tactic Learner and Prover for Coq
We present Tactician, a tactic learner and prover for the Coq Proof Assistant. Tactician helps users make tactical proof decisions while they retain control over the general proof strategy. To this end, Tactician learns from previously written tactic scripts and gives users either suggestions about the next tactic to be executed or altogether takes over the burden of proof synthesis. Tactician's goal is to provide users with a seamless, interactive, and intuitive experience together with robust and adaptive proof automation. In this paper, we give an overview of Tactician from the user's point of view, regarding both day-to-day usage and issues of package dependency management while learning in the large. Finally, we give a peek into Tactician's implementation as a Coq plugin and machine learning platform.
Comment: 19 pages, 2 figures. This is an extended version of a paper published in CICM-2020. For the project website, see https://coq-tactician.github.i
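As an illustration of the day-to-day workflow described here, the sketch below shows how a user might invoke Tactician inside a Coq proof. The `Require` line and the `suggest`/`search` tactic names are recalled from the project's documentation and should be treated as assumptions rather than a definitive interface; the example lemma itself is proved manually so the script stands on its own.

```coq
(* Hedged sketch of Tactician usage; the plugin import line and the
   suggest/search tactic names are assumed from the project documentation
   and may differ between versions. *)
From Tactician Require Import Ltac1.

Lemma app_nil_r' (A : Type) (l : list A) : l ++ nil = l.
Proof.
  (* suggest.  (* ask Tactician for likely next tactics *) *)
  (* search.   (* let Tactician attempt full proof synthesis *) *)
  induction l; simpl.
  - reflexivity.
  - rewrite IHl. reflexivity.
Qed.
```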
A New Elimination Rule for the Calculus of Inductive Constructions
Published in the post-proceedings of TYPES, but not presented orally at the conference. In Type Theory, definition by dependently-typed case analysis can be expressed by means of a set of equations — the semantic approach — or by an explicit pattern-matching construction — the syntactic approach. We aim at putting together the best of both approaches by extending the pattern-matching construction found in the Coq proof assistant, in order to obtain the expressivity and flexibility of equation-based case analysis while remaining in a syntax-based setting, thus making dependently-typed programming more tractable in the Coq system. We provide a new rule that permits the omission of impossible cases, handles the propagation of inversion constraints, and makes it possible to derive Streicher's K axiom. We show that subject reduction holds, and sketch a proof of relative consistency.
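To make the kind of case analysis at stake concrete, here is a minimal sketch, not taken from the paper and with all names (vec, vhead) invented for the example, of a dependently-typed match in current Coq where an impossible branch must still be written and is dismissed through the return annotation; the new rule described above aims at letting such cases simply be omitted.

```coq
(* Minimal sketch (not from the paper): dependent case analysis on
   length-indexed vectors, where the nil branch is impossible. *)
Inductive vec (A : Type) : nat -> Type :=
| vnil  : vec A 0
| vcons : forall n, A -> vec A n -> vec A (S n).

(* Head of a non-empty vector: in current Coq the vnil branch must still
   be given, and is dismissed via the dependent return annotation. *)
Definition vhead {A : Type} {n : nat} (v : vec A (S n)) : A :=
  match v in vec _ m return match m with 0 => unit | S _ => A end with
  | vnil        => tt   (* unreachable: the index says the vector is non-empty *)
  | vcons _ x _ => x
  end.
```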
Algebraic totality, towards completeness
Finiteness spaces constitute a categorical model of Linear Logic (LL) whose objects can be seen as linearly topologised spaces (a class of topological vector spaces introduced by Lefschetz in 1942) and whose morphisms are continuous linear maps. First, we recall the definitions of finiteness spaces and describe their basic properties, deduced from the general theory of linearly topologised spaces. Then we give an interpretation of LL based on linear algebra. Second, thanks to separation properties, we can introduce an algebraic notion of totality candidate in the framework of linearly topologised spaces: a totality candidate is a closed affine subspace which does not contain 0. We show that finiteness spaces with totality candidates constitute a model of classical LL. Finally, we give a barycentric simply typed lambda-calculus, with booleans and a conditional operator, which can be interpreted in this model. We prove completeness at type $\mathcal{B}^n \to \mathcal{B}$, where $\mathcal{B}$ is the type of booleans, for every $n$ by an algebraic method.
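Stated in LaTeX, the totality-candidate condition given in the abstract can be written as follows; the boolean illustration in the second display is an assumption added here for concreteness, not a quotation from the paper.

```latex
% Totality candidate, as stated in the abstract: a closed affine
% subspace of a linearly topologised space X that avoids the origin.
\[
  \mathcal{T} \subseteq X \text{ is a totality candidate} \iff
  \mathcal{T} \text{ is closed, affine, and } 0 \notin \mathcal{T}.
\]
% Hedged illustration (not quoted from the abstract): for a boolean type
% with basis vectors t and f, one would expect the total elements to be
% the barycentric combinations
\[
  \mathcal{T}_{\mathcal{B}} = \{\, a \cdot t + b \cdot f \mid a + b = 1 \,\}.
\]
```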
Higher-Order Termination: from Kruskal to Computability
Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue in showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut would yield an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, interpreting the arguments of a program call by a natural number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program, which is however rather weak, since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms such that membership of an expression in the predicate implied the strong normalization property for that expression. Because the predicate is defined by induction on types, or even as a fixpoint, this method could yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it proved very effective in proving the strong normalization property of typed lambda-calculi.
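As a reminder of the technique, Tait-style reducibility (computability) predicates are defined by induction on types; the formulation below for simple types is standard background rather than the paper's own definition, and the name Red is chosen here for illustration.

```latex
% Standard Tait-style reducibility predicates for simple types
% (general background, not a definition taken from this paper).
\begin{align*}
  \mathrm{Red}_{\iota}   &= \{\, t \mid t \text{ is strongly normalizing} \,\} \\
  \mathrm{Red}_{A \to B} &= \{\, t \mid \forall u \in \mathrm{Red}_A,\ t\,u \in \mathrm{Red}_B \,\}
\end{align*}
% Strong normalization of every well-typed term then follows from the
% adequacy lemma: if x_1:A_1, ..., x_k:A_k |- t : B and u_i \in Red_{A_i},
% then t[u_1/x_1, ..., u_k/x_k] \in Red_B.
```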
The play's the thing
For very understandable reasons, phenomenological approaches predominate in the field of sensory urbanism. This paper does not seek to add to that particular discourse. Rather, it takes Rorty's postmodernized Pragmatism as its starting point and develops a position on the role of multi-modal design representation in the design process as a means of admitting many voices and managing multidisciplinary collaboration.
This paper will interrogate some of the concepts underpinning the Sensory Urbanism project to help define the scope of interest in multi-modal representations. It will then explore a range of techniques and approaches developed by artists and designers during the past fifty years or so and comment on how they might inform the question of multi-modal representation. In conclusion, I will argue that we should develop a heterogeneous toolkit that adopts, adapts and re-invents existing methods, because this will better serve our purposes during the exploratory phase(s) of any design project that deals with complexity.
Correct-by-construction implementation of runtime monitors using stepwise refinement
Runtime verification (RV) is a lightweight technique for verifying traces of computer systems. One challenge in applying RV is to guarantee that the implementation of a runtime monitor correctly detects and signals unexpected events. In this paper, we present a method for deriving correct-by-construction implementations of runtime monitors from high-level specifications using Fiat, a Coq library for stepwise refinement. SMEDL (Scenario-based Meta-Event Definition Language), a domain-specific language for event-driven RV, is chosen as the specification language. We propose an operational semantics for SMEDL, suitable for use in Fiat, that describes the behavior of a monitor in a relational way. Then, by utilizing Fiat's refinement calculus, we transform a declarative monitor specification into an executable runtime monitor, with a proof that the behavior of the implementation is strictly a subset of that provided by the specification. Moreover, we define a predicate on the syntactic structure of a monitor definition to ensure termination and determinism. Most of the proof work required to generate monitor code has been automated.
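The refinement guarantee described here can be phrased, independently of Fiat's actual combinators (which are not reproduced here), as an inclusion of behaviors. The following Coq sketch is a hedged illustration of that statement only; all names in it are invented for the example and none belong to Fiat or SMEDL.

```coq
(* Hedged sketch, not Fiat's API: refinement as inclusion of behaviors.
   All names here are invented for illustration. *)
Section Refinement.
  Variables Event Output : Type.

  (* A specification relates an input trace to the outputs a monitor may emit. *)
  Definition Spec := list Event -> Output -> Prop.

  (* An implementation computes one output per trace. *)
  Definition Impl := list Event -> Output.

  (* The implementation refines the specification when every behavior it
     produces is allowed by the specification. *)
  Definition refines (impl : Impl) (spec : Spec) : Prop :=
    forall trace, spec trace (impl trace).
End Refinement.
```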