Type Classes for Mathematics in Type Theory
The introduction of first-class type classes in the Coq system calls for
re-examination of the basic interfaces used for mathematical formalization in
type theory. We present a new set of type classes for mathematics and take full
advantage of their unique features to make practical a particularly flexible
approach formerly thought infeasible. Thus, we address both traditional
proof-engineering challenges and new ones resulting from our ambition to build
upon this development a library of constructive analysis in which abstraction
penalties inhibiting efficient computation are reduced to a minimum.
The base of our development consists of type classes representing a standard
algebraic hierarchy, as well as portions of category theory and universal
algebra. On this foundation we build a set of mathematically sound abstract
interfaces for different kinds of numbers, succinctly expressed using
categorical language and universal algebra constructions. Strategic use of type
classes lets us support these high-level theory-friendly definitions while
still enabling efficient implementations unhindered by gratuitous indirection,
conversion or projection.
Algebra thrives on the interplay between syntax and semantics. The
Prolog-like abilities of type class instance resolution allow us to
conveniently define a quote function, thus facilitating the use of reflective
techniques.
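In Coq, the quote function falls out of type-class instance resolution; as a loose illustration only, the same reify-then-interpret round trip can be sketched with operator overloading in Python. Everything below is a hypothetical analogue, not part of the Coq development:

```python
# Illustrative analogue only: reify an arithmetic expression into syntax
# by running it on a symbolic value, then interpret the syntax back.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lit:
    value: int

@dataclass(frozen=True)
class Add:
    left: object
    right: object

@dataclass(frozen=True)
class Mul:
    left: object
    right: object

class Sym:
    """A symbolic value: arithmetic on it builds syntax instead of computing."""
    def __init__(self, node):
        self.node = node
    def _lift(self, x):
        return x if isinstance(x, Sym) else Sym(Lit(x))
    def __add__(self, other):
        return Sym(Add(self.node, self._lift(other).node))
    def __radd__(self, other):
        return Sym(Add(self._lift(other).node, self.node))
    def __mul__(self, other):
        return Sym(Mul(self.node, self._lift(other).node))
    def __rmul__(self, other):
        return Sym(Mul(self._lift(other).node, self.node))

def quote(f):
    """Reify a unary function as syntax by running it on a symbolic argument."""
    return f(Sym(Var("x"))).node

def interp(node, env):
    """Interpret the reified syntax back into a number."""
    if isinstance(node, Var):
        return env[node.name]
    if isinstance(node, Lit):
        return node.value
    if isinstance(node, Add):
        return interp(node.left, env) + interp(node.right, env)
    return interp(node.left, env) * interp(node.right, env)

ast = quote(lambda x: 2 * x + 3)
assert ast == Add(Mul(Lit(2), Var("x")), Lit(3))
assert interp(ast, {"x": 5}) == 13
```

Reflective proof techniques then work on the reified syntax: a lemma proved once about all syntax trees applies to any concrete expression via `quote`.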
ACL2 Verification of Simplicial Degeneracy Programs in the Kenzo System
Kenzo is a Computer Algebra system devoted to Algebraic
Topology, and written in the Common Lisp programming language. It is
a descendant of a previous system called EAT (for Effective Algebraic
Topology). Kenzo shows much better performance than EAT due, among
other reasons, to a smart encoding of degeneracy lists as integers.
In this paper, we give a complete automated proof of the correctness of
this encoding used in Kenzo. The proof is carried out using ACL2, a system
for proving properties of programs written in (a subset of) Common
Lisp. The most interesting idea, from a methodological point of view, is
our use of EAT to build a model on which the verification is carried out.
Thus, EAT, which is logically simpler but less efficient than Kenzo, acts
as a mathematical model and then Kenzo is formally verified against it.
Ministerio de Educación y Ciencia MTM2006-0651
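The flavour of such an encoding can be sketched as follows. This is a hypothetical Python illustration of the bit-packing idea, not the verified Common Lisp code, and the details of Kenzo's actual encoding may differ:

```python
# Hypothetical sketch: a degeneracy list, i.e. a strictly decreasing list
# of indices, is packed into a single integer whose i-th bit records
# whether the degeneracy operator eta_i occurs in the list.
def encode(dgop_list):
    """Encode a strictly decreasing list of degeneracy indices as an int."""
    assert all(a > b for a, b in zip(dgop_list, dgop_list[1:]))
    code = 0
    for i in dgop_list:
        code |= 1 << i
    return code

def decode(code):
    """Recover the strictly decreasing index list from the integer."""
    return [i for i in range(code.bit_length() - 1, -1, -1) if code >> i & 1]

assert encode([4, 2, 0]) == 0b10101
assert decode(encode([4, 2, 0])) == [4, 2, 0]
```

The correctness statement verified in ACL2 is essentially that such a round trip is the identity and that composition of degeneracy operators commutes with the integer representation.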
An Introduction to Mechanized Reasoning
Mechanized reasoning uses computers to verify proofs and to help discover new
theorems. Computer scientists have applied mechanized reasoning to economic
problems but -- to date -- this work has not been properly presented in
economics journals. We introduce mechanized reasoning to economists in three
ways. First, we introduce mechanized reasoning in general, describing both the
techniques and their successful applications. Second, we explain how mechanized
reasoning has been applied to economic problems, concentrating on the two
domains that have attracted the most attention: social choice theory and
auction theory. Finally, we present a detailed example of mechanized reasoning
in practice by means of a proof of Vickrey's familiar theorem on second-price
auctions.
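Vickrey's theorem states that in a sealed-bid second-price auction, bidding one's true valuation is a weakly dominant strategy. As a toy complement to a mechanized proof, the claim can be checked exhaustively on a small discrete grid; the `utility` function and its tie-breaking rule below are our own illustrative choices, not the paper's formalization:

```python
# Brute-force check of weak dominance of truthful bidding on a discrete grid.
def utility(valuation, my_bid, other_bid):
    """Payoff in a two-bidder sealed-bid second-price auction.
    Ties are broken in our favour for concreteness; any tie rule works."""
    if my_bid >= other_bid:       # we win and pay the second-highest bid
        return valuation - other_bid
    return 0                      # we lose and pay nothing

GRID = range(0, 11)
truthful_dominates = all(
    utility(v, v, b2) >= utility(v, b1, b2)
    for v in GRID for b1 in GRID for b2 in GRID
)
assert truthful_dominates
```

A mechanized proof establishes the same inequality for all real-valued valuations and bids at once, rather than for a finite grid.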
A Mechanized Proof of Kleene’s Theorem in Why3
In this dissertation we present a mathematically minded development of the correctness
proof of Kleene's theorem: the conversion of regular expressions into finite automata of
equivalent expressive power. We formalise a functional implementation of the algorithm
and prove, in full detail, the soundness of its mathematical definition, working within
the Why3 framework to develop a mechanically verified implementation of the conversion
algorithm. The motivation for this work is to test the feasibility of the deductive
approach to the verification of software and to pave the way for similar proofs in the
context of a static-analysis approach to (object-oriented) programming, in particular
for behavioural types in typestate settings, whose expressiveness stands between that of
regular and context-free languages and which can therefore greatly benefit from
mechanically certified implementations.
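The regular-expression-to-automaton direction of Kleene's theorem can be sketched, for illustration only, with a classical Thompson construction; this Python sketch is unverified and is not the WhyML development described above:

```python
# Thompson's construction: regex AST -> NFA with epsilon transitions.
class NFA:
    def __init__(self):
        self.eps = {}    # state -> set of epsilon-successors
        self.step = {}   # (state, char) -> set of successors
        self.n = 0
    def state(self):
        s = self.n; self.n += 1
        self.eps[s] = set()
        return s
    def add_eps(self, a, b):
        self.eps[a].add(b)
    def add_step(self, a, c, b):
        self.step.setdefault((a, c), set()).add(b)

def thompson(nfa, re):
    """Return (start, accept) states for regex AST `re` (nested tuples)."""
    kind = re[0]
    if kind in ("chr", "eps"):
        s, t = nfa.state(), nfa.state()
        nfa.add_step(s, re[1], t) if kind == "chr" else nfa.add_eps(s, t)
        return s, t
    if kind == "cat":
        s1, t1 = thompson(nfa, re[1]); s2, t2 = thompson(nfa, re[2])
        nfa.add_eps(t1, s2)
        return s1, t2
    if kind == "alt":
        s, t = nfa.state(), nfa.state()
        s1, t1 = thompson(nfa, re[1]); s2, t2 = thompson(nfa, re[2])
        nfa.add_eps(s, s1); nfa.add_eps(s, s2)
        nfa.add_eps(t1, t); nfa.add_eps(t2, t)
        return s, t
    if kind == "star":
        s, t = nfa.state(), nfa.state()
        s1, t1 = thompson(nfa, re[1])
        nfa.add_eps(s, s1); nfa.add_eps(s, t)
        nfa.add_eps(t1, s1); nfa.add_eps(t1, t)
        return s, t

def closure(nfa, states):
    """Epsilon-closure of a set of states."""
    seen, todo = set(states), list(states)
    while todo:
        a = todo.pop()
        for b in nfa.eps.get(a, ()):
            if b not in seen:
                seen.add(b); todo.append(b)
    return seen

def accepts(nfa, start, accept, word):
    cur = closure(nfa, {start})
    for c in word:
        nxt = set()
        for a in cur:
            nxt |= nfa.step.get((a, c), set())
        cur = closure(nfa, nxt)
    return accept in cur

# (a|b)* a : words over {a, b} ending in 'a'
nfa = NFA()
start, accept = thompson(
    nfa, ("cat", ("star", ("alt", ("chr", "a"), ("chr", "b"))), ("chr", "a")))
assert accepts(nfa, start, accept, "abba")
assert not accepts(nfa, start, accept, "ab")
```

A deductive verification, as in the dissertation, proves the construction correct for every regular expression and every word, which no amount of testing can.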
Frex: dependently-typed algebraic simplification
We present an extensible, mathematically-structured algebraic simplification
library design. We structure the library using universal algebraic concepts: a
free algebra -- fral -- and a free extension -- frex -- of an algebra by a set
of variables. The library's dependently-typed API guarantees simplification
modules, even user-defined ones, are terminating, sound, and complete with
respect to a well-specified class of equations. Completeness offers intangible
benefits in practice -- our main contribution is the novel design. Cleanly
separating between the interface and implementation of simplification modules
provides two new modularity axes. First, simplification modules share thousands
of lines of infrastructure code dealing with term-representation,
pretty-printing, certification, and macros/reflection. Second, new
simplification modules can reuse existing ones. We demonstrate this design by
developing simplification modules for monoid varieties: ordinary, commutative,
and involutive. We implemented this design in the new Idris2 dependently-typed
programming language, and in Agda.
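The core idea, evaluating expressions into a free algebra where the equational laws hold definitionally, can be illustrated with a small untyped sketch. Unlike the dependently-typed library, nothing here is machine-checked, and the expression representation is our own:

```python
# Sketch: simplify monoid expressions by normalizing into the free monoid,
# i.e. the list of variables in order; associativity and unit laws then
# hold by construction. Two expressions are equal in every monoid iff
# their normal forms coincide.
def norm(e):
    """Normal form of a monoid expression (nested tuples) as a variable list."""
    kind = e[0]
    if kind == "unit":
        return []
    if kind == "var":
        return [e[1]]
    if kind == "op":
        return norm(e[1]) + norm(e[2])

def comm_norm(e):
    """Commutative-monoid normal form: additionally sort the variables."""
    return sorted(norm(e))

# (x . unit) . (y . z)  ==  x . (y . (z . unit))  in any monoid
lhs = ("op", ("op", ("var", "x"), ("unit",)), ("op", ("var", "y"), ("var", "z")))
rhs = ("op", ("var", "x"), ("op", ("var", "y"), ("op", ("var", "z"), ("unit",))))
assert norm(lhs) == norm(rhs) == ["x", "y", "z"]
```

In the dependently-typed setting the API additionally requires each such normalizer to come with proofs of termination, soundness, and completeness for its variety.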
LFTOP: An LF based approach to domain specific reasoning
Specialized vocabulary, notations, and inference rules tailored to the description and analysis of a domain are very important for reasoning within that domain. Research on domain-specific issues has focused mainly on the design and implementation of domain-specific languages (DSLs), paying little attention to the reasoning aspects. We believe that domain-specific reasoning is very important for proving properties of a domain, that it should be more concise, more reusable, and more trustworthy, and that it deserves to be investigated in an engineering way. Type theory provides good support for generic reasoning and verification, and many type theorists believe that its methods, ideas, and technology can benefit computer-assisted reasoning in many domains. Proof assistants based on type theory are well known as effective tools to support reasoning, but they have focused primarily on generic notations for representing problems and are oriented towards helping expert type theorists build proofs efficiently. They are successful in this goal, yet they are less suitable for non-specialists: one of the main barriers limiting the use of type theory and proof assistants in domain-specific areas is the significant expertise required to use them effectively. We present LFTOP, a new approach to domain-specific reasoning that is based on a type-theoretic logical framework (LF) but does not require the user to be an expert in type theory. In this approach, users work with a domain-specific interface that is familiar to them. The interface presents a reasoning system for the domain through a user-oriented syntax. A middle layer translates between the user syntax and LF, and allows additional support for reasoning (e.g. model checking).
Thus, the complexity of the logical framework is hidden, while the benefits of using type theory and its related tools, such as precision and machine-checkable proofs, are retained. The approach is investigated through a number of case studies. In each case study, the relevant domain-specific specification languages and logic are formalized in Plastic, the reasoning system is designed and customized for the users of the corresponding domain, and the relevant lemmas are proved in Plastic. We analyze the advantages and shortcomings of this approach, define some new concepts related to it, and in particular discuss issues arising from the translation between the different levels. A prototype implementation has been developed, and we illustrate the approach through many concrete examples in it. The study in this thesis shows that the approach is feasible and promising, and that the relevant methods and technologies are useful and effective.
Zero-one laws with respect to models of provability logic and two Grzegorczyk logics
It was shown in the late 1960s that each formula of first-order logic without constants and function symbols obeys a zero-one law: as the number of elements of finite models increases, every formula holds either in almost all or in almost no models of that size. Therefore, many properties of models, such as having an even number of elements, cannot be expressed in the language of first-order logic. Halpern and Kapron proved zero-one laws for classes of models corresponding to the modal logics K, T, S4, and S5 and for frames corresponding to S4 and S5. In this paper, we prove zero-one laws for provability logic and its two siblings, Grzegorczyk logic and weak Grzegorczyk logic, with respect to model validity. Moreover, we axiomatize validity in almost all relevant finite models, leading to three different axiom systems.
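As a small illustration of what a zero-one law asserts (this example is ours, not the paper's), consider models consisting of one binary relation R on n elements: the first-order sentence "there exist x and y with R(x, y)" holds in a fraction 1 - 2^(-n^2) of all such models, which tends to 1 as n grows:

```python
# Exact fraction of n-element models of one binary relation satisfying
# the sentence 'there exist x, y with R(x, y)', by exhaustive enumeration.
from itertools import product

def fraction_satisfying(n):
    pairs = n * n
    total = satisfying = 0
    for bits in product([0, 1], repeat=pairs):  # every relation R on n elements
        total += 1
        satisfying += any(bits)                 # sentence: R is non-empty
    return satisfying / total

fracs = [fraction_satisfying(n) for n in (1, 2, 3)]
assert fracs == [1 - 2**-1, 1 - 2**-4, 1 - 2**-9]
assert fracs[0] < fracs[1] < fracs[2]
```

The zero-one law says every constant-free, function-free first-order sentence behaves this way: its fraction converges to 0 or to 1, never to an intermediate value such as the 1/2 that "an even number of elements" would require.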