Colored E-Graph: Equality Reasoning with Conditions
E-graphs are a prominent data structure that has been increasing in
popularity in recent years due to their expanding range of applications in
various formal reasoning tasks. Often, they are used for equality saturation, a
process of deriving consequences through repeatedly applying universally
quantified equality formulas via term rewriting. They handle equality reasoning
over large spaces of terms, but are severely limited in their handling of
case splitting and other types of logical cuts, especially when compared to
other reasoning techniques such as sequent calculi and resolution. The main
difficulty is when equality reasoning requires multiple inconsistent
assumptions to reach a single conclusion. Ad-hoc solutions, such as duplicating
the e-graph for each assumption, are available, but they are notably
resource-intensive.
Our key observation is that each duplicate e-graph (with an added
assumption) corresponds to a coarsened congruence relation. Based on that, we
present an extension to e-graphs, called Colored E-Graphs, as a way to
represent all of the coarsened congruence relations in a single structure. A
colored e-graph is a memory-efficient equivalent of multiple copies of an
e-graph, with a much lower overhead. This is attained by sharing as much as
possible between different cases, while carefully tracking which conclusion is
true under which assumption. Support for multiple relations can be thought of
as adding multiple "color-coded" layers on top of the original e-graph
structure, leading to a large degree of sharing.
In our implementation, we introduce optimizations to rebuilding and
e-matching. We run experiments and demonstrate that our colored e-graphs can
support hundreds of assumptions and millions of terms with space requirements
that are an order of magnitude lower, and with similar time requirements.
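The layered-sharing idea can be illustrated with a toy union-find, the core of an e-graph's congruence relation. The sketch below is hypothetical and not the paper's implementation: a base relation is shared by every color, and each color records only the extra unions its assumption induces, applied on top of base roots. For simplicity it assumes base unions are performed before colored ones.

```python
class UnionFind:
    """Plain union-find with path halving."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb


class ColoredUF:
    """Base congruence shared by all colors; each color layers its own
    coarsening over base roots instead of duplicating the whole relation."""
    def __init__(self):
        self.base = UnionFind()
        self.layers = {}  # color -> UnionFind over base roots

    def union(self, a, b, color=None):
        if color is None:
            self.base.union(a, b)          # shared by every color
        else:
            layer = self.layers.setdefault(color, UnionFind())
            layer.union(self.base.find(a), self.base.find(b))

    def find(self, x, color=None):
        r = self.base.find(x)
        if color in self.layers:
            r = self.layers[color].find(r)  # apply the color's coarsening
        return r

    def equal(self, a, b, color=None):
        return self.find(a, color) == self.find(b, color)
```

A union made under the color "red" is visible only when querying with that color, while base unions are visible everywhere, which is exactly the sharing the abstract describes.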
Meta-interpretive learning of proof strategies
In modern mathematics, mechanised theorem proving software is playing an ever-increasing role. By enlisting the help of computers, mathematicians are able to
formally prove more complex results than they perhaps otherwise could; however,
those computers are still incapable of drawing many of the conclusions that would
be obvious to a human user, and so human intervention is still required.
In this thesis we consider the use of an adapted machine learning technique to
begin addressing this issue. We consider the use of proof strategies to provide a
high-level view of how a proof is structured, including information about why a
particular step was taken. We extend the Metagol meta-interpretive learning tool
to facilitate learning these strategies. We begin with a small set of examples and
refine our approach, demonstrating the improvements experimentally. We go on to
discuss the learning of more complicated strategies, some of the issues faced in doing
so and how we could address them. We conclude by evaluating the experiments as
a whole, identifying the weak points in our approach and suggesting ways in which
they can be addressed in future work.
Reversible Computation: Extending Horizons of Computing
This open access State-of-the-Art Survey presents the main recent scientific outcomes in the area of reversible computation, focusing on those that have emerged during COST Action IC1405 "Reversible Computation - Extending Horizons of Computing", a European research network that operated from May 2015 to April 2019. Reversible computation is a new paradigm that extends the traditional forwards-only mode of computation with the ability to execute in reverse, so that computation can run backwards as easily and naturally as forwards. It aims to deliver novel computing devices and software, and to enhance existing systems by equipping them with reversibility. There are many potential applications of reversible computation, including languages and software tools for reliable and recovery-oriented distributed systems and revolutionary reversible logic gates and circuits, but they can only be realized and have lasting effect if conceptual and firm theoretical foundations are established first.
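The core idea of running a program backwards as naturally as forwards can be shown with a minimal sketch (mine, not the survey's): every instruction has an inverse, so reversing execution is just applying the inverses in reverse order, restoring the initial state exactly.

```python
# Each instruction has an inverse, so a program can be undone by
# applying the inverse instructions in reverse order.
INVERSE = {"inc": "dec", "dec": "inc", "swap": "swap"}

def step(state, op, args):
    """Apply one reversible instruction to a mutable state dict."""
    if op == "inc":
        x, k = args
        state[x] += k
    elif op == "dec":
        x, k = args
        state[x] -= k
    elif op == "swap":
        x, y = args
        state[x], state[y] = state[y], state[x]

def run(state, program, reverse=False):
    """Run the program forwards, or backwards via the inverse of each step."""
    seq = reversed(program) if reverse else program
    for op, args in seq:
        step(state, INVERSE[op] if reverse else op, args)
    return state
```

Running any program forward and then backward is the identity on the state, which is the defining property of the paradigm.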
Machine learning for inductive theorem proving
Over the past few years, machine learning has been successfully combined with automated
theorem provers (ATPs) to prove conjectures from various proof assistants.
However, such approaches do not usually focus on inductive proofs. In this work, we
explore a combination of machine learning, a simple Boyer-Moore model and ATPs as
a means of improving the automation of inductive proofs in HOL Light. We evaluate
the framework using a number of inductive proof corpora. In each case, our approach
achieves a higher success rate than running ATPs or the Boyer-Moore tool individually.
We also make an attempt to add support for non-recursive types to the Boyer-Moore waterfall
model by looking at proof automation for finite sets. Finally, we test the framework
in a program verification setting by looking at proofs about sorting algorithms in
Hoare Logic.
From LCF to Isabelle/HOL
Interactive theorem provers have developed dramatically over the past four
decades, from primitive beginnings to today's powerful systems. Here, we focus
on Isabelle/HOL and its distinctive strengths. They include automatic proof
search, borrowing techniques from the world of first-order theorem proving, but
also the automatic search for counterexamples. They include a highly readable
structured language of proofs and a unique interactive development environment
for editing live proof documents. Everything rests on the foundation conceived
by Robin Milner for Edinburgh LCF: a proof kernel, using abstract types to
ensure soundness and eliminate the need to store proofs. Compared with the
research prototypes of the 1970s, Isabelle is a practical and versatile tool.
It is used by system designers, mathematicians and many others.
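Milner's kernel idea can be sketched in a few lines: theorems are values of an abstract type whose only constructors are the inference rules, so every theorem is sound by construction and no proof object need be stored. The sketch below is illustrative only; Python cannot truly enforce the abstraction the way ML's abstract types do in LCF and Isabelle.

```python
class Theorem:
    """LCF-style kernel sketch: a Theorem can only be built through the
    inference rules below, guarded by a capability hidden in the kernel."""
    _key = object()  # private capability; rules pass it to the constructor

    def __init__(self, prop, key):
        if key is not Theorem._key:
            raise ValueError("theorems may only be made by inference rules")
        self.prop = prop

# Inference rules: the sole producers of Theorem values.
def refl(t):
    # Axiom scheme: |- t = t
    return Theorem(("eq", t, t), Theorem._key)

def sym(thm):
    # From |- a = b, derive |- b = a
    tag, a, b = thm.prop
    assert tag == "eq"
    return Theorem(("eq", b, a), Theorem._key)
```

Because clients can combine rules but never forge a `Theorem` directly, trusting the system reduces to trusting this small kernel.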
Synthesis of Recursive ADT Transformations from Reusable Templates
Recent work has proposed a promising approach to improving scalability of
program synthesis by allowing the user to supply a syntactic template that
constrains the space of potential programs. Unfortunately, creating templates
often requires nontrivial effort from the user, which impedes the usability of
the synthesizer. We present a solution to this problem in the context of
recursive transformations on algebraic data-types. Our approach relies on
polymorphic synthesis constructs: a small but powerful extension to the
language of syntactic templates, which makes it possible to define a program
space in a concise and highly reusable manner, while at the same time retaining
the scalability benefits of conventional templates. This approach enables
end-users to reuse predefined templates from a library for a wide variety of
problems with little effort. The paper also describes a novel optimization that
further improves the performance and scalability of the system. We evaluated
the approach on a set of benchmarks that most notably includes desugaring
functions for lambda calculus, which force the synthesizer to discover Church
encodings for pairs and Boolean operations.
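The Church encodings mentioned in the benchmark are standard lambda-calculus constructions; for reference, here they are written directly as Python lambdas:

```python
# Church encoding of pairs: a pair is a function awaiting a selector.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)
snd  = lambda p: p(lambda a: lambda b: b)

# Church Booleans: true/false choose between their two arguments,
# so if-then-else is just application.
true  = lambda t: lambda f: t
false = lambda t: lambda f: f
ite   = lambda c: lambda t: lambda f: c(t)(f)
```

Discovering such encodings automatically is what makes the desugaring benchmarks demanding for a synthesizer: the target programs are pure higher-order terms with no data constructors to guide the search.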
On Computational Small Steps and Big Steps: Refocusing for Outermost Reduction
We study the relationship between small-step semantics, big-step semantics and abstract machines, for programming languages that employ an outermost reduction strategy, i.e., languages where reductions near the root of the abstract syntax tree are performed before reductions near the leaves. In particular, we investigate how Biernacka and Danvy's syntactic correspondence and Reynolds's functional correspondence can be applied to inter-derive semantic specifications for such languages. The main contribution of this dissertation is three-fold: First, we identify that backward overlapping reduction rules in the small-step semantics cause the refocusing step of the syntactic correspondence to be inapplicable. Second, we propose two solutions to overcome this inapplicability: backtracking and rule generalization. Third, we show how these solutions affect the other transformations of the two correspondences. Other contributions include the application of the syntactic and functional correspondences to Boolean normalization. In particular, we show how to systematically derive a spectrum of normalization functions for negational and conjunctive normalization.
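The contrast between the two semantic styles can be made concrete with a tiny arithmetic language (a sketch of mine, not from the dissertation, and using the ordinary innermost strategy rather than the outermost strategies it studies): small-step rewrites one redex at a time, big-step computes the value in a single recursive pass, and on terminating programs they agree.

```python
# Terms: an int is a value; ("add", l, r) and ("mul", l, r) are redexes.
def small_step(t):
    """Perform one reduction step (leftmost-innermost)."""
    if isinstance(t, int):
        return t  # already a value
    op, l, r = t
    if not isinstance(l, int):
        return (op, small_step(l), r)
    if not isinstance(r, int):
        return (op, l, small_step(r))
    return l + r if op == "add" else l * r

def eval_small(t):
    """Small-step evaluation: iterate single steps until a value remains."""
    while not isinstance(t, int):
        t = small_step(t)
    return t

def eval_big(t):
    """Big-step evaluation: one recursive pass from term to value."""
    if isinstance(t, int):
        return t
    op, l, r = t
    lv, rv = eval_big(l), eval_big(r)
    return lv + rv if op == "add" else lv * rv
```

Refocusing, in these terms, is the transformation that lets the iterated `eval_small` avoid re-traversing the term from the root after every step; the dissertation shows where that transformation breaks down for outermost strategies and how to repair it.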