Mechanising Euler's use of infinitesimals in the proof of the Basel problem
In 1736 Euler published a proof of an astounding relation between π and the reciprocals of the squares.
π²/6 = 1 + 1/4 + 1/9 + 1/16 + 1/25 + …
Until this point, π had not been part of any mathematical relation outside of geometry. This relation would have had an almost supernatural significance to the mathematicians of the time. But even more amazing is Euler's proof. He factorises a transcendental function as if it were a polynomial of infinite degree. He discards infinitely-many infinitely-small numbers. He substitutes 1 for the ratio of two distinct infinite numbers.
Nowadays Euler's proof is held up as an example of both genius intuition and flagrantly unrigorous method. In this thesis we describe how, with the aid of nonstandard analysis, which gives a consistent formal theory of infinitely-small and infinitely-large numbers, and the proof assistant Isabelle, we construct a partial formal proof of the Basel problem which follows the method of Euler's proof from his 'Introductio in Analysin Infinitorum'. We use our proof to demonstrate that Euler was systematic in his use of infinitely-large and infinitely-small numbers and did not make unjustified leaps of intuition. The concept of 'hidden lemmas' was developed by McKinzie and Tuckey, building on Lakatos and Laugwitz, to represent general principles which Euler's proof followed. We develop a theory of infinite 'hyperpolynomials' in Isabelle in order to formalise these hidden lemmas, and we find that formal reconstruction of his proof using hidden lemmas is an effective way to discover the nuances in Euler's reasoning and to demystify its controversial points. In conclusion, we find that Euler's reasoning was consistent and insightful, yet methodologically distinct from modern deductive proof.
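The core of Euler's argument can be reconstructed informally as follows (a standard textbook summary of the reasoning, not the thesis's formal Isabelle development):

```latex
% sin(x)/x vanishes exactly at x = n*pi for n != 0, so Euler factorises it
% as if it were a polynomial with those roots:
\frac{\sin x}{x} \;=\; \prod_{n=1}^{\infty}\Bigl(1 - \frac{x^2}{n^2\pi^2}\Bigr)
% Comparing the coefficient of x^2 with the Taylor expansion
\frac{\sin x}{x} \;=\; 1 - \frac{x^2}{6} + \frac{x^4}{120} - \cdots
% yields
-\frac{1}{6} \;=\; -\sum_{n=1}^{\infty}\frac{1}{n^2\pi^2}
\qquad\Longrightarrow\qquad
\sum_{n=1}^{\infty}\frac{1}{n^2} \;=\; \frac{\pi^2}{6}
```

The controversial step is the first line: treating a transcendental function as an "infinite polynomial" determined by its roots, which is exactly what the thesis's hyperpolynomial theory makes precise.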
Verification using formalised mathematics and theorem proving of reinforcement and deep learning
In modern artificial intelligence research there is frequently little emphasis on mathematical certainty; results are often shown by experimentation, and understanding precisely why a particular method works, or what guarantees exist that it will be effective, is often confined to speculation and discussion.
Formal mathematics via theorem proving brings a precision of explanation and
certainty that can be missing in this field. We present work that applies the benefits
of formal mathematics to two different fields of artificial intelligence, in two different
ways.
Using the Isabelle theorem prover, we formalise Markov Decision Processes (MDPs)
with rewards, fundamental to reinforcement learning, and use this as the basis for a
formalisation of Q learning, a significant reinforcement learning algorithm. Q learning
attempts to learn the optimal action values of an unknown MDP by estimation, correcting its estimates as it navigates the MDP repeatedly. We also formalise the Dvoretzky Stochastic Approximation theorem, a result fundamental to many stochastic processes.
It is especially relevant to our work, as it is needed to prove that (given certain assumptions) the estimates of the Q learning algorithm converge to the true action values.
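The Q learning update described above can be sketched in a few lines of tabular Python. This is purely illustrative and is not the Isabelle formalisation; the deterministic toy MDP interface and the hyperparameters are our own invention:

```python
import random

def q_learning(transitions, rewards, n_states, n_actions,
               episodes=5000, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q learning on an MDP the agent can only sample from.

    transitions[s][a] -> next state (deterministic here, for simplicity);
    rewards[s][a]     -> immediate reward.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = random.randrange(n_states)
        for _ in range(20):                       # bounded episode length
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a: Q[s][a])
            s2, r = transitions[s][a], rewards[s][a]
            # the Q learning correction: move the estimate towards the
            # observed reward plus the discounted best estimate at s2
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

The convergence of these estimates to the true action values is exactly the property whose proof rests on the Dvoretzky Stochastic Approximation theorem.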
Secondly, we use theorem proving to integrate a formalised logical system with
deep learning, into a neurosymbolic process. We formalise Linear Temporal Logic
over finite paths (LTLf), and develop a loss function (and its derivative) over it that
returns a real value corresponding to the satisfaction of a given LTLf constraint over a
given path. We prove that this is sound with respect to the semantics of LTLf. We use
the code generation capabilities of Isabelle to then integrate this into a PyTorch deep
learning process designed to learn trajectories. Lastly, we demonstrate experimentally
that we can use the resulting neurosymbolic process to learn using LTLf constraints
on the trajectories, as well as by imitation of a demonstrator.
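A real-valued LTLf satisfaction function of the kind described can be sketched with min/max fuzzy operators over a finite path. The formula encoding and function names below are illustrative assumptions of ours, not the thesis's formalised and proven-sound definitions:

```python
# Sketch of a real-valued "degree of satisfaction" for LTLf over finite
# paths, using min/max fuzzy semantics. Illustrative only.

def sat(phi, path, i=0):
    """Degree (in [0, 1]) to which the suffix path[i:] satisfies phi.

    path is a list of dicts mapping atomic propositions to truth degrees.
    Formulas are nested tuples, e.g. ("always", ("atom", "p")).
    """
    op = phi[0]
    if op == "atom":
        return path[i].get(phi[1], 0.0)
    if op == "not":
        return 1.0 - sat(phi[1], path, i)
    if op == "and":
        return min(sat(phi[1], path, i), sat(phi[2], path, i))
    if op == "or":
        return max(sat(phi[1], path, i), sat(phi[2], path, i))
    if op == "next":                  # strong next: fails at the final step
        return sat(phi[1], path, i + 1) if i + 1 < len(path) else 0.0
    if op == "eventually":
        return max(sat(phi[1], path, j) for j in range(i, len(path)))
    if op == "always":
        return min(sat(phi[1], path, j) for j in range(i, len(path)))
    raise ValueError(f"unknown operator: {op}")

def loss(phi, path):
    """A loss that is 0 exactly when the constraint is fully satisfied."""
    return 1.0 - sat(phi, path)
```

Because min and max are (almost everywhere) differentiable, a function of this shape can be pushed through automatic differentiation, which is what makes the PyTorch integration via Isabelle's code generation plausible.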
Hilbert's Tenth Problem in Coq (Extended Version)
We formalise the undecidability of solvability of Diophantine equations, i.e.
polynomial equations over natural numbers, in Coq's constructive type theory.
To do so, we give the first full mechanisation of the
Davis-Putnam-Robinson-Matiyasevich theorem, stating that every recursively
enumerable problem -- in our case by a Minsky machine -- is Diophantine. We
obtain an elegant and comprehensible proof by using a synthetic approach to
computability and by introducing Conway's FRACTRAN language as intermediate
layer. Additionally, we prove the reverse direction and show that every
Diophantine relation is recognisable by μ-recursive functions, and give a
certified compiler from μ-recursive functions to Minsky machines.
Comment: submitted to LMC
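Conway's FRACTRAN, used here as the intermediate layer, is simple enough to capture in a few lines. The following interpreter is an illustrative Python sketch of the language's semantics, not the certified Coq development:

```python
from fractions import Fraction

def fractran(program, n, max_steps=100_000):
    """Run a FRACTRAN program (a list of (p, q) fractions) on input n.

    At each step, multiply n by the first fraction f such that n * f is an
    integer; halt when no fraction applies. Yields the sequence of values.
    """
    fracs = [Fraction(p, q) for p, q in program]
    for _ in range(max_steps):
        yield n
        for f in fracs:
            if (n * f).denominator == 1:
                n = int(n * f)
                break
        else:
            return          # no fraction applies: the program halts
```

For example, the one-fraction program [(3, 2)] computes addition in the exponents: started on 2^a · 3^b it halts on 3^(a+b), since each step trades one factor of 2 for one factor of 3.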
Intuition in formal proof: a novel framework for combining mathematical tools
This doctoral thesis addresses one major difficulty in formal proof: removing obstructions to intuition which hamper the proof endeavour. We investigate this in the context of formally verifying geometric algorithms using the theorem prover Isabelle, first proving Graham's Scan algorithm for finding convex hulls correct, then using the challenges we encountered as motivation for the design of a general, modular framework for combining mathematical tools.
We introduce our integration framework, the Prover's Palette, describing in detail the guiding principles from software engineering and the key differentiator of our approach: emphasising the role of the user. Two integrations are described, using the framework to extend Eclipse Proof General so that the computer algebra systems QEPCAD and Maple are directly available in an Isabelle proof context, capable of running either fully automated or with user customisation. The versatility of the approach is illustrated by showing a variety of ways that these tools can be used to streamline the theorem proving process, enriching the user's intuition rather than disrupting it. The usefulness of our approach is then demonstrated through the formal verification of an algorithm for computing Delaunay triangulations in the Prover's Palette.
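The convex-hull computation verified in the thesis can be sketched in ordinary Python. The version below uses Andrew's monotone-chain variant of the Graham Scan idea; it is an illustrative sketch of the algorithm, not the Isabelle proof:

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Convex hull in counter-clockwise order (collinear points dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def half(seq):
        hull = []
        for p in seq:
            # pop while the last two hull points and p fail to turn left
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]   # drop duplicated endpoints
```

The invariant maintained by the inner loop, that every consecutive triple of hull points makes a left turn, is precisely the kind of geometric fact whose formal proof the thesis reports as an obstruction to intuition.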
Extracting proofs from documents
Often, theorem checkers like PVS are used to check an existing proof which is part of some document. Since there is a large difference between the notations used in documents and the notations used in theorem checkers, it is usually a laborious task to convert an existing proof into a format which can be checked by a machine. In the system that we propose, the author is assisted in the process of converting an existing proof into the PVS language and having it checked by PVS.

1 Introduction

The now-classic ALGOL 60 report [5] recognized three different levels of language: a reference language, a publication language and several hardware representations, whereby the publication language was intended to admit variations on the reference language and was to be used for stating and communicating processes. The importance of publication language, often referred to nowadays as "pseudo-code", is difficult to exaggerate, since a publication language is the most effective way…