Mechanizing Refinement Types (extended)
Practical checkers based on refinement types use the combination of implicit
semantic sub-typing and parametric polymorphism to simplify the specification
and automate the verification of sophisticated properties of programs. However,
a formal meta-theoretic accounting of the soundness of refinement type systems
using this combination has proved elusive. We present λ_RF, a core
refinement calculus that combines semantic sub-typing and parametric
polymorphism. We develop a meta-theory for this calculus and prove soundness of
the type system. Finally, we give a full mechanization of our meta-theory using
the refinement-type based LiquidHaskell as a proof checker, showing how
refinements can be used for mechanization.
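The semantic sub-typing the abstract refers to can be illustrated with a small sketch: a refinement type {v | p v} is a subtype of {v | q v} exactly when p implies q. The check below brute-forces that implication over a finite domain purely for illustration; real checkers such as LiquidHaskell discharge the implication with an SMT solver, and the function name here is a hypothetical placeholder.

```python
# Toy illustration of semantic sub-typing between refinement types:
# {v | p(v)} <: {v | q(v)} holds iff p(v) implies q(v) for every v.
# Finite enumeration stands in for the SMT query a real checker would make.

def is_subtype(p, q, domain):
    """Check the implication p(v) => q(v) over a finite domain."""
    return all(q(v) for v in domain if p(v))

# {v | 0 <= v < 10} is a subtype of {v | v >= 0} on this domain...
print(is_subtype(lambda v: 0 <= v < 10, lambda v: v >= 0, range(-100, 100)))
# ...but not vice versa: v = 50 satisfies v >= 0 yet not 0 <= v < 10.
print(is_subtype(lambda v: v >= 0, lambda v: 0 <= v < 10, range(-100, 100)))
```
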
Constructive approaches to Program Induction
Search is a key technique in artificial intelligence, machine learning and Program Induction. No
matter how efficient a search procedure, there exist spaces that are too large to search effectively
and they include the search space of programs. In this dissertation we show that in the context
of logic-program induction (Inductive Logic Programming, or ILP) it is not necessary to search
for a correct program, because if one exists, there also exists a unique object that is the most
general correct program, and that can be constructed directly, without a search, in polynomial
time and from a polynomial number of examples. The existence of this unique object, that we
term the Top Program because of its maximal generality, does not so much solve the problem
of searching a large program search space, as it completely sidesteps it, thus improving the
efficiency of the learning task by orders of magnitude commensurate with the complexity of a
program space search.
The existence of a unique Top Program, and the ability to construct it given finite resources,
relies on imposing a strong inductive bias, relevant to the learning task, on the
language of hypotheses from which programs are constructed. In common practice, in machine
learning, Program Induction and ILP, such relevant inductive bias is selected, or created,
manually, by the human user of a learning system, with intuition or knowledge of the problem
domain, and in the form of various kinds of program templates. In this dissertation we show
that by abandoning the reliance on such extra-logical devices as program templates, and instead
defining inductive bias exclusively as First- and Higher-Order Logic formulae, it is possible to
learn inductive bias itself from examples, automatically, and efficiently, by Higher-Order Top
Program construction.
In Chapter 4 we describe the Top Program in the context of the Meta-Interpretive Learning
approach to ILP (MIL) and describe an algorithm for its construction, the Top Program
Construction algorithm (TPC). We prove the efficiency and accuracy of TPC and describe
its implementation in a new MIL system called Louise. We support theoretical results with
experiments comparing Louise to the state-of-the-art, search-based MIL system, Metagol, and
find that Louise improves on Metagol’s efficiency and accuracy. In Chapter 5 we re-frame MIL as
specialisation of metarules, Second-Order clauses used as inductive bias in MIL, and prove that
problem-specific metarules can be derived by specialisation of maximally general metarules, by
MIL. We describe a sub-system of Louise, called TOIL, that learns new metarules by MIL and
demonstrate empirically that the metarules learned by TOIL match those selected manually,
while maintaining the accuracy and efficiency of learning.
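The two-stage construction sketched in the abstract (generalise, then specialise) can be illustrated with a small, hedged Python sketch. The candidate "clauses" below are plain predicates standing in for metarule instantiations, and all names are hypothetical illustrations rather than anything taken from Louise or TPC's actual implementation.

```python
# Illustrative sketch of the two-stage Top Program Construction idea:
# stage 1 keeps every candidate clause that covers at least one positive
# example; stage 2 discards any clause that also covers a negative example.
# The union of the survivors is the (toy) Top Program.

def top_program(candidates, positives, negatives):
    # Stage 1 (generalise): keep clauses entailing some positive example.
    general = [c for c in candidates if any(c(e) for e in positives)]
    # Stage 2 (specialise): drop clauses entailing any negative example.
    return [c for c in general if not any(c(e) for e in negatives)]

# Toy task: learn "even" from labelled integers.
candidates = [
    lambda x: x % 2 == 0,   # correct: covers all positives, no negatives
    lambda x: x >= 0,       # over-general: also covers a negative
    lambda x: x == 99,      # covers no positive example
]
top = top_program(candidates, positives=[0, 2, 4], negatives=[1, 3])
print(len(top))  # only the correct candidate survives both stages
```

Note that no search takes place: every candidate is examined exactly once against the examples, which is the source of the polynomial-time claim in the abstract.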
Scoped Capabilities for Polymorphic Effects
Type systems usually characterize the shape of values but not their free
variables. However, many desirable safety properties could be guaranteed if one
knew the free variables captured by values. We describe CC<:□, a calculus
where such captured variables are succinctly represented in types, and show it
can be used to safely implement effects and effect polymorphism via scoped
capabilities. We discuss how the decision to track captured variables guides
key aspects of the calculus, and show that CC<:□ admits simple and intuitive
types for common data structures and their typical usage patterns. We
demonstrate how these ideas can be used to guide the implementation of capture
checking in a practical programming language.
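The idea of representing captured variables in types can be sketched, very loosely, as a "subcapturing" check on sets of capabilities. The Python below is a hypothetical illustration (including the capability names), not the calculus's actual rules: a capture set is admissible where another is expected if each of its elements either appears there directly or was declared with a capture set that is itself admissible.

```python
# Loose sketch of a subcapturing check over capability sets. `env` maps a
# capability to the capture set it was declared with, so a derived
# capability can be widened to the capabilities it was built from.

def subcaptures(c1, c2, env):
    """c1 subcaptures c2 if every x in c1 is in c2, or x's declared
    capture set itself subcaptures c2."""
    return all(
        x in c2 or (x in env and subcaptures(env[x], c2, env))
        for x in c1
    )

# 'log' was created from 'fs', so {log} subcaptures {fs, net}:
env = {"log": {"fs"}}
print(subcaptures({"log"}, {"fs", "net"}, env))   # True
print(subcaptures({"net"}, {"fs"}, env))          # False
```
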
Meta-interpretive learning of higher-order dyadic datalog: predicate invention revisited
Since the late 1990s predicate invention has been under-explored within inductive logic programming due to difficulties in formulating efficient search mechanisms. However, a recent paper demonstrated that both predicate invention and the learning of recursion can be efficiently implemented for regular and context-free grammars, by way of metalogical substitutions with respect to a modified Prolog meta-interpreter which acts as the learning engine. New predicate symbols are introduced as constants representing existentially quantified higher-order variables. The approach demonstrates that predicate invention can be treated as a form of higher-order logical reasoning. In this paper we generalise the approach of meta-interpretive learning (MIL) to that of learning higher-order dyadic datalog programs. We show that with an infinite signature the higher-order dyadic datalog class H²₂ has universal Turing expressivity, though H²₂ is decidable given a finite signature. Additionally we show that Knuth–Bendix ordering of the hypothesis space together with logarithmic clause bounding allows our MIL implementation MetagolD to PAC-learn minimal cardinality H²₂ definitions. This result is consistent with our experiments, which indicate that MetagolD efficiently learns compact H²₂ definitions involving predicate invention for learning robotic strategies, the East–West train challenge and NELL. Additionally, higher-order concepts were learned in the NELL language learning domain. The Metagol code and datasets described in this paper have been made publicly available on a website to allow reproduction of the results in this paper.
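The metalogical-substitution step at the heart of MIL can be sketched as follows: take a second-order metarule, here the chain rule P(x,y) ← Q(x,z), R(z,y), enumerate substitutions of background predicate symbols for its second-order variables, and keep those that prove a positive example. The background predicates and the finite search over z below are illustrative assumptions, not MetagolD's implementation.

```python
# Toy sketch of instantiating the "chain" metarule P(x,y) :- Q(x,z), R(z,y)
# by substituting background predicate symbols for Q and R, keeping each
# substitution that proves a given positive example.

from itertools import product

def chain_instances(background, example):
    """Return (q, r) pairs such that q(x,z) and r(z,y) prove example (x,y)."""
    x, y = example
    kept = []
    for q, r in product(background, repeat=2):
        # Finite search for an intermediate value z (illustration only).
        if any(background[q](x, z) and background[r](z, y) for z in range(10)):
            kept.append((q, r))
    return kept

background = {
    "succ": lambda a, b: b == a + 1,
    "double": lambda a, b: b == 2 * a,
}
# Target: f(x,y) :- succ(x,z), double(z,y) should cover the example f(2, 6).
print(chain_instances(background, (2, 6)))  # [('succ', 'double')]
```
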
Inductive logic programming at 30
Inductive logic programming (ILP) is a form of logic-based machine learning.
The goal of ILP is to induce a hypothesis (a logic program) that generalises
given training examples and background knowledge. As ILP turns 30, we survey
recent work in the field. In this survey, we focus on (i) new meta-level search
methods, (ii) techniques for learning recursive programs that generalise from
few examples, (iii) new approaches for predicate invention, and (iv) the use of
different technologies, notably answer set programming and neural networks. We
conclude by discussing some of the current limitations of ILP and discuss
directions for future research.