Initial Experiments with TPTP-style Automated Theorem Provers on ACL2 Problems
This paper reports our initial experiments with using external ATP on some
corpora built with the ACL2 system. This is intended to provide the first
estimate about the usefulness of such external reasoning and AI systems for
solving ACL2 problems.
Comment: In Proceedings ACL2 2014, arXiv:1406.123
Inductive logic programming at 30: a new introduction
Inductive logic programming (ILP) is a form of machine learning. The goal of
ILP is to induce a hypothesis (a set of logical rules) that generalises
training examples. As ILP turns 30, we provide a new introduction to the field.
We introduce the necessary logical notation and the main learning settings;
describe the building blocks of an ILP system; compare several systems on
several dimensions; describe four systems (Aleph, TILDE, ASPAL, and Metagol);
highlight key application areas; and, finally, summarise current limitations
and directions for future research.
Comment: Paper under review
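The ILP setting summarised in this abstract, inducing a hypothesis (a set of logical rules) that generalises training examples, can be illustrated with a toy generate-and-test search. The predicates, candidate clauses, and coverage check below are invented for illustration; real ILP systems such as Aleph, TILDE, ASPAL, and Metagol use far richer hypothesis languages and search strategies.

```python
# Toy ILP sketch: given background knowledge and labelled examples,
# pick a candidate rule that covers all positives and no negatives.

# Background knowledge: parent/2 facts.
parent = {("ann", "bob"), ("bob", "carl"), ("ann", "dee")}

# Labelled examples for the target predicate grandparent/2.
positives = {("ann", "carl")}
negatives = {("bob", "dee"), ("carl", "ann")}

def covers_grandparent_via(x, z):
    """Candidate rule body: parent(X, Y), parent(Y, Z)."""
    entities = {p for pair in parent for p in pair}
    return any((x, y) in parent and (y, z) in parent for y in entities)

def covers_parent(x, z):
    """Candidate rule body: parent(X, Z)."""
    return (x, z) in parent

candidates = {
    "grandparent(X,Z) :- parent(X,Y), parent(Y,Z)": covers_grandparent_via,
    "grandparent(X,Z) :- parent(X,Z)": covers_parent,
}

def consistent(rule):
    """A hypothesis must cover every positive and no negative example."""
    return (all(rule(*e) for e in positives)
            and not any(rule(*e) for e in negatives))

hypotheses = [clause for clause, rule in candidates.items() if consistent(rule)]
print(hypotheses)  # only the grandparent-via-parent clause survives
```

Here the search space has just two candidate clauses; the point is only the shape of the problem, examples plus background knowledge in, logic program out.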
Supervising Offline Partial Evaluation of Logic Programs using Online Techniques
A major impediment to more widespread use of offline partial evaluation is the difficulty of obtaining and maintaining annotations for larger, realistic programs. Existing automatic binding-time analyses still have only limited applicability, and annotations often have to be created or improved and maintained by hand, leading to errors. We present a technique to help overcome this problem by using online control techniques which supervise the specialisation process in order to aid the development and maintenance of correct annotations by identifying errors. We discuss an implementation in the Logen system and show on a series of examples that this approach is effective: very few false alarms were raised, while infinite loops were detected quickly. We also present the integration of this technique into a web interface, which highlights problematic annotations directly in the source code. A method to automatically fix incorrect annotations is presented, allowing the approach also to be used as a pragmatic binding-time analysis. Finally, we show how our method can be used for efficiently locating built-in errors in Prolog source code.
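Offline partial evaluation, as discussed above, specialises a program with respect to arguments annotated as static (known at specialisation time); a wrong annotation can send the specialiser into an infinite unfolding, which is what online supervision detects. A minimal sketch of the static/dynamic idea, in Python rather than Prolog (Logen works on Prolog, and the `specialise_power` function and its annotation convention here are invented purely for intuition):

```python
# Toy offline partial evaluation: power(base, exp) with the exponent
# annotated "static" and the base "dynamic". The specialiser unfolds
# the recursion on the static argument and emits a residual program.

def specialise_power(exp):
    """Specialise power(base, exp) for a known (static) exponent.

    Unfolding on the static argument leaves a residual expression
    base * base * ... (exp factors) with no loop remaining.
    """
    if exp == 0:
        return "def power_0(base):\n    return 1\n"
    body = " * ".join(["base"] * exp)
    return f"def power_{exp}(base):\n    return {body}\n"

residual = specialise_power(3)
print(residual)  # residual program: three multiplications, no recursion

namespace = {}
exec(residual, namespace)
print(namespace["power_3"](2))  # 8
```

Marking the base static instead would be a wrong annotation: the specialiser would wait for a value that never arrives, and in a recursive setting mis-annotations of this kind are exactly what causes non-terminating specialisation.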
Inductive logic programming at 30
Inductive logic programming (ILP) is a form of logic-based machine learning.
The goal of ILP is to induce a hypothesis (a logic program) that generalises
given training examples and background knowledge. As ILP turns 30, we survey
recent work in the field. In this survey, we focus on (i) new meta-level search
methods, (ii) techniques for learning recursive programs that generalise from
few examples, (iii) new approaches for predicate invention, and (iv) the use of
different technologies, notably answer set programming and neural networks. We
conclude by discussing some of the current limitations of ILP and directions
for future research.
Comment: Extension of IJCAI20 survey paper. arXiv admin note: substantial text
overlap with arXiv:2002.11002, arXiv:2008.0791
Automated theory formation in pure mathematics
The automation of specific mathematical tasks such as theorem proving and algebraic
manipulation has been much researched. However, there have only been a few isolated
attempts to automate the whole theory formation process. Such a process involves
forming new concepts, performing calculations, making conjectures, proving theorems
and finding counterexamples. Previous programs which perform theory formation are
limited in their functionality and their generality. We introduce the HR program
which implements a new model for theory formation. This model involves a cycle of
mathematical activity, whereby concepts are formed, conjectures about the concepts
are made and attempts to settle the conjectures are undertaken.
HR has seven general production rules for producing a new concept from old ones and
employs a best-first search by building new concepts from the most interesting old
ones. To enable this, HR has various measures which estimate the interestingness of a
concept. During concept formation, HR uses empirical evidence to suggest conjectures
and employs the Otter theorem prover to attempt to prove a given conjecture. If this
fails, HR will invoke the MACE model generator to attempt to disprove the conjecture
by finding a counterexample. Information and new knowledge arising from the attempt
to settle a conjecture are used to assess the concepts involved in the conjecture, which
fuels the heuristic search and closes the cycle.
The main aim of the project has been to develop our model of theory formation and
to implement this in HR. To describe the project in the thesis, we first motivate
the problem of automated theory formation and survey the literature in this area.
We then discuss how HR invents concepts, makes and settles conjectures and how
it assesses the concepts and conjectures to facilitate a heuristic search. We present
results to evaluate HR in terms of the quality of the theories it produces and the
effectiveness of its techniques. A secondary aim of the project has been to apply HR to
mathematical discovery and we discuss how HR has successfully invented new concepts
and conjectures in number theory.
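The cycle described above, forming concepts with production rules, ranking them by interestingness, and conjecturing from empirical evidence before a prover tries to settle the result, can be caricatured in a few lines. Everything below, from the single production rule to the interestingness measure and the seed concepts, is a hypothetical simplification rather than HR's actual machinery (HR has seven production rules and calls Otter and MACE, none of which is modelled here):

```python
# Toy sketch of a theory-formation cycle: compose concepts, rank them,
# and conjecture equivalences from empirical evidence over a finite sample.

SAMPLE = range(1, 21)

def extension(concept):
    """Set of sample numbers satisfying the concept (empirical evidence)."""
    return frozenset(n for n in SAMPLE if concept(n))

def compose(c1, c2):
    """One toy production rule: form a new concept by conjunction."""
    return lambda n: c1(n) and c2(n)

def interestingness(concept):
    """Toy measure: prefer concepts true of about half the sample."""
    return -abs(len(extension(concept)) - len(SAMPLE) / 2)

# Seed concepts; "double" is deliberately coextensive with "even".
concepts = {
    "even": lambda n: n % 2 == 0,
    "double": lambda n: any(n == 2 * m for m in SAMPLE),
    "square": lambda n: int(n ** 0.5) ** 2 == n,
}

# Best-first step: build a new concept from the two most interesting ones.
ranked = sorted(concepts, key=lambda name: interestingness(concepts[name]),
                reverse=True)
new_name = f"{ranked[0]}_and_{ranked[1]}"
concepts[new_name] = compose(concepts[ranked[0]], concepts[ranked[1]])

# Conjecture-making: equal extensions over the sample suggest an
# equivalence conjecture, which a prover or model generator would then
# try to settle (proving it, or refuting it with a counterexample).
conjectures = sorted(
    (p, q) for p in concepts for q in concepts
    if p < q and extension(concepts[p]) == extension(concepts[q])
)
print(conjectures[0])  # ('double', 'even'): empirically equivalent seeds
```

In HR proper, the outcome of the proof or counterexample attempt feeds back into the interestingness scores, which is what closes the cycle; the sketch stops after one pass.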