140 research outputs found
Automating Change of Representation for Proofs in Discrete Mathematics (Extended Version)
Representation determines how we can reason about a specific problem.
Sometimes one representation helps us find a proof more easily than others.
Most current automated reasoning tools focus on reasoning within one
representation. There is, therefore, a need for the development of better tools
to mechanise and automate formal and logically sound changes of representation.
In this paper we look at examples of representational transformations in
discrete mathematics, and show how we have used Isabelle's Transfer tool to
automate the use of these transformations in proofs. We give a brief overview
of a general theory of transformations that provides an appropriate framework for
thinking about the problem, and we explain how it relates to the Transfer
package. We show our progress towards developing a general tactic that
incorporates an automatic search for representations within the proving
process
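The idea of transferring a goal across a change of representation can be illustrated outside Isabelle. The sketch below is hypothetical Python, not the authors' Transfer-based tactic: it pairs natural numbers with their prime-factorisation multisets, under which divisibility on naturals corresponds to sub-multiset inclusion, so a goal in one representation can be checked in the other.

```python
# Illustrative sketch only: naturals vs. prime-factorisation multisets,
# with divisibility transferred to sub-multiset inclusion.
from collections import Counter

def factorise(n):
    """Prime factorisation of n as a multiset (Counter)."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def divides(a, b):
    """Divisibility in the 'naturals' representation."""
    return b % a == 0

def submultiset(xs, ys):
    """Inclusion in the 'multiset of prime factors' representation."""
    return all(ys[p] >= k for p, k in xs.items())

# The transfer rule: a dvd b  <->  factorise(a) is a sub-multiset of factorise(b)
for a, b in [(6, 24), (4, 18), (9, 81)]:
    assert divides(a, b) == submultiset(factorise(a), factorise(b))
```

In Isabelle's Transfer package this correspondence would be stated once as a transfer rule and applied mechanically; here the final loop simply checks that the two representations agree on a few instances.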
Machine learning for inductive theorem proving
Over the past few years, machine learning has been successfully combined with automated
theorem provers (ATPs) to prove conjectures from various proof assistants.
However, such approaches do not usually focus on inductive proofs. In this work, we
explore a combination of machine learning, a simple Boyer-Moore model and ATPs as
a means of improving the automation of inductive proofs in HOL Light. We evaluate
the framework using a number of inductive proof corpora. In each case, our approach
achieves a higher success rate than running ATPs or the Boyer-Moore tool individually.
We also attempt to add support for non-recursive types to the Boyer-Moore waterfall
model by looking at proof automation for finite sets. Finally, we test the framework
in a program verification setting by looking at proofs about sorting algorithms in
Hoare Logic
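The waterfall metaphor can be sketched schematically. The Python below is a toy illustration, not the HOL Light implementation: each goal cascades through a fixed sequence of heuristics, any of which may solve it or replace it with subgoals that are poured back over the top of the waterfall (in the actual framework, the machine-learning component would influence which heuristics and lemmas are tried).

```python
# Schematic Boyer-Moore waterfall: a heuristic returns None (does not apply),
# [] (goal solved), or a list of subgoals that re-enter at the top.
def waterfall(goal, heuristics):
    pending = [goal]
    while pending:
        g = pending.pop()
        for h in heuristics:
            result = h(g)
            if result is None:        # heuristic does not apply; try the next
                continue
            pending.extend(result)    # [] solves g; subgoals cascade again
            break
        else:
            return False              # no heuristic applies: the goal is stuck
    return True

# Toy instance: "prove n is even" by a base case and an induction-step heuristic.
even_heuristics = [
    lambda g: [] if g == 0 else None,        # base case: 0 is even
    lambda g: [g - 2] if g >= 2 else None,   # step: reduce to g - 2
]
```

For example, `waterfall(10, even_heuristics)` succeeds, while `waterfall(7, even_heuristics)` fails once the goal reaches 1 and no heuristic applies.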
A global workspace framework for combined reasoning
Artificial Intelligence research has produced
many effective techniques for solving a wide range
of problems. Practitioners tend to concentrate their efforts on one particular problem-solving
paradigm and, in the main, AI research describes new methods for solving particular types of
problems or improvements in existing approaches. By contrast, much less research has considered
how to fruitfully combine different problem solving techniques. Numerous studies have
demonstrated how a combination of reasoning approaches can improve the effectiveness of one of
those methods. Others have demonstrated how, by using several different reasoning techniques,
a system or method can be developed to accomplish a novel task that none of the individual
techniques could perform alone. Combined reasoning systems, i.e., systems which apply disparate
reasoning techniques in concert, can be more than the sum of their parts. In addition, they
gain leverage from advances in the individual methods they encompass. However, the benefits
of combined reasoning systems are not easily accessible, and existing systems have been hand-crafted
for very specific tasks in certain domains. As a result, those systems often suffer from
a lack of clarity of design and are difficult to extend. In order for the field of combined reasoning
to advance, we need to determine best practice and identify effective general approaches.
By developing useful frameworks, we can empower researchers to explore the potential of combined
reasoning, and AI in general. We present here a framework for developing combined
reasoning systems, based upon Baars’ Global Workspace Theory. The architecture describes a
collection of processes, embodying individual reasoning techniques, which communicate via a
global workspace. We present, also, a software toolkit which allows users to implement systems
according to the framework. We describe how, despite the restrictions of the framework, we
have used it to create systems that perform a number of combined reasoning tasks. These
systems are as effective as previous implementations and, because the underlying framework
is simple, they are structured in a straightforward and comprehensible manner. The simplicity also makes the
systems easy to extend with new capabilities, which we demonstrate in a number of case studies.
Furthermore, the framework and toolkit we describe allow developers to harness the parallel
nature of the underlying theory by enabling them to readily convert their implementations into
distributed systems. We have experimented with the framework in a number of application domains
and, through these applications, we have contributed to constraint satisfaction problem
solving and automated theory formation
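The architecture described above can be sketched in miniature. The Python below is illustrative only and does not use the authors' toolkit API: independent processes inspect the current broadcast, each may propose a new item with a salience score, and the most salient proposal wins the global workspace and is broadcast on the next cycle.

```python
# Minimal blackboard-style sketch of a global workspace (names are invented).
class Process:
    def propose(self, broadcast):
        """Return (salience, item) or None if this process has nothing to add."""
        raise NotImplementedError

class Doubler(Process):
    def propose(self, broadcast):
        if isinstance(broadcast, int) and broadcast < 50:
            return (1, broadcast * 2)   # high salience while numbers are small

class Incrementer(Process):
    def propose(self, broadcast):
        if isinstance(broadcast, int):
            return (0, broadcast + 1)   # low-salience fallback

def run_workspace(processes, initial, cycles):
    broadcast = initial
    for _ in range(cycles):
        proposals = [p.propose(broadcast) for p in processes]
        proposals = [pr for pr in proposals if pr is not None]
        if not proposals:
            break
        _, broadcast = max(proposals)   # most salient proposal wins the workspace
    return broadcast
```

In a real system the processes would embody distinct reasoning techniques (a constraint solver, a theorem prover, a theory-formation system) rather than arithmetic toys, and the parallel structure makes distribution straightforward: each process can run independently between broadcasts.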
Aspects of the constructive omega rule within automated deduction
In general, cut elimination holds for arithmetical systems with the ω-rule, but not for systems with ordinary induction. Hence in the latter, there is the problem of generalisation, since arbitrary formulae can be cut in. This makes automatic theorem proving very difficult. An important technique for investigating derivability in formal systems of arithmetic has been to embed such systems into semi-formal systems with the ω-rule. This thesis describes the implementation of such a system. Moreover, an important application is presented in the form of a new method of generalisation by means of "guiding proofs" in the stronger system, which sometimes succeeds in producing proofs in the original system when other methods fail
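For reference, the ω-rule replaces ordinary induction with an infinitary inference: from a proof of each numerical instance, conclude the universal statement.

```latex
\[
\frac{\vdash \varphi(0) \qquad \vdash \varphi(1) \qquad \vdash \varphi(2) \qquad \cdots}
     {\vdash \forall x\,\varphi(x)} \;(\omega)
\]
```

The constructive variant used in work of this kind additionally requires a recursive (computable) function that, given $n$, produces the proof of $\varphi(n)$, which is what makes the rule amenable to implementation.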
Automated Deduction – CADE 28
This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions
On the efficiency of meta-level inference
In this thesis we will be concerned with a particular type of architecture for reasoning
systems, known as meta-level architectures. After presenting the arguments for such
architectures (chapter 1), we discuss a number of systems in the literature that provide an
explicit meta-level architecture (chapter 2), and these systems are compared on the basis
of a number of distinguishing characteristics. This leads to a classification of meta-level
architectures (chapter 3). Within this classification we compare the different types of
architectures, and argue that one of these types, called bilingual meta-level inference
systems, has a number of advantages over the other types. We study the general structure
of bilingual meta-level inference architectures (chapter 4), and we discuss the details of a
system that we implemented which has this architecture (chapter 5). One of the problems
that this type of system suffers from is the overhead that is incurred by the meta-level
effort. We give a theoretical model of this problem, and we perform measurements which
show that this problem is indeed a significant one (chapter 6). Chapter 7 discusses partial
evaluation, the main technique available in the literature to reduce the meta-level
overhead. This technique, although useful, suffers from a number of serious problems. We
propose two further techniques, partial reflection and many-sorted logic (chapters 8 and
9), which can be used to reduce the problem of meta-level overhead without suffering from
these problems
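The source of meta-level overhead, and the effect of partial evaluation, can be shown in miniature. The Python below is purely illustrative and is not the thesis's bilingual logic system: a meta-level heuristic selects an object-level rule on every call, and partial evaluation specialises that selection away once the relevant property of the goal is known.

```python
# Illustrative sketch: meta-level dispatch vs. a partially evaluated version.
OBJECT_RULES = {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def meta_select(x):
    # Meta-level heuristic (invented for illustration):
    # choose an object-level rule from a property of the goal.
    return "square" if x > 10 else "double"

def interpret(x):
    # Meta-level interpretation: rule selection is repeated on every
    # call, which is exactly the overhead measured in chapter 6.
    return OBJECT_RULES[meta_select(x)](x)

def specialise(x):
    # Partial evaluation: commit to the selected rule once,
    # removing the run-time meta-level dispatch.
    return OBJECT_RULES[meta_select(x)]
```

For a goal whose relevant property is fixed, `specialise` returns a residual program with no meta-level step, so `specialise(3)(3)` agrees with `interpret(3)` while skipping the dispatch. The techniques of partial reflection and many-sorted logic proposed in chapters 8 and 9 attack the same overhead by different means.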
Proceedings of the Workshop on Change of Representation and Problem Reformulation
The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains. We have expanded greatly beyond the workshop's origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning
Applying the Abstraction Method to Logical Inference Search in Artificial Intelligence Systems
A method of logical inference search with preliminary adjustment to a concrete knowledge base is considered. For the adjustment, an abstraction and a formal-grammar interpretation of the deduction problem are used. The method improves the efficiency of logical inference search
Concepts and analogies in cybernetics: Mathematical investigations of the role of analogy in concept formation and problem solving; with emphasis for conflict resolution via object and morphism eliminations
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. We address two problematic areas of cybernetics, namely Analogical Problem Solving (APS) and Analogical Learning (AL). Both of these human faculties unquestionably require intelligence. In addition, we point out that the shifting of representations is the main unifying theme underlying these two intellectual tasks. We focus our attention on the formulation and clarification of the notion of analogy, which has been treated and used loosely in the literature, and also on its role in the shifting of representations.
We describe analogizing situations in a new representational scheme, borrowed from mathematics and modified and extended to cater for our targets. We call it k-structure; it closely resembles semantic networks and directed graphs, and its main components are the so-called objects and morphisms. We argue and substantiate the need for such a representation scheme by analysing what its constituents stand for, by cataloguing its virtues, the main ones being its visual appeal and its mathematical clarity, and by listing its disadvantages when it is compared to other representation systems. Emphasis is also given to its descriptive power and usefulness by implementing it in a number of APS and AL situations. Besides representation issues, attention is paid to the intelligence mechanisms which are involved in APS and AL. A cornerstone in APS and a fundamental theme in AL is the 'skeletization of k-structures'. APS is conceived as 'harmonization of skeletons'. The methodology we develop involves techniques which are computer implemented and extensively studied in theoretic terms via a proposed theory for extended k-structures. To name but a few: 1. 'the separation of the context of a concept from the concept itself', based on the ideas of k-opens and k-spaces; 2. 'object and morphism elimination' of a controversial nature; and 3. 'conflict or deadlock or dilemma resolution' which naturally arises in a k-structure interaction. The overall system is then applied to capture the essence of EVANS' (1963) analogy-type problems and WINSTON's (1970) learning-type situations. In our attempt not to be too informal, we use basic notions and terminology from abstract Algebra, Topology and Category theory.
We tend rather to be "non-logical" (analogical) in EVANS' and WINSTON's sense; "non-numeric" in MESAROVIC's (1970) terms (we deal rather with abstract conceptual entities); "non-linguistic" (we do not touch natural language); and "non-resolution" oriented, in the sense of BLEDSOE (1977). However, we sometimes give hints about logical deductive axiomatic systems employing First Order Predicate Calculus (FOPC), and about semiotics, by which we denote the syntactic-semantic-pragmatic features of our system and issues of the problem domains it acts upon. We believe in what we call a shift from the traditional 'heuristic search paradigm' era to an 'analogy paradigm' era underlying Artificial Intelligence and Cybernetics. We justify this by listing a number of AI works which employ, in one way or another, the concept of analogy over the last fifteen years or so, with a noticeable peak in recent years and especially in 1977. Finally, we hope that even if the proposed conceptual framework and techniques do not straightforwardly constitute some kind of platform for Artificial Intelligence, they would at least give some insight into, and illuminate our understanding of, the two most fundamental faculties with which the human brain is occupied, namely problem solving and learning