Transforming floundering into success
We show how logic programs with "delays" can be transformed to programs
without delays in a way which preserves information concerning floundering
(also known as deadlock). This allows a declarative (model-theoretic),
bottom-up or goal-independent approach to be used for the analysis and debugging of
properties related to floundering. We rely on some previously introduced
restrictions on delay primitives and a key observation which allows properties
such as groundness to be analysed by approximating the (ground) success set.
This paper is to appear in Theory and Practice of Logic Programming (TPLP).
Keywords: Floundering, delays, coroutining, program analysis, abstract
interpretation, program transformation, declarative debugging
Comment: Number of pages: 24; Number of figures: 9; Number of tables: none
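The delay-and-flounder behaviour this abstract refers to can be illustrated with a toy goal scheduler (a hypothetical sketch for intuition, not the paper's transformation): goals carry a guard variable and stay delayed until that variable is bound; a computation that ends with delayed goals still pending has floundered.

```python
# Toy sketch (not the paper's transformation): a goal queue in which some
# goals "delay" until their guard variable is bound. A computation that
# ends with delayed goals still pending is reported as floundered
# (deadlocked); otherwise it succeeds with the accumulated bindings.

def run(goals, bindings):
    """Repeatedly execute runnable goals; delayed goals wait for bindings."""
    pending = list(goals)
    progress = True
    while pending and progress:
        progress = False
        still_waiting = []
        for guard_var, action in pending:
            if guard_var is None or guard_var in bindings:
                action(bindings)          # goal is runnable: execute it
                progress = True
            else:
                still_waiting.append((guard_var, action))  # stays delayed
        pending = still_waiting
    return ("floundered", pending) if pending else ("success", bindings)

# A goal delayed on X flounders if nothing ever binds X ...
status, _ = run([("X", lambda b: b.setdefault("Y", 1))], {})
# ... but the same goal succeeds once another goal supplies the binding.
status2, binds = run([("X", lambda b: b.setdefault("Y", 1)),
                      (None, lambda b: b.setdefault("X", 0))], {})
```

The second call shows why floundering is a whole-computation property rather than a property of a single goal: the delayed goal is rescued by a sibling that produces the binding it waits on.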
Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams
Concept blending, a cognitive process which allows certain elements (and their relations) from originally distinct conceptual spaces to be combined into a new unified space, and which enables reasoning and inference over the combination, is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.
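The anti-unification step mentioned in the abstract can be made concrete with a small first-order sketch (the article itself uses higher-order anti-unification, so this is a simplified, hypothetical illustration): the least general generalisation (LGG) of two terms keeps shared structure and replaces mismatching subterms with variables, producing the shared schema that analogy-based blending builds on.

```python
# First-order anti-unification sketch: compute the least general
# generalisation (LGG) of two terms. Simplified illustration only; the
# article's account uses higher-order anti-unification.
# Terms: tuples ("functor", arg1, ...) for compounds, strings for constants.

def lgg(s, t, table=None):
    if table is None:
        table = {}          # maps mismatching subterm pairs to variables
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: generalise argument-wise.
        return (s[0],) + tuple(lgg(a, b, table) for a, b in zip(s[1:], t[1:]))
    # Mismatch: introduce (or reuse) a variable for this pair of subterms.
    return table.setdefault((s, t), f"X{len(table)}")

# "earth orbits the sun" vs "an electron orbits the nucleus"
# generalise to a shared schema with the mismatches abstracted away:
print(lgg(("orbits", "earth", "sun"), ("orbits", "electron", "nucleus")))
```

Reusing the variable table across arguments preserves the LGG property that the same pair of mismatching subterms is always abstracted by the same variable.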
Detecting Prolog programming techniques using abstract interpretation
There have been a number of attempts at developing intelligent tutoring systems (ITSs)
for teaching students various programming languages. An important component of such
an ITS is a debugger capable of recognizing errors in the code the student writes and
possibly suggesting ways of correcting such errors. The debugging process involves a
wealth of knowledge about the programming language, the student and the individual
problem at hand, and an automated debugging component makes use of a number of
tools which apply this knowledge. Successive ITSs have incorporated a wider range of
knowledge and more powerful tools.
The research described in this thesis should be seen as carrying on with this
succession. Specifically, we attempt to enhance an existing Prolog ITS (PITS) debugger called
APROPOS2, developed by Looi. The enhancements take the form of a richer language
with which to describe Prolog code and more powerful tools with which constructs in
this language may be detected in Prolog code.
The richer language is based on the notion of programming techniques—common
patterns in code which capture in some sense an expert's understanding of Prolog.
The tools are based on Prolog abstract interpretation—a program analysis method for
inferring dynamic properties of code. Our research makes contributions to both these
areas.
We develop a language for describing classes of Prolog programming techniques
that manipulate data-structures. We define classes in this language for common Prolog
techniques such as accumulator pairs and difference structures.
We use abstract interpretation to infer the dynamic features with which techniques
are described. We develop a general framework for abstract interpretation which is
described in Prolog, so leading directly to an implementation. We develop two abstract
domains—one which infers general data flow information about the code and one which
infers particularly detailed type information—and describe the implementation of the
former.
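The abstract-interpretation recipe the thesis relies on can be sketched in miniature (a hypothetical illustration using the classic sign domain, not the thesis's data-flow or type domains): a program is evaluated over a small abstract domain instead of concrete values, so dynamic properties can be inferred without running the code.

```python
# Abstract-interpretation sketch over the sign domain {"-", "0", "+", "?"}.
# Illustrative only: the thesis's domains track Prolog data flow and types,
# but the recipe is the same -- replace concrete values by abstract ones
# and give each operation a sound abstract counterpart.

MUL = {("+", "+"): "+", ("+", "-"): "-",
       ("-", "+"): "-", ("-", "-"): "+"}

def abs_mul(a, b):
    if a == "0" or b == "0":
        return "0"
    return MUL.get((a, b), "?")        # unknown signs stay unknown

def abs_add(a, b):
    if a == b:
        return a                       # e.g. (+) + (+) = (+)
    if a == "0":
        return b
    if b == "0":
        return a
    return "?"                         # (+) + (-) could be anything

# x*x + y*y: each square is (+) whatever the (non-zero) sign of the input,
# so the sum is known to be (+) without ever computing a concrete value.
print(abs_add(abs_mul("+", "+"), abs_mul("-", "-")))
```

The loss of precision in `abs_add("+", "-")` returning `"?"` is the characteristic trade-off: the analysis is sound for every concrete execution, but only as precise as the chosen domain.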
Global parallel unification for large question-answering systems
An efficient means of storing data in a first-order predicate calculus theorem-proving system is described. The data structure is oriented for large scale question-answering (QA) systems. An algorithm is outlined which uses the data structure to unify a given literal in parallel against all literals in all clauses in the data base. The data structure permits a compact representation of data within a QA system. Some suggestions are made for heuristics which can be used to speed up the unification algorithm in such systems.
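The sequential operation that this paper applies in parallel can be sketched as standard first-order unification (a hedged illustration of the textbook algorithm, not the paper's data structure; the occurs check is omitted for brevity):

```python
# Standard first-order unification, the per-literal operation the paper
# runs in parallel against the whole data base. Variables are strings
# starting with an uppercase letter; compound terms are tuples
# ("functor", arg1, ...). Occurs check omitted for brevity.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:    # follow variable bindings
        t = subst[t]
    return t

def unify(s, t, subst=None):
    """Return a substitution unifying s and t, or None on failure."""
    if subst is None:
        subst = {}
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return {**subst, s: t}
    if is_var(t):
        return {**subst, t: s}
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                         # functor/arity clash

# parent(X, bob) unifies with parent(alice, Y):
print(unify(("parent", "X", "bob"), ("parent", "alice", "Y")))
```

Because each literal-vs-literal attempt is independent, a query literal can be matched against every stored clause head concurrently, which is the opportunity the paper's parallel algorithm exploits.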
Checking Computations of Formal Method Tools - A Secondary Toolchain for ProB
We present the implementation of pyB, a predicate- and expression-checker
for the B language. The tool is to be used in a secondary toolchain for data
validation and data generation, with ProB being used in the primary toolchain.
Indeed, pyB is an independent cleanroom-implementation which is used to
double-check solutions generated by ProB, an animator and model-checker for B
specifications. One of the major goals is to use ProB together with pyB to
generate reliable outputs for high-integrity safety critical applications.
Although pyB is still work in progress, the ProB/pyB toolchain has already been
successfully tested on various industrial B machines and data validation tasks.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
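The double-checking pattern behind the ProB/pyB pairing can be sketched generically (a hypothetical illustration in which plain Python expressions stand in for B predicates, and the two evaluator functions stand in for the independently developed tools): a result is trusted only when both independent implementations agree.

```python
# Sketch of the secondary-toolchain pattern: two independently written
# evaluators are run on the same predicate and must agree before the
# result is trusted. Here plain Python expressions stand in for B
# predicates; eval_primary/eval_secondary stand in for ProB and pyB.

def eval_primary(pred, env):
    # stands in for the primary tool chain (ProB)
    return bool(eval(pred, {"__builtins__": {}}, dict(env)))

def eval_secondary(pred, env):
    # stands in for the independent cleanroom implementation (pyB)
    code = compile(pred, "<pred>", "eval")
    return bool(eval(code, {"__builtins__": {}}, dict(env)))

def checked(pred, env):
    a, b = eval_primary(pred, env), eval_secondary(pred, env)
    if a != b:
        raise RuntimeError(f"tools disagree on {pred!r}: {a} vs {b}")
    return a                            # trusted only when both agree

print(checked("x + 1 < y", {"x": 3, "y": 5}))
```

In the real toolchain the value of the pattern comes from the two implementations sharing no code, so a defect in one is unlikely to be reproduced by the other.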
Analogy, Amalgams, and Concept Blending
Concept blending — a cognitive process which allows for the combination of certain elements (and their relations) from originally distinct conceptual spaces into a new unified space combining these previously separate elements, and enables reasoning and inference over the combination — is taken as a key element of creative thought and combinatorial creativity. In this paper, we provide an intermediate report on work towards the development of a computational-level and algorithmic-level account of concept blending. We present the theoretical background as well as an algorithmic proposal combining techniques from computational analogy-making and case-based reasoning, and exemplify the feasibility of the approach in two case studies. © 2015 Cognitive Systems Foundation. The authors acknowledge the financial support of the Future and Emerging Technologies programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 611553 (COINVENT). Peer reviewed.
Expectations for Associative-Commutative Unification Speedups in a Multicomputer Environment
An essential element of automated deduction systems is the unification algorithm, which identifies a general substitution that, when applied to two expressions, makes them identical. However, functions which are associative and commutative, such as the usual addition and multiplication functions, often arise in term rewriting systems, program verification, the theory of abstract data types and logic programming. The introduction of the associative and commutative equality axioms together with standard unification brings with it problems of termination and unreasonably large search spaces. One way around these problems is to remove the troublesome axioms from the system and to employ a unification algorithm which unifies modulo the axioms of associativity and commutativity. Unlike standard unification, the associative-commutative (AC) unification of two expressions can lead to the formation of many most general unifiers. A report is presented on a hybrid AC unification algorithm which has been implemented to run in parallel on an Intel iPSC/
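The key contrast the abstract draws, that AC unification can yield many most general unifiers where standard unification yields at most one, can be shown with a deliberately naive sketch (brute force over argument orderings of flat terms under a commutative functor; a real AC unifier is far subtler):

```python
# Why AC unification differs from standard unification: when "+" is
# commutative, X+Y unifies with a+b in more than one most general way.
# Naive brute force for flat argument lists only; illustrative sketch,
# not the paper's hybrid parallel algorithm.
from itertools import permutations

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def c_unify(args1, args2):
    """All unifiers of two argument lists under a commutative functor."""
    unifiers = []
    for perm in permutations(args2):   # try every argument ordering
        subst = {}
        ok = True
        for a, b in zip(args1, perm):
            a = subst.get(a, a)        # apply any earlier binding
            if is_var(a):
                subst[a] = b
            elif a != b:
                ok = False             # constant clash under this ordering
                break
        if ok and subst not in unifiers:
            unifiers.append(subst)
    return unifiers

# X+Y against a+b: two distinct most general unifiers.
print(c_unify(["X", "Y"], ["a", "b"]))
```

The factorial blow-up of this brute force is exactly the search-space problem the abstract mentions, and the motivation both for cleverer AC algorithms and for running the unifier in parallel.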