A Decidable Confluence Test for Cognitive Models in ACT-R
Computational cognitive modeling investigates human cognition by building
detailed computational models for cognitive processes. Adaptive Control of
Thought - Rational (ACT-R) is a rule-based cognitive architecture that offers a
widely employed framework to build such models. There is a sound and complete
embedding of ACT-R in Constraint Handling Rules (CHR). Therefore, analysis
techniques from CHR can be used to reason about computational properties of
ACT-R models. For example, confluence is the property that a program yields the
same result for the same input regardless of the order in which its rules are
applied.
In ACT-R models, some cognitive processes should always yield the same result,
while others, e.g. competing strategies for solving a problem, may legitimately
yield different results. In this paper, a decidable confluence criterion for
ACT-R is presented. It identifies ACT-R rules that are not confluent, so the
modeler can check whether the model has the desired behavior.
The sound and complete translation of ACT-R to CHR from prior work is used to
derive a suitable invariant-based confluence criterion from the CHR literature.
Proper invariants for translated ACT-R models are identified and proven to be
decidable. Confluence under the presented criterion coincides with confluence
of the original ACT-R models.
Comment: To appear in Stefania Costantini, Enrico Franconi, William Van
Woensel, Roman Kontchakov, Fariba Sadri, and Dumitru Roman: "Proceedings of
RuleML+RR 2017". Springer LNC
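Confluence can be illustrated on a toy string-rewriting system (an illustrative Python sketch, unrelated to the paper's actual CHR-based criterion): a rule set is confluent on a term when every possible order of rule application leads to the same normal form, and non-confluence shows up as several distinct normal forms.

```python
def normal_forms(term, rules):
    """Collect all normal forms reachable from `term` under `rules`.

    `rules` is a list of (lhs, rhs) string-replacement rules; a rule
    applies wherever its lhs occurs as a substring of the term.
    """
    seen, stack, normals = set(), [term], set()
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        successors = []
        for lhs, rhs in rules:
            i = t.find(lhs)
            while i != -1:                       # every redex position
                successors.append(t[:i] + rhs + t[i + len(lhs):])
                i = t.find(lhs, i + 1)
        if successors:
            stack.extend(successors)
        else:
            normals.add(t)                       # no rule applies: normal form

    return normals

# Confluent: all application orders erase the string completely.
print(normal_forms("abab", [("ab", ""), ("ba", "")]))   # {''}

# Not confluent: the two rules compete and give different results.
print(normal_forms("b", [("b", "a"), ("b", "c")]))      # {'a', 'c'}
```

A single normal form for every start term is evidence of confluence; more than one, as in the second call, proves its absence.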
A Graphical Language for Proof Strategies
Complex automated proof strategies are often difficult to extract, visualise,
modify, and debug. Traditional tactic languages, often based on stack-based
goal propagation, make it easy to write proofs that obscure the flow of goals
between tactics and that are fragile to minor changes in input, proof
structure, or the tactics themselves. Here, we address this by introducing a graphical
language called PSGraph for writing proof strategies. Strategies are
constructed visually by "wiring together" collections of tactics and evaluated
by propagating goal nodes through the diagram via graph rewriting. Tactic nodes
can have many output wires, and use a filtering procedure based on goal-types
(predicates describing the features of a goal) to decide where best to send
newly-generated sub-goals.
In addition to making the flow of goal information explicit, the graphical
language can fulfil the role of many tacticals using visual idioms like
branching, merging, and feedback loops. We argue that this language enables
development of more robust proof strategies and provide several examples, along
with a prototype implementation in Isabelle.
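The goal-routing idea can be sketched as a toy in Python (illustrative names only, not PSGraph's actual API): a tactic node produces subgoals, and each outgoing wire carries a goal-type predicate deciding which subgoals it accepts.

```python
# Toy goal-flow sketch: a tactic returns subgoals, and each output
# wire carries a goal-type predicate (a function on goals) that
# decides which newly generated subgoals travel along it.

def split_conj(goal):
    """A toy 'tactic': splits 'A & B' into the subgoals ['A', 'B']."""
    return goal.split(" & ") if " & " in goal else [goal]

def route(subgoals, wires):
    """Send each subgoal down the first wire whose predicate accepts it."""
    routed = {name: [] for name, _ in wires}
    for g in subgoals:
        for name, accepts in wires:
            if accepts(g):
                routed[name].append(g)
                break
    return routed

wires = [("arith", lambda g: "=" in g),   # equational goals go one way
         ("rest",  lambda g: True)]       # default wire for everything else

print(route(split_conj("x = 1 & P(x)"), wires))
# {'arith': ['x = 1'], 'rest': ['P(x)']}
```

The predicates on the wires play the role of PSGraph's goal-types: they make the flow of goals between tactics explicit instead of relying on stack positions.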
The role of computational logic as a hinge paradigm among deduction, problem solving, programming, and parallelism
This paper presents some brief considerations on the role of Computational Logic in the construction of Artificial Intelligence systems and in programming in general. It does not address how the many problems in AI can be solved but, rather more modestly, tries to point out some advantages of Computational Logic as a tool for the AI scientist in this quest. It addresses the interaction between declarative and procedural views of programs (deduction and action), the impact of the intrinsic limitations of logic, the relationship with other apparently competing computational paradigms, and finally discusses implementation-related issues, such as the efficiency of current implementations and their capability for efficiently exploiting existing and future sequential and parallel hardware. The purpose of the discussion is in no way to present Computational Logic as the unique overall vehicle for the development of intelligent systems (in the firm belief that such a panacea is yet to be found) but rather to stress its strengths in providing reasonable solutions to several aspects of the task.
Cognitive architectures as Lakatosian research programmes: two case studies
Cognitive architectures - task-general theories of the structure and function of the complete cognitive system - are sometimes argued to be more akin to frameworks or belief systems than to scientific theories. The argument stems from the apparent non-falsifiability of existing cognitive architectures. Newell was aware of this criticism and argued that architectures should be viewed not as theories subject to Popperian falsification, but rather as Lakatosian research programmes based on cumulative growth. Newell's argument is undermined, however, because he failed to demonstrate that the development of Soar, his own candidate architecture, adhered to Lakatosian principles. This paper presents detailed case studies of the development of two cognitive architectures, Soar and ACT-R, from a Lakatosian perspective. It is demonstrated that both are broadly Lakatosian, but that in both cases there have been theoretical progressions that, according to Lakatosian criteria, are pseudo-scientific. Thus, Newell's defence of Soar as a scientific rather than pseudo-scientific theory is not supported in practice. The ACT series of architectures has fewer pseudo-scientific progressions than Soar, but it too is vulnerable to accusations of pseudo-science. From this analysis, it is argued that successive versions of theories of the human cognitive architecture must explicitly address five questions to maintain scientific credibility.
Capturing Hiproofs in HOL Light
Hierarchical proof trees (hiproofs for short) add structure to ordinary proof
trees, by allowing portions of trees to be hierarchically nested. The
additional structure can be used to abstract away from details, or to label
particular portions to explain their purpose. In this paper we present two
complementary methods for capturing hiproofs in HOL Light, along with a tool to
produce web-based visualisations. The first method uses tactic recording, by
modifying tactics to record their arguments and construct a hierarchical tree;
this allows a tactic proof script to be modified. The second method uses proof
recording, which extends the HOL Light kernel to record hierarchical proof trees
alongside theorems. This method is less invasive, but requires care to manage
the size of the recorded objects. We have implemented both methods, resulting
in two systems: Tactician and HipCam.
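The nesting idea behind hiproofs can be sketched with a minimal tree type (an illustrative Python toy, not HOL Light's actual datatypes): leaves are atomic proof steps, and inner nodes are labelled boxes that abstract away the details of their contents.

```python
# Minimal hierarchical proof-tree sketch: a node is a labelled box
# whose children may themselves be boxes, mirroring how hiproofs
# nest portions of an ordinary proof tree under explanatory labels.

class Hiproof:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

    def atomic_steps(self):
        """Flatten the hierarchy back into the ordinary proof-tree leaves."""
        if not self.children:
            return [self.label]
        return [s for c in self.children for s in c.atomic_steps()]

    def outline(self, depth=0):
        """Render the hierarchy, abstracting leaves under their labelled boxes."""
        lines = ["  " * depth + self.label]
        for c in self.children:
            lines += c.outline(depth + 1)
        return lines

proof = Hiproof("main goal", [
    Hiproof("simplify", [Hiproof("rewrite add_comm"), Hiproof("rewrite add_zero")]),
    Hiproof("close", [Hiproof("apply refl")]),
])
print(proof.atomic_steps())
# ['rewrite add_comm', 'rewrite add_zero', 'apply refl']
```

The `outline` view corresponds to abstracting away from details, while `atomic_steps` recovers the underlying flat proof; a recording implementation must keep this flattening cheap, which is the size-management concern the abstract mentions.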
A Synthesis of the Procedural and Declarative Styles of Interactive Theorem Proving
We propose a synthesis of the two proof styles of interactive theorem
proving: the procedural style (where proofs are scripts of commands, like in
Coq) and the declarative style (where proofs are texts in a controlled natural
language, like in Isabelle/Isar). Our approach combines the advantages of the
declarative style - the possibility to write formal proofs like normal
mathematical text - and the procedural style - strong automation and help with
shaping the proofs, including determining the statements of intermediate steps.
Our approach is new, and differs significantly from the ways in which the
procedural and declarative proof styles have been combined before in the
Isabelle, Ssreflect and Matita systems. Our approach is generic and can be
implemented on top of any procedural interactive theorem prover, regardless of
its architecture and logical foundations. To show the viability of our proposed
approach, we fully implemented it as a proof interface called miz3, on top of
the HOL Light interactive theorem prover. The declarative language that this
interface uses is a slight variant of the language of the Mizar system, and can
be used for any interactive theorem prover regardless of its logical
foundations. The miz3 interface allows easy access to the full set of tactics
and formal libraries of HOL Light, and as such has "industrial strength". Our
approach gives a way to automatically convert any procedural proof to a
declarative counterpart, where the converted proof is similar in size to the
original. As all declarative systems have essentially the same proof language,
this gives a straightforward way to port proofs between interactive theorem
provers.
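The contrast between the two styles can be sketched in Lean 4 (an illustrative analogue only; miz3 itself uses a Mizar-like language on top of HOL Light): the procedural proof is a bare tactic script whose intermediate goals stay implicit, while the declarative proof states its intermediate fact in the text.

```lean
-- Procedural style: a script of tactic commands; the intermediate
-- goals live only in the prover's internal state.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]

-- Declarative style: the intermediate statement is written out
-- explicitly, so the proof reads more like ordinary mathematics.
example (a b : Nat) : a + b = b + a := by
  have h : a + b = b + a := Nat.add_comm a b
  exact h
```

A converter in the spirit of miz3 would turn the first form into the second by making each tactic's resulting goal explicit as a stated step.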