An approach to supervised learning of three valued Łukasiewicz logic in Hölldobler's core method
The core method [6] provides a way of translating logic programs into a multilayer perceptron computing least models of the programs. In [7], a variant of the core method for three-valued Łukasiewicz logic and its applicability to cognitive modelling were introduced. Building on these results, the present paper provides a modified core suitable for supervised learning, implements and executes supervised learning with the backpropagation algorithm and, finally, constructs a rule extraction method in order to close the neural-symbolic cycle.
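To make the translation concrete, the following minimal Python sketch (an illustrative toy, not the paper's implementation; program, atom names, and encoding are assumptions) encodes a small definite propositional program as a two-layer network of hard threshold units whose repeated application computes the least model via the immediate consequence operator:

```python
# Core-method-style encoding of a definite propositional program (toy example).
program = {
    "p": [[]],            # fact:  p.
    "q": [["p"]],         # rule:  q <- p.
    "r": [["q", "s"]],    # rule:  r <- q, s.
}
atoms = ["p", "q", "r", "s"]

def hidden_layer(state):
    # One unit per rule: it fires iff every atom in the rule body is true.
    return [(head, all(state[a] for a in body))
            for head, bodies in program.items()
            for body in bodies]

def output_layer(rule_units):
    # One unit per atom: it fires iff some rule with that head fired.
    return {a: any(fired for head, fired in rule_units if head == a)
            for a in atoms}

# Iterating the two layers applies the immediate consequence operator T_P;
# the fixpoint reached from the all-false interpretation is the least model.
state = {a: False for a in atoms}
while (new := output_layer(hidden_layer(state))) != state:
    state = new

print(sorted(a for a, v in state.items() if v))   # ['p', 'q']
```

A supervised-learning variant such as the one described in the paper would replace these hard thresholds with differentiable activations so that backpropagation can adjust the weights; that modification is not shown here.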
Learning Łukasiewicz logic
The integration between connectionist learning and logic-based reasoning is a longstanding foundational question in artificial intelligence, cognitive systems, and computer science in general. Research into neural-symbolic integration aims to tackle this challenge by developing approaches that bridge the gap between sub-symbolic and symbolic representation and computation. In this line of work, the core method has been suggested as a way of translating logic programs into a multilayer perceptron computing least models of the programs. In particular, a variant of the core method for three-valued Łukasiewicz logic has proven applicable to cognitive modelling, among others in the context of Byrne's suppression task. Building on the underlying formal results and the corresponding computational framework, the present article provides a modified core method suitable for the supervised learning of Łukasiewicz logic (and of a closely related variant thereof), implements and executes the corresponding supervised learning with the backpropagation algorithm and, finally, constructs a rule extraction method in order to close the neural-symbolic cycle. The resulting system is then evaluated in several empirical test cases, and recommendations for future developments are derived.
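For orientation, the following sketch (an assumed encoding of the truth values as 0, 0.5, and 1; not the article's code) spells out the connectives of three-valued Łukasiewicz logic that the learned networks are meant to capture:

```python
# Three-valued Lukasiewicz connectives over F=0.0, U=0.5, T=1.0 (assumed encoding).
F, U, T = 0.0, 0.5, 1.0

def neg(x):        return 1.0 - x                  # negation
def conj(x, y):    return min(x, y)                # (weak) conjunction
def disj(x, y):    return max(x, y)                # (weak) disjunction
def implies(x, y): return min(1.0, 1.0 - x + y)    # Lukasiewicz implication

# Characteristic of Lukasiewicz logic: an unknown antecedent and an unknown
# consequent still make the conditional true.
assert implies(U, U) == T
```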
A note on tractability and artificial intelligence
The recognition that human minds/brains are finite systems with limited resources for computation has led researchers in Cognitive Science to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. Since artificial intelligence (AI), in its attempt to recreate intelligence and capacities inspired by the human mind, also deals with finite systems, transferring the Tractable Cognition thesis into this new context and adapting it accordingly may give rise to insights and ideas that help progress towards the goals of the AI endeavor.
Applying AI for modeling and understanding analogy-based classroom teaching tools and techniques
This paper forms the final part of a short series of related articles [1,2] dedicated to highlighting a fruitful type of application of cognitively-inspired analogy engines in an educational context. It complements the earlier work with an additional fully worked-out example, providing a short analysis and a detailed formal model (based on the Heuristic-Driven Theory Projection computational analogy framework) of the Number Highrise, a tool for teaching multiplication-based relations in the range of the natural numbers up to 100 to children in their first years of primary school.
Towards integrated neural-symbolic systems for human-level AI: Two research programs helping to bridge the gaps
After a human-level-AI-oriented overview of the status quo in neural-symbolic integration, two research programs aiming to overcome long-standing challenges in the field are suggested to the community. The first program targets a better understanding of the foundational differences and relationships, on the level of computational complexity, between symbolic and subsymbolic computation and representation, potentially providing explanations for the empirical differences between the paradigms in application scenarios and a foothold for subsequent attempts at overcoming these. The second program suggests a new approach and computational architecture for the cognitively-inspired anchoring of an agent's learning, knowledge formation, and higher reasoning abilities in real-world interactions through a closed neural-symbolic acting/sensing-processing-reasoning cycle, potentially providing new foundations for future agent architectures, multi-agent systems, robotics, and cognitive systems, and facilitating a deeper understanding of the development and interaction in human-technological settings.
What Does Explainable AI Really Mean? A New Conceptualization of Perspectives
We characterize three notions of explainable AI that cut across research fields: opaque systems that offer no insight into their algorithmic mechanisms; interpretable systems whose algorithmic mechanisms users can mathematically analyze; and comprehensible systems that emit symbols enabling user-driven explanations of how a conclusion is reached. The paper is motivated by a corpus analysis of NIPS, ACL, COGSCI, and ICCV/ECCV paper titles showing differences in how work on explainable AI is positioned in various fields. We close by introducing a fourth notion: truly explainable systems, where automated reasoning is central to producing crafted explanations without requiring human post-processing as the final step of the generative process.
Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams
Concept blending, a cognitive process which combines certain elements (and their relations) from originally distinct conceptual spaces into a new unified space and enables reasoning and inference over the combination, is taken as a key element of creative thought and combinatorial creativity. In this article, we summarise our work towards the development of a computational-level and algorithmic-level account of concept blending, combining approaches from computational analogy-making and case-based reasoning (CBR). We present the theoretical background, as well as an algorithmic proposal integrating higher-order anti-unification matching and generalisation from analogy with amalgams from CBR. The feasibility of the approach is then exemplified in two case studies.
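As a pointer to the kind of generalisation step involved, the sketch below implements plain first-order anti-unification over simple terms; HDTP-style approaches use a restricted higher-order variant, and the term representation and example here are illustrative assumptions, not the article's system:

```python
# First-order anti-unification (least general generalisation) of two terms.
# Terms are tuples (functor, arg1, ...); constants are plain strings.
def anti_unify(s, t, variables=None):
    variables = {} if variables is None else variables
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        # Same functor and arity: generalise argument-wise.
        return (s[0],) + tuple(anti_unify(a, b, variables)
                               for a, b in zip(s[1:], t[1:]))
    # Otherwise introduce (or reuse) a variable for this mismatched pair.
    return variables.setdefault((s, t), f"X{len(variables)}")

# add(square(a), b) vs. add(cube(a), c)  ->  ('add', 'X0', 'X1')
print(anti_unify(("add", ("square", "a"), "b"),
                 ("add", ("cube", "a"), "c")))
```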
Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks
Explainability in Artificial Intelligence has been revived as a topic of active research by the need to convey safety and trust to users in the 'how' and 'why' of automated decision-making. Whilst a plethora of approaches have been developed for post-hoc explainability, only a few focus on how to use domain knowledge and on how this influences the understandability of global explanations from the users' perspective. In this paper, we show how ontologies help the understandability of global post-hoc explanations, presented in the form of symbolic models. In particular, we build on Trepan, an algorithm that explains artificial neural networks by means of decision trees, and we extend it to include ontologies modeling domain knowledge in the process of generating explanations. We present the results of a user study that measures the understandability of decision trees through a syntactic complexity measure, through time and accuracy of responses, and through reported user confidence and understandability. The user study considers domains where explanations are critical, namely finance and medicine. The results show that decision trees generated with our algorithm, which takes domain knowledge into account, are more understandable than those generated by standard Trepan without the use of ontologies.
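The sketch below illustrates the general surrogate-tree idea that Trepan builds on, using scikit-learn: a decision tree is fitted to the predictions of a trained network rather than to the ground truth, so that the tree acts as a global symbolic explanation of the black-box model. It is a minimal stand-in under assumed dataset and hyperparameter choices, not the Trepan Reloaded implementation:

```python
# Surrogate decision tree explaining a trained neural network (toy setup).
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)

# Fit the tree to the *network's* labels, not the ground truth, so that it
# approximates what the network has learned.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, net.predict(X))
print(export_text(tree))
```

Trepan itself additionally queries the network on sampled instances and uses m-of-n splits, and Trepan Reloaded further includes ontologies in the process of generating explanations; neither refinement is shown in this sketch.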
Reasoning in non-probabilistic uncertainty: logic programming and neural-symbolic computing as examples
This article aims to achieve two goals: to show that probability is not the only way of dealing with uncertainty (and, even more, that there are kinds of uncertainty which for principled reasons cannot be addressed with probabilistic means), and to provide evidence that logic-based methods can support reasoning with uncertainty well. For the latter claim, two paradigmatic examples are presented: Logic Programming with Kleene semantics, for modelling reasoning from the information in a discourse to an interpretation of the state of affairs of the intended model, and a neural-symbolic implementation of Input/Output logic, for dealing with uncertainty in a dynamic normative context.
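To make the first example slightly more concrete, here is a minimal sketch (assumed 0/0.5/1 encoding, not the article's system) of the strong Kleene connectives, under which information missing from a discourse simply remains unknown instead of being forced into a probability:

```python
# Strong Kleene three-valued connectives over F=0.0, U=0.5, T=1.0 (assumed encoding).
F, U, T = 0.0, 0.5, 1.0

def neg(x):        return 1.0 - x
def conj(x, y):    return min(x, y)
def disj(x, y):    return max(x, y)
def implies(x, y): return max(1.0 - x, y)   # material implication under strong Kleene

# An unknown antecedent keeps the conditional unknown (contrast Lukasiewicz,
# where U -> U evaluates to true).
assert implies(U, U) == U
```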