A Description Logic of Typicality for Conceptual Combination
We propose a nonmonotonic Description Logic of typicality able to
account for the phenomenon of combining prototypical concepts, an open problem
in the fields of AI and cognitive modelling. Our logic extends the logic of
typicality ALC + TR, based on the notion of rational closure, by inclusions
p :: T(C) v D ("we have probability p that typical Cs are Ds"), coming
from the distributed semantics of probabilistic Description Logics. Additionally,
it embeds a set of cognitive heuristics for concept combination. We show that the
complexity of reasoning in our logic is EXPTIME-complete as in ALC
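The distributed semantics mentioned above assigns each probabilistic inclusion an independent probability of being kept, so the probability of a scenario is the product over axiom choices. A minimal sketch of that idea, with hypothetical axioms and probabilities (not taken from the paper):

```python
from itertools import combinations

# Hypothetical probabilistic typicality inclusions "p :: T(C) v D":
# under the distributed semantics, each axiom is independently kept
# with probability p and dropped with probability 1 - p.
axioms = {
    "T(Bird) v Flies": 0.9,
    "T(Penguin) v NotFlies": 0.95,
}

def scenario_probability(kept):
    """Probability of the scenario keeping exactly the axioms in `kept`."""
    prob = 1.0
    for ax, p in axioms.items():
        prob *= p if ax in kept else (1.0 - p)
    return prob

# Sanity check: the probabilities of all scenarios sum to 1.
total = sum(
    scenario_probability(set(subset))
    for r in range(len(axioms) + 1)
    for subset in combinations(axioms, r)
)
# total == 1.0
```

Reasoning in the logic then weights each scenario's (classical) consequences by these probabilities; the sketch only illustrates the probability space, not the typicality machinery itself.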
Dagstuhl Seminar Proceedings 10302 Learning paradigms in dynamic environments
Abstract

We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.

Overview

The study of human behaviour is an important part of computer science, artificial intelligence (AI), neural computation, cognitive science, philosophy, psychology and other areas. Among the most prominent tools in the modelling of behaviour are computational-logic systems (classical logic, nonmonotonic logic, modal and temporal logic) and connectionist models of cognition (feedforward and recurrent networks, symmetric and deep networks, self-organising networks). Recent studies in cognitive science, artificial intelligence and evolutionary psychology have produced a number of cognitive models of reasoning, learning and language that are underpinned by computatio
Accessible reasoning with diagrams: From cognition to automation
High-tech systems are ubiquitous and often safety and security critical: reasoning about their correctness is paramount. Thus, precise modelling and formal reasoning are necessary in order to convey knowledge unambiguously and accurately. Whilst mathematical modelling adds great rigour, it is opaque to many stakeholders, which leads, for example, to errors in data handling and delays in product release. This is a major motivation for the development of diagrammatic approaches to formalisation and reasoning about models of knowledge. In this paper, we present an interactive theorem prover, called iCon, for a highly expressive diagrammatic logic that is capable of modelling OWL 2 ontologies and, thus, has practical relevance. Significantly, this work is the first to design diagrammatic inference rules using insights into what humans find accessible. Specifically, we conducted an experiment about the relative cognitive benefits of primitive (small step) and derived (big step) inferences, and use the results to guide the implementation of inference rules in iCon
On Logical Characterisation of Human Concept Learning based on Terminological Systems
The central focus of this article is the epistemological assumption that knowledge can be generated from human beings' experiences and their conceptions of the world. Logical characterisation of human inductive learning over their produced conceptions within terminological systems, and providing a logical background for theorising over the Human Concept Learning Problem (HCLP) in terminological systems, are the main contributions of this research. In order to make a linkage between "Logic" and "Cognition", Description Logics (DLs) will be employed to provide a logical description and analysis of actual human inductive reasoning (and learning). This research connects with the topics "logic & learning", "cognitive modelling", and "terminological knowledge representation"
Learning Łukasiewicz logic
The integration between connectionist learning and logic-based reasoning is a longstanding foundational question in artificial intelligence, cognitive systems, and computer science in general. Research into neural-symbolic integration aims to tackle this challenge, developing approaches bridging the gap between sub-symbolic and symbolic representation and computation. In this line of work the core method has been suggested as a way of translating logic programs into a multilayer perceptron computing least models of the programs. In particular, a variant of the core method for three-valued Łukasiewicz logic has proven to be applicable to cognitive modelling among others in the context of Byrne's suppression task. Building on the underlying formal results and the corresponding computational framework, the present article provides a modified core method suitable for the supervised learning of Łukasiewicz logic (and of a closely-related variant thereof), implements and executes the corresponding supervised learning with the backpropagation algorithm and, finally, constructs a rule extraction method in order to close the neural-symbolic cycle. The resulting system is then evaluated in several empirical test cases, and recommendations for future developments are derived
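The core method builds a network whose forward pass computes the immediate-consequence operator of a logic program, so iterating the network converges to the program's least model. A minimal sketch of the operator being computed, using a hypothetical propositional program (the actual core method encodes this into perceptron weights, which is omitted here):

```python
# Hypothetical propositional logic program, head -> list of bodies
# (each body is a list of atoms that must all hold):
program = {
    "a": [[]],        # a.        (fact: empty body)
    "b": [["a"]],     # b <- a.
    "c": [["b"]],     # c <- b.
}

def tp(interpretation):
    """One application of the immediate-consequence operator T_P:
    derive every head that has at least one satisfied body."""
    return {
        head
        for head, bodies in program.items()
        if any(all(atom in interpretation for atom in body) for body in bodies)
    }

# Iterating T_P from the empty interpretation reaches the least model,
# mirroring repeated forward passes through the core network.
model = set()
while True:
    nxt = tp(model)
    if nxt == model:
        break
    model = nxt
# model == {"a", "b", "c"}
```

The three-valued Łukasiewicz variant discussed in the abstract additionally tracks explicitly false atoms; this two-valued sketch shows only the fixpoint structure the network realises.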
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of
neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty
A neural cognitive model of argumentation with application to legal inference and decision making
Formal models of argumentation have been investigated in several areas, from multi-agent systems and artificial intelligence (AI) to decision making, philosophy and law. In artificial intelligence, logic-based models have been the standard for the representation of argumentative reasoning. More recently, the standard logic-based models have been shown equivalent to standard connectionist models. This has created a new line of research where (i) neural networks can be used as a parallel computational model for argumentation and (ii) neural networks can be used to combine argumentation, quantitative reasoning and statistical learning. At the same time, non-standard logic models of argumentation started to emerge. In this paper, we propose a connectionist cognitive model of argumentation that accounts for both standard and non-standard forms of argumentation. The model is shown to be an adequate framework for dealing with standard and non-standard argumentation, including joint-attacks, argument support, ordered attacks, disjunctive attacks, meta-level attacks, self-defeating attacks, argument accrual and uncertainty. We show that the neural cognitive approach offers an adequate way of modelling all of these different aspects of argumentation. We have applied the framework to the modelling of a public prosecution charging decision as part of a real legal decision making case study containing many of the above aspects of argumentation. The results show that the model can be a useful tool in the analysis of legal decision making, including the analysis of what-if questions and the analysis of alternative conclusions. The approach opens up two new perspectives in the short-term: the use of neural networks for computing prevailing arguments efficiently through the propagation in parallel of neuronal activations, and the use of the same networks to evolve the structure of the argumentation network through learning (e.g. to learn the strength of arguments from data)
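The equivalence between logic-based argumentation and connectionist models rests on the fact that acceptance in an attack graph can be computed by iterated local propagation, much like spreading neuronal activations. A hedged sketch of that fixpoint computation on a hypothetical three-argument attack graph (the paper's neural encoding and non-standard attack forms are not reproduced here):

```python
# Hypothetical abstract argumentation framework: attacker -> attacked.
attacks = {
    "B": {"A"},   # B attacks A
    "C": {"B"},   # C attacks B (so C reinstates A)
}
arguments = {"A", "B", "C"}

def attackers_of(x):
    return {a for a, targets in attacks.items() if x in targets}

# Propagate acceptance/rejection to a fixpoint (grounded semantics):
# accept an argument once all its attackers are rejected,
# reject it once some attacker is accepted.
accepted, rejected = set(), set()
changed = True
while changed:
    changed = False
    for x in arguments - accepted - rejected:
        ins = attackers_of(x)
        if ins <= rejected:
            accepted.add(x)
            changed = True
        elif ins & accepted:
            rejected.add(x)
            changed = True
# accepted == {"A", "C"}: C is unattacked, B is defeated, A is reinstated.
```

In the neural reading, each iteration of this loop corresponds to one round of parallel activation updates, which is what makes prevailing arguments computable efficiently by a network.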