
    Neurons and Symbols: A Manifesto

    We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.

    Modelling individual variability in cognitive development

    Investigating variability in reasoning tasks can provide insights into key issues in the study of cognitive development. These include the mechanisms that underlie developmental transitions, and the distinction between individual differences and developmental disorders. We explored the mechanistic basis of variability in two connectionist models of cognitive development, a model of the Piagetian balance scale task (McClelland, 1989) and a model of the Piagetian conservation task (Shultz, 1998). For the balance scale task, we began with a simple feed-forward connectionist model and training patterns based on McClelland (1989). We investigated computational parameters, problem encodings, and training environments that contributed to variability in development, both across groups and within individuals. We report on the parameters that affect the complexity of reasoning and the nature of ‘rule’ transitions exhibited by networks learning to reason about balance scale problems. For the conservation task, we took the task structure and problem encoding of Shultz (1998) as our base model. We examined the computational parameters, problem encodings, and training environments that contributed to variability in development, in particular examining the parameters that affected the emergence of abstraction. We relate the findings to existing cognitive theories on the causes of individual differences in development.
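    As a minimal sketch of the kind of model this abstract describes, the code below trains a small feed-forward network on balance scale patterns labelled by torque. It is not McClelland's (1989) model: the one-hot encoding, hidden-layer size, learning rate, and training regime are illustrative assumptions only, and varying them is exactly the sort of manipulation the paper investigates.

```python
# Illustrative sketch of a feed-forward network for the balance scale task.
# NOT McClelland's (1989) original model; encodings, layer sizes and learning
# parameters below are assumptions chosen only to make the example run.
import numpy as np

rng = np.random.default_rng(0)

def encode(problem):
    """One-hot encode (left_weight, left_dist, right_weight, right_dist), each 1-5."""
    vec = np.zeros(20)
    for i, value in enumerate(problem):
        vec[i * 5 + (value - 1)] = 1.0
    return vec

def label(problem):
    """Target side, determined by comparing torques (weight * distance)."""
    lw, ld, rw, rd = problem
    left, right = lw * ld, rw * rd
    if left > right:
        return np.array([1.0, 0.0])
    if right > left:
        return np.array([0.0, 1.0])
    return np.array([0.5, 0.5])          # balanced

# All 5x5x5x5 problems; biasing this training environment is one of the
# manipulations the paper explores.
problems = [(lw, ld, rw, rd) for lw in range(1, 6) for ld in range(1, 6)
            for rw in range(1, 6) for rd in range(1, 6)]
X = np.array([encode(p) for p in problems])
Y = np.array([label(p) for p in problems])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-layer sigmoid network trained by plain batch backpropagation.
n_hidden = 4                              # assumed "capacity" parameter
W1 = rng.normal(0, 0.5, (20, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, 2))
lr = 0.1

for epoch in range(2000):
    H = sigmoid(X @ W1)
    O = sigmoid(H @ W2)
    d_out = (O - Y) * O * (1 - O)         # MSE gradient at the output layer
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_hid) / len(X)

mask = Y[:, 0] != 0.5                     # ignore balanced problems when scoring
acc = np.mean(O.argmax(1)[mask] == Y.argmax(1)[mask])
print(f"post-training accuracy on non-balanced problems: {acc:.2f}")
```

    Varying n_hidden, the learning rate, or the mix of problem types in the training set is one hedged way to probe how such parameters could produce the developmental variability discussed above.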

    Individual differences in relational learning and analogical reasoning: A computational model of longitudinal change

    Children’s cognitive control and knowledge at school entry predict growth rates in analogical reasoning skill over time; however, the mechanisms by which these factors interact and impact learning are unclear. We propose that inhibitory control (IC) is critical for developing both the relational representations necessary to reason and the ability to use these representations in complex problem solving. We evaluate this hypothesis using computational simulations in a model of analogical thinking, Discovery of Relations by Analogy/Learning and Inference with Schemas and Analogy (DORA/LISA; Doumas et al., 2008). Longitudinal data from children who solved geometric analogy problems repeatedly over 6 months show three distinct learning trajectories, though all children gained somewhat: analogical reasoners throughout, non-analogical reasoners throughout, and transitional reasoners, who started non-analogical and grew to be analogical. Varying the base level of top-down lateral inhibition in DORA affected the model’s ability to learn relational representations, which, in conjunction with the inhibition levels used in LISA during reasoning, simulated the accuracy rates and error types seen in the three different learning trajectories. These simulations suggest that IC may not only impact reasoning ability but may also shape the ability to acquire relational knowledge given reasoning opportunities.
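    As a toy illustration of the inhibition manipulation described above (this is not DORA/LISA, and the dynamics and parameter values are assumptions for illustration only), the sketch below shows how a single lateral inhibition parameter determines whether competing units settle into a blended, co-active pattern or a single clear winner.

```python
# Toy competitive-settling network (NOT DORA/LISA): illustrates only the
# general effect of a lateral inhibition parameter on whether competing
# units blur together or separate into a clear winner.
import numpy as np

def settle(inputs, inhibition, steps=80, rate=0.2):
    """Leaky competitive settling: each unit is excited by its input and
    inhibited in proportion to the summed activation of the other units."""
    a = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        lateral = inhibition * (a.sum() - a)   # inhibition from competitors
        a += rate * (inputs - lateral - a)     # leaky integration
        a = np.clip(a, 0.0, 1.0)
    return a

inputs = np.array([1.0, 0.9, 0.8])             # three similar competing units

for gamma in (0.1, 0.5, 2.0):
    print(gamma, np.round(settle(inputs, gamma), 2))
# Weak inhibition leaves all three units co-active (role bindings blur);
# moderate inhibition yields graded activation; strong inhibition produces
# a single clear winner, loosely analogous to cleaner separation of
# relational roles when inhibitory control is high.
```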

    Adapting to an uncertain world: Cognitive capacity and causal reasoning with ambiguous observations

    Ambiguous causal evidence, in which the covariance of the cause and effect is only partially known, is pervasive in real-life situations. Little is known about how people reason about causal associations with ambiguous information and the underlying cognitive mechanisms. This paper presents three experiments exploring the cognitive mechanisms of causal reasoning with ambiguous observations. Results revealed that the influence of ambiguous observations, manifested as missing information, on causal reasoning depended on the availability of cognitive resources, suggesting that processing ambiguous information may involve deliberative cognitive processes. Experiment 1 demonstrated that subjects did not ignore the ambiguous observations in causal reasoning. They also had a general tendency to treat the ambiguous observations as negative evidence against the causal association. Experiment 2 and Experiment 3 included a causal learning task requiring a high cognitive demand in which paired stimuli were presented to subjects sequentially. Both experiments revealed that processing ambiguous or missing observations can depend on the availability of cognitive resources. Experiment 2 suggested that the contribution of working memory capacity to the comprehensiveness of evidence retention was reduced when there were ambiguous or missing observations. Experiment 3 demonstrated that an increase in cognitive demand due to a change in the task format reduced subjects' tendency to treat ambiguous or missing observations as negative cues.
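    A small worked example (not the authors' analysis) shows how the reported tendency plays out numerically: treating ambiguous observations as negative evidence lowers a simple Delta-P contingency estimate relative to ignoring them. The observation counts below are made up for illustration.

```python
# Hedged sketch: effect of two strategies for handling ambiguous observations
# on a simple Delta-P contingency estimate. Counts are illustrative only.
def delta_p(a, b, c, d):
    """Delta-P = P(effect | cause) - P(effect | no cause), from a 2x2 table:
    a: cause & effect, b: cause & no effect, c: no cause & effect, d: neither."""
    return a / (a + b) - c / (c + d)

# Unambiguous observations plus trials where the effect was not clearly observed.
a, b, c, d = 12, 4, 5, 11
ambiguous_with_cause = 6        # cause present, effect unknown

# Strategy 1: ignore ambiguous trials entirely.
ignore = delta_p(a, b, c, d)

# Strategy 2: treat ambiguous trials as negative evidence (cause, no effect),
# the tendency the paper reports for its participants.
as_negative = delta_p(a, b + ambiguous_with_cause, c, d)

print(f"ignore ambiguous:   dP = {ignore:.2f}")       # 0.44
print(f"treat as no-effect: dP = {as_negative:.2f}")  # 0.23
```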

    Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning

    Current advances in Artificial Intelligence and machine learning in general, and deep learning in particular, have had unprecedented impact not only across research communities but also over popular media channels. However, concerns about the interpretability and accountability of AI have been raised by influential thinkers. Despite the recent impact of AI, several works have identified the need for principled knowledge representation and reasoning mechanisms integrated with deep learning-based systems to provide sound and explainable models for such systems. Neural-symbolic computing aims at integrating, as foreseen by Valiant, two of the most fundamental cognitive abilities: the ability to learn from the environment, and the ability to reason from what has been learned. Neural-symbolic computing has been an active topic of research for many years, reconciling the advantages of robust learning in neural networks with the reasoning and interpretability of symbolic representation. In this paper, we survey recent accomplishments of neural-symbolic computing as a principled methodology for integrated machine learning and reasoning. We illustrate the effectiveness of the approach by outlining the main characteristics of the methodology: principled integration of neural learning with symbolic knowledge representation and reasoning, allowing for the construction of explainable AI systems. The insights provided by neural-symbolic computing shed new light on the increasingly prominent need for interpretable and accountable AI systems.
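    As a concrete, hedged illustration of one classic neural-symbolic idea (rule compilation in the spirit of KBANN/CILP, not code from this survey), the sketch below encodes a propositional rule as the initial weights of a sigmoid unit, so that symbolic background knowledge seeds a network that gradient-based learning could then refine.

```python
# Minimal sketch of rule-to-weight compilation (KBANN/CILP flavour); the rule,
# weight magnitude and bias scheme are illustrative assumptions, not the
# survey's own implementation.
import math

def rule_unit(positive_literals, negative_literals, w=5.0):
    """Return (weights, bias) implementing 'head <- p1 & ... & ~n1 & ...'.
    The unit fires (>0.5) only when every positive literal is true (1)
    and every negative literal is false (0)."""
    weights = {lit: w for lit in positive_literals}
    weights.update({lit: -w for lit in negative_literals})
    # Threshold set halfway between the accepted case and the nearest rejected one.
    bias = -w * (len(positive_literals) - 0.5)
    return weights, bias

def activate(weights, bias, assignment):
    net = bias + sum(weights[lit] * assignment.get(lit, 0) for lit in weights)
    return 1.0 / (1.0 + math.exp(-net))

# Illustrative rule: fly <- bird & ~penguin
weights, bias = rule_unit(["bird"], ["penguin"])
print(activate(weights, bias, {"bird": 1, "penguin": 0}))  # high: rule satisfied
print(activate(weights, bias, {"bird": 1, "penguin": 1}))  # low: exception blocks it
print(activate(weights, bias, {"bird": 0, "penguin": 0}))  # low: antecedent missing
```

    Because the compiled weights are ordinary network parameters, they can subsequently be adjusted by backpropagation on data, which is one way such systems aim to combine robust learning with interpretable symbolic structure.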

    Dagstuhl Seminar Proceedings 10302: Learning paradigms in dynamic environments

    We discuss the purpose of neural-symbolic integration including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.

    Overview: The study of human behaviour is an important part of computer science, artificial intelligence (AI), neural computation, cognitive science, philosophy, psychology and other areas. Among the most prominent tools in the modelling of behaviour are computational-logic systems (classical logic, nonmonotonic logic, modal and temporal logic) and connectionist models of cognition (feedforward and recurrent networks, symmetric and deep networks, self-organising networks). Recent studies in cognitive science, artificial intelligence and evolutionary psychology have produced a number of cognitive models of reasoning, learning and language that are underpinned by computatio…

    Modelling the Developing Mind: From Structure to Change

    This paper presents a theory of cognitive change. The theory assumes that the fundamental causes of cognitive change reside in the architecture of mind. Thus, the architecture of mind as specified by the theory is described first. It is assumed that the mind is a three-level universe involving (1) a processing system that constrains processing potentials, (2) a set of specialized capacity systems that guide understanding of different reality and knowledge domains, and (3) a hypercognitive system that monitors and controls the functioning of all other systems. The paper then specifies the types of change that may occur in cognitive development (changes within the levels of mind, changes in the relations between structures across levels, changes in the efficiency of a structure) and a series of general (e.g., metarepresentation) and more specific mechanisms (e.g., bridging, interweaving, and fusion) that bring the changes about. It is argued that different types of change require different mechanisms. Finally, a general model of the nature of cognitive development is offered. The relations between the theory proposed in the paper and other theories and research in cognitive development and cognitive neuroscience are discussed throughout the paper.