
    The propositional nature of human associative learning

    The past 50 years have seen an accumulation of evidence suggesting that associative learning depends on high-level cognitive processes that give rise to propositional knowledge. Yet many learning theorists maintain a belief in a learning mechanism in which links between mental representations are formed automatically. We characterize and highlight the differences between the propositional and link approaches, and review the relevant empirical evidence. We conclude that learning is the consequence of propositional reasoning processes that cooperate with the unconscious processes involved in memory retrieval and perception. We argue that this new conceptual framework allows many of the important recent advances in associative learning research to be retained, but recast in a model that provides a firmer foundation for both immediate application and future research.

    Creativity and the Brain

    A neurocognitive approach to higher cognitive functions that bridges the gap between the psychological and neural levels of description is introduced. Relevant facts about the brain, working memory, and the representation of symbols in the brain are summarized. Putative brain processes responsible for problem solving, intuition, skill learning, and automatization are described. The role of the non-dominant brain hemisphere in solving problems requiring insight is conjectured. Two factors seem to be essential for creativity: imagination constrained by experience, and filtering that selects the most interesting solutions. Experiments with paired-word association are analyzed in detail, and evidence for stochastic resonance effects is found. Brain activity in the process of inventing novel words is proposed as the simplest way to understand creativity using experimental and computational means. Perspectives on computational models of creativity are discussed.

    A Cognitive Science Based Machine Learning Architecture

    In an attempt to illustrate the application of cognitive science principles to hard AI problems in machine learning, we propose the LIDA technology, a cognitive-science-based architecture capable of more human-like learning. A LIDA-based software agent or cognitive robot will be capable of three fundamental, continuously active, human-like learning mechanisms:
    1) perceptual learning, the learning of new objects, categories, relations, etc.;
    2) episodic learning of events: the what, where, and when;
    3) procedural learning, the learning of new actions and action sequences with which to accomplish new tasks.
    The paper argues for the use of modular components, each specializing in implementing individual facets of human and animal cognition, as a viable approach towards achieving general intelligence.
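    The modular decomposition described in the abstract can be sketched as plain classes, one per learning mechanism. This is a minimal illustrative skeleton only — the class and method names are hypothetical and the real LIDA architecture is far richer.

```python
class PerceptualMemory:
    """Perceptual learning: new objects, categories, relations."""
    def __init__(self):
        self.categories = set()

    def learn(self, percept):
        self.categories.add(percept)


class EpisodicMemory:
    """Episodic learning of events: the what, where, and when."""
    def __init__(self):
        self.episodes = []

    def learn(self, what, where, when):
        self.episodes.append((what, where, when))


class ProceduralMemory:
    """Procedural learning: action sequences that accomplish tasks."""
    def __init__(self):
        self.skills = {}

    def learn(self, task, actions):
        self.skills[task] = list(actions)


class Agent:
    """Modular agent: each component specializes in one facet of cognition,
    and all three learning mechanisms stay continuously active."""
    def __init__(self):
        self.perceptual = PerceptualMemory()
        self.episodic = EpisodicMemory()
        self.procedural = ProceduralMemory()
```

    The point of the sketch is the architectural claim, not the internals: general competence is assembled from independently specialized modules rather than one monolithic learner.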

    Neurocognitive Informatics Manifesto.

    Informatics studies all aspects of the structure of natural and artificial information systems. Theoretical and abstract approaches to information have made great advances, but human information processing is still unmatched in many areas, including information management, representation, and understanding. Neurocognitive informatics is a new, emerging field that should help to improve the matching of artificial and natural systems, and inspire better computational algorithms to solve problems that are still beyond the reach of machines. In this position paper, examples of neurocognitive inspirations and promising directions in this area are given.

    How conscious experience and working memory interact

    Active components of classical working memory are conscious, but traditional theory does not account for this fact. Global Workspace theory suggests that consciousness is needed to recruit unconscious specialized networks that carry out detailed working memory functions. The IDA model provides a fine-grained analysis of this process, specifically of two classical working-memory tasks, verbal rehearsal and the utilization of a visual image. In the process, new light is shed on the interactions between conscious and unconscious aspects of working memory.

    Neural Models of Normal and Abnormal Behavior: What Do Schizophrenia, Parkinsonism, Attention Deficit Disorder, and Depression Have in Common?

    Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333)

    Symbols are not uniquely human

    Modern semiotics is a branch of logics that formally defines symbol-based communication. In recent years, the semiotic classification of signs has been invoked to support the notion that symbols are uniquely human. Here we show that alarm-calls such as those used by African vervet monkeys (Cercopithecus aethiops) logically satisfy the semiotic definition of symbol. We also show that the acquisition of vocal symbols in vervet monkeys can be successfully simulated by a computer program based on minimal semiotic and neurobiological constraints. The simulations indicate that learning depends on the tutor-predator ratio, and that apprentice-generated auditory mistakes in vocal symbol interpretation have little effect on the learning rates of apprentices (up to 80% of mistakes are tolerated). In contrast, just 10% of apprentice-generated visual mistakes in predator identification will prevent any vocal symbol from being correctly associated with a predator call in a stable manner. Tutor unreliability was also deleterious to vocal symbol learning: a mere 5% of “lying” tutors were able to completely disrupt symbol learning, invariably leading to the acquisition of incorrect associations by apprentices. Our investigation corroborates the existence of vocal symbols in a non-human species, and indicates that symbolic competence emerges spontaneously from classical associative learning mechanisms when the conditioned stimuli are self-generated, arbitrary, and socially efficacious. We propose that more exclusive properties of human language, such as syntax, may derive from the evolution of higher-order domains for neural association, more removed from both the sensory input and the motor output, able to support the gradual complexification of grammatical categories into syntax.
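    The core mechanism in the abstract — a call becomes a symbol by co-occurring with a predator across tutoring episodes, and unreliable tutors corrupt the mapping — can be illustrated with a toy count-based associative learner. This is a hedged sketch, not the paper's neurobiological model: the predator and call names are invented, and the `lie_rate` parameter is a stand-in for the fraction of "lying" tutors.

```python
import random

def learn_symbols(trials=2000, lie_rate=0.0, seed=1):
    """Toy associative learner: count call/predator co-occurrences across
    tutoring episodes, then map each call to its most frequent predator."""
    rng = random.Random(seed)
    predators = ["leopard", "eagle", "snake"]
    calls = {p: p + "-call" for p in predators}      # hypothetical call names
    counts = {c: {p: 0 for p in predators} for c in calls.values()}
    for _ in range(trials):
        p = rng.choice(predators)                    # a predator appears
        if rng.random() < lie_rate:                  # an unreliable tutor
            wrong = [q for q in predators if q != p] # emits a wrong call
            call = calls[rng.choice(wrong)]
        else:
            call = calls[p]                          # a reliable tutor
        counts[call][p] += 1
    return {c: max(tally, key=tally.get) for c, tally in counts.items()}

# With fully reliable tutors, every call settles on its true referent.
mapping = learn_symbols(lie_rate=0.0)
```

    Raising `lie_rate` in this toy degrades the co-occurrence statistics; the paper's stronger finding, that only 5% lying tutors fully disrupt learning, depends on its specific simulation dynamics and is not reproduced by this simple counter.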

    A Morphological Associative Memory Employing A Stored Pattern Independent Kernel Image and Its Hardware Model

    An associative memory provides a convenient way to retrieve and restore patterns, which plays an important role in handling data distorted by noise. As an effective associative memory, we focus on the morphological associative memory (MAM) proposed by Ritter. The model is superior to ordinary associative memory models in terms of calculation amount, memory capacity, and perfect recall rate. In general, however, kernel design becomes difficult as the number of stored patterns increases, because the kernel uses a part of each stored pattern. In this paper, we propose a stored-pattern-independent kernel design method for the MAM, and implement the MAM employing the proposed kernel design in standard digital logic with a parallel architecture for acceleration. We confirm the validity of the proposed kernel design method by auto- and hetero-association experiments, and investigate the efficiency of the hardware acceleration. A high-speed operation (more than 150 times faster than software execution) is achieved in the custom hardware. The proposed model works as an intelligent pre-processor for Brain-Inspired Systems (Brain-IS) working in the real world.
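    For readers unfamiliar with Ritter's model, the basic (kernel-free) autoassociative MAM is easy to state: the memory is built with only subtractions and minima, and recall uses a max-plus product, which is why the hardware cost is so low. The sketch below shows the standard min-memory W_XX and its recall; it does not implement the paper's stored-pattern-independent kernel design, and the example patterns are invented.

```python
import numpy as np

def mam_train(X):
    """Min memory W_XX of the morphological associative memory:
    W[i, j] = min over stored patterns k of (X[k, i] - X[k, j]).
    Needs no multiplications, only subtraction and min."""
    return (X[:, :, None] - X[:, None, :]).min(axis=0)

def mam_recall(W, x):
    """Max-plus product: y[i] = max over j of (W[i, j] + x[j])."""
    return (W + x[None, :]).max(axis=1)

# Two stored patterns over four components. Uncorrupted inputs are
# recalled perfectly, and W_XX tolerates erosive (downward) noise.
X = np.array([[1.0, 4.0, 2.0, 7.0],
              [3.0, 0.0, 5.0, 2.0]])
W = mam_train(X)

eroded = X[0].copy()
eroded[1] -= 2.0               # erode one component of the first pattern
restored = mam_recall(W, eroded)
```

    Perfect recall of uncorrupted stored patterns follows directly from W[i, i] = 0 and W[i, j] + x[j] <= x[i]; the difficulty the paper addresses arises when kernels are introduced to handle mixed (erosive plus dilative) noise.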

    Neural Distributed Autoassociative Memories: A Survey

    Introduction. Neural network models of autoassociative, distributed memory allow storage and retrieval of many items (vectors), where the number of stored items can exceed the vector dimension (the number of neurons in the network). This opens the possibility of a sublinear-time search (in the number of stored items) for approximate nearest neighbors among vectors of high dimension. The purpose of this paper is to review models of autoassociative, distributed memory that can be naturally implemented by neural networks (mainly with local learning rules and iterative dynamics based on information locally available to neurons). Scope. The survey is focused mainly on the networks of Hopfield, Willshaw, and Potts, which have connections between pairs of neurons and operate on sparse binary vectors. We discuss not only autoassociative memory, but also the generalization properties of these networks. We also consider neural networks with higher-order connections and networks with a bipartite graph structure for non-binary data with linear constraints. Conclusions. In conclusion we discuss the relations to similarity search, advantages and drawbacks of these techniques, and topics for further research. An interesting and still not completely resolved question is whether neural autoassociative memories can search for approximate nearest neighbors faster than other index structures for similarity search, in particular for the case of very high dimensional vectors.
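    The prototypical model in this family is the Hopfield network: a local (Hebbian) learning rule builds the weight matrix, and iterative sign dynamics drive a noisy probe toward the nearest stored pattern. A minimal sketch, with synchronous updates and small hand-picked orthogonal patterns for illustration:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule (local learning);
    patterns are rows of +/-1 values."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / n

def recall(W, probe, max_steps=20):
    """Iterate the sign dynamics until a fixed point (or a step limit)."""
    s = probe.copy()
    for _ in range(max_steps):
        s_next = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_next, s):
            break
        s = s_next
    return s

# Two orthogonal 8-neuron patterns; flip one bit of the first and restore it.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[0] = -noisy[0]
restored = recall(W, noisy)
```

    Retrieval here is content-addressed — the probe itself selects the stored item — which is exactly what makes these networks candidates for approximate nearest-neighbor search; the survey's open question is whether this beats conventional index structures at high dimension.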