
    The Narrow Conception of Computational Psychology

    One particularly successful approach to modeling within cognitive science is computational psychology. Computational psychology explores psychological processes by building and testing computational models with human data. In this paper, it is argued that a specific approach to understanding computation, what is called the ‘narrow conception’, has problematically limited the kinds of models, theories, and explanations that are offered within computational psychology. After raising two problems for the narrow conception, an alternative, ‘wide approach’ to computational psychology is proposed.

    A role for the developing lexicon in phonetic category acquisition

    Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
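
    As a rough illustration of why word-level feedback helps (a minimal sketch of the intuition, not the paper's actual Bayesian model), the Python snippet below invents two overlapping one-dimensional vowel categories and a set of hypothetical word frames. Averaging tokens within a word shrinks within-category variance, so categories that are inseparable at the token level separate cleanly at the word level; every name and parameter here is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two overlapping vowel categories on one acoustic dimension (sd = 1.0,
    # means only 1 sd apart), each occurring in its own hypothetical words.
    mu = {"ae": 0.0, "eh": 1.0}
    words = {f"w{i}_ae": "ae" for i in range(10)}
    words.update({f"w{i}_eh": "eh" for i in range(10)})
    tokens = {w: rng.normal(mu[c], 1.0, size=50) for w, c in words.items()}

    # At the token level the two categories form a single blob.
    all_tokens = np.concatenate(list(tokens.values()))
    print("token-level sd:", round(float(all_tokens.std()), 2))

    # Word-level means average 50 tokens, cutting the sd by sqrt(50), so a
    # crude two-cluster split on word means recovers both categories.
    word_means = np.array([tok.mean() for tok in tokens.values()])
    high = word_means > word_means.mean()
    print("recovered means:",
          round(float(word_means[~high].mean()), 2),
          round(float(word_means[high].mean()), 2))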

    Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate

    Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia. Defense Advanced Research Projects Agency SyNaPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378).
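
    The paper's simulations use a specific ART network fitted to the 5-4 stimuli; purely as a generic sketch of the match-and-reset cycle the abstract describes, here is a minimal fuzzy ART learner (category choice, vigilance test against the learned top-down expectation, resonance-driven learning, and recruitment of a new node on mismatch). Parameter values and stimuli are illustrative, not taken from the paper.

    import numpy as np

    def complement_code(a):
        # Fuzzy ART inputs are complement coded so total activity is constant.
        return np.concatenate([a, 1.0 - a])

    def present(x, weights, rho=0.75, alpha=0.001):
        """One presentation: choose the best category, test it against
        vigilance rho, learn if it resonates, otherwise reset and try the
        next; recruit a fresh node if nothing matches (fast learning)."""
        if weights:
            choice = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights]
            for j in np.argsort(choice)[::-1]:               # best match first
                if np.minimum(x, weights[j]).sum() / x.sum() >= rho:
                    weights[j] = np.minimum(x, weights[j])   # resonate, learn
                    return j
        weights.append(x.copy())                             # mismatch: new node
        return len(weights) - 1

    weights = []
    for s in np.random.default_rng(1).random((20, 4)):
        present(complement_code(s), weights)
    print(len(weights), "categories recruited at vigilance 0.75")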

    Stochastic accumulation of feature information in perception and memory

    It is now well established that the time course of perceptual processing influences the first second or so of performance in a wide variety of cognitive tasks. Over the last 20 years, there has been a shift from modeling the speed at which a display is processed to modeling the speed at which different features of the display are perceived and formalizing how this perceptual information is used in decision making. The first of these models (Lamberts, 1995) was implemented to fit the time course of performance in a speeded perceptual categorization task and assumed a simple stochastic accumulation of feature information. Subsequently, similar approaches have been used to model performance in a range of cognitive tasks including identification, absolute identification, perceptual matching, recognition, visual search, and word processing, again assuming a simple stochastic accumulation of feature information from both the stimulus and representations held in memory. These models are typically fit to data from signal-to-respond experiments in which the effects of stimulus exposure duration on performance are examined, but response times (RTs) and RT distributions have also been modeled. In this article, we review this approach and explore the insights it has provided about the interplay between perceptual processing, memory retrieval, and decision making in a variety of tasks. In so doing, we highlight how such approaches can continue to usefully contribute to our understanding of cognition.
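
    As a toy illustration of the core idea (a prototype-matching stand-in, not Lamberts' exemplar-based model), the sketch below samples which stimulus features have been perceived by time t, with feature i perceived by then with probability 1 - exp(-q_i t), and classifies on whatever has been seen so far; accuracy climbs with exposure duration, as in signal-to-respond data. The feature rates and prototypes are invented.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical binary 4-feature prototypes and per-feature sampling rates.
    proto = {"A": np.array([1, 1, 1, 0]), "B": np.array([0, 0, 1, 1])}
    rates = np.array([2.0, 1.0, 1.5, 0.5])

    def classify(stim, t):
        """Classify using only the features perceived by time t."""
        seen = rng.random(4) < 1.0 - np.exp(-rates * t)
        if not seen.any():
            return rng.choice(list(proto))     # no information yet: guess
        scores = {c: (stim[seen] == p[seen]).mean() for c, p in proto.items()}
        return max(scores, key=scores.get)     # ties fall to "A" in this toy

    stim = proto["A"]
    for t in (0.1, 0.5, 2.0):                  # longer exposure, higher accuracy
        acc = np.mean([classify(stim, t) == "A" for _ in range(2000)])
        print(f"exposure {t:.1f}s: accuracy {acc:.2f}")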

    Implicit learning of recursive context-free grammars

    Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important, as performance was greater for tail-embedding than for centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning.
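
    To make the structural distinction concrete, here is a minimal generator for the two dependency patterns (the vocabularies are assumed, not the study's materials): centre-embedding nests dependencies, so the first one opened is the last one closed, while tail-embedding closes each dependency before the next opens.

    import random

    # Each A-class word predicts a matching B-class word (a dependency
    # between grammatical classes rather than between fixed word pairs).
    A = ["a1", "a2", "a3"]
    B = {"a1": "b1", "a2": "b2", "a3": "b3"}

    def centre_embedded(depth):
        """A A ... B B: nested long-distance dependencies."""
        if depth == 0:
            return []
        a = random.choice(A)
        return [a] + centre_embedded(depth - 1) + [B[a]]

    def tail_embedded(depth):
        """A B A B ...: each dependency is closed locally before the next."""
        if depth == 0:
            return []
        a = random.choice(A)
        return [a, B[a]] + tail_embedded(depth - 1)

    print(centre_embedded(3))   # e.g. ['a2', 'a1', 'a3', 'b3', 'b1', 'b2']
    print(tail_embedded(3))     # e.g. ['a1', 'b1', 'a3', 'b3', 'a2', 'b2']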

    Unitization during Category Learning

    Five experiments explored whether new perceptual units can be developed if they are diagnostic for a category learning task and, if so, what constraints govern this unitization process. During category learning, participants were required to attend to either a single component or a conjunction of five components in order to correctly categorize an object. In Experiments 1-4, some evidence for unitization was found in that the conjunctive task became much easier with practice, and this improvement was not found for the single-component task or for conjunctive tasks where the components could not be unitized. Influences of component order (Experiment 1), component contiguity (Experiment 2), component proximity (Experiment 3), and number of components (Experiment 4) on practice effects were found. Using a Fourier transform method for deconvolving response times (Experiment 5), prolonged practice yielded responses that were faster than predicted by analytic models that integrate evidence from independently perceived components.
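
    As a minimal sketch of the deconvolution idea behind Experiment 5 (not the authors' implementation): if a response-time density is the convolution of independent stage densities, dividing Fourier spectra recovers an unknown stage from the composite. The exponential stage distributions below are invented for the example.

    import numpy as np

    dt = 0.01
    t = np.arange(0.0, 3.0, dt)                 # time grid in seconds
    stage1 = np.exp(-t / 0.3) / 0.3             # hypothetical known stage
    stage2 = np.exp(-t / 0.2) / 0.2             # hypothetical unknown stage

    # The composite RT density is the convolution of its stages.
    rt = np.convolve(stage1, stage2) * dt

    # Dividing the composite spectrum by the known stage's spectrum undoes
    # the convolution, recovering the unknown stage.
    spec1 = np.fft.rfft(stage1, n=len(rt))
    recovered = np.fft.irfft(np.fft.rfft(rt) / spec1, n=len(rt)) / dt
    print(np.allclose(recovered[: len(t)], stage2))   # True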

    Brain Categorization: Learning, Attention, and Consciousness

    How do humans and animals learn to recognize objects and events? Two classical views are that exemplars or prototypes are learned. A hybrid view is that a mixture, called rule-plus-exceptions, is learned. None of these models, however, explains how the categories themselves are learned. A distributed ARTMAP neural network with self-supervised learning incrementally learns categories that match human learning data on a class of thirty diagnostic experiments called the 5-4 category structure. Key predictions of ART models have received behavioral, neurophysiological, and anatomical support. The ART prediction about what goes wrong during amnesic learning has also been supported: a lesion in its orienting system causes a low vigilance parameter. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-01-1-0624); National Geospatial-Intelligence Agency (NMA 201-01-1-2016); National Science Foundation (EIA-01-30851, IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-95-1-0657, N00014-01-1-0624).
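
    As a toy illustration of the vigilance point only (a simple match-threshold learner, not the ARTMAP network itself): when vigilance drops, the learner accepts poorer matches, so it recruits fewer and coarser categories, the amnesia-like overgeneralization the abstract refers to. All parameters here are illustrative.

    import numpy as np

    def categories_formed(stimuli, rho):
        """Greedily accept the nearest stored prototype if its match beats
        vigilance rho; otherwise recruit a new category."""
        protos = []
        for x in stimuli:
            sims = [1.0 - np.abs(x - p).mean() for p in protos]
            if sims and max(sims) >= rho:
                j = int(np.argmax(sims))
                protos[j] = (protos[j] + x) / 2.0   # refine the matched prototype
            else:
                protos.append(x.copy())
        return len(protos)

    stimuli = np.random.default_rng(4).random((100, 6))
    for rho in (0.9, 0.75, 0.5):   # lower vigilance: fewer, coarser categories
        print(f"vigilance {rho}: {categories_formed(stimuli, rho)} categories")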