102 research outputs found

    Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate

    Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia. Defense Advanced Research Projects Agency SyNaPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378).
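    The matching of bottom-up input with learned top-down expectations described above can be illustrated with a minimal fuzzy ART-style sketch. This is not the model used in the paper: the steps below follow the standard fuzzy ART choice, vigilance, and learning rules, and the `FuzzyART` class name and parameter values are illustrative assumptions.

```python
import numpy as np

class FuzzyART:
    """Minimal fuzzy ART sketch: complement coding, category choice, vigilance, learning."""

    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha = alpha  # choice parameter
        self.beta = beta    # learning rate (1.0 = fast learning)
        self.rho = rho      # vigilance: how closely an input must match a category
        self.w = []         # learned category weights (critical feature patterns)

    def _complement_code(self, x):
        # Complement coding represents both the presence and absence of each feature.
        return np.concatenate([x, 1.0 - x])

    def train(self, x):
        i = self._complement_code(np.asarray(x, dtype=float))
        # Bottom-up activation: choice function for every existing category.
        scores = [np.sum(np.minimum(i, w)) / (self.alpha + np.sum(w)) for w in self.w]
        for j in np.argsort(scores)[::-1]:
            w = self.w[j]
            # Top-down matching (vigilance test): does the category's expectation resonate?
            if np.sum(np.minimum(i, w)) / np.sum(i) >= self.rho:
                # Resonance: refine the attended critical feature pattern.
                self.w[j] = self.beta * np.minimum(i, w) + (1 - self.beta) * w
                return j
        # Mismatch with all existing categories: recruit a new, exemplar-like category.
        self.w.append(i.copy())
        return len(self.w) - 1

# Example: similar inputs resonate with an existing category, dissimilar ones recruit a new one.
art = FuzzyART(rho=0.8)
for stimulus in ([1, 0, 1, 0], [1, 0, 0.9, 0.1], [0, 1, 0, 1]):
    print(art.train(stimulus))   # prints 0, 0, 1
```

    With vigilance near 1 the network recruits narrow, exemplar-like categories; lower vigilance yields broader ones, which reflects the sense in which ART prototypes are learned critical feature patterns rather than fixed central tendencies.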

    Event Fisher Vectors: Robust Encoding Visual Diversity of Visual Streams


    Exploratory vs. Model-based Mobility Analysis

    In this paper we describe and analyze a visual analytics process based on interactive visualization methods, clustering, and various forms of user knowledge. We compare this analysis approach to an existing map-overlay-type model, which has been developed through a traditional modeling approach. In the traditional model the layers represent input data sets, and each layer is weighted according to its importance for the result. The aim of map overlay is to identify the best-fit areas for the purpose in question; the more generic view is that map overlay reveals the similarity of the areas. Thus an interactive process that uses clustering appears to be a viable alternative when the analysis must be performed rapidly, using whatever data are available. Our method uses a visual analytics approach and data mining, and draws on user knowledge whenever a decision must be made. The tests carried out show that our method gives acceptable results for the cross-country mobility problem and fulfills the given requirements for computational efficiency. The method is especially suited to situations in which the available data are incomplete or of low quality and must be supplemented by user knowledge. The transparency of the process also makes the method suitable in situations where results must be produced based on varying user opinions and values. The case in our research comes from the crisis management application area, in which the above-mentioned conditions often apply.
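    For comparison, the traditional weighted map-overlay baseline described above amounts to a per-cell weighted sum of normalized input layers. The sketch below illustrates that baseline only; the layer names, weights, and grid size are hypothetical assumptions, not the data used in the paper.

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine normalized raster layers into a single suitability map.

    layers  -- dict of name -> 2D array scaled to [0, 1] (1 = best for mobility)
    weights -- dict of name -> importance weight; weights are normalized to sum to 1
    """
    total = sum(weights.values())
    result = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, layer in layers.items():
        result += (weights[name] / total) * layer
    return result  # higher cell values mark better-fit (more passable) areas

# Hypothetical cross-country mobility layers: slope, soil bearing capacity, vegetation density.
rng = np.random.default_rng(0)
layers = {name: rng.random((100, 100)) for name in ("slope", "soil", "vegetation")}
suitability = weighted_overlay(layers, {"slope": 0.5, "soil": 0.3, "vegetation": 0.2})
```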

    Interpreting Encoding and Decoding Models

    Encoding and decoding models are widely used in systems, cognitive, and computational neuroscience to make sense of brain-activity data. However, the interpretation of their results requires care. Decoding models can help reveal whether particular information is present in a brain region in a format the decoder can exploit. Encoding models make comprehensive predictions about representational spaces. In the context of sensory systems, encoding models enable us to test and compare brain-computational models, and thus directly constrain computational theory. Encoding and decoding models typically include fitted linear-model components. Sometimes the weights of the fitted linear combinations are interpreted as reflecting, in an encoding model, the contribution of different sensory features to the representation or, in a decoding model, the contribution of different measured brain responses to a decoded feature. Such interpretations can be problematic when the predictor variables or their noise components are correlated and when priors (or penalties) are used to regularize the fit. Encoding and decoding models are evaluated in terms of their generalization performance. The correct interpretation depends on the level of generalization a model achieves (e.g. to new response measurements for the same stimuli, to new stimuli from the same population, or to stimuli from a different population). Significant decoding or encoding performance of a single model (at whatever level of generality) does not provide strong constraints for theory. Many models must be tested and inferentially compared for analyses to drive theoretical progress. Comment: 19 pages, 2 figures, author preprint.
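    A minimal sketch of the evaluation logic outlined above: fit a ridge-regularized linear encoding model and score it by generalization to held-out stimuli rather than by inspecting its fitted weights. The feature and response matrices here are simulated stand-ins, and the penalty value is an arbitrary assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Simulated stand-ins: 200 stimuli, 50 model features, 30 measured brain responses.
rng = np.random.default_rng(0)
features = rng.standard_normal((200, 50))             # stimulus features from a computational model
true_weights = rng.standard_normal((50, 30))
responses = features @ true_weights + rng.standard_normal((200, 30))  # noisy "brain" responses

# Generalization to new stimuli from the same population: hold out stimuli, not just repeated measurements.
X_train, X_test, y_train, y_test = train_test_split(features, responses, test_size=0.25, random_state=0)

encoder = Ridge(alpha=10.0)    # the penalty regularizes the fit; it also shrinks the weights,
encoder.fit(X_train, y_train)  # which is one reason raw weights are hard to interpret
pred = encoder.predict(X_test)

# Score each measured response channel by its predictive correlation on held-out stimuli.
corrs = [np.corrcoef(pred[:, i], y_test[:, i])[0, 1] for i in range(y_test.shape[1])]
print("median held-out prediction correlation:", np.median(corrs))
```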

    Natasha - a system for clustering user stories by personas and desires

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Sistemas de Informação. The use of agile methodologies in the software development process has become popular over recent decades, and with it the use of user stories to represent requirements from the users' point of view has also become widespread. However, since user stories are written by humans in natural language, they are prone to various errors, such as incompleteness and inconsistency, in addition to the likely existence of stories that represent the same requirement but are described in different ways. Detecting such inconsistencies, although an easy task for humans, is tedious and, for large sets of user stories, ends up demanding considerable time and effort. Thus, this project aims to develop a web tool that detects and displays similar user stories, in order to ease and speed up the software development process. To this end, the K-Means, Agglomerative Hierarchical, DBSCAN, and GMM algorithms were applied, proving useful for exploratory analysis and hypothesis testing.
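    A minimal sketch of the clustering step described above: user stories are vectorized with TF-IDF and grouped with K-Means, one of the four algorithms mentioned. The example stories, cluster count, and preprocessing are illustrative assumptions, not Natasha's actual implementation.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical user stories; in practice these come from the project's backlog.
stories = [
    "As a customer, I want to reset my password so that I can recover my account.",
    "As a user, I want to recover my account by resetting my password.",
    "As an admin, I want to export monthly sales reports.",
    "As a manager, I want to download sales reports every month.",
]

# TF-IDF turns each story into a sparse vector; similar wording yields nearby vectors.
vectors = TfidfVectorizer(stop_words="english").fit_transform(stories)

# K-Means groups the vectors; stories in the same cluster are candidate duplicates.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for story, label in zip(stories, labels):
    print(label, "-", story)
```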

    Large-Margin Learning of Compact Binary Image Encodings
