40,946 research outputs found

    Linking Visual Development and Learning to Information Processing: Preattentive and Attentive Brain Dynamics

    Full text link
    National Science Foundation (SBE-0354378); Office of Naval Research (N00014-95-1-0657)

    Sequential Recommendation with Self-Attentive Multi-Adversarial Network

    Full text link
    Recently, deep learning has made significant progress on the task of sequential recommendation. Existing neural sequential recommenders typically adopt a generative approach trained with Maximum Likelihood Estimation (MLE). When context information (referred to as factors) is involved, it is difficult to analyze when and how each individual factor affects the final recommendation performance. To address this, we take a new perspective and introduce adversarial learning to sequential recommendation. In this paper, we present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation. Specifically, MFGAN has two kinds of modules: a Transformer-based generator that takes user behavior sequences as input and recommends possible next items, and multiple factor-specific discriminators that evaluate the generated sub-sequence from the perspective of each factor. To learn the parameters, we adopt the classic policy gradient method and use the reward signals of the discriminators to guide the learning of the generator. Our framework is flexible enough to incorporate multiple kinds of factor information, and is able to trace how each factor contributes to the recommendation decision over time. Extensive experiments on three real-world datasets demonstrate the superiority of our proposed model over state-of-the-art methods in terms of effectiveness and interpretability.
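    A minimal sketch (not the authors' code) of the training idea this abstract describes: a Transformer-based generator proposes the next item, several factor-specific discriminators score the sampled item, and the averaged discriminator score is used as a REINFORCE-style policy-gradient reward for the generator. All module sizes, the three-factor count, and the simplified discriminator inputs are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Transformer encoder over the behaviour sequence, softmax over the item catalogue."""
    def __init__(self, n_items, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(n_items, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, n_items)

    def forward(self, seq):                      # seq: (batch, seq_len) of item ids
        h = self.encoder(self.embed(seq))        # (batch, seq_len, d_model)
        return self.out(h[:, -1])                # logits for the next item

class FactorDiscriminator(nn.Module):
    """Scores a candidate next item from the viewpoint of one factor (e.g. price, category)."""
    def __init__(self, n_items, d_model=64):
        super().__init__()
        self.embed = nn.Embedding(n_items, d_model)
        self.score = nn.Sequential(nn.Linear(d_model, 1), nn.Sigmoid())

    def forward(self, item):                     # item: (batch,)
        return self.score(self.embed(item)).squeeze(-1)      # reward in (0, 1)

n_items, batch, seq_len = 1000, 8, 20
gen = Generator(n_items)
discs = nn.ModuleList([FactorDiscriminator(n_items) for _ in range(3)])  # 3 hypothetical factors
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)

seq = torch.randint(0, n_items, (batch, seq_len))
logits = gen(seq)
dist = torch.distributions.Categorical(logits=logits)
action = dist.sample()                           # sampled next item
with torch.no_grad():
    reward = torch.stack([d(action) for d in discs]).mean(0)  # average factor-specific reward
loss = -(reward * dist.log_prob(action)).mean()  # REINFORCE: push up high-reward items
opt.zero_grad(); loss.backward(); opt.step()
```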

    The Laminar Architecture of Visual Cortex and Image Processing Technology

    Full text link
    The mammalian neocortex is organized into layers which include circuits that form functional columns in cortical maps. A major unsolved problem concerns how bottom-up, top-down, and horizontal interactions are organized within cortical layers to generate adaptive behaviors. This article summarizes a model, called the LAMINART model, of how these interactions help visual cortex to realize: (1) the binding process whereby cortex groups distributed data into coherent object representations; (2) the attentional process whereby cortex selectively processes important events; and (3) the developmental and learning processes whereby cortex stably grows and tunes its circuits to match environmental constraints. Such Laminar Computing completes perceptual groupings that realize the property of Analog Coherence, whereby winning groupings bind together their inducing features without losing their ability to represent analog values of these features. Laminar Computing also efficiently unifies the computational requirements of preattentive filtering and grouping with those of attentional selection. It hereby shows how Adaptive Resonance Theory (ART) principles may be realized within the laminar circuits of neocortex. Applications include boundary segmentation and surface filling-in algorithms for processing Synthetic Aperture Radar images.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-95-1-0657)
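    As a rough illustration of the surface filling-in operation mentioned at the end of the abstract (a toy version, not the published LAMINART algorithm): brightness diffuses between neighbouring pixels, but diffusion is blocked wherever the boundary signal is strong. The array names, gating constant, and iteration scheme below are assumptions made for the sketch.

```python
import numpy as np

def fill_in(feature, boundary, steps=200, rate=0.2, gate=10.0):
    """feature: 2-D array of contrast signals; boundary: 2-D array in [0, 1]."""
    surface = feature.copy()
    for _ in range(steps):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            neighbour = np.roll(surface, (dy, dx), axis=(0, 1))
            wall = np.maximum(boundary, np.roll(boundary, (dy, dx), axis=(0, 1)))
            perm = 1.0 / (1.0 + gate * wall)          # permeability drops at boundaries
            surface += rate * perm * (neighbour - surface)
    return surface

# Toy example: a single bright pixel spreads within a square boundary but not across it.
feature = np.zeros((32, 32)); feature[16, 16] = 1.0
boundary = np.zeros((32, 32))
boundary[8, 8:24] = boundary[23, 8:24] = 1.0
boundary[8:24, 8] = boundary[8:24, 23] = 1.0
print(fill_in(feature, boundary).round(2)[14:18, 14:18])
```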

    From perception to action and vice versa: a new architecture showing how perception and action can modulate each other simultaneously

    Get PDF
    Presented at the 6th European Conference on Mobile Robots (ECMR), Sep 25-27, 2013, Barcelona, Spain.
    Artificial vision systems cannot process all the information they receive from the world in real time, because doing so is highly expensive and inefficient in terms of computational cost. However, inspired by biological perception systems, it is possible to develop an artificial attention model able to select only the relevant parts of the scene, as human vision does. From the Automated Planning point of view, a relevant area can be seen as an area where the objects involved in the execution of a plan are located. Thus, the planning system should guide the attention model to track relevant objects. At the same time, the perceived objects may constrain or provide new information that suggests modifying the current plan. Therefore, a plan that is being executed should be adapted or recomputed taking into account the actual information perceived from the world. In this work, we introduce an architecture that creates a symbiosis between the planning and attention modules of a robotic system, linking visual features with high-level behaviours. The architecture is based on the interaction of an oversubscription planner, which produces plans constrained by the information perceived from the vision system, and an object-based attention system, able to focus on the relevant objects of the plan being executed.
    Spanish MINECO projects TIN2008-06196, TIN2012-38079-C03-03 and TIN2012-38079-C03-02. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tec
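    A schematic sketch (not the paper's implementation) of the planner/attention symbiosis described above: the plan tells the attention system which objects are relevant, and what attention actually perceives can force a re-plan. The Planner and AttentionSystem interfaces below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    actions: list                                 # e.g. [("grasp", "cup"), ("place", "cup", "table")]
    def relevant_objects(self):
        return {arg for action in self.actions for arg in action[1:]}

class Planner:
    """Stand-in for an oversubscription planner: keeps the goals achievable with known objects."""
    def plan(self, goals, known_objects):
        return Plan([g for g in goals if set(g[1:]) <= known_objects])

class AttentionSystem:
    """Stand-in for object-based attention: only looks for objects the plan cares about."""
    def __init__(self, world):
        self.world = world                        # objects actually present in the scene
    def observe(self, relevant):
        return {obj for obj in relevant if obj in self.world}

goals = [("grasp", "cup"), ("grasp", "bottle"), ("place", "cup", "table")]
planner, attention = Planner(), AttentionSystem(world={"cup", "table"})

known = {"cup", "bottle", "table"}                # initial (possibly wrong) beliefs
plan = planner.plan(goals, known)
seen = attention.observe(plan.relevant_objects()) # attention is guided by the plan
if seen != plan.relevant_objects():               # perception contradicts the plan...
    plan = planner.plan(goals, seen)              # ...so the plan is recomputed
print(plan.actions)                               # the unreachable "bottle" goal is dropped
```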

    Attentive Tensor Product Learning

    Full text link
    This paper proposes a new architecture, Attentive Tensor Product Learning (ATPL), to represent grammatical structures in deep learning models. ATPL exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of role-unbinding vectors of words via a TPR-based deep neural network; 2) employing attention modules to compute the TPR; and 3) integration of TPR with typical deep learning architectures, including Long Short-Term Memory (LSTM) and Feedforward Neural Networks (FFNN). The novelty of our approach lies in its ability to extract the grammatical structure of a sentence by using role-unbinding vectors, which are obtained in an unsupervised manner. The ATPL approach is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.
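    A small numerical sketch of the Tensor Product Representation idea that ATPL builds on (not the ATPL model itself): each word (filler) is bound to a structural role via an outer product, the bindings are summed into one matrix, and a filler is later recovered by "unbinding" with the corresponding role vector. Per the abstract, ATPL's contribution is learning the role-unbinding vectors without supervision and computing the TPR with attention modules; the random vectors and sentence below are purely illustrative.

```python
import numpy as np

d_filler, n_roles = 8, 4
rng = np.random.default_rng(0)

fillers = {w: rng.normal(size=d_filler) for w in ["the", "cat", "sat", "down"]}
roles = np.linalg.qr(rng.normal(size=(n_roles, n_roles)))[0]   # orthonormal role vectors

# Bind: T = sum_i filler_i (outer product) role_i
sentence = ["the", "cat", "sat", "down"]
T = sum(np.outer(fillers[w], roles[i]) for i, w in enumerate(sentence))

# Unbind position 2 by multiplying with its role vector (exact because roles are orthonormal)
recovered = T @ roles[2]
print(np.allclose(recovered, fillers["sat"]))    # True
```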

    The Laminar Organization of Visual Cortex: A Unified View of Development, Learning, and Grouping

    Full text link
    Why is all sensory and cognitive neocortex organized into layered circuits? How do these layers organize circuits that form functional columns in cortical maps? How do bottom-up, top-down, and horizontal interactions within the cortical layers generate adaptive behaviors? This chapter summarizes an evolving neural model which suggests how these interactions help the visual cortex to realize: (1) the binding process whereby cortex groups distributed data into coherent object representations; (2) the attentional process whereby cortex selectively processes important events; and (3) the developmental and learning processes whereby cortex shapes its circuits to match environmental constraints. It is suggested that the mechanisms which achieve property (3) imply properties (1) and (2). New computational ideas about feedback systems suggest how neocortex develops and learns in a stable way, and why top-down attention requires converging bottom-up inputs to fully activate cortical cells, whereas perceptual groupings do not.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
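    A one-line numerical illustration (a simplification, not the model's equations) of the closing claim: if top-down attention acts multiplicatively on bottom-up input, it can modulate but cannot, by itself, fully activate a cell, whereas a driving bottom-up input can. The function and gain parameter are assumptions for the sketch.

```python
def activation(bottom_up, top_down, gain=1.0):
    """Modulatory top-down: enhances bottom-up input but produces nothing on its own."""
    return bottom_up * (1.0 + gain * top_down)

print(activation(bottom_up=1.0, top_down=0.0))   # 1.0 -> bottom-up alone activates the cell
print(activation(bottom_up=0.0, top_down=1.0))   # 0.0 -> top-down alone cannot
print(activation(bottom_up=1.0, top_down=1.0))   # 2.0 -> converging inputs are enhanced
```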