    Saccadic Predictive Vision Model with a Fovea

    We propose the Error Saccade Model, a model that emulates saccades (the rapid movements of the eye) based on the prediction error of the Predictive Vision Model (PVM). The Error Saccade Model moves the model's field of view to the regions with the highest prediction error. Comparisons of the Error Saccade Model on Predictive Vision Models with and without a fovea show that a fovea-like structure in the input level of the PVM improves the Error Saccade Model's ability to pursue detailed objects in its view. We hypothesize that the improvement arises because the poorer resolution in the periphery causes higher prediction error when an object passes, triggering a saccade to the next location. Comment: 10 pages, 6 figures. Accepted at the International Conference of Neuromorphic Computing (2018).
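
    The core mechanism lends itself to a short illustration. The sketch below implements one error-driven saccade step: it locates the field-of-view window with the highest summed prediction error and returns its position. The function and variable names are hypothetical and the error measure is a plain absolute difference, not the PVM's own error signal.

        import numpy as np

        def next_fixation(prediction, observation, fov_size):
            """Pick the next fixation as the top-left corner of the fov-sized
            window with the highest summed prediction error (a sketch of the
            error-driven saccade idea, not the exact Error Saccade Model)."""
            error = np.abs(observation - prediction).sum(axis=-1)  # per-pixel error
            h, w = fov_size
            # Sum the error over every candidate window using an integral image.
            integral = np.pad(error.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
            window_err = (integral[h:, w:] - integral[:-h, w:]
                          - integral[h:, :-w] + integral[:-h, :-w])
            return np.unravel_index(window_err.argmax(), window_err.shape)

        # Toy usage with random frames standing in for the prediction and the input.
        obs = np.random.rand(64, 64, 3)
        pred = np.random.rand(64, 64, 3)
        print(next_fixation(pred, obs, fov_size=(16, 16)))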

    Learnings from a Retail Recommendation System on Billions of Interactions at bol.com

    Recommender systems are ubiquitous on the modern internet, where they help users find items they might like. We discuss the design of a large-scale recommender system handling billions of interactions on a European e-commerce platform. We present two studies on enhancing the predictive performance of this system with both algorithmic and systems-related approaches. First, we evaluate neural network-based approaches on proprietary data from our e-commerce platform and confirm recent results indicating that the benefits of these methods for predictive performance are limited, while they exhibit severe scalability bottlenecks. Next, we investigate the impact of reducing the response latency of our serving system and conduct an A/B test on the live platform with more than 19 million user sessions, which confirms that the latency reduction of the recommender system correlates with a significant increase in business-relevant metrics. We discuss the implications of our findings for real-world recommendation systems and future research on scalable session-based recommendation.
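
    As a point of reference for what a low-latency session-based recommender can look like at serving time, here is a minimal co-occurrence baseline: next-item transition counts are precomputed offline, so serving a recommendation is a single dictionary lookup. This is an illustrative sketch only, not the production system or the neural methods evaluated in the study.

        from collections import Counter, defaultdict

        def build_cooccurrence(sessions, top_k=10):
            """Precompute, for each item, the items most often clicked next
            within the same session."""
            counts = defaultdict(Counter)
            for session in sessions:
                for current_item, next_item in zip(session, session[1:]):
                    counts[current_item][next_item] += 1
            return {item: [i for i, _ in c.most_common(top_k)]
                    for item, c in counts.items()}

        def recommend(model, current_item, k=5):
            # Serving is a dictionary lookup, keeping response latency minimal.
            return model.get(current_item, [])[:k]

        # Toy usage with three short sessions.
        sessions = [["shoes", "socks", "laces"], ["shoes", "laces"], ["socks", "shoes"]]
        model = build_cooccurrence(sessions)
        print(recommend(model, "shoes"))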

    Learning to Embed Words in Context for Syntactic Tasks

    We present models for embedding words in the context of surrounding words. Such models, which we refer to as token embeddings, represent the characteristics of a word that are specific to a given context, such as word sense, syntactic category, and semantic role. We explore simple, efficient token embedding models based on standard neural network architectures. We learn token embeddings on a large amount of unannotated text and evaluate them as features for part-of-speech taggers and dependency parsers trained on much smaller amounts of annotated data. We find that predictors endowed with token embeddings consistently outperform baseline predictors across a range of context window and training set sizes. Comment: Accepted by the ACL 2017 Repl4NLP workshop.
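
    To make the notion of a token embedding concrete, here is a minimal bidirectional-LSTM encoder in PyTorch that assigns each word a vector conditioned on its surrounding words; the per-token states could then be fed as features to a tagger or parser. The class name, dimensions, and architecture are illustrative assumptions, not the specific models evaluated in the paper.

        import torch
        import torch.nn as nn

        class TokenEmbedder(nn.Module):
            """Context-sensitive token embeddings from a bidirectional LSTM."""
            def __init__(self, vocab_size, word_dim=100, hidden_dim=64):
                super().__init__()
                self.lookup = nn.Embedding(vocab_size, word_dim)
                self.encoder = nn.LSTM(word_dim, hidden_dim,
                                       batch_first=True, bidirectional=True)

            def forward(self, token_ids):                # (batch, seq_len)
                states, _ = self.encoder(self.lookup(token_ids))
                return states                            # (batch, seq_len, 2 * hidden_dim)

        # Toy usage: embed two 5-token sentences over a 1000-word vocabulary.
        embedder = TokenEmbedder(vocab_size=1000)
        token_ids = torch.randint(0, 1000, (2, 5))
        print(embedder(token_ids).shape)                 # torch.Size([2, 5, 128])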

    Neural Network Attributions: A Causal Perspective

    We propose a new attribution method for neural networks developed from first principles of causality (to the best of our knowledge, the first such method). The neural network architecture is viewed as a Structural Causal Model, and a methodology to compute the causal effect of each feature on the output is presented. With reasonable assumptions on the causal structure of the input data, we propose algorithms to efficiently compute these causal effects and to scale the approach to data with large dimensionality. We also show how the method can be applied to recurrent neural networks. We report experimental results on both simulated and real datasets showcasing the promise and usefulness of the proposed algorithm. Comment: 17 pages, 10 figures. Accepted in the Proceedings of the 36th International Conference on Machine Learning (ICML 2019). Modifications: added GitHub link to code and fixed a typo in Fig.
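
    The interventional flavour of such an attribution can be sketched in a few lines: fix one input feature at a chosen value (an intervention), average the model output over samples of the remaining features, and compare against a baseline. The sampling scheme, the independence assumption, and the function names below are illustrative, not the exact algorithms of the paper.

        import numpy as np

        def average_causal_effect(model, baseline_x, feature, values,
                                  n_samples=200, seed=0):
            """Estimate the effect of do(x[feature] = v) on the model output,
            relative to the baseline input, assuming independent features."""
            rng = np.random.default_rng(seed)
            effects = []
            for v in values:
                x = rng.normal(loc=baseline_x, scale=1.0,
                               size=(n_samples, baseline_x.size))
                x[:, feature] = v                        # the intervention
                effects.append(model(x).mean())
            return np.array(effects) - model(baseline_x[None, :]).item()

        # Toy model and usage: feature 0 has a strong linear causal effect.
        model = lambda x: 2.0 * x[:, 0] + np.sin(x[:, 1])
        baseline = np.zeros(3)
        print(average_causal_effect(model, baseline, feature=0,
                                    values=[-1.0, 0.0, 1.0]))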