
    Scalable Recollections for Continual Lifelong Learning

    Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A successful continual lifelong learning system must have three key capabilities: it must learn and adapt over time, it must not forget what it has learned, and it must be efficient in both training time and memory. Recent techniques have focused primarily on the first two capabilities, while questions of efficiency remain largely unexplored. In this paper, we consider the problem of efficient and effective storage of experiences over very large time-frames. In particular, we consider the case where typical experiences are O(n) bits and memories are limited to O(k) bits for k << n. We present a novel scalable architecture and training algorithm for this challenging setting and provide an extensive evaluation of its performance. Our results show that we can achieve considerable gains on top of state-of-the-art methods such as GEM.
    Comment: AAAI 201
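
    The memory-budget constraint described above (raw experiences of O(n) bits stored in only O(k) bits, k << n) can be illustrated with a minimal sketch. The random-projection codec, the dimensions, and all names below are assumptions made for illustration only; they are not the paper's proposed architecture or training algorithm.

        # Hypothetical sketch: compressing O(n)-bit experiences into an O(k)-bit
        # memory for later replay. The fixed random-projection codec is an
        # illustrative stand-in for a learned encoder/decoder.
        import numpy as np

        rng = np.random.default_rng(0)

        n_features = 1024          # "n": size of a raw experience
        k_dims = 32                # "k": size of the stored code, k << n

        # Fixed random projection used as a cheap encoder/decoder pair.
        proj = rng.standard_normal((n_features, k_dims)) / np.sqrt(k_dims)

        def encode(x):
            """Compress a raw experience to a k-dimensional, 8-bit quantized code."""
            code = x @ proj
            scale = np.abs(code).max() + 1e-8
            return np.round(code / scale * 127).astype(np.int8), scale

        def decode(code, scale):
            """Approximately reconstruct the experience for replay."""
            return (code.astype(np.float32) / 127 * scale) @ proj.T

        # Store a stream of experiences under the O(k) budget, then replay them.
        memory = []
        for _ in range(100):
            experience = rng.standard_normal(n_features)
            memory.append(encode(experience))

        replayed = [decode(c, s) for c, s in memory]
        print("stored ints per experience:", memory[0][0].size, "vs raw floats:", n_features)

    The replayed reconstructions are lossy, which is the trade-off any O(k) memory must accept; the paper's contribution concerns how to make such compressed recollections useful for continual learning.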

    Towards minimizing the energy of slack variables for binary classification

    This paper presents a binary classification algorithm based on the minimization of the energy of the slack variables, called the Mean Squared Slack (MSS). A novel kernel extension is proposed that withholds only a subset of the input patterns misclassified during training. The latter leads to a time- and memory-efficient system that converges in a few iterations. Two datasets are used for performance evaluation, namely the Adult and the Vertebral Column datasets. Experimental results demonstrate the effectiveness of the proposed algorithm with respect to computation time and scalability. Accuracy is also high: 84.951% on the Adult dataset and 91.935% on the Vertebral Column dataset, outperforming state-of-the-art methods. © 2012 EURASIP
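
    The MSS objective can be sketched as follows. Assuming slack variables of the usual hinge form xi_i = max(0, 1 - y_i f(x_i)), a linear model, synthetic data, and plain gradient descent (all assumptions for illustration; the paper's kernel extension and pattern-withholding scheme are not reproduced here), a minimal version looks like this:

        # Hypothetical sketch: minimizing the mean squared slack (MSS) of a
        # linear binary classifier by gradient descent.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic two-class data with labels in {-1, +1}.
        X = np.vstack([rng.normal(+1.0, 1.0, (100, 2)),
                       rng.normal(-1.0, 1.0, (100, 2))])
        y = np.hstack([np.ones(100), -np.ones(100)])

        w = np.zeros(2)
        b = 0.0
        lr = 0.1

        for _ in range(200):
            margins = y * (X @ w + b)
            slack = np.maximum(0.0, 1.0 - margins)   # slack variables xi_i
            mss = np.mean(slack ** 2)                # "energy" of the slacks
            # Gradient of MSS w.r.t. (w, b); only margin-violating points
            # (slack > 0) contribute, loosely mirroring the idea of focusing
            # on patterns misclassified during training.
            grad_w = (-2.0 / len(y)) * ((slack * y) @ X)
            grad_b = (-2.0 / len(y)) * np.sum(slack * y)
            w -= lr * grad_w
            b -= lr * grad_b

        accuracy = np.mean(np.sign(X @ w + b) == y)
        print(f"final MSS: {mss:.4f}, training accuracy: {accuracy:.3f}")

    Because points with zero slack drop out of the gradient entirely, each iteration touches only the currently violating patterns, which is one way such a method can remain fast and memory-efficient.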