139,850 research outputs found

    Adaptive Regularization for Class-Incremental Learning

    Full text link
    Class-Incremental Learning updates a deep classifier with new categories while maintaining accuracy on previously observed classes. Regularizing the neural network weights is a common way to prevent forgetting previously learned classes while learning novel ones. However, existing regularizers use a constant magnitude throughout the learning sessions, which may not reflect the varying difficulty of the tasks encountered during incremental learning. This study investigates the necessity of adaptive regularization in Class-Incremental Learning, which dynamically adjusts the regularization strength according to the complexity of the task at hand. We propose a Bayesian Optimization-based approach to automatically determine the optimal regularization magnitude for each learning task. Our experiments on two datasets with two regularizers demonstrate the importance of adaptive regularization for accurate and less forgetful visual incremental learning.
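
    As a rough illustration of the idea, the sketch below uses Bayesian optimization (via scikit-optimize's gp_minimize, an assumed stand-in since the abstract does not name a library) to pick a per-task regularization magnitude; train_task, validate, and the lambda search range are hypothetical placeholders.

```python
# Hypothetical sketch: choose the regularization magnitude for the current
# incremental task by Bayesian optimization over validation accuracy.
from skopt import gp_minimize          # assumed BO library
from skopt.space import Real

def tune_reg_strength(train_task, validate, n_calls=15):
    """Return the penalty weight that maximizes validation accuracy.

    `train_task(reg_strength=...)` and `validate(model)` are placeholder
    callbacks for training on the new task and scoring held-out data.
    """
    def objective(params):
        lam = params[0]
        model = train_task(reg_strength=lam)  # e.g., an EWC/LwF-style penalty scaled by lam
        return -validate(model)               # gp_minimize minimizes, so negate accuracy

    result = gp_minimize(
        objective,
        dimensions=[Real(1e-4, 1e4, prior="log-uniform")],  # assumed search range
        n_calls=n_calls,
        random_state=0,
    )
    return result.x[0]
```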

    Experimental study on population-based incremental learning algorithms for dynamic optimization problems

    Get PDF
    Copyright @ Springer-Verlag 2005. Evolutionary algorithms have been widely used for stationary optimization problems. However, the environments of real-world problems are often dynamic, which seriously challenges traditional evolutionary algorithms. In this paper, the application of population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, to dynamic problems is investigated. Inspired by the complementarity mechanism in nature, a Dual PBIL is proposed, which operates on two probability vectors that are dual to each other with respect to the central point in the genotype space. A diversity-maintaining technique that combines the central probability vector into PBIL is also proposed to improve PBIL's adaptability in dynamic environments. A new dynamic problem generator that can create the required dynamics from any binary-encoded stationary problem is also formalized. Using this generator, a series of dynamic problems were systematically constructed from several benchmark stationary problems, and an experimental study was carried out to compare the performance of several PBIL algorithms and two variants of the standard genetic algorithm. Based on the experimental results, we analyzed the weaknesses and strengths of the studied PBIL algorithms and identified several potential improvements to PBIL for dynamic optimization problems. This work was supported by UK EPSRC under Grant GR/S79718/01.
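
    A minimal sketch of the dual-vector idea follows, assuming the standard PBIL update rule; the population size, learning rate, and the exact competition scheme between the two vectors are illustrative guesses, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def dual_pbil_step(p, fitness, pop_size=64, lr=0.05):
    """One generation of PBIL with a dual probability vector (1 - p)."""
    dual = 1.0 - p  # mirror of p through the centre 0.5 of the genotype space
    best = {}
    for name, vec in (("p", p), ("dual", dual)):
        pop = (rng.random((pop_size, p.size)) < vec).astype(int)
        fits = [fitness(x) for x in pop]
        best[name] = (max(fits), pop[int(np.argmax(fits))])
    # The vector that produced the better sample learns toward it; its
    # partner is then re-derived so the pair stays exactly dual.
    if best["p"][0] >= best["dual"][0]:
        p = (1 - lr) * p + lr * best["p"][1]
    else:
        p = 1.0 - ((1 - lr) * dual + lr * best["dual"][1])
    return p

# Toy run on OneMax (maximize the number of ones).
p = np.full(20, 0.5)
for _ in range(100):
    p = dual_pbil_step(p, fitness=lambda x: int(x.sum()))
print(p.round(2))
```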

    Incremental eigenpair computation for graph Laplacian matrices: theory and applications

    Get PDF
    The smallest eigenvalues and the associated eigenvectors (i.e., eigenpairs) of a graph Laplacian matrix have been widely used for spectral clustering and community detection. However, in real-life applications, the number of clusters or communities (say, K) is generally unknown a priori. Consequently, the majority of existing methods either choose K heuristically or repeat the clustering method for different choices of K and accept the best result. The first option often yields suboptimal results, while the second is computationally expensive. In this work, we propose an incremental method for constructing the eigenspectrum of the graph Laplacian matrix. The method leverages the eigenstructure of the graph Laplacian to obtain the Kth smallest eigenpair of the Laplacian matrix given a collection of all previously computed eigenpairs.
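
    One generic way to realize such an incremental scheme is deflation: push the already-found eigenpairs up the spectrum so the next smallest one surfaces. The sketch below assumes that mechanism (the paper's actual update may differ) and uses SciPy's eigsh on a matrix-free operator.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def next_smallest_eigenpair(L, prev_vecs, shift):
    """Next smallest eigenpair of a symmetric PSD matrix L, given the
    eigenvectors already found, via deflation: L + shift * V V^T pushes
    known directions above the rest of the spectrum, so `shift` must
    exceed the largest eigenvalue of L (e.g., 2 * max degree for a
    graph Laplacian)."""
    n = L.shape[0]
    V = np.column_stack(prev_vecs) if prev_vecs else np.zeros((n, 0))

    def matvec(x):
        return L @ x + shift * (V @ (V.T @ x))

    op = LinearOperator((n, n), matvec=matvec, dtype=float)
    vals, vecs = eigsh(op, k=1, which="SA")  # smallest algebraic eigenvalue
    return vals[0], vecs[:, 0]

# Toy usage: the two smallest eigenpairs of a 10-node path graph's Laplacian.
A = np.diag(np.ones(9), 1)
A = A + A.T
L = np.diag(A.sum(axis=1)) - A
lam1, v1 = next_smallest_eigenpair(L, [], shift=4.0)
lam2, v2 = next_smallest_eigenpair(L, [v1], shift=4.0)
```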

    Efficient training algorithms for HMMs using incremental estimation

    Get PDF
    Typically, parameter estimation for a hidden Markov model (HMM) is performed using an expectation-maximization (EM) algorithm with the maximum-likelihood (ML) criterion. The EM algorithm is an iterative scheme that is well-defined and numerically stable, but convergence may require a large number of iterations. For speech recognition systems utilizing large amounts of training material, this results in long training times. This paper presents an incremental estimation approach to speed up the training of HMMs without any loss of recognition performance. The algorithm selects a subset of data from the training set, updates the model parameters based on the subset, and then iterates the process until convergence of the parameters. The advantage of this approach is a substantial increase in the number of iterations of the EM algorithm per training token, which leads to faster training. In order to achieve reliable estimation from a small fraction of the complete data set at each iteration, two training criteria are studied: ML and maximum a posteriori (MAP) estimation. Experimental results show that the training of the incremental algorithms is substantially faster than the conventional (batch) method and suffers no loss of recognition performance. Furthermore, the incremental MAP-based training algorithm improves performance over the batch version.
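
    The sketch below illustrates the subset-update loop under the ML criterion, using hmmlearn's GaussianHMM as an assumed stand-in; the warm-start trick (init_params = ""), the batching scheme, and the fixed sweep count are simplifications, and the paper's MAP variant (which adds a prior) is omitted.

```python
# Rough sketch of subset-based incremental EM (ML criterion) with hmmlearn.
# `sequences` is a hypothetical list of (n_samples, n_features) arrays.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def incremental_em(sequences, n_states=4, subset_size=8, sweeps=5):
    model = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=1)
    first = sequences[:subset_size]
    model.fit(np.vstack(first), lengths=[len(s) for s in first])  # initialize parameters
    model.init_params = ""  # warm start: later fits reuse the current parameters
    for _ in range(sweeps):  # the paper iterates to convergence; a fixed count here
        for start in range(0, len(sequences), subset_size):
            batch = sequences[start:start + subset_size]
            # One EM iteration on the subset, then move on, so parameters are
            # updated far more often per pass through the data than in batch EM.
            model.fit(np.vstack(batch), lengths=[len(s) for s in batch])
    return model
```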

    Scalable Recollections for Continual Lifelong Learning

    Full text link
    Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is continual lifelong learning, where the model must learn online over a continuous stream of non-stationary data. A successful continual lifelong learning system must have three key capabilities: it must learn and adapt over time, it must not forget what it has learned, and it must be efficient in both training time and memory. Recent techniques have focused primarily on the first two capabilities, while questions of efficiency remain largely unexplored. In this paper, we consider the problem of efficient and effective storage of experiences over very large time-frames. In particular, we consider the case where typical experiences are O(n) bits and memories are limited to O(k) bits for k << n. We present a novel scalable architecture and training algorithm in this challenging domain and provide an extensive evaluation of its performance. Our results show that we can achieve considerable gains on top of state-of-the-art methods such as GEM.
    Comment: AAAI 2019
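
    As a toy sketch of the storage idea only (the paper's actual architecture is not specified in this abstract), the code below compresses each n-dimensional experience to a k-bit binary code with a small autoencoder, so stored memories cost O(k) rather than O(n) bits; the layer sizes and sign-based binarization are illustrative assumptions.

```python
# Toy sketch: store k-bit codes instead of n-bit raw experiences.
import torch
import torch.nn as nn

class CompressedBuffer(nn.Module):
    def __init__(self, n_dim=784, k_bits=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_dim, 256), nn.ReLU(),
                                     nn.Linear(256, k_bits))
        self.decoder = nn.Sequential(nn.Linear(k_bits, 256), nn.ReLU(),
                                     nn.Linear(256, n_dim))
        self.codes = []  # each memory costs O(k) bits instead of O(n)

    def store(self, x):
        with torch.no_grad():
            self.codes.append(torch.sign(self.encoder(x)))  # binarize to +/-1 codes

    def recollect(self, i):
        with torch.no_grad():
            return self.decoder(self.codes[i])  # approximate experience for replay
```

    Replaying such reconstructions, rather than raw samples, is one way a fixed memory budget could stretch over a much longer stream; the encoder and decoder would need to be trained alongside the main model.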

    Facilitating Comparative Group Work in Adult Education

    Get PDF
    The purpose of this chapter is to describe and reflect on scholarly-based practices that can help facilitate comparative group work within the international and transnational context of adult education. The first section situates comparative group work within the larger context of comparative adult education, followed by a focus on how to facilitate a group of diverse learners with different societal and cultural experiences. The chapter emphasizes an outcome-based approach, describing how to set up incremental learning outcomes that enable comparative group work to succeed; a team-based approach, elaborating on coaching strategies for facilitating comparative group work; and a strength-based approach, describing adult learner-centered strategies for engagement, empowerment, mentoring, collaboration, fun, and accountability when facilitating comparative group work.