
    Unsupervised generative variational continual learning

    Continual learning aims to learn a sequence of tasks without forgetting earlier ones. Methods in this field fall mainly into three categories: replay methods, regularization-based methods, and parameter isolation methods. Recent research in continual learning generally combines two of these categories to obtain better performance. This dissertation combines regularization-based and parameter isolation methods to ensure that the parameters important to each task do not change drastically, while freeing up unimportant parameters so the network remains capable of learning new knowledge. While most of the existing literature on continual learning addresses class-incremental learning in a supervised setting, there is enormous potential for unsupervised continual learning using generative models. This dissertation proposes a combination of architectural pruning and network expansion in generative variational models toward unsupervised generative continual learning (UGCL). Evaluations on standard benchmark data sets demonstrate the superior generative ability of the proposed method.

    Master of Science (Computer Control and Automation)
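    The core mechanism described above, keep important weights frozen, free unimportant ones by pruning, and expand the network when free capacity is needed, can be sketched roughly as follows. This is a minimal illustrative sketch, not the dissertation's actual algorithm: the magnitude-based importance proxy, the 50% pruning ratio, and all variable names are assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dense layer of a generative model, stored as a weight matrix.
    W = rng.normal(size=(8, 16))

    # --- Architectural pruning: free up unimportant parameters ---
    # Use weight magnitude as a simple importance proxy and zero out the
    # smallest 50%; the zeroed slots become "free" for learning the next task,
    # while the remaining (important) weights would be kept frozen.
    importance = np.abs(W)
    threshold = np.quantile(importance, 0.5)
    free_mask = importance < threshold          # True = trainable on the new task
    W_pruned = np.where(free_mask, 0.0, W)

    # --- Network expansion: grow capacity when free slots run out ---
    # Append new output units (rows) with small random initial weights; only
    # these new units and the freed slots would be updated for the new task.
    new_units = rng.normal(scale=0.01, size=(4, W.shape[1]))
    W_expanded = np.vstack([W_pruned, new_units])

    print(W_expanded.shape)      # layer grew from (8, 16) to (12, 16)
    ```

    In a full method, the importance scores would come from the regularization term rather than raw magnitudes, which is what lets the two method families complement each other.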