
    Continual machine learning for non-stationary data analysis

    Although deep learning models have achieved significant successes in various fields, most of them have limited capacity for learning multiple tasks sequentially. Forgetting previously learned tasks during continual learning is known as catastrophic forgetting or interference. When the input data or the goal of learning changes, a conventional machine learning model will learn and adapt to the new conditions, but it will not remember or recognise revisits to previous states. This causes performance drops and repeated re-training whenever changes in the data or goals recur periodically or irregularly. Without the ability to learn continually, an adaptive machine learning model cannot be deployed in a changing environment. This thesis investigates continual learning and the mitigation of catastrophic forgetting in neural networks. We assume the non-stationary data comprises multiple tasks that arrive in sequence and are not stored.
    We first propose a regularisation method that identifies the parameters important for previous tasks and penalises changes to them while a new task is learned. However, when the number of tasks is sufficiently large, this method either cannot preserve all the previously learned knowledge or impedes the integration of new knowledge; this is the stability-plasticity dilemma. To address it, we propose a replay method based on Generative Adversarial Networks (GANs). Unlike other replay methods, the proposed model is not bounded by the fitting capacity of the generator. However, its number of parameters increases rapidly as the number of learned tasks grows. We therefore propose a continual learning model based on Bayesian neural networks and a Mixture of Experts (MoE) framework, which integrates experts responsible for different tasks into a single large model. Previously learned knowledge is preserved, and new tasks can be learned efficiently by assigning new experts. Because the performance obtained with Monte Carlo sampling is not satisfactory, we further propose a Probabilistic Neural Network (PNN) and integrate it with a conventional neural network. The PNN produces the likelihood of a given input and can be used in a variety of fields.
    To apply these continual learning methods to real-world applications, we then propose a semi-supervised learning model for analysing healthcare datasets. The framework extracts general features from unlabelled data; we integrate the PNN into it to classify data using a smaller set of labelled samples and to continually learn new cases. The proposed model has been tested on benchmark datasets as well as a real-world clinical dataset. The results show that it outperforms state-of-the-art models in overall continual learning accuracy without requiring prior knowledge of the tasks. The experiments on the real-world clinical data were designed to identify the risk of Urinary Tract Infections (UTIs) using in-home monitoring data. The UTI risk analysis model has been deployed in a digital platform and is currently part of the ongoing Minder clinical study at the UK Dementia Research Institute (UK DRI). An earlier version of the model was deployed as part of a Class I CE-marked medical device. The UK DRI Minder platform and the deployed machine learning models, including the UTI risk analysis model developed in this research, are in the process of being accredited as a Class IIa medical device.
    Overall, this PhD research tackles theoretical and applied challenges of continual learning models in dealing with real-world data. We evaluate the proposed continual learning methods on a variety of benchmarks with comprehensive analysis and show their effectiveness. Furthermore, we apply the proposed methods in real-world applications and demonstrate their applicability to real-world settings and clinical problems.
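    The regularisation method described above penalises changes to parameters identified as important for earlier tasks. As a rough illustration of that family of penalties (not the thesis's exact formulation), a minimal PyTorch-style sketch might look as follows, where the importance weights omega, the stored parameters old_params and the strength lam are placeholders:

        import torch

        def regularised_loss(model, task_loss, old_params, omega, lam=100.0):
            """Quadratic penalty for drifting away from parameters that mattered on earlier tasks.

            old_params / omega: dicts keyed by parameter name, saved after the previous
            task; omega is an importance estimate (e.g. a diagonal Fisher approximation).
            """
            penalty = 0.0
            for name, p in model.named_parameters():
                if name in old_params:
                    penalty = penalty + (omega[name] * (p - old_params[name]) ** 2).sum()
            return task_loss + lam * penalty

    The total loss is then the new task's loss plus lam times the importance-weighted squared drift, which is what preserves old knowledge while the new task is fitted.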

    Continual Learning with Adaptive Weights (CLAW)

    Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, modelling each task separately avoids catastrophic forgetting, but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach, called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.
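    To make the adaptive-sharing idea concrete, the sketch below shows one way a layer could learn, per task, how far each unit departs from a shared backbone. This is a deliberately simplified, deterministic illustration with invented names (AdaptiveSharedLinear, task_gate, task_scale); CLAW itself learns the corresponding quantities with variational inference rather than as plain point estimates:

        import torch
        import torch.nn as nn

        class AdaptiveSharedLinear(nn.Module):
            """Shared linear layer plus a small per-task adaptation.

            A sigmoid gate per unit and per task decides how much the shared
            activation is modulated: a gate near 0 keeps the unit fully shared,
            a gate near 1 applies the full task-specific scaling.
            """

            def __init__(self, d_in, d_out, n_tasks):
                super().__init__()
                self.shared = nn.Linear(d_in, d_out)
                self.task_scale = nn.Parameter(torch.zeros(n_tasks, d_out))
                self.task_gate = nn.Parameter(torch.zeros(n_tasks, d_out))

            def forward(self, x, task_id):
                h = self.shared(x)
                gate = torch.sigmoid(self.task_gate[task_id])
                return h * (1.0 + gate * self.task_scale[task_id])

    Stacking such layers lets the data decide, unit by unit, which parts of the network are effectively shared across tasks and which are adapted to the current one.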

    Look-ahead meta-learning for continual learning

    The continual learning problem involves training models with limited capacity to perform well on a set of an unknown number of sequentially arriving tasks. This setup can often see a learning system undergo catastrophic forgetting, when learning a newly seen task causes interference with the learning progress of old tasks. While recent work has shown that meta-learning has the potential to reduce interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyperparameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. This is achieved by realising the equivalence of a multi-step MAML objective to a time-aware continual learning objective adopted in prior work. The equivalence leads to the formulation of an intuitive algorithm that we call Continual-MAML (C-MAML), which employs continual meta-learning to optimise a model to perform well across a series of changing data distributions. By additionally incorporating the modulation of per-parameter learning rates in La-MAML, our approach provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. This modulation also has connections to prior work on meta-descent, which we identify as an important direction of research for developing better optimisers for continual learning. In experiments conducted on real-world visual classification benchmarks, La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning. We also demonstrate that it is robust, and more scalable than many recent state-of-the-art approaches.
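    The per-parameter learning-rate modulation mentioned above can be illustrated with a small sketch of a single inner-loop update in which the step sizes are themselves learnable tensors. This is only an indicative fragment under our own naming (inner_update, alphas); La-MAML additionally replays a small episodic memory and meta-updates both the weights and the learning rates in an outer loop:

        import torch

        def inner_update(params, alphas, grads):
            """One inner-loop SGD step with a learnable learning rate per parameter.

            params, alphas and grads are matching lists of tensors; the alphas are
            meta-learned, and clipping them at zero lets the meta-learner freeze a
            parameter entirely when updating it would cause interference.
            """
            return [p - torch.relu(a) * g for p, a, g in zip(params, alphas, grads)]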

    Intent prediction of vulnerable road users for trusted autonomous vehicles

    This study investigated how future autonomous vehicles could earn greater trust from the vulnerable road users (such as pedestrians and cyclists) they would interact with in urban traffic environments. It focused on understanding the behaviours of such road users at a deeper level by predicting their future intentions based solely on vehicle-based sensors and AI techniques. The findings showed that the personal and body-language attributes of vulnerable road users, in addition to their past motion trajectories and physical attributes of the environment, led to more accurate predictions of their intended actions.

    Optimisation for efficient deep learning

    Over the past ten years there have been huge advances in the performance of deep neural networks on many supervised learning tasks. Over this period these models have redefined the state of the art numerous times on many classic machine vision and natural language processing benchmarks. Deep neural networks have also found their way into many real-world applications, including chatbots, art generation, voice-activated virtual assistants, surveillance, and medical diagnosis systems. Much of the improved performance of these models can be attributed to an increase in scale, which in turn has raised computation and energy costs. In this thesis we detail approaches to reducing the cost of deploying deep neural networks in various settings. We first focus on training efficiency, and to that end we present two optimisation techniques that produce high-accuracy models without extensive tuning. These optimisers have only a single fixed maximal step-size hyperparameter to cross-validate, and we demonstrate that they outperform other comparable methods in a wide range of settings. They do not require the onerous process of finding a good learning rate schedule, which often requires training many versions of the same network, and hence they reduce the computation needed. The first of these optimisers is a novel bundle method designed for the interpolation setting. The second demonstrates the effectiveness of a Polyak-like step size combined with an online estimate of the optimal loss value in the non-interpolating setting. Next, we turn our attention to training efficient binary networks with both binary parameters and activations. With the right implementation, fully binary networks are highly efficient at inference time, as they can replace the majority of operations with cheaper bit-wise alternatives. This makes them well suited to lightweight or embedded applications. Due to the discrete nature of these models, conventional training approaches are not viable. We present a simple and effective alternative to the existing optimisation techniques for these models.
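    The Polyak-like step size mentioned in the abstract scales each update by how far the current loss sits above an estimate of the optimal loss. A minimal sketch of that rule, under our own naming (polyak_step_size, f_star_estimate) and with the cap standing in for the single maximal step-size hyperparameter, is:

        import torch

        def polyak_step_size(loss_value, grads, f_star_estimate, max_lr=1.0):
            """Polyak-style step size: (f(x) - f*) / ||grad||^2, capped at max_lr.

            f_star_estimate is an (online) estimate of the optimal loss value; in the
            interpolation setting it can simply be set to zero. Illustrative only,
            not the thesis's exact update rule.
            """
            grad_sq = sum((g ** 2).sum() for g in grads)
            step = (loss_value - f_star_estimate) / (grad_sq + 1e-12)
            return float(torch.clamp(step, 0.0, max_lr))

    The resulting step shrinks automatically as the loss approaches its estimated optimum, which is why no hand-tuned learning rate schedule is needed.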