
    Visual Learning Beyond Human Curated Datasets

    The success of deep neural networks in a variety of computer vision tasks heavily relies on large-scale datasets. However, it is expensive to manually acquire labels for large datasets. Given the cost of human annotation and the scarcity of data, the challenge is to learn efficiently from insufficiently labeled data. In this dissertation, we propose several approaches to data-efficient learning in the contexts of few-shot learning, long-tailed visual recognition, and unsupervised and semi-supervised learning. In the first part, we propose a novel paradigm of Task-Agnostic Meta-Learning (TAML) algorithms to improve few-shot learning. In the second part, we analyze the long-tailed problem from a domain adaptation perspective and propose to augment classic class-balanced learning for long tails by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach. Following this, we propose a lazy approach based on an intuitive teacher-student scheme that enables gradient-based meta-learning algorithms to explore long horizons. Finally, in the third part, we propose a novel face detector adaptation approach that is applicable whenever the target domain supplies many representative images, regardless of whether they are labeled. Experiments on several benchmark datasets verify the efficacy of the proposed methods under all settings.
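    As a rough illustration of the task-agnostic idea, the sketch below adds an entropy term to a MAML-style meta-training loop: the initial, pre-adaptation predictions are pushed toward maximum entropy so the shared initialization does not favor any particular task. The toy model, task sampler, and all hyperparameters are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of an entropy-based task-agnostic regularizer on a
# MAML-style inner/outer loop (assumed reading of the TAML idea).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_way, inner_lr, ent_weight = 5, 0.4, 0.1
w0 = torch.zeros(n_way, 16, requires_grad=True)   # shared initialization (meta-parameters)
b0 = torch.zeros(n_way, requires_grad=True)
meta_opt = torch.optim.Adam([w0, b0], lr=1e-3)

def sample_task():
    """Toy 5-way episode: one Gaussian cluster per class (stand-in for real data)."""
    means = torch.randn(n_way, 16) * 2
    xs = means.repeat_interleave(5, 0) + torch.randn(n_way * 5, 16)
    ys = torch.arange(n_way).repeat_interleave(5)
    perm = torch.randperm(xs.size(0))
    return xs[perm], ys[perm]

for step in range(100):
    xs, ys = sample_task()
    sup, qry = slice(0, 15), slice(15, None)
    # Task-agnostic term: maximize the entropy of the *initial* predictions so
    # the shared initialization is not biased toward any particular task.
    p0 = F.softmax(xs[sup] @ w0.t() + b0, dim=1)
    entropy = -(p0 * p0.clamp_min(1e-8).log()).sum(dim=1).mean()
    # One MAML-style inner adaptation step on the support set (second order).
    inner_loss = F.cross_entropy(xs[sup] @ w0.t() + b0, ys[sup])
    gw, gb = torch.autograd.grad(inner_loss, (w0, b0), create_graph=True)
    w1, b1 = w0 - inner_lr * gw, b0 - inner_lr * gb
    # Outer objective: query loss after adaptation, minus the entropy bonus.
    outer_loss = F.cross_entropy(xs[qry] @ w1.t() + b1, ys[qry]) - ent_weight * entropy
    meta_opt.zero_grad()
    outer_loss.backward()
    meta_opt.step()
```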

    A Survey on Negative Transfer

    Transfer learning (TL) tries to utilize data or knowledge from one or more source domains to facilitate learning in a target domain. It is particularly useful when the target domain has few or no labeled data, due to annotation expense, privacy concerns, etc. Unfortunately, the effectiveness of TL is not always guaranteed. Negative transfer (NT), i.e., source domain data/knowledge causing reduced learning performance in the target domain, has been a long-standing and challenging problem in TL. Various approaches to handle NT have been proposed in the literature. However, the field lacks a systematic survey on the formalization of NT, its factors, and the algorithms that handle it. This paper proposes to fill this gap. First, the definition of negative transfer is considered and a taxonomy of its factors is discussed. Then, nearly fifty representative approaches for handling NT are categorized and reviewed from four perspectives: secure transfer, domain similarity estimation, distant transfer, and negative transfer mitigation. NT in related fields, e.g., multi-task learning, lifelong learning, and adversarial attacks, is also discussed.
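    One of the four perspectives the survey names, domain similarity estimation, can be made concrete with a small sketch: estimating the squared Maximum Mean Discrepancy (MMD) between source and target features as a cheap warning sign of potential negative transfer. The RBF kernel, median bandwidth heuristic, and toy data below are illustrative assumptions, not taken from the survey itself.

```python
# Minimal sketch of domain similarity estimation via squared MMD: a large
# value between source and target features suggests transfer may be risky.
import numpy as np

def mmd2_rbf(xs, xt, gamma=None):
    """Biased estimate of squared MMD with an RBF kernel k(a,b)=exp(-gamma*||a-b||^2)."""
    def sqdists(a, b):
        return ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    if gamma is None:  # median heuristic for the kernel bandwidth
        med = np.median(sqdists(np.vstack([xs, xt]), np.vstack([xs, xt])))
        gamma = 1.0 / max(med, 1e-12)
    k = lambda a, b: np.exp(-gamma * sqdists(a, b))
    return k(xs, xs).mean() + k(xt, xt).mean() - 2 * k(xs, xt).mean()

# Toy usage: a nearby domain gives a small MMD, a shifted domain a larger one.
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))
near   = rng.normal(0.2, 1.0, size=(200, 8))
far    = rng.normal(3.0, 1.0, size=(200, 8))
print(f"MMD^2 near: {mmd2_rbf(source, near):.4f}  far: {mmd2_rbf(source, far):.4f}")
```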

    Deep knowledge transfer for generalization across tasks and domains under data scarcity

    Over the last decade, deep learning approaches have achieved tremendous performance in a wide variety of fields, e.g., computer vision and natural language understanding, and across several sectors such as healthcare, industrial manufacturing, and driverless mobility. Most deep learning successes were accomplished in learning scenarios fulfilling the two following requirements. First, large amounts of data are available for training the deep learning model, and there are no access restrictions to the data. Second, the data used for training and testing are independent and identically distributed (i.i.d.). However, many real-world applications violate at least one of these requirements, which results in challenging learning problems. The present thesis comprises four contributions that address four such learning problems. In each contribution, we propose a novel method and empirically demonstrate its effectiveness for the corresponding problem setting. The first part addresses the underexplored intersection of the few-shot learning and one-class classification problems. In this learning scenario, the model has to learn a new task using only a few examples from only the majority class, without overfitting to the few examples or to the majority class. This scenario arises in real-world anomaly detection applications where data is scarce. We propose an episode sampling technique to adapt meta-learning algorithms designed for class-balanced few-shot classification to the addressed few-shot one-class classification problem. This is done by optimizing for a model initialization tailored to the addressed scenario. In addition, we provide theoretical and empirical analyses to investigate the need for second-order derivatives to learn such parameter initializations. Our experiments on 8 image and time-series datasets, including a real-world dataset of industrial sensor readings, demonstrate the effectiveness of our method. The second part tackles the intersection of the continual learning and anomaly detection problems, which, to the best of our knowledge, we are the first to explore. In this learning scenario, the model is exposed to a stream of anomaly detection tasks, i.e., only examples from the normal class are available, which it has to learn sequentially. Such problem settings are encountered in anomaly detection applications where the data distribution continuously changes. We propose a meta-learning approach that learns parameter-specific initializations and learning rates suitable for continual anomaly detection. Our empirical evaluations show that a model trained with our algorithm is able to learn up to 100 anomaly detection tasks sequentially with minimal catastrophic forgetting and minimal overfitting to the majority class. In the third part, we address the domain generalization problem, in which a model trained on several source domains is expected to generalize well to data from a previously unseen target domain, without any modification or exposure to its data. This challenging learning scenario is present in applications involving domain shift, e.g., different clinical centers using different MRI scanners or data acquisition protocols. We assume that learning to extract a richer set of features improves transfer to a wider set of unknown domains. Motivated by this, we propose an algorithm that identifies the already learned features and corrupts them, hence enforcing new feature discovery.
We leverage methods from the explainable machine learning literature to identify the features, and apply the targeted corruption on multiple representation levels, including the input data and high-level embeddings. Our extensive empirical evaluation shows that our approach outperforms 18 domain generalization algorithms on multiple benchmark datasets. The last part of the thesis addresses the intersection of domain generalization and data-free learning methods, which, to the best of our knowledge, we are the first to explore. Here, we address the learning scenario where a model robust to domain shift is needed and only models trained on the same task but different domains are available instead of the original datasets. This scenario is relevant for any domain generalization application where access to the data of the source domains is restricted, e.g., due to data privacy concerns or intellectual property infringement. We develop an approach that extracts and fuses domain-specific knowledge from the available teacher models into a student model robust to domain shift, by generating synthetic cross-domain data. Our empirical evaluation demonstrates the effectiveness of our method, which outperforms ensemble and data-free knowledge distillation baselines. Most importantly, the proposed approach substantially reduces the gap between the best data-free baseline and the upper-bound baseline that uses the original private data.
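    The episode sampling idea from the first part can be sketched concretely: meta-training episodes whose support set contains only the normal class, while the query set remains class-balanced, so the learned initialization suits few-shot one-class adaptation. The data interface and episode sizes below are illustrative assumptions, not the thesis code.

```python
# Minimal sketch of one-class episode sampling: support = normal class only,
# query = balanced normal/anomaly mix so the outer loss still sees anomalies.
import random

def sample_one_class_episode(data_by_class, normal_class, k_support=5, k_query=10):
    """data_by_class: dict mapping class label -> list of examples."""
    normal = data_by_class[normal_class]
    anomalies = [x for c, xs in data_by_class.items() if c != normal_class for x in xs]
    support = random.sample(normal, k_support)              # majority class only
    rest = [x for x in normal if x not in support]          # keep sets disjoint
    query = random.sample(rest, k_query // 2) + random.sample(anomalies, k_query // 2)
    labels = [0] * (k_query // 2) + [1] * (k_query // 2)    # 0 = normal, 1 = anomaly
    return support, list(zip(query, labels))

# Toy usage with integers standing in for examples.
data = {c: list(range(c * 100, c * 100 + 50)) for c in range(4)}
support, query = sample_one_class_episode(data, normal_class=0)
print(len(support), "normal-only support;", len(query), "balanced labeled queries")
```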

    Incremental Learning Through Unsupervised Adaptation in Video Face Recognition

    Programa Oficial de Doutoramento en Investigación en Tecnoloxías da Información. 524V01. [Abstract] In the last decade, deep learning has brought an unprecedented leap forward for general computer vision classification problems. One of the keys to this success is the availability of extensive, richly annotated datasets to use as training samples. In some sense, a deep learning network summarises this enormous amount of data into handy vector representations. For this reason, when the differences between the training datasets and the data acquired during operation (due to factors such as the acquisition context) are highly marked, end-to-end deep learning methods are susceptible to performance degradation.
While the immediate solution to mitigate these problems is to resort to additional data collection, with its corresponding annotation procedure, this solution is far from optimal. The immeasurable possible variations of the visual world can turn the collection and annotation of data into an endless task, even more so for specific applications in which this additional action is difficult or simply impossible to perform due to, among other reasons, cost or privacy issues. This Thesis proposes to tackle all these problems from the adaptation point of view. Thus, the central hypothesis assumes that it is possible to use operational data with almost no supervision to improve the performance we would achieve with general-purpose recognition systems. To do so, and as a proof of concept, the field of study of this Thesis is restricted to face recognition, a paradigmatic application in which the context of acquisition can be especially relevant. This work begins by examining the intrinsic differences between some of the face recognition contexts and how they directly affect performance. To do so, we compare different datasets, and their contexts, against each other using some of the most advanced feature representations available, to determine the actual need for adaptation. From this point, we move on to present the novel method, representing the central contribution of the Thesis: the Dynamic Ensembles of SVM (De-SVM). This method implements the adaptation capabilities by performing unsupervised incremental learning, using its own predictions as pseudo-labels for the update decision (the self-training strategy). Experiments are performed under video surveillance conditions, a paradigmatic example of a very specific context in which labelling processes are particularly complicated. The core ideas of De-SVM are tested in different face recognition sub-problems: face verification and the more complex closed- and open-set face recognition. Experiments have shown promising behaviour in terms of both unsupervised knowledge acquisition and robustness against impostors, surpassing the performance achieved by state-of-the-art non-adaptive methods.
Funding and Technical Resources. For the successful development of this Thesis, it was necessary to rely on a series of indispensable means, included in the following list:
• Working material and human and financial support, primarily from the CITIC and the Computer Architecture Group of the University of A Coruña and the CiTIUS of the University of Santiago de Compostela, along with a PhD grant funded by the Xunta de Galicia and the European Social Fund.
• Access to bibliographical material through the library of the University of A Coruña.
• Additional funding through the following research projects: state funding by the Ministry of Economy and Competitiveness of Spain (project TIN2017-90135-R MINECO, FEDER).
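    The self-training loop that De-SVM builds on can be sketched as follows, with a single incrementally trained linear SVM standing in for the dynamic ensemble: the classifier labels incoming operational samples with its own confident predictions and updates itself on them. The confidence rule and the synthetic face embeddings are illustrative assumptions, not the thesis's actual update decision.

```python
# Minimal sketch of self-training with pseudo-labels on an incremental
# linear SVM (SGD with hinge loss stands in for the De-SVM ensemble).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="hinge", random_state=0)   # linear SVM trained by SGD

# Initial supervised enrolment: a few labeled embeddings per identity.
x0 = np.vstack([rng.normal(i, 1.0, size=(5, 32)) for i in range(3)])
y0 = np.repeat(np.arange(3), 5)
clf.partial_fit(x0, y0, classes=np.arange(3))

margin_thresh = 1.0  # only self-train on confidently classified samples
for _ in range(50):  # stream of unlabeled operational data
    batch = np.vstack([rng.normal(i, 1.0, size=(2, 32)) for i in range(3)])
    scores = clf.decision_function(batch)            # (n, n_classes) margins
    confident = scores.max(axis=1) > margin_thresh
    if confident.any():
        pseudo = scores[confident].argmax(axis=1)    # own predictions as labels
        clf.partial_fit(batch[confident], pseudo)    # unsupervised increment
```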

    Learning from imperfect data: incremental learning and few-shot learning

    In recent years, artificial intelligence (AI) has achieved great success in many fields, e.g., computer vision, speech recognition, recommendation engines, and natural language processing. Although impressive advances have been made, AI algorithms still suffer from an important limitation: they rely on large-scale datasets. In contrast, human beings naturally possess the ability to learn novel knowledge from real-world, imperfect data such as a small number of samples or a non-static continual data stream. Attaining such an ability is particularly appealing. Specifically, an ideal AI system with human-level intelligence should work with the following imperfect-data scenarios. 1) The training data distribution changes while learning. In many real scenarios, data are streaming, might disappear after a given period of time, or even cannot be stored at all due to storage constraints or privacy issues. As a consequence, old knowledge is over-written, a phenomenon called catastrophic forgetting. 2) The annotations of the training data are sparse. There are also many scenarios where we do not have access to the specific large-scale data of interest due to privacy and security reasons. As a consequence, deep models overfit the training data distribution and are very likely to make wrong decisions when they encounter rare cases. Therefore, the goal of this thesis is to tackle these challenges and develop AI algorithms that can be trained with imperfect data. To achieve this goal, we study three topics. 1) Learning with continual data without forgetting (i.e., incremental learning). 2) Learning with limited data without overfitting (i.e., few-shot learning). 3) Learning with imperfect data in real-world applications (e.g., incremental object detection). Our key idea is learning to learn/optimize. Specifically, we use advanced learning and optimization techniques to design data-driven methods that dynamically adapt the key elements of AI algorithms, e.g., the selection of data, memory allocation, network architecture, essential hyperparameters, and control of knowledge transfer. We believe that the adaptive and dynamic design of system elements will significantly improve the capability of deep learning systems under limited data or continual streams, compared to systems with fixed, non-optimized elements. More specifically, we first study how to overcome the catastrophic forgetting problem by learning to optimize exemplar data, allocate memory, aggregate neural networks, and optimize key hyperparameters. Then, we study how to improve the generalization ability of the model and tackle the overfitting problem by learning to transfer knowledge and ensemble deep models. Finally, we study how to apply incremental learning techniques to a recent top-performing transformer-based architecture for a more challenging and realistic task: incremental object detection.
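    As a point of reference for the "learning to optimize exemplar data" direction, the sketch below shows the classic herding-style exemplar selection that such methods refine: keep the examples whose running mean best tracks the class mean, and replay them in later tasks to reduce forgetting. The feature extractor is a placeholder, and the thesis goes further by optimizing the exemplars rather than merely selecting them.

```python
# Minimal sketch of herding-style exemplar selection for rehearsal-based
# incremental learning (the building block that learned-exemplar methods refine).
import numpy as np

def herding_select(features, m):
    """Greedily pick m indices whose running mean tracks the class mean."""
    mu = features.mean(axis=0)
    chosen, acc = [], np.zeros_like(mu)
    for k in range(1, m + 1):
        # Candidate that brings the exemplar mean closest to the class mean.
        gaps = np.linalg.norm(mu - (acc + features) / k, axis=1)
        gaps[chosen] = np.inf                   # sample without replacement
        idx = int(gaps.argmin())
        chosen.append(idx)
        acc += features[idx]
    return chosen

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 64))              # stand-in class features
memory = herding_select(feats, m=10)            # 10 exemplars kept per class
print("exemplar indices:", memory)
```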

    Overcoming Catastrophic Forgetting by XAI

    Explaining the behaviors of deep neural networks, usually considered black boxes, is critical, especially now that they are being adopted across diverse aspects of human life. Taking advantage of interpretable machine learning (interpretable ML), this work proposes a novel tool called the Catastrophic Forgetting Dissector (CFD) to explain catastrophic forgetting in continual learning settings. We also introduce a new method called Critical Freezing based on the observations of our tool. Experiments on ResNet articulate how catastrophic forgetting happens, in particular showing which components of this well-known network do the forgetting. Our new continual learning algorithm outperforms various recent techniques by a significant margin, demonstrating the value of the investigation. Critical Freezing not only mitigates catastrophic forgetting but also provides explainability. Comment: Master of Science Thesis at KAIST; 24 pages; Keywords: continual learning, catastrophic forgetting, XAI, attribution map, interpretability
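    A minimal sketch of the Critical Freezing step, assuming the dissector has already named the blocks that carry the old task's knowledge: those blocks are frozen and the rest of the ResNet fine-tunes on the new task. The set of critical blocks below is an illustrative assumption; in the thesis it would come from CFD's attribution maps.

```python
# Minimal sketch of freezing "critical" ResNet blocks before fine-tuning
# on a new task; the critical set is assumed, not computed by CFD here.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=10)
critical = {"conv1", "bn1", "layer1", "layer2"}   # assumed CFD output

for name, param in model.named_parameters():
    # Freeze any parameter belonging to a critical block; train the rest.
    param.requires_grad = name.split(".")[0] not in critical

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(f"{len(trainable)} parameter tensors fine-tune on the new task")
```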