493 research outputs found

    Principled deep learning approaches for learning from limited labeled data through distribution matching

    Deep neural networks have demonstrated a strong impact on a wide range of tasks and achieved promising performances. However, these empirical gains are generally difficult to deploy in real-world scenarios, because they require large-scale hand-labeled datasets.
Due to time and budget constraints, collecting such large-scale training sets is usually infeasible in practice. In this thesis, we develop novel approaches through distribution matching to learn from limited labeled data. Specifically, we focus on the problems of multi-task learning, active learning, and domain adaptation, which are typical scenarios in learning from limited labeled data. The first contribution is to develop a principled approach to multi-task learning. Specifically, we propose a theoretical viewpoint to understand the importance of task similarity in multi-task learning. Then we revisit the adversarial multi-task neural network and propose an iterative algorithm to estimate the task-relation coefficients and the neural-network parameters. The second contribution is to propose a unified and principled method for both querying and training in deep batch active learning. We model the interactive procedure as distribution matching, and then derive a new principled approach for optimizing the neural-network parameters and the batch query selection. The loss for neural-network training is formulated as a min-max optimization that leverages the unlabeled data. The query loss implies an explicit uncertainty-diversity trade-off in batch selection. The third contribution aims at revealing the incoherence between the widely adopted empirical domain adversarial training and its generally assumed theoretical counterpart based on the H-divergence. Concretely, we find that the H-divergence is not equivalent to the Jensen-Shannon divergence, the actual optimization objective in domain adversarial training. To this end, we establish a new theoretical framework by directly proving upper and lower target-risk bounds based on the Jensen-Shannon divergence. Our framework exhibits inherent flexibility for different transfer-learning problems. Besides, our theory enables a unified guideline for conditional matching, feature marginal matching, and label marginal-shift correction.
The fourth contribution is to design novel approaches for aggregating source domains with different label distributions, where most existing source-selection approaches fail. Our proposed algorithm differs from previous approaches in two key ways: the model aggregates multiple sources mainly through the similarity of conditional distributions rather than marginal distributions, and it provides a unified framework for selecting relevant sources in three popular scenarios, i.e., domain adaptation with limited labels on the target domain, unsupervised domain adaptation, and label-partial unsupervised domain adaptation.
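The uncertainty-diversity trade-off in batch selection described above can be sketched as a greedy scoring rule. The entropy-based uncertainty, the nearest-neighbour diversity term, and the trade-off weight `lam` below are illustrative assumptions, not the thesis's exact derivation.

```python
import numpy as np

def select_batch(probs, feats, k, lam=0.5):
    """Greedy batch selection trading off uncertainty (entropy of the
    predicted class probabilities) against diversity (distance to the
    points already selected in feature space).

    probs: (n, c) predicted class probabilities for the unlabeled pool
    feats: (n, d) feature embeddings
    k:     batch size; lam: illustrative trade-off weight."""
    eps = 1e-12
    uncertainty = -(probs * np.log(probs + eps)).sum(axis=1)  # entropy per point
    selected = []
    for _ in range(k):
        if selected:
            # diversity = distance to the nearest already-selected point
            d = np.linalg.norm(feats[:, None, :] - feats[selected][None, :, :], axis=2)
            diversity = d.min(axis=1)
        else:
            diversity = np.zeros(len(feats))
        score = uncertainty + lam * diversity
        score[selected] = -np.inf  # never re-select a point
        selected.append(int(score.argmax()))
    return selected
```

With `lam = 0` this reduces to pure uncertainty sampling; larger `lam` spreads the batch out in feature space, which is the explicit trade-off the abstract refers to.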

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or to extend the platform to new applications.
    Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
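Among the components listed in the abstract, segmentation loss functions are a good example of functionality tailored to medical imaging. A generic numpy sketch of the soft Dice loss commonly used in such pipelines (an illustration of the idea, not NiftyNet's own implementation) is:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred:   predicted foreground probabilities, any shape
    target: binary ground-truth mask, same shape
    Returns 1 - Dice coefficient, so 0 means perfect overlap."""
    pred, target = pred.ravel(), target.ravel()
    intersection = (pred * target).sum()
    dice = (2 * intersection + eps) / (pred.sum() + target.sum() + eps)
    return 1.0 - dice
```

Unlike per-pixel cross-entropy, this loss measures overlap directly, which is why it copes better with the small foreground regions typical of organ and lesion segmentation.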

    Multi-task Learning by Leveraging the Semantic Information

    One crucial objective of multi-task learning is to align distributions across tasks so that information can be transferred and shared between them. However, existing approaches have only focused on matching the marginal feature distribution while ignoring the semantic information, which may hinder learning performance. To address this issue, we propose to leverage the label information in multi-task learning by exploring the semantic conditional relations among tasks. We first theoretically analyze the generalization bound of multi-task learning based on the notion of Jensen-Shannon divergence, which provides new insights into the value of label information in multi-task learning. Our analysis also leads to a concrete algorithm that jointly matches the semantic distribution and controls label distribution divergence. To confirm the effectiveness of the proposed method, we first compare the algorithm with several baselines on benchmark datasets and then test it under label-space shift conditions. Empirical results demonstrate that the proposed method outperforms most baselines and achieves state-of-the-art performance, particularly showing its benefits under label shift conditions.
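The Jensen-Shannon divergence on which the bound rests can be computed directly for two discrete label distributions. The sketch below uses the standard uniform-mixture definition with base-2 logarithms, so the value lies in [0, 1].

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence between discrete distributions (base 2)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))

def js_divergence(p, q):
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike KL, this quantity is symmetric and bounded, which is what makes it usable as a distance-like term inside a generalization bound.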

    Generative adversarial networks review in earthquake-related engineering fields

    Within seismology, geology, and civil and structural engineering, deep learning (DL), especially via generative adversarial networks (GANs), represents an innovative, engaging, and advantageous way to generate reliable synthetic data that reproduce the characteristics of actual samples, providing a handy data augmentation tool. Indeed, in many practical applications, obtaining a large amount of high-quality data is demanding. Data augmentation is generally based on artificial intelligence (AI) and machine learning data-driven models. The DL GAN-based data augmentation approach for generating synthetic seismic signals has revolutionized the current data augmentation paradigm. This study delivers a critical state-of-the-art review of recent research into AI-based GAN synthetic generation of ground motion signals and seismic events, together with a comprehensive insight into seismic-related geophysical studies. This study may be relevant for the earth and planetary sciences, geology and seismology, and oil and gas exploration, as well as for assessing the seismic response of buildings and infrastructure, seismic detection tasks, and general structural and civil engineering applications. Furthermore, highlighting the strengths and limitations of current studies on adversarial learning applied to seismology may help guide future research efforts toward the most promising directions.
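At the core of the GAN-based augmentation surveyed here is the adversarial objective. A minimal numpy sketch of the non-saturating discriminator and generator losses, assuming the discriminator outputs probabilities, is:

```python
import numpy as np

def discriminator_loss(d_real, d_fake, eps=1e-12):
    """Binary cross-entropy: push D(real) -> 1 and D(fake) -> 0.
    d_real, d_fake: discriminator probabilities in (0, 1)."""
    return float(-np.mean(np.log(d_real + eps)) - np.mean(np.log(1 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-12):
    """Non-saturating generator loss: push D(fake) -> 1."""
    return float(-np.mean(np.log(d_fake + eps)))
```

In a signal-augmentation setting, the generator maps noise to synthetic waveforms and the two losses are minimized alternately; the synthetic samples become augmentation data once the discriminator can no longer separate them from real recordings.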

    Learning-Based Control Strategies for Soft Robots: Theory, Achievements, and Future Challenges

    In the last few decades, soft robotics technologies have challenged conventional approaches by introducing new, compliant bodies to the world of rigid robots. These technologies and systems may enable a wide range of applications, including human-robot interaction and dealing with complex environments. Soft bodies can adapt their shape to contact surfaces, distribute stress over a larger area, and increase the contact surface area, thus reducing impact forces.