
    Input and Weight Space Smoothing for Semi-supervised Learning

    We propose regularizing the empirical loss for semi-supervised learning by acting on both the input (data) space and the weight (parameter) space. We show that the two are not equivalent, and in fact are complementary: one affects the minimality of the resulting representation, the other its insensitivity to nuisance variability. We propose a method to perform such smoothing, which combines known input-space smoothing with a novel weight-space smoothing based on a min-max (adversarial) optimization. The resulting Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent with a small learning rate on a random subset of the weights, and standard gradient descent on the remaining weights, in the same mini-batch. It achieves performance comparable to the state of the art without resorting to heavy data augmentation, using a relatively simple architecture.
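    The update described above lends itself to a compact sketch. The following is only an illustration of the block-coordinate idea under assumed hyperparameters (ascent_frac, ascent_lr, and opt_lr are not taken from the paper), not the authors' reference implementation:

```python
import torch

def abcd_step(model, loss_fn, x, y, opt_lr=0.1, ascent_lr=0.01, ascent_frac=0.1):
    """Illustrative Adversarial Block Coordinate Descent (ABCD) step: a random
    subset of weights takes a small gradient-ascent step, while the remaining
    weights take a standard gradient-descent step on the same mini-batch.
    All hyperparameter values here are assumptions."""
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            # Bernoulli mask selecting the randomly chosen "ascent" block.
            mask = (torch.rand_like(p) < ascent_frac).float()
            p += ascent_lr * mask * p.grad        # gradient ascent on the selected block
            p -= opt_lr * (1.0 - mask) * p.grad   # gradient descent on the rest
    return loss.item()
```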

    Why Clean Generalization and Robust Overfitting Both Happen in Adversarial Training

    Adversarial training is a standard method for training deep neural networks to be robust to adversarial perturbation. Similar to the surprising clean generalization ability in the standard deep learning setting, neural networks trained by adversarial training also generalize well to unseen clean data. However, in contrast with clean generalization, although adversarial training achieves low robust training error, there remains a significant robust generalization gap, which prompts us to explore what mechanism leads to both clean generalization and robust overfitting (CGRO) during the learning process. In this paper, we provide a theoretical understanding of this CGRO phenomenon in adversarial training. First, we propose a theoretical framework of adversarial training in which we analyze the feature learning process to explain how adversarial training drives the network learner into the CGRO regime. Specifically, we prove that, under our patch-structured dataset, the CNN model provably partially learns the true feature but exactly memorizes the spurious features from training adversarial examples, which results in clean generalization and robust overfitting. Under a more general data assumption, we then show the efficiency of the CGRO classifier from the perspective of representation complexity. On the empirical side, to verify our theoretical analysis on a real-world vision dataset, we investigate the dynamics of the loss landscape during training. Moreover, inspired by our experiments, we prove a robust generalization bound based on the global flatness of the loss landscape, which may be of independent interest. Comment: 27 pages, comments welcome
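    For context, the adversarial training referred to above is typically posed as a min-max problem: an inner maximization crafts a bounded perturbation of each input, and the outer step minimizes the loss on the perturbed batch. A minimal PGD-style sketch in PyTorch (the epsilon, step size, and number of steps are assumed values, not the paper's settings):

```python
import torch

def adversarial_training_step(model, loss_fn, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization (PGD within an l_inf ball of radius eps) followed by an
    outer minimization step on the perturbed inputs. Values are illustrative."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss w.r.t. the perturbation
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    # Outer step: minimize the robust training error on the perturbed batch.
    model.zero_grad()
    robust_loss = loss_fn(model(x + delta.detach()), y)
    robust_loss.backward()  # the caller then applies its optimizer step
    return robust_loss
```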

    Identifying electrons with deep learning methods

    This thesis is about applying the tools of machine learning to an important problem of experimental particle physics: identifying signal electrons produced in proton-proton collisions at the Large Hadron Collider. In Chapter 1, we provide some information about the Large Hadron Collider and explain why it was built. We give further details about ATLAS, one of the biggest detectors at the Large Hadron Collider. We then define the electron identification task and explain the importance of solving it well. Finally, we give detailed information about the dataset we use to solve the electron identification task. In Chapter 2, we give a brief introduction to the fundamental principles of machine learning. Starting with the definition and types of learning tasks, we discuss various ways to represent inputs. We then present what to learn from the inputs and how to do it. Finally, we look at the problems that arise when we "overdo" learning, i.e. overfitting. In Chapter 3, we motivate the choice of architecture used to solve our task, especially for the parts that take sequential images as inputs. We then present the results of our experiments and show that our model performs much better than the algorithms currently used by the ATLAS collaboration. Finally, we discuss future directions to further improve our results. In Chapter 4, we discuss two concepts: out-of-distribution generalization and flatness of the loss surface. We claim that algorithms that bring a model into a wide, flat minimum of its training loss surface also generalize better on out-of-distribution tasks. We present the results of applying two such algorithms to our dataset and show that they support our claim. We end with our conclusions.
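    The abstract does not name the flat-minimum-seeking algorithms evaluated in Chapter 4, so the sketch below is only a generic, sharpness-aware-style illustration of the idea: perturb the weights toward higher loss within a small ball (the radius rho and the learning rate are assumptions), then descend using the gradient taken at the perturbed point, which biases training toward wide, flat minima:

```python
import torch

def sharpness_aware_step(model, loss_fn, x, y, rho=0.05, lr=0.1):
    """Generic flat-minimum-seeking update (illustration only). Step 1: move the
    weights to a nearby point of higher loss. Step 2: descend from the original
    weights using the gradient computed at that perturbed point."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() if p.grad is not None else None
             for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads if g is not None))
    eps = [rho * g / (norm + 1e-12) if g is not None else None for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p += e  # ascend toward the locally sharpest direction
    model.zero_grad()
    loss_fn(model(x), y).backward()  # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p -= e              # restore the original weights
            if p.grad is not None:
                p -= lr * p.grad    # descend with the sharpness-aware gradient
```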

    Musings on Deep Learning: Properties of SGD

    [previously titled "Theory of Deep Learning III: Generalization Properties of SGD"] In Theory III we characterize, with a mix of theory and experiments, the generalization properties of Stochastic Gradient Descent in overparametrized deep convolutional networks. We show that Stochastic Gradient Descent (SGD) selects with high probability solutions that 1) have zero (or small) empirical error, 2) are degenerate, as shown in Theory II, and 3) have maximum generalization. This work was supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. H.M. is supported in part by ARO Grant W911NF-15-1-0385.