
    Adversarial robustness of VAEs through the lens of local geometry

    In an unsupervised attack on variational autoencoders (VAEs), an adversary finds a small perturbation of an input sample that significantly changes its latent-space encoding, thereby compromising the reconstruction for a fixed decoder. A known cause of this vulnerability is distortion of the latent space arising from a mismatch between the approximate latent posterior and the prior distribution: a slight change in an input sample can move its encoding into a low- or zero-density region of the latent space, resulting in unconstrained generation. This paper demonstrates that an optimal way for an adversary to attack VAEs is to exploit a directional bias of the stochastic pullback metric tensor induced by the encoder and decoder networks. The pullback metric tensor of an encoder measures the change in infinitesimal volume from the input space to the latent space, so it can be viewed as a lens for analysing how input perturbations lead to latent-space distortions. We propose robustness evaluation scores based on the eigenspectrum of the pullback metric tensor and empirically show that these scores correlate with the robustness parameter β of the β-VAE. Since increasing β also degrades reconstruction quality, we demonstrate a simple alternative using mixup training to fill the empty regions of the latent space, improving robustness together with reconstruction quality.
    Comment: International Conference on Artificial Intelligence and Statistics (AISTATS) 202
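
    As a hedged illustration of the central quantity (our sketch, not the paper's code; the encoder module is a toy stand-in), the snippet below computes the eigenspectrum of the encoder pullback metric G = J^T J and the input-space direction of largest latent distortion, which a worst-case perturbation would exploit:

```python
# Minimal sketch: eigenspectrum of the encoder pullback metric G = J^T J,
# with J the Jacobian of a (deterministic) encoder-mean network.
import torch
from torch.autograd.functional import jacobian

def pullback_spectrum(encoder_mean, x):
    """Eigenvalues of G = J^T J at a flattened input x, plus the
    input-space direction along which the encoding moves the most."""
    J = jacobian(encoder_mean, x)            # (latent_dim, input_dim)
    # Nonzero eigenvalues of G coincide with those of the much smaller J J^T.
    evals, U = torch.linalg.eigh(J @ J.T)    # ascending order
    v = J.T @ U[:, -1]                       # top eigenvector pulled back to input space
    return evals, v / v.norm()

# Toy usage: 784-dimensional inputs, 16-dimensional latents.
enc = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.Tanh(),
                          torch.nn.Linear(128, 16))
evals, direction = pullback_spectrum(enc, torch.randn(784))
# A dominant evals[-1] indicates a strong directional bias an adversary can exploit.
```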

    Alleviating Adversarial Attacks on Variational Autoencoders with MCMC

    Variational autoencoders (VAEs) are latent variable models that can generate complex objects and provide meaningful latent representations, which can further be used in downstream tasks such as classification. As previous work has shown, one can easily fool a VAE into producing unexpected latent representations and reconstructions for an input that is only slightly modified visually. Here, we examine several previously proposed objective functions for constructing adversarial attacks and present a solution that alleviates their effect. Our method applies a Markov chain Monte Carlo (MCMC) correction in the inference step, which we motivate with a theoretical analysis; it therefore incurs no extra cost during training and does not degrade performance on non-attacked inputs. We validate our approach on a variety of datasets (MNIST, Fashion MNIST, Color MNIST, CelebA) and VAE configurations (β-VAE, NVAE, β-TCVAE), and show that it consistently improves model robustness to adversarial attacks.
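
    As a rough illustration of this inference-time correction (the paper's exact sampler is not reproduced here; `decoder_loglik` is a hypothetical scalar log p(x|z) under the trained decoder, and a standard normal prior is assumed), a few unadjusted Langevin steps can move the encoder's latent code toward the true posterior p(z|x) ∝ p(x|z)p(z) before decoding:

```python
# Hedged sketch: Langevin-dynamics refinement of a latent code at inference
# time, standing in for the MCMC step the abstract motivates.
import torch

def mcmc_refine(z0, x, decoder_loglik, steps=20, step_size=1e-2):
    z = z0.clone().requires_grad_(True)
    for _ in range(steps):
        # Unnormalized log-posterior: log p(x|z) + log N(z; 0, I) + const.
        log_joint = decoder_loglik(x, z) - 0.5 * (z ** 2).sum()
        (grad,) = torch.autograd.grad(log_joint, z)
        with torch.no_grad():                    # Langevin update
            z = z + 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()                            # decode this instead of z0
```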

    Survey of deep representation learning for speech emotion recognition

    Traditionally, speech emotion recognition (SER) research has relied on manually handcrafted acoustic features obtained through feature engineering. However, designing handcrafted features for complex SER tasks requires significant manual effort, which impedes generalisability and slows the pace of innovation. This has motivated the adoption of representation learning techniques that can automatically learn an intermediate representation of the input signal without any manual feature engineering. Representation learning has led to improved SER performance and enabled rapid innovation. Its effectiveness has further increased with advances in deep learning (DL), which has facilitated deep representation learning, where hierarchical representations are learned automatically in a data-driven manner. This paper presents the first comprehensive survey on the important topic of deep representation learning for SER. We highlight various techniques and related challenges, and identify important directions for future research. Our survey bridges a gap in the literature, since existing surveys either focus on SER with hand-engineered features or on representation learning in general without focusing on SER.

    Selected Inductive Biases in Neural Networks To Generalize Beyond the Training Domain

    Artificial neural networks in computer vision have yet to approach the broad performance of human vision. Unlike humans, artificial networks can be derailed by almost imperceptible perturbations, lack strong generalization capabilities beyond the training data, and still mostly require enormous amounts of data to learn novel tasks. Thus, current applications based on neural networks are often limited to a narrow range of controlled environments and do not transfer well across tasks. This thesis presents four publications that address these limitations and advance visual representation learning algorithms. In the first publication, we aim to push the field of disentangled representation learning towards more realistic settings. We observe that natural factors of variation describing scenes, e.g., the position of pedestrians, have temporally sparse transitions in videos, well modelled by a generalized Laplace distribution. We leverage this sparseness as a weak form of learning signal to train neural networks for provable disentangled visual representation learning, and achieve competitive results on the disentanglement_lib benchmark datasets as well as on our own contributed datasets, which include natural transitions. The second publication investigates whether various visual representation learning approaches generalize along partially observed factors of variation. In contrast to prior robustness benchmarks that add unseen types of perturbations at test time, we compose, interpolate, or extrapolate the factors observed during training. We find that the tested models mostly struggle to generalize to our proposed benchmark: instead of predicting the correct factor values, they tend to predict values in previously observed ranges, and this behavior is quite consistent across models. Despite their limited out-of-distribution performance, the models can be fairly modular: even when some factors are out of distribution, other in-distribution factors are still mostly inferred correctly. The third publication presents an adversarial noise training method for neural networks inspired by the local correlation structure of common corruptions caused by rain, blur, or noise. On the ImageNet-C classification benchmark, we show that networks trained with our method are less susceptible to common corruptions than those trained with existing methods. Finally, the fourth publication introduces a generative approach that outperforms existing approaches according to multiple robustness metrics on the MNIST digit classification benchmark. Perceptually, our generative model is more aligned with human vision than previous approaches, as images of digits at our model's decision boundary can also appear ambiguous to humans. In a nutshell, this work investigates ways of improving adversarial and corruption robustness, and disentanglement, in visual representation learning algorithms, thereby alleviating some limitations of machine learning and narrowing the gap towards human capabilities.

    Generating Semantic Adversarial Examples via Feature Manipulation

    The vulnerability of deep neural networks to adversarial attacks has been widely demonstrated (e.g., adversarial example attacks). Traditional attacks apply unstructured pixel-wise perturbations to fool a classifier. An alternative approach is to perturb the latent space; however, such perturbations are hard to control due to the lack of interpretability and disentanglement. In this paper, we propose a more practical adversarial attack that designs structured perturbations with semantic meaning. Our technique manipulates the semantic attributes of images via disentangled latent codes. The intuition is that images in similar domains share some theme-independent semantic attributes, e.g., the thickness of lines in handwritten digits, that can be bidirectionally mapped to disentangled latent codes. We generate adversarial perturbations by manipulating a single latent code or a combination of them, and propose two unsupervised semantic manipulation approaches, vector-based and feature-map-based disentangled representations, which differ in the complexity of the latent codes and the smoothness of the reconstructed images. We conduct extensive experimental evaluations on real-world image data to demonstrate the power of our attacks against black-box classifiers, and further demonstrate the existence of a universal, image-agnostic semantic adversarial example.
    Comment: arXiv admin note: substantial text overlap with arXiv:1705.09064 by other authors
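
    A hedged sketch of this attack template, with hypothetical pretrained `encoder`, `decoder`, and `classifier` modules: sweep a single disentangled latent coordinate (say, stroke thickness), decode, and stop once the classifier's prediction flips:

```python
# Minimal sketch of a semantic attack via a disentangled latent code.
# All three modules are assumed pretrained; `dim` indexes the attribute to sweep.
import torch

def semantic_attack(x, encoder, decoder, classifier, dim, max_shift=3.0):
    z = encoder(x)                               # (batch, latent_dim)
    label = classifier(x).argmax(dim=-1)
    for shift in torch.linspace(-max_shift, max_shift, steps=61):
        z_adv = z.clone()
        z_adv[:, dim] += shift                   # move one semantic attribute
        x_adv = decoder(z_adv)
        if (classifier(x_adv).argmax(dim=-1) != label).any():
            return x_adv                         # semantically plausible, misclassified
    return None                                  # no label flip within the sweep
```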

    Improving Model Generalization by On-manifold Adversarial Augmentation in the Frequency Domain

    Deep neural networks (DNNs) can suffer significantly degraded performance when the training and test data come from different underlying distributions. Despite the importance of model generalization to out-of-distribution (OOD) data, the accuracy of state-of-the-art (SOTA) models on OOD data can plummet. Recent work has demonstrated that regular, off-manifold adversarial examples, as a special case of data augmentation, can be used to improve OOD generalization. Inspired by this, we theoretically prove that on-manifold adversarial examples benefit OOD generalization even more. Generating them is nontrivial, however, because the real data manifold is generally complex. To address this issue, we propose Augmenting data with Adversarial examples via a Wavelet module (AdvWavAug), an on-manifold adversarial data augmentation technique that is simple to implement. In particular, we project a benign image into the wavelet domain; exploiting the sparsity of the wavelet representation, we can modify the image on the estimated data manifold. We conduct adversarial augmentation based on the AdvProp training framework. Extensive experiments on different models and datasets, including ImageNet and its distorted versions, demonstrate that our method improves model generalization, especially on OOD data. By integrating AdvWavAug into the training process, we achieve SOTA results on some recent transformer-based models.
    Comment: Computer Vision and Image Understanding (CVIU) [under review]
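
    The sketch below illustrates only the projection idea, not AdvWavAug itself: decompose a (2-D, grayscale) image with PyWavelets, perturb the sparse detail coefficients, and invert, so the modification stays close to the estimated data manifold. Random noise stands in here for the adversarial gradient step, and the wavelet, threshold, and epsilon are illustrative:

```python
# Hedged sketch of a wavelet-domain on-manifold perturbation.
import numpy as np
import pywt

def wavelet_perturb(image, epsilon=0.05, wavelet="db2", level=2, seed=0):
    rng = np.random.default_rng(seed)
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [coeffs[0]]                            # keep the coarse approximation
    for detail in coeffs[1:]:                    # (cH, cV, cD) per level
        out.append(tuple(
            c + epsilon * rng.standard_normal(c.shape) * (np.abs(c) > 1e-3)
            for c in detail))                    # perturb only active coefficients
    return pywt.waverec2(out, wavelet)
```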

    Capturing Label Characteristics in VAEs

    We present a principled approach to incorporating labels in VAEs that captures the rich characteristic information associated with those labels. Prior work has typically conflated labels and their characteristics by learning latent variables that directly correspond to label values; we argue this is contrary to the intended effect of supervision in VAEs, namely capturing rich label characteristics with the latents. For example, we may want to capture the characteristics of a face that make it look young, rather than just the age of the person. To this end, we develop CCVAE, a novel VAE model and concomitant variational objective that captures label characteristics explicitly in the latent space, eschewing direct correspondences between label values and latents. Through judicious structuring of the mappings between such characteristic latents and labels, we show that CCVAE can effectively learn meaningful representations of the characteristics of interest across a variety of supervision schemes. In particular, CCVAE allows more effective and more general interventions, such as smooth traversals within the characteristics of a given label, diverse conditional generation, and transferring characteristics across datapoints.
    Comment: Accepted to ICLR 202
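
    As a purely structural sketch (dimensions and modules are illustrative, not CCVAE's actual architecture), the split described above can be expressed as a latent partitioned into characteristic codes z_c, tied to labels through an auxiliary classifier head, and label-free codes z_s:

```python
# Hedged sketch: a VAE whose latent splits into label-characteristic codes z_c
# (supervised through a classifier head) and unconstrained codes z_s.
import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, x_dim=784, zc_dim=8, zs_dim=24, n_labels=10):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * (zc_dim + zs_dim))  # means and log-variances
        self.dec = nn.Linear(zc_dim + zs_dim, x_dim)
        self.label_head = nn.Linear(zc_dim, n_labels)       # q(y | z_c)
        self.zc_dim = zc_dim

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_c = z[:, :self.zc_dim]                              # characteristic latents
        return self.dec(z), self.label_head(z_c)              # reconstruction, label logits
```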