
    Improving Unsupervised Defect Segmentation by Applying Structural Similarity to Autoencoders

    Convolutional autoencoders have emerged as popular methods for unsupervised defect segmentation on image data. Most commonly, this task is performed by thresholding a pixel-wise reconstruction error based on an ℓ^p distance. This procedure, however, leads to large residuals whenever the reconstruction contains slight localization inaccuracies around edges. It also fails to reveal defective regions whose appearance has been altered while intensity values stay roughly consistent. We show that these problems prevent such approaches from being applied to complex real-world scenarios and that they cannot be easily avoided by employing more elaborate architectures such as variational or feature-matching autoencoders. We propose to use a perceptual loss function based on structural similarity, which examines inter-dependencies between local image regions, taking into account luminance, contrast, and structural information, instead of simply comparing single pixel values. It achieves significant performance gains on a challenging real-world dataset of nanofibrous materials and a novel dataset of two woven fabrics over state-of-the-art approaches for unsupervised defect segmentation that use pixel-wise reconstruction error metrics.
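    Sketched below, under assumed defaults (an 11×11 uniform averaging window, the usual SSIM constants, images scaled to [0, 1]), is what such a structural-similarity reconstruction loss can look like for a convolutional autoencoder. This is an illustrative PyTorch sketch, not the authors' released code.

        import torch
        import torch.nn.functional as F

        def ssim_loss(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
            """Return 1 minus the mean local SSIM between inputs x and reconstructions y."""
            pad = window_size // 2
            # Local means over sliding windows (a Gaussian window is another common choice).
            mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
            mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
            # Local variances and covariance.
            var_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
            var_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
            cov_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
            ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
                (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
            )
            return 1 - ssim_map.mean()

        # At test time, the per-pixel (1 - SSIM) map itself can serve as the residual
        # that is thresholded to obtain a defect segmentation, in place of an l^p error map.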

    Learning Representations for Novelty and Anomaly Detection

    The problem of novelty or anomaly detection refers to the ability to automatically identify data samples that differ from a notion of normality. Techniques that address this problem are necessary in many applications, such as medical diagnosis, autonomous driving, fraud detection, and cyber-attack detection, to mention just a few. The problem is inherently challenging because of the openness of the space of distributions that characterize novelty or outlier data points. This is often compounded by the inability to adequately represent such distributions due to the lack of representative data. In this dissertation we address this challenge by making several contributions. (a) We introduce an unsupervised framework for novelty detection, based on deep learning techniques, which does not require labeled data representing the distribution of outliers. (b) The framework is general and based on first principles: it detects anomalies by computing their probabilities according to the distribution representing normality. (c) The framework can handle high-dimensional data such as images by performing a non-linear dimensionality reduction of the input space into an isometric lower-dimensional space, leading to a computationally efficient method. (d) The framework is guarded against the inclusion of outlier distributions into the distribution of normality by favoring models in which only inlier data can be well represented. (e) The methods are evaluated extensively on multiple computer vision benchmark datasets, where they are shown to compare favorably with the state of the art.
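    As an illustration of the general recipe described above, and not the dissertation's actual method, a hedged sketch: learn a lower-dimensional representation of the normal data, fit a density model to it, and score test samples by their probability under that model. The PCA encoder, Gaussian-mixture density, and percentile threshold below are stand-in choices for illustration only.

        from sklearn.decomposition import PCA          # stand-in for a learned non-linear encoder
        from sklearn.mixture import GaussianMixture    # stand-in for the distribution of normality

        def fit_normality_model(train_normal, latent_dim=32, n_components=5):
            # Reduce dimensionality, then model the density of normal data in that space.
            encoder = PCA(n_components=latent_dim).fit(train_normal)
            density = GaussianMixture(n_components=n_components).fit(encoder.transform(train_normal))
            return encoder, density

        def novelty_scores(encoder, density, samples):
            # Lower log-likelihood under the normality model means more anomalous.
            return -density.score_samples(encoder.transform(samples))

        # Usage: pick a threshold, e.g. the 99th percentile of scores on held-out normal
        # data, and flag test samples whose score exceeds it as novel or anomalous.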

    Representation Learning with Adversarial Latent Autoencoders

    A large number of deep learning methods applied to computer vision problems require encoder-decoder maps. These methods include, but are not limited to, self-representation learning, generalization, few-shot learning, and novelty detection. Encoder-decoder maps are also useful for photo manipulation, photo editing, super-resolution, and related tasks. Encoder-decoder maps are typically learned using autoencoder networks. Traditionally, autoencoder reciprocity is achieved in the image space using a pixel-wise similarity loss, which has the widely known flaw of producing non-realistic reconstructions. This flaw is typical of the Variational Autoencoder (VAE) family and is not limited to pixel-wise similarity losses; it is common to all methods that rely on the explicit maximum-likelihood training paradigm, as opposed to an implicit one. Likelihood maximization, coupled with a poor decoder distribution, leads at best to poor or blurry reconstructions. Generative Adversarial Networks (GANs), on the other hand, perform an implicit maximization of the likelihood by solving a minimax game, thus bypassing the issues arising from explicit maximization. This gives GAN architectures remarkable generative power, enabling the generation of high-resolution images of humans that are indistinguishable from real photos to the naked eye. However, GAN architectures lack inference capabilities, which makes them unsuitable for training encoder-decoder maps and effectively limits their application space. We introduce an autoencoder architecture that (a) is free from the consequences of maximizing the likelihood directly, (b) produces reconstructions competitive in quality with state-of-the-art GAN architectures, and (c) allows learning disentangled representations, which makes it useful in a variety of problems. We show that the proposed architecture and training paradigm significantly improve the state of the art in novelty and anomaly detection, enable novel kinds of image manipulations, and have significant potential for other applications.
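    A minimal conceptual sketch of the contrast drawn above, not the paper's implementation: an ordinary autoencoder enforces reciprocity in image space with a pixel-wise loss, whereas an adversarial latent autoencoder enforces reciprocity in latent space and lets a discriminator on latent codes drive realism. The small MLP modules and dimensions below are illustrative assumptions.

        import torch
        import torch.nn as nn

        latent_dim, image_dim = 64, 784  # e.g. flattened 28x28 images

        G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, image_dim))  # generator/decoder
        E = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))  # encoder
        D = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 1))          # discriminator on latent codes

        def pixelwise_recon_loss(x):
            # Explicit image-space reciprocity: the term that tends to give blurry reconstructions.
            return ((G(E(x)) - x) ** 2).mean()

        def latent_reciprocity_loss(batch_size):
            # Latent-space reciprocity: require E(G(w)) to recover w instead of matching pixels.
            w = torch.randn(batch_size, latent_dim)
            return ((E(G(w)) - w) ** 2).mean()

        def adversarial_losses(x, batch_size):
            # The discriminator sees latent codes of real images, E(x), versus codes of
            # generated images, E(G(w)); in a real training loop the two updates alternate
            # with the appropriate parameters detached.
            bce = nn.functional.binary_cross_entropy_with_logits
            w = torch.randn(batch_size, latent_dim)
            real_logits, fake_logits = D(E(x)), D(E(G(w)))
            d_loss = bce(real_logits, torch.ones_like(real_logits)) + bce(fake_logits, torch.zeros_like(fake_logits))
            g_loss = bce(fake_logits, torch.ones_like(fake_logits))
            return d_loss, g_loss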