    Active Generative Adversarial Network for Image Classification

    Sufficient supervised information is crucial for any machine learning model to boost performance. However, labeling data is expensive and sometimes difficult to obtain. Active learning acquires annotations from a human oracle by selecting the informative samples most likely to enhance performance. In recent studies, a generative adversarial network (GAN) has been integrated with active learning to generate good candidates to present to the oracle. In this paper, we propose a novel model that obtains labels for data more cheaply, without the need to query an oracle. In the model, a novel reward for each sample is devised to measure its degree of uncertainty, obtained from a classifier trained on the existing labeled data. This reward guides a conditional GAN to generate informative samples with a higher probability for a certain label. Extensive evaluations confirm the effectiveness of the model, showing that the generated samples improve classification performance on popular image classification tasks. Comment: AAAI201
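    The abstract does not specify how the per-sample uncertainty reward is computed; a common choice for "degree of uncertainty" from a trained classifier is the entropy of its predictive distribution. The following is a minimal sketch of that assumption, where `uncertainty_reward` is a hypothetical name for the quantity that would guide the conditional GAN:

    ```python
    import numpy as np

    def uncertainty_reward(class_probs):
        """Entropy of the classifier's predictive distribution over labels,
        used as a per-sample uncertainty reward (higher = more informative)."""
        p = np.clip(class_probs, 1e-12, 1.0)
        return -np.sum(p * np.log(p), axis=-1)

    # A confident prediction earns a low reward; a uniform prediction
    # earns the maximum, ln(4) ~ 1.386 for four classes.
    confident = np.array([[0.97, 0.01, 0.01, 0.01]])
    uniform = np.array([[0.25, 0.25, 0.25, 0.25]])
    print(uncertainty_reward(confident))  # low entropy
    print(uncertainty_reward(uniform))    # ~1.386 (= ln 4)
    ```

    Samples the classifier is most unsure about would receive the largest reward, steering the generator toward regions where a label is most informative.
    
    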

    GAN you train your network

    2022 Summer. Includes bibliographical references. Zero-shot classifiers identify unseen classes, i.e., classes not seen during training. Specifically, zero-shot models classify attribute information associated with classes (e.g., a zebra has stripes but a lion does not). Lately, the use of generative adversarial networks (GANs) for zero-shot learning has significantly improved the recognition accuracy of unseen classes by producing visual features for any class. Here, I investigate how similar the visual features obtained from images of a class are to the visual features generated by a GAN. I find that, regardless of metric, the two sets of visual features are disjoint. I also fine-tune a ResNet so that it produces visual features similar to those generated by a GAN; this is novel because standard approaches do the opposite: they train the GAN to match the output of the model. I conclude that these experiments emphasize the need for a standard input pipeline in zero-shot learning, because of the mismatch between generated and real features, as well as the variation in features (and subsequent GAN performance) across different implementations of models such as ResNet-101.
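    The thesis does not state which similarity metric revealed the mismatch (it reports the finding holds "regardless of metric"); as an illustration, cosine similarity between each real feature and its nearest generated feature is one plausible way to quantify how disjoint the two sets are. The helper names below are hypothetical:

    ```python
    import numpy as np

    def cosine_sim(a, b):
        """Pairwise cosine similarity between rows of a and rows of b."""
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return a @ b.T

    def mean_nearest_similarity(real_feats, fake_feats):
        """For each real feature, similarity to its closest generated
        feature, averaged; values near 1 mean the sets overlap, values
        near 0 mean they are effectively disjoint."""
        return cosine_sim(real_feats, fake_feats).max(axis=1).mean()
    ```

    Applying such a score to ResNet features of real images versus GAN-generated features would make the reported mismatch directly measurable.
    
    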

    Improving Generalization via Attribute Selection on Out-of-the-Box Data.

    Zero-shot learning (ZSL) aims to recognize unseen objects (test classes) given some other seen objects (training classes) by sharing attribute information between different objects. Attributes are manually annotated for objects and treated equally in recent ZSL tasks. However, some inferior attributes with poor predictability or poor discriminability may degrade ZSL system performance. This letter first derives a generalization error bound for ZSL tasks. Our theoretical analysis verifies that selecting a subset of key attributes can improve the generalization performance of the original ZSL model, which uses all the attributes. Unfortunately, previous attribute selection methods operate on the seen data, and their selected attributes generalize poorly to the unseen data, which is unavailable during the training stage of ZSL tasks. Inspired by learning from pseudo-relevance feedback, this letter introduces out-of-the-box data (pseudo-data generated by an attribute-guided generative model) to mimic the unseen data. We then present an iterative attribute selection (IAS) strategy that iteratively selects key attributes based on the out-of-the-box data. Since the distribution of the generated out-of-the-box data is similar to that of the test data, the key attributes selected by IAS generalize effectively to the test data. Extensive experiments demonstrate that IAS significantly improves existing attribute-based ZSL methods and achieves state-of-the-art performance.
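    The abstract describes the IAS loop only at a high level; a minimal sketch of its shape might look like the following, where `score_fn` is a hypothetical interface that scores each currently selected attribute on the generated out-of-the-box data (the paper's actual scoring criterion is not given here):

    ```python
    import numpy as np

    def select_attributes(scores, k):
        """Keep the k attributes with the highest scores; one round of
        selection (higher score = more useful on the pseudo unseen data)."""
        return np.argsort(scores)[::-1][:k]

    def iterative_attribute_selection(score_fn, n_attrs, k, rounds=3):
        """Repeatedly re-score the current attribute subset on the
        out-of-the-box data and keep the top k, as in the IAS strategy.
        score_fn(selected) -> per-attribute scores for `selected`."""
        selected = np.arange(n_attrs)
        for _ in range(rounds):
            scores = score_fn(selected)
            selected = selected[select_attributes(scores, k)]
        return selected
    ```

    The key design point from the abstract is only that scoring happens on generated pseudo-data rather than on the seen classes, so the surviving attributes transfer to the real test distribution.
    
    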

    Adversarial Zero-shot Learning With Semantic Augmentation

    In situations where labels are expensive or difficult to obtain, deep neural networks for object recognition often struggle to achieve fair performance. Zero-shot learning addresses this problem: it aims to recognize objects of unseen classes by transferring knowledge from seen classes via a shared intermediate representation. Using the manifold structure of seen training samples is widely regarded as important for learning a robust mapping between samples and the intermediate representation, which is crucial for transferring the knowledge. However, irregular structures, such as a lack of variation among samples of certain classes and highly overlapping clusters of different classes, may result in an inappropriate mapping. Additionally, in a high-dimensional mapping space the hubness problem may arise, in which a single unseen class is likely to be assigned to samples of many different classes. To mitigate these problems, we use a generative adversarial network to synthesize samples with specified semantics, covering greater diversity within given classes as well as interpolated semantics of pairs of classes. We propose a simple yet effective method for applying the augmented semantics to hinge loss functions to learn a robust mapping. The proposed method was extensively evaluated on small- and large-scale datasets, showing a significant improvement over state-of-the-art methods.
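    The two ingredients named in the abstract, interpolated semantics of class pairs and a hinge loss on compatibility scores, can be sketched as follows. The convex-combination form of the interpolation and the margin of 1.0 are assumptions, not details from the paper:

    ```python
    import numpy as np

    def interpolate_semantics(sem_a, sem_b, alpha=0.5):
        """Convex combination of two class-attribute vectors, producing an
        'in-between' semantic on which synthetic samples can be conditioned."""
        return alpha * sem_a + (1.0 - alpha) * sem_b

    def hinge_ranking_loss(score_correct, score_wrong, margin=1.0):
        """Standard hinge ranking loss: penalize a wrong class whose
        compatibility score comes within `margin` of the correct class."""
        return max(0.0, margin - score_correct + score_wrong)
    ```

    Training the mapping on both real semantics and interpolated ones would populate the sparse regions between class clusters, which is the stated remedy for the irregular manifold structure.
    
    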