22,294 research outputs found

    Training from a Better Start Point: Active Self-Semi-Supervised Learning for Few Labeled Samples

    Training with fewer annotations is a key issue for applying deep models to various practical domains. To date, semi-supervised learning has achieved great success in training with few annotations. However, confirmation bias increases dramatically as the number of annotations decreases, making it difficult to reduce the number of annotations further. Based on the observation that the quality of pseudo-labels early in semi-supervised training plays an important role in mitigating confirmation bias, in this paper we propose an active self-semi-supervised learning (AS3L) framework. AS3L bootstraps semi-supervised models with prior pseudo-labels (PPL), which are obtained by label propagation over self-supervised features. We show that the accuracy of PPL is affected not only by the quality of the features but also by the selection of the labelled samples, and we develop active learning and label propagation strategies to obtain better PPL. Consequently, our framework can significantly improve model performance with few annotations while reducing training time. Experiments on four semi-supervised learning benchmarks demonstrate the effectiveness of the proposed methods: our method outperforms the baseline by an average of 7% on the four datasets and surpasses the baseline's accuracy while taking about one third of the training time. Comment: 12 pages, 8 figures
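    As a rough illustration of the PPL step described in this abstract, the sketch below runs classic graph-based label propagation (in the style of Zhou et al.'s local-and-global-consistency method) over fixed self-supervised features. It is not the authors' code: the function name, the neighbourhood size k, and the diffusion weight alpha are all illustrative assumptions.

    ```python
    import numpy as np

    def propagate_labels(features, labeled_idx, labels, n_classes, alpha=0.99, k=50):
        """Graph-based label propagation over fixed features.

        features    : (n, d) array, e.g. self-supervised embeddings
        labeled_idx : indices of the few labelled samples
        labels      : class ids for those samples
        Returns a hard pseudo-label for every sample.
        """
        n = features.shape[0]
        f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
        sim = f @ f.T                                  # cosine affinity
        np.fill_diagonal(sim, 0.0)
        # keep only the k strongest neighbours per node (sparse graph)
        keep = np.argsort(sim, axis=1)[:, -k:]
        W = np.zeros_like(sim)
        rows = np.arange(n)[:, None]
        W[rows, keep] = np.clip(sim[rows, keep], 0.0, None)
        W = np.maximum(W, W.T)                         # symmetrise
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
        S = D_inv_sqrt @ W @ D_inv_sqrt                # normalised graph
        Y = np.zeros((n, n_classes))
        Y[labeled_idx, labels] = 1.0                   # seed with the few labels
        F = np.linalg.solve(np.eye(n) - alpha * S, Y)  # closed-form diffusion
        return F.argmax(axis=1)                        # prior pseudo-labels (PPL)
    ```

    The closed-form solve is fine for a few thousand samples; at larger scale one would iterate F ← alpha·S·F + (1-alpha)·Y instead.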

    Energy consumption modelling using deep learning embedded semi-supervised learning

    Reducing energy consumption in the steel industry is a global issue that governments are actively taking measures to address. A steel plant can manage its energy better if consumption can be modelled and predicted. Existing methods for energy consumption modelling rely on large quantities of labelled data; when labelled energy consumption data is scarce, modelling and prediction become difficult. The purpose of this study is to establish an energy value prediction model through a big data-driven approach. Because labelled energy data is often limited and expensive to obtain, while unlabelled data is abundant in real-world industry, a semi-supervised learning approach, deep learning embedded semi-supervised learning (DLeSSL), is proposed to tackle the issue. With DLeSSL, unlabelled data is labelled using a semi-supervised method with an embedded deep learning technique, thereby expanding the labelled data set. An experimental study using a large amount of furnace energy consumption data shows the merits of the proposed approach. Results reveal that the DLeSSL-based deep learning model outperforms both its supervised and label-propagation-based counterparts when labelled data is limited. In addition, the effect of the sizes of the labelled and unlabelled data sets on performance is also reported.
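    To make the pseudo-labelling idea concrete, here is a generic self-training sketch for a regression task like energy prediction: a model trained on the small labelled set labels the unlabelled pool, and the expanded set is used to retrain. This is only the generic pattern DLeSSL builds on, not the authors' method; the scikit-learn model choice and every name are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def self_train_energy_model(X_lab, y_lab, X_unlab, rounds=3):
        """Expand a small labelled energy dataset with model-generated
        pseudo-labels, then retrain on the enlarged set."""
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        X_train, y_train = X_lab, y_lab
        for _ in range(rounds):
            model.fit(X_train, y_train)
            pseudo = model.predict(X_unlab)        # label the unlabelled pool
            X_train = np.vstack([X_lab, X_unlab])  # expanded training set
            y_train = np.concatenate([y_lab, pseudo])
        return model
    ```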

    Visual Learning in Limited-Label Regime.

    PhD thesis. Abstract: Deep learning algorithms and architectures have greatly advanced the state of the art in a wide variety of computer vision tasks, such as object recognition and image retrieval. To achieve human- or even super-human-level performance in most visual recognition tasks, large collections of labelled data are generally required to formulate meaningful supervision signals for model training. The standard supervised learning paradigm, however, is undesirable in several respects. First, constructing large-scale labelled datasets not only requires exhaustive manual annotation effort, but may also be legally prohibited. Second, deep neural networks trained with full label supervision on a limited amount of labelled data are weak at generalising to new unseen data captured from a different data distribution. This thesis addresses the critical problem of lacking sufficient label annotations in deep learning. More specifically, we investigate four deep learning paradigms in the limited-label regime: close-set semi-supervised learning, open-set semi-supervised learning, open-set cross-domain learning, and unsupervised learning. The former two paradigms are explored in visual classification, which aims to recognise different categories in images; the latter two are studied in visual search, particularly in person re-identification, which aims to discriminate different but similar persons in a finer-grained manner and can be extended to the discrimination of other objects of high visual similarity. We detail our studies of these paradigms as follows.

    Chapter 3: Close-Set Semi-Supervised Learning (Figure 1 (I)) is a fundamental semi-supervised learning paradigm that aims to learn from a small set of labelled data and a large set of unlabelled data, where the two sets are assumed to lie in the same label space. Existing semi-supervised deep learning methods often rely on the up-to-date "network-in-training" to formulate the semi-supervised learning objective, ignoring both the discriminative feature representation and the model inference uncertainty revealed by the network in preceding learning iterations, referred to as the memory of model learning. In this work, we proposed to augment the deep neural network with a lightweight memory mechanism [Chen et al., 2018b], which captures the underlying manifold structure of the labelled data at the per-class level and imposes auxiliary unsupervised constraints that fit the unlabelled data to the underlying manifolds. This work established a simple yet efficient close-set semi-supervised deep learning scheme to boost model generalisation in visual classification by learning from sparsely labelled data and abundant unlabelled data.
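    As a loose illustration of the memory idea above (not the thesis implementation), the PyTorch sketch below keeps one exponentially averaged feature slot per class and derives an unsupervised constraint for unlabelled features, here a simple entropy-minimisation stand-in for the chapter's memory-based constraints. All names and the momentum value are illustrative.

    ```python
    import torch
    import torch.nn.functional as F

    class ClassMemory:
        """Toy per-class feature memory: one EMA slot per class, used to
        pull unlabelled features towards their nearest class manifold."""

        def __init__(self, n_classes, dim, momentum=0.9):
            self.slots = torch.zeros(n_classes, dim)
            self.m = momentum

        @torch.no_grad()
        def update(self, feats, labels):
            # refresh each class slot with the mean labelled feature
            for c in labels.unique():
                mean_c = feats[labels == c].mean(dim=0)
                self.slots[c] = self.m * self.slots[c] + (1 - self.m) * mean_c

        def unlabeled_loss(self, feats):
            # soft assignment of each unlabelled feature to the class slots
            probs = F.softmax(feats @ self.slots.t(), dim=1)
            # sharpen towards the most likely slot (entropy minimisation)
            return -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    ```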
    [Figure 1: An overview of the main studies in this thesis, covering four deep learning paradigms in the limited-label regime: (I) close-set semi-supervised learning (Chapter 3), which propagates labels between a labelled and an unlabelled data pool sharing the same label space [Chen et al. ECCV18]; (II) open-set semi-supervised learning (Chapter 4), which selectively propagates labels under a partially shared label space [Chen et al. AAAI20]; (III) open-set cross-domain learning (Chapter 5), which transfers labels across disjoint label spaces and domains [Chen et al. ICCV19]; and (IV) unsupervised learning (Chapter 6), which discovers labels from an unlabelled pool with unknown label space [Chen et al. BMVC18]. The former two paradigms address semi-supervised learning for visual classification, i.e. recognising different visual categories; the latter two address semi-supervised and unsupervised learning for visual search, i.e. discriminating different instances such as persons.]

    Chapter 4: Open-Set Semi-Supervised Learning (Figure 1 (II)) further explores the potential of learning from abundant noisy unlabelled data. While existing SSL methods artificially assume that the small labelled set and the large unlabelled set are drawn from the same class distribution, we consider a more realistic and uncurated open-set semi-supervised learning paradigm: since visual data keeps growing in many visual recognition tasks, it is implausible to pre-define a fixed label space for the unlabelled data in advance. To investigate this new and challenging learning paradigm, we established the first systematic work to tackle the open-set semi-supervised learning problem in visual classification with a novel approach: uncertainty-aware self-distillation [Chen et al., 2020b], which selectively propagates soft label assignments on the unlabelled visual data for model optimisation. Built upon an accumulative ensembling strategy, our approach jointly captures model uncertainty to discard out-of-distribution samples and propagates less overconfident label assignments on the unlabelled data to avoid catastrophic error propagation. As one of the pioneers to explore this learning paradigm, this work opens up new avenues for research in more realistic semi-supervised learning scenarios.
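    A minimal sketch of the accumulative-ensembling idea, assuming a per-sample EMA of predicted distributions and a confidence threshold for discarding likely out-of-distribution samples; this is an illustrative reading of the approach, not the published implementation, and the momentum and threshold tau are made-up values.

    ```python
    import torch

    class AccumulativePseudoLabels:
        """Keep an EMA of each unlabelled sample's predicted class
        distribution across epochs; distil only confident soft labels."""

        def __init__(self, n_unlabeled, n_classes, momentum=0.9, tau=0.5):
            self.ens = torch.full((n_unlabeled, n_classes), 1.0 / n_classes)
            self.m, self.tau = momentum, tau

        @torch.no_grad()
        def update(self, idx, probs):
            # accumulate this epoch's softmax outputs into the ensemble
            self.ens[idx] = self.m * self.ens[idx] + (1 - self.m) * probs

        def soft_targets(self, idx):
            targets = self.ens[idx]
            conf = targets.max(dim=1).values
            keep = conf > self.tau   # drop uncertain / out-of-distribution
            return targets[keep], keep
    ```

    The retained soft targets would then drive a distillation loss (e.g. cross-entropy against the current model's predictions) on the surviving unlabelled samples.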
    Chapter 5: Open-Set Cross-Domain Learning (Figure 1 (III)) is a challenging semi-supervised learning paradigm of great practical value. When a visual recognition model is trained in one operating visual environment (the source domain, such as a laboratory, simulation, or known scene) and then deployed to unknown real-world scenes (the target domain), the model is likely to generalise poorly in the unseen target domain, especially when the target domain data comes from a disjoint label space with heterogeneous domain drift. Unlike prior works in domain adaptation that mostly consider a shared label space across two domains, we studied the more demanding open-set domain adaptation problem, where both label spaces and domains are disjoint across the labelled and unlabelled datasets. To learn from these heterogeneous datasets, we designed a novel domain context rendering scheme for open-set cross-domain learning in visual search [Chen et al., 2019a], particularly for person re-identification, a realistic testbed for evaluating the representational power of fine-grained discrimination among very similar instances. Our key idea is to transfer the source identity labels into diverse target domain contexts. Our approach enables the generation of an abundant amount of synthetic training data that selectively blends label information from the source domain with context information from the target domain. By training on such synthetic data, our model learns a more identity-discriminative and context-invariant representation for effective visual search in the target domain. This work sets a new state of the art in cross-domain person re-identification and provides a novel and generic solution for open-set domain adaptation.

    Chapter 6: Unsupervised Learning (Figure 1 (IV)) considers the learning scenario with no labelled data. In this work, we explore unsupervised learning in visual search, particularly for person re-identification, a realistic testbed for studying unsupervised learning, since person identity labels are generally very difficult to acquire over a wide surveillance space [Chen et al., 2018a]. In contrast to existing person re-identification methods that require exhaustive manual labelling of cross-view pairwise data, we aim to learn visual representations without using any manual labels. Our general rationale is to formulate auxiliary supervision signals that learn to uncover the underlying data distribution and consequently group the visual data in a meaningful and structured way. To learn from the unlabelled data in a fully unsupervised manner, we proposed a novel deep association learning scheme to uncover the underlying data-to-data association. Specifically, two unsupervised constraints, temporal consistency and cycle consistency, are formulated upon neighbourhood consistency to progressively associate visual features within and across video sequences of tracked persons (see the sketch after this abstract). This work sets the new state of the art in video-based unsupervised person re-identification and advances the automatic exploitation of video data in real-world surveillance.

    In summary, the goal of these studies is to build efficient and scalable visual learning models in the limited-label regime, which learn more powerful and reliable representations from complex unlabelled visual data and consequently facilitate better visual recognition and visual search.
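    Referenced from Chapter 6 above: a toy sketch of the cycle-consistency constraint, assuming per-tracklet features from two camera views, where a cross-view match is kept only if it is a mutual nearest neighbour. Function and variable names are invented for illustration and do not come from the thesis.

    ```python
    import torch
    import torch.nn.functional as F

    def cycle_consistent_pairs(feats_a, feats_b):
        """Tracklet i in view A and its nearest neighbour j in view B are
        associated only if i is also j's nearest neighbour back in view A."""
        sim = F.normalize(feats_a, dim=1) @ F.normalize(feats_b, dim=1).t()
        a_to_b = sim.argmax(dim=1)          # best match in B for each A tracklet
        b_to_a = sim.argmax(dim=0)          # best match in A for each B tracklet
        idx = torch.arange(feats_a.size(0))
        mutual = b_to_a[a_to_b] == idx      # the cycle A -> B -> A closes
        return idx[mutual], a_to_b[mutual]  # associated (A, B) tracklet pairs
    ```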

    Deep Visual Learning with Less Labeled Data

    The rapid development of deep learning has revolutionized various vision tasks, but this success relies heavily on supervised training with large-scale labeled datasets, which can be costly and laborious to acquire. In this context, semi-supervised learning (SSL) has emerged as a promising approach to facilitating deep visual learning with less labeled data. Despite numerous research endeavours in SSL, some technical issues, e.g., low utilization of unlabeled data and instance discrimination, have not been well studied. This thesis emphasizes the importance of these issues and proposes new methods for semi-supervised classification (SSC) and semantic segmentation (SSS). In SSC, recent studies are limited by excluding samples with low-confidence predictions and by underutilizing label information. Hence, we propose a Label-guided Self-training approach to SSL, which exploits label information through a class-aware contrastive loss and a buffer-aided label propagation algorithm to fully utilize all unlabeled data. Furthermore, most SSC methods assume that the labeled and unlabeled datasets share an identical class distribution, which is hard to meet in practice; the distribution mismatch between the two sets causes severe bias and performance degradation. We thus propose Distribution Consistency SSL to address the mismatch from a distribution perspective. In SSS, most studies treat all unlabeled data equally and barely consider the different training difficulties among unlabeled instances. We highlight instance differences and propose instance-specific and model-adaptive supervision for SSS. We also study semi-supervised medical image segmentation, where labeled data is scarce. Unlike current, increasingly complicated methods, we propose a simple yet effective approach that applies data perturbation and model stabilization strategies to boost performance. Extensive experiments and ablation studies verify the superiority of the proposed methods on SSC and SSS benchmarks.
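    As a rough sketch of what a class-aware contrastive loss can look like (a generic supervised-contrastive formulation, not necessarily this thesis' exact loss), the PyTorch function below attracts embeddings that share a (pseudo-)label and repels all others; the temperature value is illustrative.

    ```python
    import torch
    import torch.nn.functional as F

    def class_aware_contrastive_loss(feats, labels, temperature=0.1):
        """Embeddings with the same (pseudo-)label attract; others repel."""
        z = F.normalize(feats, dim=1)
        sim = z @ z.t() / temperature
        n = z.size(0)
        eye = torch.eye(n, dtype=torch.bool)
        sim.masked_fill_(eye, float('-inf'))       # ignore self-pairs
        log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
        pos = (labels[:, None] == labels[None, :]) & ~eye
        # average log-likelihood of each sample's positives
        loss = -(log_prob * pos.float()).sum(dim=1) / pos.sum(dim=1).clamp_min(1)
        return loss.mean()
    ```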