
    A review of domain adaptation without target labels

    Domain adaptation has become a prominent problem setting in machine learning and related fields. This review asks the question: how can a classifier learn from a source domain and generalize to a target domain? We present a categorization of approaches, divided into what we refer to as sample-based, feature-based and inference-based methods. Sample-based methods focus on weighting individual observations during training based on their importance to the target domain. Feature-based methods revolve around mapping, projecting and representing features such that a source classifier performs well on the target domain, while inference-based methods incorporate adaptation into the parameter estimation procedure, for instance through constraints on the optimization procedure. Additionally, we review a number of conditions that allow for formulating bounds on the cross-domain generalization error. Our categorization highlights recurring ideas and raises questions important to further research. Comment: 20 pages, 5 figures
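    As a rough illustration of the sample-based category, the sketch below estimates importance weights with a probabilistic domain classifier, a common density-ratio trick rather than a method taken from the review itself; all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_source, X_target):
    """Estimate p_target(x) / p_source(x) with a probabilistic domain classifier."""
    X = np.vstack([X_source, X_target])
    d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])  # 0 = source, 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p_target = clf.predict_proba(X_source)[:, 1]
    # density-ratio estimate, with a correction factor for unequal sample sizes
    return (p_target / (1.0 - p_target)) * (len(X_source) / len(X_target))
```

    The resulting weights can then be passed to any estimator that accepts per-sample weights, for example via scikit-learn's sample_weight argument when fitting the source classifier.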

    Semantically Consistent Regularization for Zero-Shot Recognition

    The role of semantics in zero-shot learning is considered. The effectiveness of previous approaches is analyzed according to the form of supervision provided. While some learn semantics independently, others only supervise the semantic subspace explained by training classes. Thus, the former is able to constrain the whole space but lacks the ability to model semantic correlations. The latter addresses this issue but leaves part of the semantic space unsupervised. This complementarity is exploited in a new convolutional neural network (CNN) framework, which proposes the use of semantics as constraints for recognition. Although a CNN trained for classification has no transfer ability, this can be encouraged by learning a hidden semantic layer together with a semantic code for classification. Two forms of semantic constraints are then introduced. The first is a loss-based regularizer that introduces a generalization constraint on each semantic predictor. The second is a codeword regularizer that favors semantic-to-class mappings consistent with prior semantic knowledge while allowing these to be learned from data. Significant improvements over the state-of-the-art are achieved on several datasets. Comment: Accepted to CVPR 201
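    The abstract does not spell out the architecture, so the following PyTorch sketch is only one plausible reading: a hidden semantic layer feeding a class codeword layer, with a loss that supervises the semantic predictions and keeps the codewords near prior class-attribute codes. The module names, the MSE form of both regularizers, and the weighting factors are assumptions, not the paper's exact formulation.

```python
import torch.nn as nn
import torch.nn.functional as F

class SemanticCNN(nn.Module):
    """Illustrative classifier with a hidden semantic layer (not the paper's exact architecture)."""
    def __init__(self, backbone, feat_dim, num_attributes, class_attributes):
        super().__init__()
        num_classes = class_attributes.shape[0]
        self.backbone = backbone                              # any CNN feature extractor
        self.semantics = nn.Linear(feat_dim, num_attributes)  # hidden semantic layer
        self.codewords = nn.Linear(num_attributes, num_classes, bias=False)
        self.codewords.weight.data.copy_(class_attributes)    # initialise with prior class-attribute codes
        self.register_buffer("prior_codewords", class_attributes.clone())

    def forward(self, x):
        s = self.semantics(self.backbone(x))  # predicted semantics for the input
        return self.codewords(s), s           # class scores and semantic predictions

def semantic_loss(model, logits, s, y, attributes, lam=0.1, mu=0.1):
    ce = F.cross_entropy(logits, y)       # standard classification loss
    sem = F.mse_loss(s, attributes[y])    # supervise each semantic predictor (loss-based regularizer)
    code = F.mse_loss(model.codewords.weight, model.prior_codewords)  # keep codewords near prior codes
    return ce + lam * sem + mu * code
```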

    Damage localisation using disparate damage states via domain adaptation

    A significant challenge of structural health monitoring (SHM) is the lack of labelled data collected from damage states. Consequently, the collected data can be incomplete, making it difficult to undertake machine learning tasks to detect or predict the full range of damage states a structure may experience. Transfer learning is a helpful solution, where data from (source) structures containing damage labels can be used to transfer knowledge to (target) structures, for which damage labels do not exist. Machine learning models are then developed that generalise to the target structure. In practical applications, it is unlikely that the source and the target structures contain the same damage states or experience the same environmental and operational conditions, which can significantly impact the collected data. This is the first study to explore the possibility of transfer learning for damage localisation in SHM when the damage states and the environmental variations in the source and target datasets are disparate. Specifically, using several domain adaptation methods, this article localises severe damage states at a target structure, using labelled information from minor damage states at a source structure. By minimising the distance between the marginal and conditional distributions of the source and the target structures, this article successfully localises damage states of disparate severities, under varying environmental and operational conditions. The effect of partial and universal domain adaptation, where the number of damage states in the source and target datasets differ, is also explored in order to mimic realistic industrial applications of these methods.
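    The abstract does not name the distance being minimised beyond "marginal and conditional distributions"; a common choice in domain adaptation is the maximum mean discrepancy (MMD), with the conditional term approximated per class using target pseudo-labels (for example, from a source-trained classifier). The sketch below computes such a joint MMD objective purely for illustration and is not the article's exact algorithm.

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between two samples under an RBF kernel."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2 * k(Xs, Xt).mean()

def joint_mmd(Xs, ys, Xt, yt_pseudo, gamma=1.0):
    """Marginal MMD plus class-conditional MMD computed with target pseudo-labels."""
    total = mmd_rbf(Xs, Xt, gamma)
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) and len(Xt_c):
            total += mmd_rbf(Xs_c, Xt_c, gamma)
    return total
```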

    Statistical alignment in transfer learning to address the repair problem: An experimental case study

    Repair is a critical step in the maintenance of civil structures to ensure safe operation. However, repair can pose a problem for data-driven approaches to long-term structural health monitoring, because repairs can change the underlying distributions of the data, which can invalidate models trained on pre-repair data. As a result, models previously trained on pre-repair information fail to generalise to post-repair data, reducing their performance and misrepresenting the actual behaviour of structures. This paper suggests a population-based structural health monitoring approach to address the problem of repair in long-term monitoring of a mast structure, by exploring domain adaptation techniques developed for transfer learning. A combined approach of normal condition alignment and Dirichlet process mixture models is adopted here for damage detection, which can operate unimpeded by post-repair shifts in the data distributions. The method is able to correctly identify 99% of the damage data with a false positive rate of around 1.6%. Moreover, it is able to detect environmental variations, such as stiffening due to freezing conditions, that can adversely affect the dynamic behaviour of structures.
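    A minimal sketch of the combined pipeline described here, assuming that normal condition alignment amounts to standardising each structure's features by its own normal-condition statistics, and using scikit-learn's BayesianGaussianMixture (a truncated Dirichlet process mixture) as the density model; the component count and the novelty threshold are illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def normal_condition_align(X, X_normal):
    """Standardise features by statistics of the structure's own normal-condition data."""
    mu, sigma = X_normal.mean(axis=0), X_normal.std(axis=0) + 1e-12
    return (X - mu) / sigma

def fit_detector(X_normal, n_components=10):
    """Fit a Dirichlet-process mixture to aligned normal-condition data."""
    dpgmm = BayesianGaussianMixture(
        n_components=n_components,
        weight_concentration_prior_type="dirichlet_process",
        max_iter=500,
    )
    return dpgmm.fit(normal_condition_align(X_normal, X_normal))

def detect_damage(dpgmm, X_new, X_normal_new, threshold):
    """Flag observations whose log-likelihood under the normal-condition model falls below a threshold."""
    return dpgmm.score_samples(normal_condition_align(X_new, X_normal_new)) < threshold
```

    Because both datasets are aligned to their own normal condition before modelling, the same fitted mixture can, under this assumption, be reused across the pre- and post-repair periods without retraining.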