
    On Fine-tuned Deep Features for Unsupervised Domain Adaptation

    Prior feature transformation based approaches to Unsupervised Domain Adaptation (UDA) employ deep features extracted by pre-trained deep models without fine-tuning them on the source or target domain data of a particular domain adaptation task. In contrast, end-to-end learning based approaches optimise the pre-trained backbones and the customised adaptation modules simultaneously to learn domain-invariant features for UDA. In this work, we explore the potential of combining fine-tuned features with feature transformation based UDA methods for improved domain adaptation performance. Specifically, we integrate the prevalent progressive pseudo-labelling techniques into the fine-tuning framework to extract fine-tuned features which are subsequently used in a state-of-the-art feature transformation based domain adaptation method, SPL (Selective Pseudo-Labeling). Thorough experiments with multiple deep models, including ResNet-50/101 and DeiT-small/base, demonstrate that the combination of fine-tuned features and SPL can achieve state-of-the-art performance on several benchmark datasets.
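The progressive pseudo-labelling idea at the core of this abstract can be sketched as follows. This is a minimal illustration only: a nearest-centroid classifier and a distance-margin confidence heuristic stand in for the fine-tuned deep features and the full SPL method, and all function names are assumptions, not the paper's implementation.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # Class centroids from labelled (source-domain) features.
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict_with_confidence(X, classes, centroids):
    # Distance to every centroid; margin between best and
    # second-best distance serves as a confidence proxy.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    idx = d.argmin(axis=1)
    sorted_d = np.sort(d, axis=1)
    conf = sorted_d[:, 1] - sorted_d[:, 0]
    return classes[idx], conf

def progressive_pseudo_label(Xs, ys, Xt, rounds=3):
    # Each round, pseudo-label a growing fraction of the most
    # confident target samples and retrain on source + pseudo-labels.
    X_train, y_train = Xs, ys
    labels = None
    for r in range(1, rounds + 1):
        classes, cents = nearest_centroid_fit(X_train, y_train)
        labels, conf = predict_with_confidence(Xt, classes, cents)
        k = int(len(Xt) * r / rounds)        # progressively enlarge the set
        keep = np.argsort(-conf)[:k]         # most confident target samples
        X_train = np.concatenate([Xs, Xt[keep]])
        y_train = np.concatenate([ys, labels[keep]])
    return labels
```

The progressive schedule (admitting `1/rounds`, `2/rounds`, ... of the target set) is the key idea: early rounds trust only the most confident pseudo-labels, reducing error propagation as the classifier adapts.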

    Data Augmentation with norm-VAE and Selective Pseudo-Labelling for Unsupervised Domain Adaptation

    We address the Unsupervised Domain Adaptation (UDA) problem in image classification from a new perspective. In contrast to most existing works, which either align the data distributions or learn domain-invariant features, we directly learn a unified classifier for both the source and target domains in the high-dimensional homogeneous feature space without explicit domain alignment. To this end, we employ the effective Selective Pseudo-Labelling (SPL) technique to take advantage of the unlabelled samples in the target domain. Surprisingly, the data distribution discrepancy across the source and target domains can be well handled by a computationally simple classifier (e.g., a shallow Multi-Layer Perceptron) trained in the original feature space. Besides, we propose a novel generative model, norm-VAE, to generate synthetic features for the target domain as a data augmentation strategy to enhance classifier training. Experimental results on several benchmark datasets demonstrate that the pseudo-labelling strategy itself can lead to performance comparable to many state-of-the-art methods, whilst the use of norm-VAE for feature augmentation can further improve performance in most cases. As a result, our proposed methods (i.e. naiveSPL and norm-VAE-SPL) achieve performance comparable to state-of-the-art methods, with average accuracies of 93.4% and 90.4% on the Office-Caltech and ImageCLEF-DA datasets, and achieve competitive performance on the Digits, Office-31 and Office-Home datasets with average accuracies of 97.2%, 87.6% and 68.6% respectively.
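The feature-augmentation idea can be illustrated without the learned generative model: norm-VAE itself is a trained variational autoencoder, but as a rough stand-in for the concept of sampling synthetic target-domain features to enlarge the classifier's training set, the sketch below fits a diagonal Gaussian per (pseudo-)class and samples from it. The function name and the Gaussian model are assumptions for illustration, not the paper's norm-VAE.

```python
import numpy as np

def gaussian_feature_augment(X, y, n_per_class, rng):
    """Sample synthetic features from per-class diagonal Gaussians.

    A crude stand-in for a learned feature generator such as norm-VAE:
    (pseudo-)labelled target features are modelled class by class and
    extra samples are drawn to enlarge the classifier's training set.
    """
    X_aug, y_aug = [X], [y]
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        sigma = Xc.std(axis=0) + 1e-6        # avoid zero variance
        X_aug.append(rng.normal(mu, sigma, size=(n_per_class, X.shape[1])))
        y_aug.append(np.full(n_per_class, c))
    return np.concatenate(X_aug), np.concatenate(y_aug)
```

A real generative model earns its keep where class-conditional feature distributions are far from Gaussian; the augmentation interface, however, is the same: real features in, real plus synthetic labelled features out.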

    Overcoming laser diode nonlinearity issues in multi-channel radio-over-fiber systems

    The authors demonstrate how external light injection into a directly modulated laser diode can be used to enhance the performance of a multi-channel radio-over-fiber system operating at a frequency of 6 GHz. Performance improvements of up to 2 dB were achieved by linearisation of the laser's modulation response. To verify the experimental work, a simulation of the complete system was carried out in Matlab. Good correlation was observed between experimental and simulated results.
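Why linearising the modulation response matters for a multi-channel system can be seen from a two-tone test: a nonlinear response mixes channels into third-order intermodulation products (at 2f1 − f2 and 2f2 − f1) that land back in-band. The sketch below pushes two 6-GHz-band tones through a memoryless cubic model and measures the IMD3-to-carrier ratio; the cubic coefficient `a3` and all numbers are illustrative assumptions, not measurements or the injection-locking model from the paper.

```python
import numpy as np

def imd3_level(a3, f1=6.00e9, f2=6.01e9, fs=1e11, n=100_000):
    """IMD3-to-carrier ratio (dB) for y = x + a3*x^3 under a two-tone test."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    y = x + a3 * x**3                      # memoryless cubic nonlinearity
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # Third-order intermodulation product falls at 2*f1 - f2, in-band.
    bin_imd = np.argmin(np.abs(freqs - (2 * f1 - f2)))
    bin_sig = np.argmin(np.abs(freqs - f1))
    return 20 * np.log10(spec[bin_imd] / spec[bin_sig])
```

Shrinking `a3` (which is what linearisation via external light injection effectively does to the laser's modulation response) directly lowers the IMD3 floor, e.g. `imd3_level(0.01)` sits well below `imd3_level(0.1)`.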

    Observational constraints on cosmic neutrinos and dark energy revisited

    Using several cosmological observations, i.e. the cosmic microwave background anisotropies (WMAP), the weak gravitational lensing (CFHTLS), the measurements of baryon acoustic oscillations (SDSS+WiggleZ), the most recent observational Hubble parameter data, the Union2.1 compilation of type Ia supernovae, and the HST prior, we impose constraints on the sum of neutrino masses ($\sum m_\nu$), the effective number of neutrino species ($N_{\rm eff}$) and the dark energy equation of state ($w$), individually and collectively. We find that a tight upper limit on $\sum m_\nu$ can be extracted from the full data combination if $N_{\rm eff}$ and $w$ are fixed. However, this upper bound is severely weakened if $N_{\rm eff}$ and $w$ are allowed to vary. This result naturally raises questions about the robustness of the strict upper bounds on $\sum m_\nu$ previously reported in the literature. The best-fit values from our most generalized constraint read $\sum m_\nu=0.556^{+0.231}_{-0.288}$ eV, $N_{\rm eff}=3.839\pm0.452$, and $w=-1.058\pm0.088$ at the 68% confidence level, which shows a firm lower limit on the total neutrino mass, favors an extra light degree of freedom, and supports the cosmological constant model. The current weak lensing data are already helpful in constraining cosmological model parameters for fixed $w$. The Hubble parameter dataset gains numerous advantages over supernovae when $w=-1$, particularly its illuminating power in constraining $N_{\rm eff}$. As long as $w$ is included as a free parameter, it is still the standardizable candles of type Ia supernovae that play the most dominant role in the parameter constraints.
    Comment: 39 pages, 15 figures, 7 tables, accepted to JCA
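The abstract's central point, that fixing $w$ yields a tight bound on $\sum m_\nu$ while letting $w$ vary weakens it, is a generic consequence of parameter degeneracies under marginalisation. The toy below illustrates it with a correlated two-parameter Gaussian posterior; the means, covariance, and the two-parameter setup are purely illustrative assumptions, not the paper's likelihood or data.

```python
import numpy as np

def upper_limit(samples, cl=0.95):
    # One-sided upper bound at the given confidence level.
    return np.quantile(samples, cl)

rng = np.random.default_rng(1)

# Toy posterior with a positive mnu--w degeneracy (illustrative numbers).
mean = [0.3, -1.0]                 # (sum of neutrino masses in eV, w)
cov = [[0.04, 0.015],
       [0.015, 0.01]]
samples = rng.multivariate_normal(mean, cov, size=200_000)
mnu, w = samples[:, 0], samples[:, 1]

# Fixing w = -1 (a thin conditional slice) vs. marginalising over w.
mnu_fixed_w = mnu[np.abs(w + 1.0) < 0.01]
bound_fixed = upper_limit(mnu_fixed_w)
bound_marginal = upper_limit(mnu)
```

Because the two parameters are correlated in this toy, conditioning on $w$ removes part of the posterior variance in $\sum m_\nu$, so `bound_fixed < bound_marginal`; freeing $w$ restores that variance, mirroring the weakening of the bound the abstract reports.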