16 research outputs found
Multi-Source and Source-Private Cross-Domain Learning For Visual Recognition
Indiana University-Purdue University Indianapolis (IUPUI)
Domain adaptation is one of the most active research directions for addressing the annotation-scarcity problem of deep learning. However, the general domain adaptation setting is often inconsistent with practical industrial scenarios. In this thesis, we focus on the two concerns below.
First, labeled data are generally collected from multiple domains; in other words, multi-source adaptation is the more common situation. Simply extending single-source approaches to the multi-source case can cause sub-optimal inference, so specialized multi-source adaptation methods are essential. The main challenge in the multi-source scenario is a more complex divergence structure: not only does the divergence between the target and each source play a role, the divergences among distinct sources matter as well. However, the significance of maintaining consistency among multiple sources has not received enough attention in previous work. In this thesis, we propose an Enhanced Consistency Multi-Source Adaptation (EC-MSA) framework to address it from three perspectives. First, we mitigate feature-level discrepancy by cross-domain conditional alignment, narrowing the divergence between each source and the target domain in a class-wise manner. Second, we enhance multi-source consistency via dual mix-up, diminishing the disagreements among different sources. Third, we deploy a target distilling mechanism to handle the uncertainty of target predictions, aiming to provide high-quality pseudo-labeled target samples that benefit the previous two aspects. Extensive experiments conducted on several common benchmark datasets demonstrate that our model outperforms state-of-the-art methods.
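The dual mix-up idea can be sketched as a pair of convex combinations: one among sources (enforcing consistency) and one between a source and pseudo-labeled target samples (bridging toward the target). A minimal sketch assuming batches as NumPy arrays with soft labels; the names `mixup`, `dual_mixup`, and the weight `lam` are illustrative, not taken from the thesis code.

```python
import numpy as np

def mixup(x1, y1, x2, y2, lam):
    """Convexly combine two batches of inputs and (soft) labels."""
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def dual_mixup(src_a, src_b, tgt, lam=0.7):
    """Source-source mix (consistency among sources) plus
    source-target mix (bridging toward the target domain)."""
    xa, ya = src_a
    xb, yb = src_b
    xt, yt = tgt  # yt: distilled pseudo-labels for target samples
    ss = mixup(xa, ya, xb, yb, lam)   # source-source interpolation
    st = mixup(xa, ya, xt, yt, lam)   # source-target interpolation
    return ss, st
```

Both mixed batches would then be fed to the shared classifier with the interpolated labels as soft targets.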
Second, data privacy and security are necessary in practice; that is, we hope to keep the raw data stored locally while still obtaining a satisfactory model. In such a case, the risk of data leakage decreases greatly. It is therefore natural to combine the federated learning paradigm with domain adaptation. Under the source-private setting, the main challenge is to expose information from the source domain to the target domain while ensuring that the communication process is safe enough. In this thesis, we propose a method named Fourier Transform-Assisted Federated Domain Adaptation (FTA-FDA) to alleviate these difficulties in two ways. We apply the Fast Fourier Transform to the raw data and transfer only the amplitude spectra during communication. Frequency-space interpolations between the two domains are then conducted, minimizing the discrepancies while keeping the domains in contact and the raw data safe. Moreover, we perform prototype alignment using the model weights together with target features, reducing the discrepancy at the class level. Experiments on Office-31 demonstrate the effectiveness and competitiveness of our approach, and further analyses show that our algorithm helps protect privacy and security.
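The amplitude-only exchange can be sketched with FFTs: the source keeps its own phase (which carries structure) and interpolates its amplitude spectrum toward the communicated target amplitude. A minimal illustration assuming single-channel images as 2-D NumPy arrays; the name `amplitude_interpolate` and the weight `eta` are illustrative, not from the thesis code.

```python
import numpy as np

def amplitude_interpolate(src_img, tgt_amp, eta=0.5):
    """Interpolate the source amplitude spectrum toward a shared
    target amplitude while keeping the source phase. Only amplitude
    spectra are exchanged, so raw pixels never leave the source site."""
    f_src = np.fft.fft2(src_img)
    amp_src, pha_src = np.abs(f_src), np.angle(f_src)
    amp_mix = (1.0 - eta) * amp_src + eta * tgt_amp  # frequency-space mix
    f_mix = amp_mix * np.exp(1j * pha_src)           # recombine with source phase
    return np.real(np.fft.ifft2(f_mix))
```

With `eta = 0` the source image is recovered unchanged; larger `eta` moves its low-level style toward the target domain.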
Source-free Domain Adaptive Human Pose Estimation
Human Pose Estimation (HPE) is widely used in various fields, including
motion analysis, healthcare, and virtual reality. However, the great expense
of labeled real-world datasets presents a significant challenge for HPE. To
overcome this, one approach is to train HPE models on synthetic datasets and
then perform domain adaptation (DA) on real-world data. Unfortunately, existing
DA methods for HPE neglect data privacy and security by using both source and
target data in the adaptation process. To this end, we propose a new task,
named source-free domain adaptive HPE, which aims to address the challenges of
cross-domain learning of HPE without access to source data during the
adaptation process. We further propose a novel framework that consists of three
models: source model, intermediate model, and target model, which explores the
task from both source-protect and target-relevant perspectives. The
source-protect module preserves source information more effectively while
resisting noise, while the target-relevant module reduces the sparsity of
spatial representations by building a novel spatial probability space;
pose-specific contrastive learning and information maximization are proposed
on the basis of this space. Comprehensive experiments on several domain adaptive
HPE benchmarks show that the proposed method outperforms existing approaches by
a considerable margin. The code is available at
https://github.com/davidpengucf/SFDAHPE
Comment: Accepted by ICCV 202
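Information maximization on the target side is commonly implemented as per-sample entropy minimization plus a diversity term on the mean prediction. A hedged sketch under that standard formulation, not necessarily the paper's exact loss; `information_maximization` is an illustrative name.

```python
import numpy as np

def information_maximization(probs, eps=1e-8):
    """IM loss (lower is better): minimize per-sample entropy so
    predictions are confident, while maximizing the entropy of the
    mean prediction so predictions stay diverse across classes."""
    ent = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))  # confidence term
    mean_p = probs.mean(axis=0)
    div = -np.sum(mean_p * np.log(mean_p + eps))                 # diversity term
    return ent - div
```

Confident yet class-balanced predictions drive this quantity negative; a degenerate collapse onto one class drives it back up through the diversity term.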
RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation
Source-free domain adaptation transfers the source-trained model to the
target domain without exposing the source data, aiming to dispel concerns
about data privacy and security. However, this paradigm is still at risk of
data leakage due to adversarial attacks on the source model. The Black-Box
setting therefore allows using only the outputs of the source model, but it
suffers more severely from overfitting on the source domain because the
source model's weights are unseen. In this paper, we propose a novel approach
named RAIN (RegulArization on Input and Network) for Black-Box domain
adaptation with both input-level and network-level regularization. At the
input level, we design a new data augmentation technique, Phase MixUp, which
highlights task-relevant objects in the interpolations, thus enhancing
input-level regularization and class consistency for target models. At the
network level, we develop a Subnetwork Distillation mechanism to transfer
knowledge from the target subnetwork to the full target network via knowledge
distillation, which alleviates overfitting on the source domain by learning
diverse target representations.
Extensive experiments show that our method achieves state-of-the-art
performance on several cross-domain benchmarks under both single- and
multi-source black-box domain adaptation.
Comment: Accepted by IJCAI 202
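Phase MixUp can be sketched with FFTs: interpolate the phase spectra, which carry object structure, while keeping one image's amplitude. A minimal illustration assuming single-channel images as NumPy arrays; the weight `lam` and the choice to keep only the first image's amplitude are assumptions for the sketch, not the paper's exact recipe.

```python
import numpy as np

def phase_mixup(x1, x2, lam=0.8):
    """Mix the phase spectra of two images while keeping the amplitude
    of the first. Since phase encodes task-relevant object structure,
    the interpolation stays semantically close to x1's content."""
    f1, f2 = np.fft.fft2(x1), np.fft.fft2(x2)
    amp = np.abs(f1)
    pha = lam * np.angle(f1) + (1.0 - lam) * np.angle(f2)
    return np.real(np.fft.ifft2(amp * np.exp(1j * pha)))
```

With `lam = 1` the first image is recovered; intermediate values yield the interpolations used for input-level regularization.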
Ecological risk of human health in sediments in a karstic river basin with high longevity population
Health and longevity are common human goals, and environmental factors can have significant impacts on human health. This study aims to investigate the historical changes and sources of trace elements in the sediments of a typical karstic river basin with a high-longevity population in Hechi City, Guangxi, China, and to evaluate the ecological risks of trace elements in the sediments. The results showed that over the past 100 years, the contents of trace elements in the sediments were lower in the upper reaches than in the middle and lower reaches of the river. The sediments had high trace element contents in 1950–1959 and 1989–1998, while low contents appeared after 1998. These periods correspond to China's industrial growth in the early 1950s, the Great Leap Forward movement in the late 1950s, the reform and opening-up policy implemented in the 1980s–1990s, and the environmental protection policies to strengthen pollution control that have been implemented since 2000. Limestone soil and carbonate rock are the main sources of sediment in the basin. Although the geological background values of Cd and other trace elements in the basin were relatively high, the high calcium content and alkalinity of the water and sediment in the basin reduced the bioavailability of Cd and other heavy metals. The mainstream of the Panyang River had a low environmental risk, but the tributary Bama River, which is densely populated, poses a moderate risk.
GaitSADA: Self-Aligned Domain Adaptation for mmWave Gait Recognition
mmWave radar-based gait recognition is a novel user identification method
that captures human gait biometrics from mmWave radar return signals. This
technology offers privacy protection and is resilient to weather and lighting
conditions. However, its generalization performance remains unknown, which
limits its practical deployment. To address this problem, in this paper a
non-synthetic dataset is collected and analyzed to reveal the presence of
spatial and temporal domain shifts in mmWave gait biometric data, which
significantly impacts identification accuracy. To mitigate this issue, a novel
self-aligned domain adaptation method called GaitSADA is proposed. GaitSADA
improves system generalization performance by using a two-stage semi-supervised
model training approach. The first stage employs semi-supervised contrastive
learning to learn a compact gait representation from both source and target
domain data, aligning source-target domain distributions implicitly. The second
stage uses semi-supervised consistency training with centroid alignment to
further close the source-target domain gap by pseudo-labelling the target-domain
samples, clustering together the samples belonging to the same class but from
different domains, and pushing the class centroid close to the weight vector of
each class. Experiments show that GaitSADA outperforms representative domain
adaptation methods with an improvement ranging from 15.41% to 26.32% on
average accuracy in low-data regimes. Code and dataset will be available at
https://exitudio.github.io/GaitSADA
Comment: Submitted to IEEE MASS 202
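The centroid-alignment step of the second stage can be sketched as pulling each pseudo-labelled class centroid toward the corresponding classifier weight vector. A minimal NumPy sketch; the name `centroid_alignment_loss` and the squared-distance form are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def centroid_alignment_loss(features, pseudo_labels, class_weights):
    """For each pseudo-labelled class, compute the centroid (mean
    feature) and penalize its squared distance to that class's
    classifier weight vector, averaged over the classes present."""
    classes = np.unique(pseudo_labels)
    loss = 0.0
    for c in classes:
        centroid = features[pseudo_labels == c].mean(axis=0)
        loss += np.sum((centroid - class_weights[c]) ** 2)
    return loss / len(classes)
```

Minimizing this clusters same-class samples from different domains around a shared centroid anchored at the classifier weights.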
Transverse polarisation measurement of Λ hyperons in pNe collisions at √s_NN = 68.4 GeV with the LHCb detector
A measurement of the transverse polarization of the Λ and Λ̄ hyperons in pNe fixed-target collisions at √s_NN = 68.4 GeV is presented, using data collected by the LHCb detector. The polarization is studied using the Λ → pπ⁻ decay together with its charge-conjugate process, and the integrated polarization values are measured. Furthermore, the results are shown as a function of the Feynman-x variable, transverse momentum, pseudorapidity and rapidity of the hyperons, and are compared with previous measurements.
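Hyperon polarization of this kind is conventionally extracted from the angular distribution of the decay proton; a standard form, with symbols as commonly defined in the literature rather than quoted from this abstract, is

```latex
\frac{dN}{d\cos\theta_p} \;\propto\; 1 + \alpha\, P \cos\theta_p
```

where θ_p is the proton emission angle with respect to the chosen polarization axis in the Λ rest frame, P is the transverse polarization, and α is the Λ → pπ⁻ decay-asymmetry parameter.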
Measurement of charm mixing and search for CP violation with D0 → K+π− decays
A measurement of the time-dependent ratio of the D0 → K+π− to D0 → K−π+ decay rates is reported. The analysis uses a sample of proton-proton collisions corresponding to an integrated luminosity of 6 fb⁻¹, recorded by the LHCb experiment from 2015 through 2018 at a center-of-mass energy of 13 TeV. The D0 meson is required to originate from a decay such that its flavor at production is inferred from the charge of the accompanying pion. The measurement is performed simultaneously for the K+π− and K−π+ final states, allowing both mixing and CP-violation parameters to be determined. The ratio of the decay rates at production is determined, and the mixing parameters are measured as the linear and quadratic coefficients of the expansion of the ratio as a function of decay time in units of the D0 lifetime, averaged between the two final states. The precision is improved relative to the previous best measurement by approximately 60%. No evidence for CP violation is found.
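In the charm-mixing literature, the ratio described here is commonly expanded in decay time as (symbols are the conventional ones, assumed rather than quoted from the abstract)

```latex
R(t) \;\simeq\; R_{K\pi}\left[\,1 + c_{K\pi}\,\frac{t}{\tau_{D^0}}
      + c'_{K\pi}\left(\frac{t}{\tau_{D^0}}\right)^{2}\right]
```

with R_{Kπ} the ratio at production, and c_{Kπ} and c'_{Kπ} the linear and quadratic coefficients referred to in the abstract.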
Study of b-hadron decays to final states
Decays of b baryons to final states involving meson pairs are searched for using data collected with the LHCb detector. The data sample studied corresponds to pp collisions collected at three centre-of-mass energies. The products of the relative branching fractions and fragmentation fractions for each signal mode, relative to the normalisation mode, are measured, with three of the decay modes observed with high significance. The normalisation mode is also used to measure the production asymmetry, which is found to be consistent with zero. In addition, one decay is observed for the first time, and its branching fraction is measured relative to that of the normalisation mode.
Comprehensive analysis of local and nonlocal amplitudes in the B0 → K*0 μ+μ− decay
A comprehensive study of the local and nonlocal amplitudes contributing to the B0 → K*0 μ+μ− decay is performed by analysing the phase-space distribution of the decay products. The analysis is based on proton-proton collision data corresponding to an integrated luminosity of 8.4 fb⁻¹ collected by the LHCb experiment. This measurement employs for the first time a model of both one-particle and two-particle nonlocal amplitudes, and utilises the complete dimuon mass spectrum without any veto regions around the narrow charmonium resonances. In this way it is possible to explicitly isolate the local and nonlocal contributions and capture the interference between them. The results show that interference with nonlocal contributions, although larger than predicted, has only a minor impact on the Wilson Coefficients determined from the fit to the data. For the local contributions, the Wilson Coefficient C9, responsible for vector dimuon currents, exhibits a deviation from the Standard Model expectation. The Wilson Coefficients C10, C9' and C10' are all in better agreement with the Standard Model than C9, and a global significance for the overall tension is reported. The model used also accounts for nonlocal contributions from rescattering, resulting in the first direct measurement of the corresponding vector effective coupling.
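Wilson coefficients such as C9 and C10 are defined within the standard weak effective Hamiltonian for b → s ℓ+ℓ− transitions; a conventional form (standard notation, not quoted from this abstract) is

```latex
\mathcal{H}_{\mathrm{eff}} \;=\; -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^{*}\,
    \frac{e^{2}}{16\pi^{2}} \sum_{i} C_i \,\mathcal{O}_i + \mathrm{h.c.}
```

where, for example, the operator associated with C9 is the vector dimuon current (s̄ γ_μ P_L b)(ℓ̄ γ^μ ℓ), which is why C9 is described above as responsible for vector dimuon currents.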
International audienceA comprehensive study of the local and nonlocal amplitudes contributing to the decay is performed by analysing the phase-space distribution of the decay products. The analysis is based on \proton\proton collision data corresponding to an integrated luminosity of 8.4fb collected by the LHCb experiment. This measurement employs for the first time a model of both one-particle and two-particle nonlocal amplitudes, and utilises the complete dimuon mass spectrum without any veto regions around the narrow charmonium resonances. In this way it is possible to explicitly isolate the local and nonlocal contributions and capture the interference between them. The results show that interference with nonlocal contributions, although larger than predicted, only has a minor impact on the Wilson Coefficients determined from the fit to the data. For the local contributions, the Wilson Coefficient , responsible for vector dimuon currents, exhibits a deviation from the Standard Model expectation. The Wilson Coefficients , and are all in better agreement than with the Standard Model and the global significance is at the level of . The model used also accounts for nonlocal contributions from rescattering, resulting in the first direct measurement of the vector effective-coupling