Universal Thermoelectric Effect of Dirac Fermions in Graphene
We numerically study the thermoelectric transport of Dirac fermions in
graphene in the presence of a strong magnetic field and disorder. We find that
the thermoelectric transport coefficients exhibit universal behavior
depending on the ratio between the temperature and the width of the
disorder-broadened Landau levels (LLs). The transverse thermoelectric
conductivity reaches a universal quantum value at the center of
each LL in the high-temperature regime, and it has a linear temperature
dependence at low temperatures. The calculated Nernst signal has a peak at the
central LL with heights of the order of , and changes sign near the other
LLs, while the thermopower shows the opposite behavior, in good agreement with
experimental data. The validity of the generalized Mott relation between the
thermoelectric and electrical transport coefficients is verified in a wide
range of temperatures.
Comment: 4 pages, 4 figures, published version
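For reference, the generalized Mott relation being verified takes the standard low-temperature form (notation and sign conventions vary between references; this expression is the textbook version, not copied from the paper):

```latex
\alpha_{ij} = -\frac{\pi^2 k_B^2 T}{3e}
\left.\frac{\partial \sigma_{ij}}{\partial \varepsilon}\right|_{\varepsilon = E_F}
```

where the alpha_ij are the thermoelectric transport coefficients and sigma_ij is the electrical conductivity tensor evaluated at the Fermi energy.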
Self-training solutions for the ICCV 2023 GeoNet Challenge
GeoNet is a recently proposed domain adaptation benchmark consisting of three
challenges (i.e., GeoUniDA, GeoImNet, and GeoPlaces). Each challenge contains
images collected from the USA and Asia where there are huge geographical gaps.
Our solution adopts a two-stage source-free domain adaptation framework with a
Swin Transformer backbone to achieve knowledge transfer from the USA (source)
domain to the Asia (target) domain. In the first stage, we train a source model
using labeled source data with a re-sampling strategy and two types of
cross-entropy loss. In the second stage, we generate pseudo labels for
unlabeled target data to fine-tune the model. Our method achieves an H-score of
74.56% and ultimately ranks 1st in the GeoUniDA challenge. In GeoImNet and
GeoPlaces challenges, our solution also reaches a top-3 accuracy of 64.46% and
51.23%, respectively.
Comment: technical report; 1st place in the ICCV-2023 GeoUniDA challenge
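The second-stage pseudo-labeling step can be sketched as follows (a minimal illustration, not the report's actual code; the confidence threshold and the filtering rule are assumptions made here for concreteness):

```python
def generate_pseudo_labels(probs, threshold=0.9):
    """Select high-confidence target samples for fine-tuning.

    probs: list of per-sample class-probability lists.
    Returns (sample_index, pseudo_label) pairs for samples whose
    top probability exceeds the (hypothetical) threshold; the
    remaining low-confidence samples are left unlabeled.
    """
    selected = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected
```

The selected pairs would then be mixed into the fine-tuning set as if they were labeled data, which is the core of any self-training loop of this kind.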
Towards Realistic Unsupervised Fine-tuning with CLIP
The emergence of vision-language models (VLMs), such as CLIP, has spurred a
significant research effort towards their application for downstream supervised
learning tasks. Although some previous studies have explored the unsupervised
fine-tuning of CLIP, they often rely on prior knowledge in the form of class
names associated with ground truth labels. In this paper, we delve into a
realistic unsupervised fine-tuning scenario by assuming that the unlabeled data
might contain out-of-distribution samples from unknown classes. Furthermore, we
emphasize the importance of simultaneously enhancing out-of-distribution
detection capabilities alongside the recognition of instances associated with
predefined class labels.
To tackle this problem, we present a simple, efficient, and effective
fine-tuning approach called Universal Entropy Optimization (UEO). UEO leverages
sample-level confidence to approximately minimize the conditional entropy of
confident instances and maximize the marginal entropy of less confident
instances. Apart from optimizing the textual prompts, UEO also incorporates
optimization of channel-wise affine transformations within the visual branch of
CLIP. Through extensive experiments conducted across 15 domains and 4 different
types of prior knowledge, we demonstrate that UEO surpasses baseline methods in
terms of both generalization and out-of-distribution detection.
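The confidence-weighted entropy idea can be sketched in a few lines (illustrative only; the paper additionally optimizes CLIP's textual prompts and channel-wise visual affine parameters, which are omitted here, and its exact sample-level weighting scheme may differ from the max-probability proxy used below):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def ueo_objective(batch_logits):
    """UEO-style objective (sketch): conditional entropy weighted
    toward confident samples (to be minimized) minus the entropy of
    a marginal distribution weighted toward unconfident samples
    (maximized via the minus sign). Lower is better."""
    probs = [softmax(l) for l in batch_logits]
    conf = [max(p) for p in probs]           # confidence proxy (assumed)
    # conditional entropy over confident instances
    cond = sum(c * entropy(p) for c, p in zip(conf, probs)) / sum(conf)
    # marginal distribution dominated by unconfident instances
    inv = [1.0 - c for c in conf]
    z = sum(inv) or 1.0
    k = len(probs[0])
    marg = [sum(w * p[j] for w, p in zip(inv, probs)) / z for j in range(k)]
    return cond - entropy(marg)
```

A batch of confidently but differently classified samples drives the objective negative: each sample's own entropy is small while the marginal stays spread out, which is exactly the regime the optimization favors.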
AdaptGuard: Defending Against Universal Attacks for Model Adaptation
Model adaptation aims at solving the domain transfer problem under the
constraint of only accessing the pretrained source models. With the increasing
considerations of data privacy and transmission efficiency, this paradigm has
been gaining recent popularity. This paper studies the vulnerability of model
adaptation algorithms to universal attacks transferred from the source domain,
which can arise from malicious model providers. We explore both
universal adversarial perturbations and backdoor attacks as loopholes on the
source side and discover that they still survive in the target models after
adaptation. To address this issue, we propose a model preprocessing framework,
named AdaptGuard, to improve the security of model adaptation algorithms.
AdaptGuard avoids direct use of the risky source parameters through knowledge
distillation and utilizes the pseudo adversarial samples under adjusted radius
to enhance the robustness. AdaptGuard is a plug-and-play module that requires
neither robust pretrained models nor any changes to the subsequent model
adaptation algorithms. Extensive results on three commonly used datasets and
two popular adaptation methods validate that AdaptGuard can effectively defend
against universal attacks and maintain clean accuracy in the target domain
simultaneously. We hope this research will shed light on the safety and
robustness of transfer learning. Code is available at
https://github.com/TomSheng21/AdaptGuard
Comment: ICCV 2023
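The two ingredients named above can be sketched generically (illustrative only; the actual distillation setup and the radius-adjustment schedule used by AdaptGuard are not reproduced here):

```python
import math

def linf_project(delta, radius):
    """Project a perturbation onto an L-infinity ball of the given
    (adjusted) radius -- the generic operation behind crafting
    pseudo adversarial samples under a controlled budget."""
    return [max(-radius, min(radius, d)) for d in delta]

def kd_loss(teacher_probs, student_probs, eps=1e-12):
    """KL(teacher || student): a standard knowledge-distillation
    objective that transfers the source model's predictions to a
    fresh student, so the student never inherits the risky source
    parameters directly."""
    return sum(t * math.log((t + eps) / (s + eps))
               for t, s in zip(teacher_probs, student_probs))
```

Training a student under `kd_loss` on samples perturbed within the `linf_project` budget is one plausible reading of the framework; the paper should be consulted for the exact formulation.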
Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation
Model adaptation tackles the distribution shift problem with a pre-trained
model instead of raw data, becoming a popular paradigm due to its great privacy
protection. Existing methods typically assume adaptation to a clean target
domain, overlooking the security risks of unlabeled samples. In this paper, we
explore potential backdoor attacks on model adaptation launched by
well-designed poisoned target data. Concretely, we provide two backdoor
triggers with two poisoning strategies for different levels of prior knowledge
owned by attackers. These
attacks achieve a high success rate and keep the normal performance on clean
samples in the test stage. To defend against backdoor embedding, we propose a
plug-and-play method named MixAdapt that can be combined with existing adaptation
algorithms. Experiments across commonly used benchmarks and adaptation methods
demonstrate the effectiveness of MixAdapt. We hope this work will shed light on
the safety of learning with unlabeled data.
Comment: 11 pages, 4 figures
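A patch-style trigger is one common backdoor design and can serve as a concrete illustration (the abstract does not specify the paper's two actual trigger designs, so the patch below is purely hypothetical):

```python
def apply_patch_trigger(image, patch, top, left):
    """Stamp a small trigger patch into an image given as a nested
    list of pixel values. Poisoned target samples carry this pattern
    so that an adapted model learns to misclassify any input
    containing it, while behaving normally on clean inputs."""
    stamped = [row[:] for row in image]  # copy; do not mutate input
    for i, prow in enumerate(patch):
        for j, val in enumerate(prow):
            stamped[top + i][left + j] = val
    return stamped
```

Poisoning then amounts to stamping the trigger onto a chosen subset of the unlabeled target data before adaptation runs.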
A Novel Coordinate Transformation Based Self-coupling Computation Approach For The Method Of Moments
A new, highly accurate and efficient coordinate transformation algorithm is proposed for evaluating the self-coupling terms in the Method of Moments (MoM), which usually produce the strongest contributions to the MoM system matrices. The new algorithm provides an effective way to remove the singularity of the Green's function inside the self-coupling integrals. Moreover, the new solution reduces the integral dimension from 4-D to 1-D. Better accuracy and efficiency are thus obtained for the self-coupling integrals.
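For context, the singularity in question is the standard one of the free-space Green's function (a textbook fact, not taken from this abstract):

```latex
G(\mathbf{r}, \mathbf{r}') = \frac{e^{-jkR}}{4\pi R},
\qquad R = |\mathbf{r} - \mathbf{r}'|
```

which diverges as 1/R when the source and observation points coincide, i.e., precisely in the self-coupling integrals that the proposed transformation targets.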