
    Tuning thermal transport in nanotubes with topological defects

    Using the atomistic nonequilibrium Green's function method, we find that the thermal conductance of carbon nanotubes is remarkably reduced in the presence of topological lattice imperfections, due to strong Rayleigh scattering of high-frequency phonons. Phonon transmission across multiple defects behaves as cascade scattering, consistent with the random phase approximation. We elucidate that phonon scattering by structural defects is governed by the spatial fluctuations of the local vibrational density of states (LVDOS). An effective method of tuning thermal transport in low-dimensional systems through modulation of the LVDOS is proposed. Our findings provide insights into experimentally controlling thermal transport in nanoscale devices.
    Comment: 10 pages, 3 figures
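    The abstract does not give the transmission formula, but in nonequilibrium Green's function phonon transport the central quantity is usually the Caroli transmission T(ω) = Tr[Γ_L G^r Γ_R G^a]. The sketch below is a minimal illustration of that standard formula, not the paper's implementation; the matrix inputs are assumptions standing in for the force constants and lead self-energies of a real nanotube.

```python
import numpy as np

def phonon_transmission(omega, Hc, sigma_L, sigma_R, eta=1e-6):
    """Caroli transmission T(w) = Tr[Gamma_L G^r Gamma_R G^a].

    Hc        : dynamical matrix of the central (defective) region
    sigma_L/R : retarded lead self-energies at frequency omega
    (All inputs here are placeholders; a real calculation builds them
    from the interatomic force constants of the nanotube.)
    """
    n = Hc.shape[0]
    # Retarded Green's function of the central region
    Gr = np.linalg.inv((omega**2 + 1j * eta) * np.eye(n) - Hc - sigma_L - sigma_R)
    # Broadening (lead-coupling) matrices
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    # Transmission at this frequency
    return np.trace(gamma_L @ Gr @ gamma_R @ Gr.conj().T).real
```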

    Self-training solutions for the ICCV 2023 GeoNet Challenge

    GeoNet is a recently proposed domain adaptation benchmark consisting of three challenges (i.e., GeoUniDA, GeoImNet, and GeoPlaces). Each challenge contains images collected from the USA and Asia, between which there are large geographical gaps. Our solution adopts a two-stage source-free domain adaptation framework with a Swin Transformer backbone to transfer knowledge from the USA (source) domain to the Asia (target) domain. In the first stage, we train a source model using labeled source data with a re-sampling strategy and two types of cross-entropy loss. In the second stage, we generate pseudo labels for unlabeled target data to fine-tune the model. Our method achieves an H-score of 74.56% and ultimately ranks 1st in the GeoUniDA challenge. In the GeoImNet and GeoPlaces challenges, our solution also reaches a top-3 accuracy of 64.46% and 51.23%, respectively.
    Comment: technical report; 1st in the ICCV-2023 GeoUniDA challenge
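    The report does not detail the second stage; the sketch below shows one standard way confidence-thresholded pseudo-labeling and fine-tuning are implemented in PyTorch. The model, loader, and 0.9 threshold are assumptions for illustration, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(model, target_loader, threshold=0.9):
    """Label unlabeled target images with the model's confident predictions."""
    model.eval()
    images, labels = [], []
    for x in target_loader:                  # loader assumed to yield image batches
        probs = F.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= threshold             # keep only confident samples
        images.append(x[keep])
        labels.append(pred[keep])
    return torch.cat(images), torch.cat(labels)

def finetune_step(model, optimizer, x, pseudo_y):
    """One fine-tuning step on pseudo-labeled target data."""
    model.train()
    loss = F.cross_entropy(model(x), pseudo_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```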

    One-loop Helicity Amplitudes for Top Quark Pair Production in Randall-Sundrum Model

    In this paper, we show how to calculate analytically the one-loop helicity amplitudes for the process $q\bar{q} \rightarrow t\bar{t}$ induced by a KK gluon, using the spinor-helicity formalism. A minimal set of Feynman rules, uniquely fixed by gauge invariance and the color representation of the KK gluon, is derived and used in the calculation. Our results can be applied to a variety of models containing a massive color-octet vector boson.
    Comment: 37 pages, 10 figures, journal version
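    For readers unfamiliar with the spinor-helicity formalism invoked here, the LaTeX fragment below records a common set of conventions for massless spinor products; these are textbook definitions, and the paper's own sign and normalization conventions may differ.

```latex
% Common spinor-helicity conventions for massless momenta p_i, p_j
\langle ij \rangle = \bar{u}_-(p_i)\, u_+(p_j), \qquad
[ij] = \bar{u}_+(p_i)\, u_-(p_j), \qquad
\langle ij \rangle\,[ji] = 2\, p_i \cdot p_j = s_{ij}.
```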

    A variation of a classical Turán-type extremal problem

    A variation of a classical Turán-type extremal problem (Erdős on Graphs: His Legacy of Unsolved Problems (1998), p. 36) is considered as follows: determine the smallest even integer $\sigma(K_{r,s},n)$ such that every $n$-term graphic non-increasing sequence $\pi=(d_1,d_2,\ldots,d_n)$ with term sum $\sigma(\pi)=d_1+d_2+\cdots+d_n\ge\sigma(K_{r,s},n)$ has a realization $G$ containing $K_{r,s}$ as a subgraph, where $K_{r,s}$ is the $r\times s$ complete bipartite graph. In this paper, we determine $\sigma(K_{r,s},n)$ exactly for every fixed $s\ge r\ge 3$ when $n\ge n_0(r,s)$, where $m=\left[\frac{(r+s+1)^2}{4}\right]$ and
    $$n_0(r,s)=\begin{cases}m+3s^2-2s-6, & \text{if } s\le 2r \text{ and } s \text{ is even},\\ m+3s^2+2s-8, & \text{if } s\le 2r \text{ and } s \text{ is odd},\\ m+2s^2+(2r-6)s+4r-8, & \text{if } s\ge 2r+1.\end{cases}$$
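    For context, "graphic" above means realizable as the degree sequence of some simple graph, which the Erdős–Gallai theorem characterizes. The following Python check is my own illustration of that standard criterion, not code from the paper:

```python
def is_graphic(seq):
    """Erdős–Gallai test: a sequence of non-negative integers is the degree
    sequence of a simple graph iff its sum is even and, for every k,
        sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k)
    (indices over the sequence sorted in non-increasing order)."""
    d = sorted(seq, reverse=True)
    if sum(d) % 2 != 0:
        return False
    n = len(d)
    for k in range(1, n + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

print(is_graphic([3, 3, 2, 2, 2]))  # True: realizable as a simple graph
```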

    PseudoCal: A Source-Free Approach to Unsupervised Uncertainty Calibration in Domain Adaptation

    Unsupervised domain adaptation (UDA) has witnessed remarkable advancements in improving the accuracy of models for unlabeled target domains. However, the calibration of predictive uncertainty in the target domain, a crucial aspect of the safe deployment of UDA models, has received limited attention. The conventional in-domain calibration method, temperature scaling (TempScal), encounters challenges due to domain distribution shifts and the absence of labeled target domain data. Recent approaches have employed importance-weighting techniques to estimate the target-optimal temperature based on re-weighted labeled source data. Nonetheless, these methods require source data and suffer from unreliable density estimates under severe domain shifts, rendering them unsuitable for source-free UDA settings. To overcome these limitations, we propose PseudoCal, a source-free calibration method that exclusively relies on unlabeled target data. Unlike previous approaches that treat UDA calibration as a covariate shift problem, we consider it as an unsupervised calibration problem specific to the target domain. Motivated by the factorization of the negative log-likelihood (NLL) objective in TempScal, we generate a labeled pseudo-target set that captures the structure of the real target. By doing so, we transform the unsupervised calibration problem into a supervised one, enabling us to effectively address it using widely-used in-domain methods like TempScal. Finally, we thoroughly evaluate the calibration performance of PseudoCal by conducting extensive experiments on 10 UDA methods, considering both traditional UDA settings and recent source-free UDA scenarios. The experimental results consistently demonstrate the superior performance of PseudoCal, exhibiting significantly reduced calibration error compared to existing calibration methods.
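    Temperature scaling, the in-domain method that PseudoCal reduces the problem to, is simple enough to sketch: divide the logits by a scalar T fitted to minimize the NLL on a labeled set, which here would be the labeled pseudo-target set. The snippet below is a generic PyTorch sketch under that reading, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=100, lr=0.01):
    """Fit a scalar temperature T by minimizing NLL on a labeled set.

    logits: (N, C) detached model outputs on the (pseudo-)target set
    labels: (N,) integer labels (here: pseudo labels)
    """
    log_t = torch.zeros(1, requires_grad=True)   # optimize log T so T stays positive
    opt = torch.optim.LBFGS([log_t], lr=lr, max_iter=steps)

    def closure():
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)  # NLL objective
        loss.backward()
        return loss

    opt.step(closure)
    return log_t.exp().item()                    # calibrated temperature T
```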

    Towards Realistic Unsupervised Fine-tuning with CLIP

    The emergence of vision-language models (VLMs), such as CLIP, has spurred a significant research effort towards their application for downstream supervised learning tasks. Although some previous studies have explored the unsupervised fine-tuning of CLIP, they often rely on prior knowledge in the form of class names associated with ground truth labels. In this paper, we delve into a realistic unsupervised fine-tuning scenario by assuming that the unlabeled data might contain out-of-distribution samples from unknown classes. Furthermore, we emphasize the importance of simultaneously enhancing out-of-distribution detection capabilities alongside the recognition of instances associated with predefined class labels. To tackle this problem, we present a simple, efficient, and effective fine-tuning approach called Universal Entropy Optimization (UEO). UEO leverages sample-level confidence to approximately minimize the conditional entropy of confident instances and maximize the marginal entropy of less confident instances. Apart from optimizing the textual prompts, UEO also incorporates optimization of channel-wise affine transformations within the visual branch of CLIP. Through extensive experiments conducted across 15 domains and 4 different types of prior knowledge, we demonstrate that UEO surpasses baseline methods in terms of both generalization and out-of-distribution detection.
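    The abstract describes the loss shape precisely enough to sketch: minimize conditional entropy for confident samples and maximize marginal entropy for the less confident ones. The snippet below is my reading of that description, not the released UEO code; the specific confidence weighting is an assumption.

```python
import torch

def entropy(p, eps=1e-8):
    """Shannon entropy along the last dimension."""
    return -(p * (p + eps).log()).sum(dim=-1)

def ueo_style_loss(probs):
    """Universal-entropy-style objective (a sketch, not the official UEO loss).

    probs: (N, C) softmax outputs for a batch of unlabeled samples.
    Confident samples   -> minimize their (weighted) conditional entropy.
    Unconfident samples -> maximize the marginal entropy of their mixture.
    """
    conf = probs.max(dim=1).values               # sample-level confidence
    w = conf / conf.sum()                        # weights favoring confident samples
    cond_ent = (w * entropy(probs)).sum()        # conditional entropy to minimize

    w_inv = (1 - conf) / (1 - conf).sum()        # weights favoring unconfident samples
    marginal = (w_inv.unsqueeze(1) * probs).sum(dim=0)
    marg_ent = entropy(marginal)                 # marginal entropy to maximize

    return cond_ent - marg_ent                   # total objective to minimize
```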

    AdaptGuard: Defending Against Universal Attacks for Model Adaptation

    Model adaptation aims to solve the domain transfer problem under the constraint of accessing only pretrained source models. With increasing concerns about data privacy and transmission efficiency, this paradigm has been gaining popularity. This paper studies the vulnerability of model adaptation algorithms to universal attacks transferred from the source domain, a risk arising from the possible existence of malicious providers. We explore both universal adversarial perturbations and backdoor attacks as loopholes on the source side and discover that they still survive in the target models after adaptation. To address this issue, we propose a model preprocessing framework, named AdaptGuard, to improve the security of model adaptation algorithms. AdaptGuard avoids direct use of the risky source parameters through knowledge distillation and utilizes pseudo-adversarial samples under an adjusted radius to enhance robustness. AdaptGuard is a plug-and-play module that requires neither robust pretrained models nor any changes to the subsequent model adaptation algorithms. Extensive results on three commonly used datasets and two popular adaptation methods validate that AdaptGuard can effectively defend against universal attacks while maintaining clean accuracy in the target domain. We hope this research will shed light on the safety and robustness of transfer learning. Code is available at https://github.com/TomSheng21/AdaptGuard.
    Comment: ICCV2023
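    AdaptGuard's two ingredients, distilling away from the risky source parameters and training on pseudo-adversarial samples within an adjusted radius, can be sketched generically. Below is a hedged PyTorch illustration; the one-step attack, radius value, and loss weighting are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def fgsm_pseudo_adversarial(model, x, radius):
    """One-step sign attack against the model's own predictions,
    producing pseudo-adversarial samples inside an L-inf ball."""
    x = x.clone().requires_grad_(True)
    with torch.enable_grad():
        logits = model(x)
        pseudo = logits.argmax(dim=1)            # model's own predictions as labels
        loss = F.cross_entropy(logits, pseudo)
        grad = torch.autograd.grad(loss, x)[0]
    return (x + radius * grad.sign()).detach()

def distill_step(student, teacher, optimizer, x, radius=4 / 255, tau=2.0):
    """Distill the (possibly compromised) source model into a fresh student,
    including perturbed inputs so the student does not inherit the loophole."""
    teacher.eval()
    x_adv = fgsm_pseudo_adversarial(student, x, radius)
    with torch.no_grad():
        t_clean = F.softmax(teacher(x) / tau, dim=1)
    loss = F.kl_div(F.log_softmax(student(x) / tau, dim=1), t_clean,
                    reduction="batchmean")
    loss = loss + F.kl_div(F.log_softmax(student(x_adv) / tau, dim=1), t_clean,
                           reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```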

    Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation

    Model adaptation tackles the distribution shift problem with a pre-trained model instead of raw data, and has become a popular paradigm due to its strong privacy protection. Existing methods typically assume adaptation to a clean target domain, overlooking the security risks of unlabeled samples. In this paper, we explore potential backdoor attacks on model adaptation launched by well-designed poisoning of the target data. Concretely, we provide two backdoor triggers with two poisoning strategies for attackers with different levels of prior knowledge. These attacks achieve a high success rate while maintaining normal performance on clean samples at test time. To defend against backdoor embedding, we propose a plug-and-play method named MixAdapt that can be combined with existing adaptation algorithms. Experiments across commonly used benchmarks and adaptation methods demonstrate the effectiveness of MixAdapt. We hope this work will shed light on the safety of learning with unlabeled data.
    Comment: 11 pages, 4 figures
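    The abstract gives no details of MixAdapt beyond its name; judging by the name alone, a mixup-style blending of unlabeled target inputs is one plausible ingredient, sketched below purely as an illustration of that general idea rather than the actual defense.

```python
import torch

def mixup_batch(x, alpha=0.3):
    """Blend each target sample with a randomly permuted one (mixup-style).

    Purely illustrative: the actual MixAdapt method is not specified in the
    abstract. Blending unlabeled inputs can dilute backdoor triggers that
    rely on an exact pixel pattern to activate.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    lam = max(lam, 1 - lam)                  # keep the original sample dominant
    idx = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[idx]
```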