65 research outputs found

    Robust Tickets Can Transfer Better: Drawing More Transferable Subnetworks in Transfer Learning

    Transfer learning leverages feature representations of deep neural networks (DNNs) pretrained on source tasks with rich data to enable effective finetuning on downstream tasks. However, the pretrained models needed to deliver generalizable representations are often prohibitively large, which limits their deployment on edge devices with constrained resources. To close this gap, we propose a new transfer learning pipeline that leverages our finding that robust tickets can transfer better, i.e., subnetworks drawn with properly induced adversarial robustness achieve better transferability than vanilla lottery ticket subnetworks. Extensive experiments and ablation studies validate that the proposed pipeline achieves enhanced accuracy-sparsity trade-offs across both diverse downstream tasks and sparsity patterns, further enriching the lottery ticket hypothesis.
    Comment: Accepted by DAC 202
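    A minimal sketch of the kind of pipeline the abstract describes, assuming adversarially robust pretraining followed by magnitude pruning; the FGSM step, the sparsity level, and all function names are illustrative assumptions, not the authors' code:

```python
# Sketch (illustrative assumptions): draw a sparse "ticket" from a model
# pretrained with adversarial training, keeping the mask for downstream
# transfer. FGSM and global L1 magnitude pruning stand in for whatever
# robustness induction and pruning criterion the paper actually uses.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def fgsm_example(model, x, y, eps=8 / 255):
    """Craft one FGSM adversarial example for robust pretraining."""
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def draw_ticket(model, sparsity=0.9):
    """Globally magnitude-prune conv/linear weights into a subnetwork."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)
    return model  # each module's weight_mask defines the ticket to finetune
```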

    Relationship between Model Compression and Adversarial Robustness: A Review of Current Evidence

    Increasing model capacity is a known approach to enhancing the adversarial robustness of deep learning networks. On the other hand, various model compression techniques, including pruning and quantization, can reduce the size of a network while preserving its accuracy. Several recent studies have addressed the relationship between model compression and adversarial robustness, though some experiments have reported contradictory results. This work summarizes the available evidence and discusses possible explanations for the observed effects.
    Comment: Accepted for publication at SSCI 202
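    As a concrete illustration of the experiments such studies run, the sketch below compares a float model and its quantized counterpart under a transfer FGSM attack; the placeholder classifier, data loader, and epsilon are assumptions, while quantize_dynamic is PyTorch's standard post-training dynamic quantization API:

```python
# Sketch (placeholder model/loader): measure how post-training quantization
# shifts accuracy under adversarial attack. Attacks are crafted on the float
# model because quantized ops do not support autograd.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

def robust_accuracy(attack_model, eval_model, loader, eps=0.1):
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(attack_model, x, y, eps)
        with torch.no_grad():
            correct += (eval_model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                      nn.Linear(256, 10))  # placeholder classifier
compressed = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)  # 8-bit weights for Linear layers
# Comparing robust_accuracy(model, model, loader) against
# robust_accuracy(model, compressed, loader) quantifies the robustness shift.
```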

    Adversarial Momentum-Contrastive Pre-Training for Robust Feature Extraction

    Recently proposed adversarial self-supervised learning methods usually require large batches and long training schedules to extract robust features, which limits their practicality. In this paper, we present a novel adversarial momentum-contrastive learning approach that leverages two memory banks to track features that remain invariant across mini-batches. These memory banks can be efficiently incorporated into each iteration and help the network learn more robust feature representations with smaller batches and far fewer epochs. Furthermore, after fine-tuning on classification tasks, the proposed approach can meet or exceed the performance of some state-of-the-art supervised baselines on real-world datasets. Our code is available at https://github.com/MTandHJ/amoc.
    Comment: 16 pages; 6 figures; preprint
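    A minimal sketch of the momentum-contrastive machinery the abstract builds on (MoCo-style); the queue size, momentum coefficient, and InfoNCE form are generic assumptions, and the linked repository holds the authors' actual implementation:

```python
# Sketch (generic MoCo-style components, not the paper's code): a queue
# acting as a memory bank, a momentum-updated key encoder, and an InfoNCE
# loss whose negatives come from the bank rather than the current batch.
import torch
import torch.nn.functional as F

class MemoryBank:
    def __init__(self, dim=128, size=4096):
        self.feats = F.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys):
        """Overwrite the oldest entries with freshly computed key features."""
        n = keys.shape[0]
        idx = (torch.arange(n) + self.ptr) % self.feats.shape[0]
        self.feats[idx] = keys
        self.ptr = (self.ptr + n) % self.feats.shape[0]

@torch.no_grad()
def momentum_update(query_enc, key_enc, m=0.999):
    """Slowly drag the key encoder toward the query encoder's weights."""
    for pq, pk in zip(query_enc.parameters(), key_enc.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)

def info_nce(q, k, bank, tau=0.2):
    """Contrastive loss: one positive pair, negatives drawn from the bank."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    pos = (q * k).sum(1, keepdim=True)   # positive pair logits
    neg = q @ bank.feats.t()             # negatives from the memory bank
    logits = torch.cat([pos, neg], dim=1) / tau
    return F.cross_entropy(logits, torch.zeros(len(q), dtype=torch.long))
```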