
    MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

    Recent works have revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, where input examples are intentionally perturbed to fool DNNs. In this work, we revisit adversarial training, i.e., the DNN training process that includes adversarial examples in the training dataset so as to improve the DNN's resilience to adversarial attacks. Our experiments show that different adversarial strengths, i.e., perturbation levels of the adversarial examples, have different working zones in which they resist the attack. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to trade off training time against memory occupation. Our results show that MAT substantially reduces the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN.
    Comment: 6 pages, 4 figures, 2 tables
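    The core idea is straightforward to prototype. Below is a minimal PyTorch sketch of a mixed-MAT-style update that trains on clean inputs plus adversarial copies generated at several perturbation strengths in a single batch; the attack (one-step FGSM), the strength values, and the function names are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps):
    # One-step FGSM attack at perturbation strength eps (assumes inputs in [0, 1]).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def mat_step(model, optimizer, x, y, strengths=(2/255, 4/255, 8/255)):
    # "Mixed" multi-strength step: the clean batch plus one adversarial
    # copy per strength, all trained in a single forward/backward pass.
    model.eval()                      # stable BatchNorm stats while attacking
    parts = [x] + [fgsm_examples(model, x, y, e) for e in strengths]
    model.train()
    inputs, targets = torch.cat(parts), y.repeat(len(parts))
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

    A parallel-MAT variant would instead run one such pass per strength on separate copies, trading memory for time, which mirrors the tradeoff described in the abstract.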

    Cycle Self-Training for Semi-Supervised Object Detection with Distribution Consistency Reweighting

    Recently, many semi-supervised object detection (SSOD) methods have adopted the teacher-student framework and achieved state-of-the-art results. However, the teacher network is tightly coupled with the student network, since the teacher is an exponential moving average (EMA) of the student, which causes a performance bottleneck. To address this coupling problem, we propose a Cycle Self-Training (CST) framework for SSOD, which consists of two teachers, T1 and T2, and two students, S1 and S2. On top of these networks, a cycle self-training mechanism is built: S1→T1→S2→T2→S1. For S→T, we use the EMA weights of the students to update the teachers. For T→S, instead of directly supervising its own student S1 (S2), the teacher T1 (T2) generates pseudo-labels for the other student S2 (S1), which loosens the coupling effect. Moreover, owing to the nature of EMA, a teacher is likely to accumulate its student's biases and make those mistakes irreversible. To mitigate this problem, we further propose a distribution consistency reweighting strategy, in which pseudo-labels are reweighted according to their distribution consistency across the teachers T1 and T2. With this strategy, the two students S1 and S2 can be trained robustly on noisy pseudo-labels and avoid confirmation bias. Extensive experiments demonstrate the superiority of CST: it consistently improves AP over the baseline and outperforms state-of-the-art methods by 2.1% absolute AP with scarce labeled data.
    Comment: ACM Multimedia 202
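    The cycle S1→T1→S2→T2→S1 can be sketched compactly. In the minimal sketch below, ema_update implements the standard S→T EMA step and cycle_step crosses pseudo-labels between the two pairs; train_fn and the detection-specific pseudo-label filtering and reweighting are hypothetical placeholders, not the paper's implementation.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    # S -> T: teacher weights track an exponential moving average of the student.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1 - decay)

def cycle_step(s1, t1, s2, t2, unlabeled, train_fn):
    # T -> S: each teacher pseudo-labels for the *other* student,
    # realizing the cycle S1 -> T1 -> S2 -> T2 -> S1.
    with torch.no_grad():
        pseudo_t1 = t1(unlabeled)   # supervises S2, not S1
        pseudo_t2 = t2(unlabeled)   # supervises S1, not S2
    train_fn(s2, unlabeled, pseudo_t1)  # hypothetical detector-update helper
    train_fn(s1, unlabeled, pseudo_t2)
    ema_update(t1, s1)
    ema_update(t2, s2)
```

    The reweighting strategy would sit inside train_fn, down-weighting pseudo-boxes on which the predictions of T1 and T2 disagree.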

    Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

    Smoothing deep neural network classifiers via isotropic Gaussian perturbation has recently been shown to be an effective and scalable way to provide a state-of-the-art probabilistic robustness guarantee against ℓ2-norm-bounded adversarial perturbations. However, how to train a good base classifier that is both accurate and robust once smoothed has not been fully investigated. In this work, we derive a new regularized risk whose regularizer adaptively encourages the accuracy and robustness of the smoothed counterpart while the base classifier is being trained. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both the standard (non-adversarial) and the adversarial training schemes. We also design a new certification algorithm that leverages the regularization effect to provide a tighter robustness lower bound that holds with high probability. Extensive experiments demonstrate the effectiveness of the proposed training and certification approaches on the CIFAR-10 and ImageNet datasets.
    Comment: AAAI202
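    For context, the sketch below shows the standard randomized-smoothing prediction-and-certification loop in the style of Cohen et al., where the certified ℓ2 radius is σ·Φ⁻¹(pA) for a lower confidence bound pA on the top-class probability under Gaussian noise; the paper's regularizer and its tighter certificate are not reproduced here, and all names are illustrative.

```python
import torch
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

@torch.no_grad()
def predict_and_certify(model, x, sigma, n=1000, alpha=0.001, num_classes=10):
    # Monte-Carlo estimate of the smoothed classifier
    # g(x) = argmax_c P(model(x + N(0, sigma^2 I)) = c), plus a certified radius.
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        counts[model(noisy.unsqueeze(0)).argmax(dim=1)] += 1
    top = int(counts.argmax())
    # One-sided lower confidence bound on the top-class probability.
    p_lo = proportion_confint(int(counts[top]), n, alpha=2 * alpha, method="beta")[0]
    if p_lo <= 0.5:
        return top, 0.0                     # abstain: no certificate
    return top, sigma * norm.ppf(p_lo)      # certified radius sigma * Phi^-1(pA)
```

    A tighter certification, as proposed in the paper, would replace the final bound with one that exploits the regularized base classifier; the sampling loop itself stays the same.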

    11β-HSD1 inhibition ameliorates diabetes-induced cardiomyocyte hypertrophy and cardiac fibrosis through modulation of EGFR activity

    11β-HSD1 has been recognized as a potential therapeutic target for type 2 diabetes. Recent studies have shown that hyperglycemia activates 11β-HSD1, increasing intracellular glucocorticoid levels, and excess glucocorticoids may lead to the clinical manifestations of cardiac injury. The aim of this study was therefore to investigate whether 11β-HSD1 activation contributes to the development of diabetic cardiomyopathy. To probe the role of 11β-HSD1, we administered a selective 11β-HSD1 inhibitor in type 1 and type 2 murine models of diabetes and in cultured cardiomyocytes. Our results show that diabetes increases cortisone levels in heart tissue. The 11β-HSD1 inhibitor decreased cortisone levels and ameliorated all structural and functional features of diabetic cardiomyopathy, including fibrosis and hypertrophy. We also show that high glucose levels caused cardiomyocyte hypertrophy and increased matrix protein deposition in culture; importantly, inhibition of 11β-HSD1 attenuated these changes. Moreover, we show that 11β-HSD1 activation mediates these changes by modulating EGFR phosphorylation and activity. Our findings demonstrate that 11β-HSD1 contributes to the development of diabetic cardiomyopathy through activation of the glucocorticoid and EGFR signaling pathways. These results suggest that inhibition of 11β-HSD1 may be a therapeutic strategy for diabetic cardiomyopathy, independent of its effects on glucose homeostasis.