
    MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

    Recent works have revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks, in which input examples are intentionally perturbed to fool the network. In this work, we revisit adversarial training, i.e., the training process that incorporates adversarial examples into the training dataset to improve the DNN's resilience to adversarial attacks. Our experiments show that different adversarial strengths, i.e., perturbation levels of the adversarial examples, have different working zones in which they resist attacks. Based on this observation, we propose a multi-strength adversarial training method (MAT) that combines adversarial training examples of different strengths to defend against adversarial attacks. Two training structures, mixed MAT and parallel MAT, are developed to trade off training time against memory occupation. Our results show that MAT substantially reduces the accuracy degradation of deep learning systems under adversarial attacks on MNIST, CIFAR-10, CIFAR-100, and SVHN. Comment: 6 pages, 4 figures, 2 tables
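
    A minimal sketch of the "mixed" training structure described above, assuming FGSM as the attack, a PyTorch classifier, and inputs scaled to [0, 1]; the attack choice, the strengths in `epsilons`, and the helper names are illustrative assumptions, not the paper's exact recipe.

```python
# Mixed multi-strength adversarial training sketch (assumptions: FGSM attack,
# PyTorch classifier, inputs in [0, 1]).
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps):
    """Generate FGSM adversarial examples at perturbation strength eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def mixed_mat_step(model, optimizer, x, y, epsilons=(0.05, 0.1, 0.2)):
    """One training step mixing clean and multi-strength adversarial examples."""
    model.train()
    batches_x, batches_y = [x], [y]
    for eps in epsilons:                      # one adversarial copy per strength
        batches_x.append(fgsm_examples(model, x, y, eps))
        batches_y.append(y)
    x_all = torch.cat(batches_x)
    y_all = torch.cat(batches_y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_all), y_all)
    loss.backward()
    optimizer.step()
    return loss.item()
```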

    The Geochemical Data Imaging and Application in Geoscience: Taking the Northern Daxinganling Metallogenic Belt as an Example

    Geochemical data have predominantly been expressed in vector format, while their visualization as raster data has received little attention. In this study, data for 39 geochemical elements from 1:200,000 regional geochemical exploration were rasterized to form images, from which a geochemical image database was generated. This article studies geochemical imaging within the Daxinganling metallogenic belt. A regional geochemical survey had previously been carried out over the belt at a sampling density of 1 site per 4 km², and 39 elements, including minor and trace elements, were analyzed. A quintic polynomial method was used to interpolate the geochemical data, producing elemental images with a cell size of 1 km. The elemental images were processed with image enhancement methods, and hyperspectral remote sensing data processing methods were then applied to tasks such as prospecting target selection and lithology mapping. The interpreted results have been verified in practice, suggesting good prospects for rasterized geochemical images. Finally, the author proposes combining rasterized geochemical images with other geological, geophysical, and remote sensing data so that geochemical data can be used more effectively and applied more broadly in the geosciences.
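
    A minimal sketch of the rasterization step described above, gridding scattered sample values onto a 1 km raster. SciPy's quintic radial-basis kernel is used here as a stand-in for the quintic polynomial interpolation mentioned in the abstract; the coordinate handling, neighborhood size, and function names are illustrative assumptions.

```python
# Rasterize scattered geochemical samples onto a 1 km grid (sketch).
# SciPy's quintic RBF kernel stands in for the quintic polynomial method;
# coordinates are assumed to be projected metres.
import numpy as np
from scipy.interpolate import RBFInterpolator

def rasterize_element(xy, values, cell_size=1000.0):
    """Interpolate point samples (n, 2) onto a regular grid with 1 km cells."""
    xmin, ymin = xy.min(axis=0)
    xmax, ymax = xy.max(axis=0)
    gx = np.arange(xmin, xmax + cell_size, cell_size)
    gy = np.arange(ymin, ymax + cell_size, cell_size)
    grid_x, grid_y = np.meshgrid(gx, gy)
    query = np.column_stack([grid_x.ravel(), grid_y.ravel()])
    interp = RBFInterpolator(xy, values, kernel='quintic', neighbors=64)
    return interp(query).reshape(grid_x.shape)   # one raster per element

# Usage: xy is an (n, 2) array of sample coordinates and values an (n,) array
# of one element's concentrations; the returned array can be written out as a
# georeferenced image for enhancement and interpretation.
```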

    Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

    Recently, smoothing deep neural network classifiers via isotropic Gaussian perturbation has been shown to be an effective and scalable way to provide state-of-the-art probabilistic robustness guarantees against ℓ2-norm bounded adversarial perturbations. However, how to train a good base classifier that is accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart when training the base classifier. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both standard (non-adversarial) and adversarial training schemes. At the same time, we also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability. Our extensive experiments demonstrate the effectiveness of the proposed training and certification approaches on the CIFAR-10 and ImageNet datasets. Comment: AAAI202
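
    For context, a minimal sketch of the standard Monte Carlo certification for a Gaussian-smoothed classifier (certified radius sigma * Phi^-1(p_lower)); this is not the tighter certification algorithm proposed above, and the noise level, sample count, and single-pass estimation (no separate selection samples) are simplifying assumptions.

```python
# Standard randomized-smoothing certification sketch (Cohen et al. style),
# not the paper's tighter certificate. Assumes a PyTorch classifier.
import torch
from scipy.stats import norm, beta

def certify(model, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Certify one input x under isotropic Gaussian smoothing N(0, sigma^2 I)."""
    with torch.no_grad():
        noisy = x.unsqueeze(0) + sigma * torch.randn(n, *x.shape)
        preds = model(noisy).argmax(dim=1)
        counts = torch.bincount(preds, minlength=num_classes)
    top_class = int(counts.argmax())
    k = int(counts[top_class])
    # Clopper-Pearson one-sided (1 - alpha) lower bound on the top-class
    # probability; the full procedure would use separate selection samples.
    p_lower = float(beta.ppf(alpha, k, n - k + 1)) if k > 0 else 0.0
    if p_lower <= 0.5:
        return None, 0.0                       # abstain: nothing certified
    radius = sigma * norm.ppf(p_lower)         # certified l2 radius
    return top_class, float(radius)
```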

    Cycle Self-Training for Semi-Supervised Object Detection with Distribution Consistency Reweighting

    Recently, many semi-supervised object detection (SSOD) methods have adopted the teacher-student framework and achieved state-of-the-art results. However, the teacher network is tightly coupled with the student network, since the teacher is an exponential moving average (EMA) of the student, which causes a performance bottleneck. To address this coupling problem, we propose a Cycle Self-Training (CST) framework for SSOD, which consists of two teachers, T1 and T2, and two students, S1 and S2. Based on these networks, a cycle self-training mechanism is built, i.e., S1→T1→S2→T2→S1. For S→T, we utilize the EMA weights of the students to update the teachers. For T→S, instead of directly supervising its own student S1 (S2), the teacher T1 (T2) generates pseudo-labels for the other student S2 (S1), which loosens the coupling. Moreover, owing to the EMA update, the teacher tends to accumulate the student's biases, making its mistakes irreversible. To mitigate this problem, we also propose a distribution consistency reweighting strategy, in which pseudo-labels are reweighted according to their distribution consistency across the two teachers T1 and T2. With this strategy, the two students can be trained robustly with noisy pseudo-labels, avoiding confirmation bias. Extensive experiments demonstrate the superiority of CST: it consistently improves AP over the baseline and outperforms state-of-the-art methods by 2.1% absolute AP with scarce labeled data. Comment: ACM Multimedia 202
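
    A schematic sketch of the S1→T1→S2→T2→S1 cycle described above: students update teachers via EMA, and each teacher pseudo-labels the other teacher's student. The detector interface and the `train_on` helper are placeholders, and the distribution-consistency reweighting is only indicated in a comment; this is not the paper's code.

```python
# Cycle self-training skeleton (sketch): crossed T -> S pseudo-labelling and
# S -> T EMA updates. Model and training-step interfaces are placeholders.
import torch

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """S -> T: update teacher weights as an EMA of the student weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

def cycle_step(s1, s2, t1, t2, unlabeled_batch, train_on):
    """One unlabeled-data step of the cycle: crossed teacher supervision."""
    with torch.no_grad():
        pseudo_for_s2 = t1(unlabeled_batch)   # T1 supervises S2, not its own S1
        pseudo_for_s1 = t2(unlabeled_batch)   # T2 supervises S1, not its own S2
    # train_on(student, images, pseudo_labels) is assumed to run one detector
    # training step; reweighting by distribution consistency between the T1 and
    # T2 predictions would be applied inside it.
    train_on(s1, unlabeled_batch, pseudo_for_s1)
    train_on(s2, unlabeled_batch, pseudo_for_s2)
    ema_update(t1, s1)                        # close the cycle: S1 -> T1
    ema_update(t2, s2)                        # and S2 -> T2
```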