
    Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks

    Under the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray and computed tomography (CT) imaging are being used to screen COVID-19 patients effectively. Computer-aided systems based on deep neural networks (DNNs) have been developed to detect COVID-19 cases rapidly and accurately, because the limited number of expert radiologists forms a bottleneck for screening. However, the vulnerability of DNN-based systems has so far been poorly evaluated, although DNNs are vulnerable to a single perturbation, called a universal adversarial perturbation (UAP), which can induce failure in most DNN classification tasks. Thus, we focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs generated using simple iterative algorithms. We consider nontargeted UAPs, which cause a task failure that results in an input being assigned an incorrect label, and targeted UAPs, which cause the DNN to classify an input into a specific class. The results demonstrate that the models are vulnerable to both nontargeted and targeted UAPs, even when the UAPs are small. In particular, UAPs with a norm of only 2% of the average image norm in the dataset achieve success rates of >85% for nontargeted attacks and >90% for targeted attacks. Under the nontargeted UAPs, the DNN models judge most chest X-ray images to be COVID-19 cases; the targeted UAPs make the models classify most chest X-ray images into a given target class. These results indicate that careful consideration is required in practical applications of DNNs to COVID-19 diagnosis; in particular, they emphasize the need for strategies to address security concerns. As an example, we show that iterative fine-tuning of the DNN models using UAPs improves their robustness against UAPs.
    Comment: 17 pages, 5 figures, 3 tables
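
    The closing remark suggests a simple defense loop: alternately craft a UAP for the current model and fine-tune on perturbed inputs. Below is a minimal PyTorch-style sketch of that idea, assuming images in [0, 1]; the names (`generate_uap`, `finetune_against_uaps`) and hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of iterative fine-tuning against UAPs (PyTorch).
# generate_uap, train_loader, and all hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def finetune_against_uaps(model, train_loader, generate_uap,
                          rounds=5, epochs=1, lr=1e-4):
    """Alternate between crafting a UAP for the current model and
    fine-tuning the model on perturbed inputs with their true labels."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # One shared perturbation for the whole dataset (assumed callable).
        uap = generate_uap(model, train_loader)
        model.train()
        for _ in range(epochs):
            for x, y in train_loader:
                x_adv = torch.clamp(x + uap, 0.0, 1.0)  # keep valid pixel range
                loss = F.cross_entropy(model(x_adv), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
    return model
```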

    Simple iterative method for generating targeted universal adversarial perturbations

    Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, a single perturbation known as the universal adversarial perturbation (UAP) can foil most classification tasks conducted by DNNs. Thus, different methods for generating UAPs are required to fully evaluate the vulnerability of DNNs. A realistic evaluation must consider targeted attacks, wherein the generated UAP causes the DNN to classify an input into a specific class. However, the development of UAPs for targeted attacks has largely fallen behind that of UAPs for non-targeted attacks. Therefore, we propose a simple iterative method to generate UAPs for targeted attacks. Our method combines the simple iterative method for generating non-targeted UAPs with the fast gradient sign method for generating a targeted adversarial perturbation for an input. We applied the proposed method to state-of-the-art DNN models for image classification and proved the existence of almost imperceptible UAPs for targeted attacks; further, we demonstrated that such UAPs can be generated easily.
    Comment: 4 pages, 3 figures, 1 table
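
    The abstract describes the method as a combination of an iterative non-targeted UAP algorithm with targeted FGSM steps. A minimal sketch of that combination is given below, assuming an L-infinity constraint and inputs in [0, 1]; all names and hyperparameters (`eps`, `step`, `epochs`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a targeted UAP: accumulate targeted FGSM steps over
# many inputs into one shared perturbation, projected onto an L-inf ball.
import torch
import torch.nn.functional as F

def targeted_uap(model, loader, target_class, eps=0.02, step=0.002, epochs=5):
    model.eval()
    uap = None
    for _ in range(epochs):
        for x, _ in loader:
            if uap is None:
                uap = torch.zeros_like(x[:1])  # one perturbation shared by all inputs
            x_adv = torch.clamp(x + uap, 0.0, 1.0).requires_grad_(True)
            target = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(model(x_adv), target)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Targeted FGSM step: descend the target-class loss, averaged over
            # the batch so the update stays input-agnostic.
            uap = uap - step * grad.mean(dim=0, keepdim=True).sign()
            uap = uap.clamp(-eps, eps)  # project onto the L-inf ball of radius eps
    return uap
```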

    Difficulty in inferring microbial community structure based on co-occurrence network approaches

    Background: Co-occurrence networks, that is, ecological associations between sampled populations of microbial communities inferred from taxonomic composition data obtained with high-throughput sequencing, are widely used in microbial ecology, and several methods for inferring them have been proposed. These methods only infer ecological associations, yet they are often used to discuss species interactions, and the validity of this application is currently debated. In particular, the methods have typically been evaluated with simple parametric statistical models, even though microbial compositions are determined through population dynamics. Results: We comprehensively evaluated the validity of common methods for inferring microbial ecological networks through realistic simulations, assessing how correctly nine widely used methods describe interaction patterns in ecological communities. Contrary to previous studies, the performance of the co-occurrence network methods on compositional data was almost equal to or lower than that of classical methods (e.g., Pearson's correlation). The methods described the interaction patterns in dense and/or heterogeneous networks rather poorly. Performance also depended on interaction type: interaction patterns in competitive communities were predicted relatively accurately, whereas those in predator–prey (parasitic) communities were predicted relatively poorly. Conclusions: Our findings indicate that co-occurrence network approaches may be insufficient for interpreting species interactions in microbiome studies. This does not diminish the importance of these approaches; rather, it highlights the need for further careful evaluation of the validity of these widely used methods and for the development of more suitable methods for inferring microbial ecological networks.
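
    As a concrete illustration of the classical baseline mentioned above, the sketch below thresholds a Pearson correlation matrix computed on relative abundances. The data are synthetic and the threshold is arbitrary; the snippet only demonstrates how closure to compositions can induce spurious associations even among independent taxa, which is one reason association networks are a weak proxy for interactions.

```python
# Naive co-occurrence baseline: threshold a Pearson correlation matrix
# computed on relative abundances (compositions). Purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
counts = rng.poisson(lam=20, size=(50, 10)).astype(float)  # 50 samples x 10 taxa
rel = counts / counts.sum(axis=1, keepdims=True)           # closure to compositions

corr = np.corrcoef(rel, rowvar=False)                       # taxa-by-taxa correlations
adjacency = (np.abs(corr) > 0.3) & ~np.eye(10, dtype=bool)  # threshold -> "network"

# Even with independently drawn taxa, closure induces spurious (mostly
# negative) correlations, so some edges are typically inferred anyway.
print(adjacency.sum() // 2, "inferred edges among independent taxa")
```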

    Natural images allow universal adversarial attacks on medical image classification using deep neural networks with transfer learning

    Transfer learning from natural images is used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks were expected to be limited because the training datasets (medical images) that are often required to mount them are generally unavailable owing to security and privacy concerns. Nevertheless, in this study, we demonstrated that adversarial attacks on medical DNN models with transfer learning are also possible using natural images, even when the medical images are unavailable; in particular, we showed that universal adversarial perturbations (UAPs) can be generated from natural images. UAPs from natural images are effective for both non-targeted and targeted attacks, and their performance was significantly higher than that of random controls. The use of transfer learning thus opens a security hole that decreases the reliability and safety of computer-based disease diagnosis. Training the models from random initialization reduced the performance of UAPs from natural images; however, it did not completely remove the vulnerability. Vulnerability to UAPs generated from natural images is expected to become a significant security threat.
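
    For reference, the two attack metrics reported in this line of work can be computed as sketched below: the non-targeted fooling rate (the prediction changes under the UAP) and the targeted success rate (the prediction equals the attacker's target class). This is a generic evaluation sketch assuming inputs in [0, 1], not code from the paper.

```python
# Hedged sketch of UAP evaluation: fooling rate and targeted success rate.
import torch

@torch.no_grad()
def attack_success_rates(model, loader, uap, target_class):
    model.eval()
    fooled, hit_target, total = 0, 0, 0
    for x, _ in loader:
        clean_pred = model(x).argmax(dim=1)
        adv_pred = model(torch.clamp(x + uap, 0.0, 1.0)).argmax(dim=1)
        fooled += (adv_pred != clean_pred).sum().item()        # non-targeted success
        hit_target += (adv_pred == target_class).sum().item()  # targeted success
        total += x.size(0)
    return fooled / total, hit_target / total
```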

    Universal adversarial attacks on deep neural networks for medical image classification

    Deep neural networks (DNNs) are widely investigated in medical image classification to achieve automated support for clinical diagnosis. It is necessary to evaluate the robustness of medical DNN models against adversarial attacks, as high-stakes decisions are made based on the diagnosis. Several previous studies have considered simple adversarial attacks. However, the vulnerability of DNNs to more realistic and higher-risk attacks, such as the universal adversarial perturbation (UAP), a single perturbation that can induce DNN failure in most classification tasks, has not yet been evaluated.

    Spontaneous Bilateral Pneumothorax in a Patient with Anorexia Nervosa: The Management of Prolonged Postoperative Air Leakage

    A 24-year-old Japanese female with anorexia nervosa presented to our hospital with bilateral pneumothorax, and 12-Fr thoracostomy catheters were inserted into the bilateral pleural cavities. On hospital day 9, a thoracoscopic bullectomy was performed. However, air leakage relapsed on both sides on postoperative day 1. The air leakage on the right side was particularly persistent, and we switched the drainage to a Heimlich valve. Both lungs expanded gradually, and the chest tube was removed on postoperative day 19. Passive pleural drainage might be an option for prolonged air leakage after a bullectomy in patients with anorexia nervosa.