
    Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning

    Visual language grounding is widely studied in modern neural image captioning systems, which typically adopt an encoder-decoder framework consisting of two principal components: a convolutional neural network (CNN) for image feature extraction and a recurrent neural network (RNN) for language caption generation. To study the robustness of language grounding to adversarial perturbations in machine vision and perception, we propose Show-and-Fool, a novel algorithm for crafting adversarial examples in neural image captioning. The proposed algorithm provides two evaluation approaches, which check whether neural image captioning systems can be misled into outputting randomly chosen captions or keywords. Our extensive experiments show that our algorithm can successfully craft visually similar adversarial examples with randomly targeted captions or keywords, and that these adversarial examples can be made highly transferable to other image captioning systems. Consequently, our approach yields new robustness implications for neural image captioning and novel insights into visual language grounding.
    Comment: Accepted by the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Hongge Chen and Huan Zhang contributed equally to this work.
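
Targeted attacks of this kind are typically cast as a penalized optimization over an image perturbation. In schematic form (notation ours, not taken verbatim from the paper):

```latex
\min_{\delta} \; c \cdot \mathcal{L}\bigl(x + \delta,\; S^{\mathrm{target}}\bigr) \;+\; \lVert \delta \rVert_2^2
\qquad \text{s.t.} \quad x + \delta \in [0,1]^n
```

Here $x$ is the clean image, $\delta$ the perturbation, $S^{\mathrm{target}}$ the chosen target caption (or keyword set), $\mathcal{L}$ a loss that is small when the captioner emits the target, and $c$ trades off attack success against visual distortion.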

    ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models

    Deep neural networks (DNNs) are one of the most prominent technologies of our time, as they achieve state-of-the-art performance in many machine learning tasks, including but not limited to image classification, text mining, and speech processing. However, recent research on DNNs has indicated ever-increasing concern about their robustness to adversarial examples, especially for security-critical tasks such as traffic sign identification for autonomous driving. Studies have unveiled the vulnerability of well-trained DNNs by demonstrating the ability to generate barely noticeable (to both humans and machines) adversarial images that lead to misclassification. Furthermore, researchers have shown that these adversarial images are highly transferable, so an attacker can simply train and attack a substitute model built upon the target model, known as a black-box attack on DNNs. Similar to the setting of training substitute models, in this paper we propose an effective black-box attack that also only has access to the input (images) and the output (confidence scores) of a targeted DNN. However, instead of leveraging attack transferability from substitute models, we propose zeroth order optimization (ZOO) based attacks to directly estimate the gradients of the targeted DNN for generating adversarial examples. We use zeroth order stochastic coordinate descent along with dimension reduction, hierarchical attack, and importance sampling techniques to efficiently attack black-box models. By exploiting zeroth order optimization, improved attacks on the targeted DNN can be accomplished, sparing the need for training substitute models and avoiding the loss in attack transferability.
Experimental results on MNIST, CIFAR10, and ImageNet show that the proposed ZOO attack is as effective as the state-of-the-art white-box attack and significantly outperforms existing black-box attacks via substitute models.
Comment: Accepted by the 10th ACM Workshop on Artificial Intelligence and Security (AISec) with the 24th ACM Conference on Computer and Communications Security (CCS).
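
The core primitive — estimating one coordinate of the gradient from score queries alone — can be sketched in a few lines. This is a toy illustration under our own assumptions: a generic scalar "attack loss" stands in for the DNN's confidence-based loss, and none of the paper's dimension-reduction, hierarchical-attack, or importance-sampling machinery is included.

```python
import numpy as np

def zoo_grad_coord(f, x, i, h=1e-4):
    # Symmetric finite difference: estimates df/dx_i from two score queries,
    # with no access to the model's internal gradients.
    e = np.zeros_like(x)
    e[i] = h
    return (f(x + e) - f(x - e)) / (2.0 * h)

def zoo_minimize(f, x0, steps=3000, lr=0.01, seed=0):
    # Zeroth-order stochastic coordinate descent: pick a random coordinate,
    # estimate its partial derivative, and take a small step along it.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        i = int(rng.integers(x.size))
        x[i] -= lr * zoo_grad_coord(f, x, i)
    return x

# Toy stand-in for an attack loss (hypothetical; a real attack would query
# the targeted DNN's confidence scores here).
loss = lambda x: float((x @ x - 1.0) ** 2)
x_adv = zoo_minimize(loss, [2.0, -1.5, 0.5])
```

In the actual attack, `f` would wrap the targeted model's output scores and the descent would run over (a reduced set of) pixel coordinates.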

    Efficient Neural Network Robustness Certification with General Activation Functions

    Finding the minimum distortion of adversarial examples, and thus certifying robustness of neural network classifiers for given data points, is known to be a challenging problem. Nevertheless, it has recently been shown possible to give a non-trivial certified lower bound on the minimum adversarial distortion, and some progress has been made in this direction by exploiting the piecewise linear nature of ReLU activations. However, generic robustness certification for general activation functions remains largely unexplored. To address this issue, in this paper we introduce CROWN, a general framework to certify robustness of neural networks with general activation functions for given input data points. The novelty of our algorithm lies in bounding a given activation function with linear and quadratic functions, allowing it to tackle general activation functions including but not limited to four popular choices: ReLU, tanh, sigmoid, and arctan. In addition, we facilitate the search for a tighter certified lower bound by adaptively selecting appropriate surrogates for each neuron activation. Experimental results show that CROWN on ReLU networks can notably improve the certified lower bounds compared to the current state-of-the-art algorithm Fast-Lin, while having comparable computational efficiency. Furthermore, CROWN also demonstrates its effectiveness and flexibility on networks with general activation functions, including tanh, sigmoid, and arctan.
    Comment: Accepted by NIPS 2018. Huan Zhang and Tsui-Wei Weng contributed equally.
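
The key idea — sandwiching a non-piecewise-linear activation between two linear functions on a known pre-activation interval — can be illustrated for the sigmoid on its concave region. This is a minimal sketch with our own fixed choice of chord and midpoint tangent; CROWN's actual surrogate selection is adaptive and handles all curvature regimes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def linear_bounds_sigmoid(l, u):
    # Valid on l >= 0, where sigmoid is concave: the chord through the
    # endpoints lower-bounds the curve, and the tangent at the midpoint
    # upper-bounds it.
    assert 0.0 <= l < u
    a_lo = (sigmoid(u) - sigmoid(l)) / (u - l)   # chord slope
    b_lo = sigmoid(l) - a_lo * l
    m = 0.5 * (l + u)
    a_up = sigmoid(m) * (1.0 - sigmoid(m))       # sigmoid'(m)
    b_up = sigmoid(m) - a_up * m
    return (a_lo, b_lo), (a_up, b_up)

(a_lo, b_lo), (a_up, b_up) = linear_bounds_sigmoid(0.0, 2.0)
z = np.linspace(0.0, 2.0, 201)
sandwiched = (np.all(a_lo * z + b_lo <= sigmoid(z) + 1e-9)
              and np.all(a_up * z + b_up >= sigmoid(z) - 1e-9))
print(sandwiched)  # → True
```

Propagating such per-neuron linear bounds layer by layer is what yields the certified lower bound on the adversarial distortion.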

    Measurement incompatibility cannot be stochastically distilled

    We show that the incompatibility of a set of measurements cannot be increased by subjecting them to a filter, namely, by combining them with a device that post-selects the incoming states on a fixed outcome of a stochastic transformation. This result holds for several measures of incompatibility, such as those based on robustness and convex weight. Extending these ideas to Einstein-Podolsky-Rosen steering experiments, we solve the problem of the maximum steerability obtainable with respect to the most general local filters, in a way that allows for an explicit calculation of the filter operation. Moreover, our results generalize to nonphysical maps, i.e., positive but not completely positive linear maps.
    Comment: 12 pages, 1 figure, comments welcome.
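
Schematically (notation and normalization ours), a filter with Kraus operator $K$ post-selects incoming states as

```latex
\rho \;\longmapsto\; \frac{K \rho K^{\dagger}}{\operatorname{tr}\!\left(K \rho K^{\dagger}\right)},
```

and the no-distillation result says that, for an incompatibility measure $\mathcal{I}$ based on robustness or convex weight, the effective measurements seen through any such filter are never more incompatible than the originals: $\mathcal{I}(\text{filtered}) \le \mathcal{I}(\text{original})$.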

    Complete classification of steerability under local filters and its relation with measurement incompatibility

    Quantum steering is a central resource for one-sided device-independent quantum information. It is manipulated via one-way local operations and classical communication, such as local filtering on the trusted party. Here, we provide a necessary and sufficient condition for a steering assemblage to be transformable into another via local filtering. We characterize the equivalence classes with respect to filters in terms of the steering equivalent observables (SEO), first proposed to connect the problems of steerability and measurement incompatibility. We provide an efficient method to compute the extractable steerability, i.e., the maximal steerability obtainable via local filters, and show that it coincides with the incompatibility of the SEO. Moreover, we show that there always exists a bipartite state that provides an assemblage with steerability equal to the incompatibility of the measurements on the untrusted party. Finally, we investigate the optimal success probability and rates for transformation protocols (distillation and dilution) in the single-shot scenario, together with examples.
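
In a common notation (ours, not verbatim from the paper), a local filter $K$ on the trusted party maps an assemblage $\{\sigma_{a|x}\}$ to

```latex
\sigma_{a|x} \;\longmapsto\; \tilde{\sigma}_{a|x}
= \frac{K \sigma_{a|x} K^{\dagger}}{\operatorname{tr}\!\left(K \sigma K^{\dagger}\right)},
\qquad \sigma = \sum_{a} \sigma_{a|x},
```

and the extractable steerability is the maximum steerability of $\{\tilde{\sigma}_{a|x}\}$ over all such filters, which the paper shows equals the incompatibility of the SEO.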

    High expression of FUT1 and B3GALT5 is an independent predictor of postoperative recurrence and survival in hepatocellular carcinoma.

    Cancer may arise from dedifferentiation of mature cells or maturation-arrested stem cells. Previously we reported that definitive endoderm, from which the liver is derived, expressed Globo H, SSEA-3, and SSEA-4. In this study, we examined the expression of their biosynthetic enzymes, FUT1, FUT2, B3GALT5, and ST3GAL2, in 135 hepatocellular carcinoma (HCC) tissues by qRT-PCR. High expression of either FUT1 or B3GALT5 was significantly associated with advanced stages and poor outcome. Kaplan–Meier survival analysis showed significantly shorter relapse-free survival (RFS) for those with high expression of either FUT1 or B3GALT5 (P = 0.024 and 0.001, respectively) and shorter overall survival (OS) for those with high expression of B3GALT5 (P = 0.017). Combining FUT1 and B3GALT5 revealed that patients with high expression of both genes had poorer RFS and OS than the others (P < 0.001). Moreover, multivariable Cox regression analysis identified the combination of B3GALT5 and FUT1 as an independent predictor for RFS (HR: 2.370, 95% CI: 1.505-3.731, P < 0.001) and OS (HR: 2.153, 95% CI: 1.188-3.902, P = 0.012) in HCC. In addition, the presence of Globo H, SSEA-3, and SSEA-4 in some HCC tissues, and their absence in normal liver, was established by immunohistochemical staining and mass spectrometric analysis.
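
For readers less familiar with the statistics: the reported hazard ratios come from a Cox proportional-hazards model, which in generic form (notation ours) models the hazard as

```latex
h(t \mid x) = h_0(t)\, \exp\!\left(\beta^{\top} x\right),
\qquad \mathrm{HR} = e^{\beta},
```

so an HR of 2.370 for the high-FUT1/high-B3GALT5 group means roughly a 2.4-fold instantaneous relapse risk relative to the reference group, holding the other covariates fixed.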

    Are AlphaZero-like Agents Robust to Adversarial Perturbations?

    The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. Given that the state space of Go is extremely large and a human player can play the game from any legal state, we ask whether adversarial states exist for Go AIs that may lead them to play surprisingly wrong actions. In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are "semantically" equivalent to the original state by adding meaningless moves to the game, and an adversarial state is a perturbed state leading to an undoubtedly inferior action that is obvious even to Go beginners. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that the actions taken by both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, on 58% of AlphaGo Zero self-play games, our method can make the widely used KataGo agent with 50 simulations of MCTS play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90% of the examples indeed led the Go agent to play an obviously inferior action. Our code is available at https://PaperCode.cc/GoAttack.
    Comment: Accepted by NeurIPS 2022.
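
The outer loop of such an attack — enumerating perturbations that add a few "meaningless" moves and querying the black-box agent for one that flips its action to an inferior one — can be sketched generically. Everything here is a hypothetical stand-in: the toy `policy` replaces a real Go engine, and the paper's actual contribution, the strategic pruning of this search space, is not modeled.

```python
from itertools import combinations

def find_adversarial_state(state, neutral_moves, policy, is_inferior):
    # Try adding one meaningless move, then two, and return the first
    # perturbed state whose chosen action is clearly inferior.
    for k in (1, 2):
        for extra in combinations(neutral_moves, k):
            perturbed = tuple(state) + extra
            if is_inferior(policy(perturbed)):
                return perturbed
    return None

# Toy stand-in for a Go engine: this policy picks a bad action only when the
# state contains both hypothetical neutral moves "n1" and "n2".
policy = lambda s: "bad" if {"n1", "n2"} <= set(s) else "good"
adv = find_adversarial_state(("m1", "m2"), ["n1", "n2", "n3"], policy,
                             lambda a: a == "bad")
print(adv)  # → ('m1', 'm2', 'n1', 'n2')
```

With a real agent, `is_inferior` would compare the chosen action against a strong reference evaluation, which is exactly why brute force is infeasible and search-space reduction matters.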

    Galectin-3 Modulates Th17 Responses by Regulating Dendritic Cell Cytokines

    Galectin-3 is a β-galactoside–binding animal lectin with diverse functions, including regulation of T helper (Th) 1 and Th2 responses. Current data indicate that galectin-3 expressed in dendritic cells (DCs) may contribute to this regulation. Th17 cells have emerged as critical inducers of tissue inflammation in autoimmune disease and important mediators of host defense against fungal pathogens, although little is known about galectin-3 involvement in Th17 development. We investigated the role of galectin-3 in the induction of Th17 immunity in galectin-3–deficient (gal3−/−) and gal3+/+ mouse bone marrow–derived DCs. We demonstrate that intracellular galectin-3 negatively regulates Th17 polarization in response to the dectin-1 agonist curdlan (a β-glucan present on the cell wall of fungal species) and lipopolysaccharide, agents that prime DCs for Th17 differentiation. On activation of dectin-1, gal3−/− DCs secreted higher levels of the Th17-axis cytokine IL-23 compared with gal3+/+ DCs and contained higher levels of activated c-Rel, an NF-κB subunit that promotes IL-23 expression. Levels of active Raf-1, a kinase that participates in downstream inhibition of c-Rel binding to the IL23A promoter, were impaired in gal3−/− DCs. Modulation of Th17 by galectin-3 in DCs also occurred in vivo, as adoptive transfer of gal3−/− DCs exposed to Candida albicans conferred higher Th17 responses and protection against fungal infection. We conclude that galectin-3 suppresses Th17 responses by regulating DC cytokine production.