Attacking Visual Language Grounding with Adversarial Examples: A Case Study on Neural Image Captioning
Visual language grounding is widely studied in modern neural image captioning
systems, which typically adopt an encoder-decoder framework consisting of two
principal components: a convolutional neural network (CNN) for image feature
extraction and a recurrent neural network (RNN) for language caption
generation. To study the robustness of language grounding to adversarial
perturbations in machine vision and perception, we propose Show-and-Fool, a
novel algorithm for crafting adversarial examples in neural image captioning.
The proposed algorithm provides two evaluation approaches, which check whether
neural image captioning systems can be misled into outputting randomly chosen
captions or keywords. Our extensive experiments show that our algorithm can
successfully craft visually-similar adversarial examples with randomly targeted
captions or keywords, and the adversarial examples can be made highly
transferable to other image captioning systems. Consequently, our approach
leads to new robustness implications of neural image captioning and novel
insights in visual language grounding.

Comment: Accepted by the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018). Hongge Chen and Huan Zhang contributed equally to this work.
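For readers who want to see the shape of such an attack, here is a minimal PyTorch sketch of a targeted-caption attack in the spirit of Show-and-Fool: a perturbation is optimized to push the captioner's per-step logits toward a chosen target caption, while an L2 penalty keeps the image visually similar. The TinyCaptioner stub and the plain cross-entropy-plus-distortion loss are illustrative assumptions, not the paper's exact CNN+RNN model or loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCaptioner(nn.Module):
    """Stub encoder-decoder: image -> per-step vocabulary logits (an
    assumption, standing in for the paper's CNN encoder + RNN decoder)."""
    def __init__(self, vocab=50, steps=5):
        super().__init__()
        self.steps = steps
        self.head = nn.Linear(3 * 32 * 32, vocab)
    def forward(self, img):                          # img: (1, 3, 32, 32)
        logits = self.head(img.flatten(1))           # (1, vocab)
        return logits.unsqueeze(1).repeat(1, self.steps, 1)

model, img = TinyCaptioner(), torch.rand(1, 3, 32, 32)
target = torch.randint(0, 50, (1, 5))                # randomly chosen target caption
delta = torch.zeros_like(img, requires_grad=True)    # adversarial perturbation
opt, c = torch.optim.Adam([delta], lr=1e-2), 1e-3

for _ in range(200):
    logits = model((img + delta).clamp(0, 1))
    caption_loss = F.cross_entropy(logits.flatten(0, 1), target.flatten())
    loss = caption_loss + c * delta.pow(2).sum()     # stay visually similar
    opt.zero_grad(); loss.backward(); opt.step()
```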
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
Prediction accuracy has long been the sole standard for comparing the
performance of image classification models, including in the ImageNet
competition. However, recent studies have highlighted the lack of
robustness in well-trained deep neural networks to adversarial examples.
Visually imperceptible perturbations to natural images can easily be crafted
and mislead the image classifiers towards misclassification. To demystify the
trade-offs between robustness and accuracy, in this paper we thoroughly
benchmark 18 ImageNet models using multiple robustness metrics, including the
distortion, success rate and transferability of adversarial examples between
306 pairs of models. Our extensive experimental results reveal several new
insights: (1) linear scaling law - the empirical $\ell_2$ and $\ell_\infty$
distortion metrics scale linearly with the logarithm of classification error;
(2) model architecture is a more critical factor for robustness than model size,
and the disclosed accuracy-robustness Pareto frontier can be used as an
evaluation criterion for ImageNet model designers; (3) for a similar network
architecture, increasing network depth slightly improves robustness in
$\ell_\infty$ distortion; (4) there exist models (in the VGG family) that exhibit
high adversarial transferability, while most adversarial examples crafted from
one model can only be transferred within the same family. Experiment code is
publicly available at \url{https://github.com/huanzhang12/Adversarial_Survey}.

Comment: Accepted by the European Conference on Computer Vision (ECCV) 2018.
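As an illustration of the two robustness metrics and the reported log-linear trend, the following NumPy sketch computes mean $\ell_2$/$\ell_\infty$ distortion and fits distortion against the logarithm of classification error. All numbers below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

def l2_distortion(x, x_adv):
    """Mean per-image L2 norm of the adversarial perturbation."""
    return np.linalg.norm((x_adv - x).reshape(len(x), -1), axis=1).mean()

def linf_distortion(x, x_adv):
    """Mean per-image maximum absolute pixel change."""
    return np.abs(x_adv - x).reshape(len(x), -1).max(axis=1).mean()

x = np.random.rand(10, 3, 32, 32)                # clean images (synthetic)
x_adv = x + 0.01 * np.random.randn(*x.shape)     # stand-in adversarial images
print("mean l2:", l2_distortion(x, x_adv), "mean linf:", linf_distortion(x, x_adv))

# Hypothetical per-model numbers illustrating the linear scaling law:
# distortion grows linearly with log(classification error).
top1_error = np.array([0.20, 0.23, 0.25, 0.30, 0.35])
distortion = np.array([0.8, 1.1, 1.3, 1.9, 2.6])
slope, intercept = np.polyfit(np.log(top1_error), distortion, 1)
print(f"distortion ~= {slope:.2f} * log(error) + {intercept:.2f}")
```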
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Discovering and exploiting causality in deep neural networks (DNNs) is a
crucial challenge for understanding and reasoning about causal effects (CE) in
an explainable visual model. "Intervention" has been widely used for recognizing a
causal relation ontologically. In this paper, we propose a causal inference
framework for visual reasoning via do-calculus. To study the intervention
effects on pixel-level features for causal reasoning, we introduce pixel-wise
masking and adversarial perturbation. In our framework, CE is calculated using
features in a latent space and perturbed prediction from a DNN-based model. We
further provide the first look into the characteristics of discovered CE of
adversarially perturbed images generated by gradient-based methods
(code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg).
Experimental results show that CE is a competitive and robust index for
understanding DNNs when compared with conventional methods such as
class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for
human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds
promise for detecting adversarial examples, as it possesses distinct
characteristics in the presence of adversarial perturbations.

Comment: Note that the camera-ready version changed the title; "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks" is the official v3 paper title in the IEEE proceedings. Please use it in formal references. Accepted at IEEE ICIP 2019. PyTorch code has been released at https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg
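A toy sketch of the intervention idea follows: approximate a CE score by comparing a model's class probability before and after a do-style intervention (pixel-wise masking or a small perturbation). The stub classifier and the simple difference-of-probabilities score are assumptions for illustration, not the paper's exact do-calculus estimator.

```python
import torch
import torch.nn as nn

# Stub classifier; the paper uses trained DNNs (e.g., on Chest X-Ray-14).
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10), nn.Softmax(dim=1))
img = torch.rand(1, 3, 32, 32)

def causal_effect(img, intervene, cls=0):
    """Drop in class probability under an intervention; larger |CE| means
    the intervened-on evidence mattered more to the prediction."""
    with torch.no_grad():
        p_before = classifier(img)[0, cls]
        p_after = classifier(intervene(img))[0, cls]
    return (p_before - p_after).item()

mask = lambda x: x * (torch.rand_like(x) > 0.3)                    # pixel-wise masking
perturb = lambda x: (x + 0.03 * torch.randn_like(x)).clamp(0, 1)   # small perturbation

print("CE under masking:     ", causal_effect(img, mask))
print("CE under perturbation:", causal_effect(img, perturb))
```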
Using Pattern Recognition for Investment Decision Support in Taiwan Stock Market
The Taiwan stock market has accumulated large amounts of time-series stock data and successful investment strategies. The stock price, which is impacted by various factors, is the result of buyer and seller investment strategies. Since the stock price reflects numerous factors, its pattern can be read as a record of investors' strategies.
In this paper, the concept of pattern recognition is adopted to match the current stock price trend against repeatedly appearing past price data. Accordingly, a new method is introduced that quickly extracts features from a stock time-series chart to locate the most critical feature points. Matching is then performed using the information carried by these feature points. In other words, the goal is to find historical, repeatedly appearing patterns, namely similar trends, to help investors formulate investment strategies, as sketched below.
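A rough sketch of that matching step: reduce each price window to a few normalized feature points and scan history for the closest past pattern. The evenly spaced anchor points and Euclidean distance used here are simplifying assumptions; the paper's critical-feature-point extraction may differ.

```python
import numpy as np

def feature_points(window, k=5):
    """Reduce a price window to k normalized anchor points (illustrative;
    the paper extracts 'critical feature points', which may differ)."""
    idx = np.linspace(0, len(window) - 1, k).astype(int)
    pts = window[idx]
    return (pts - pts.mean()) / (pts.std() + 1e-9)   # compare shape, not level

prices = np.cumsum(np.random.randn(1000)) + 100      # synthetic price series
current = feature_points(prices[-30:])               # current trend window

best_i, best_d = None, np.inf
for i in range(len(prices) - 60):                    # scan historical windows
    d = np.linalg.norm(feature_points(prices[i:i + 30]) - current)
    if d < best_d:
        best_i, best_d = i, d
print(f"closest historical pattern starts at t={best_i} (distance {best_d:.3f})")
```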
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Deep neural networks (DNNs) are one of the most prominent technologies of our
time, as they achieve state-of-the-art performance in many machine learning
tasks, including but not limited to image classification, text mining, and
speech processing. However, recent research on DNNs has indicated
ever-increasing concern about their robustness to adversarial examples, especially
for security-critical tasks such as traffic sign identification for autonomous
driving. Studies have unveiled the vulnerability of a well-trained DNN by
demonstrating the ability of generating barely noticeable (to both human and
machines) adversarial images that lead to misclassification. Furthermore,
researchers have shown that these adversarial images are highly transferable by
simply training and attacking a substitute model built upon the target model,
known as a black-box attack to DNNs.
Similar to the setting of training substitute models, in this paper we
propose an effective black-box attack that also only has access to the input
(images) and the output (confidence scores) of a targeted DNN. However,
different from leveraging attack transferability from substitute models, we
propose zeroth order optimization (ZOO) based attacks to directly estimate the
gradients of the targeted DNN for generating adversarial examples. We use
zeroth order stochastic coordinate descent along with dimension reduction,
hierarchical attack and importance sampling techniques to efficiently attack
black-box models. By exploiting zeroth order optimization, improved attacks to
the targeted DNN can be accomplished, sparing the need for training substitute
models and avoiding the loss in attack transferability. Experimental results on
MNIST, CIFAR10 and ImageNet show that the proposed ZOO attack is as effective
as the state-of-the-art white-box attack and significantly outperforms existing
black-box attacks via substitute models.

Comment: Accepted by the 10th ACM Workshop on Artificial Intelligence and Security (AISec), held with the 24th ACM Conference on Computer and Communications Security (CCS).
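The core trick is easy to sketch: estimate coordinate-wise gradients of a black-box loss with symmetric finite differences and take coordinate-descent steps. In the sketch below, loss_fn is a toy quadratic standing in for an attack loss built from the target DNN's confidence scores; the dimension-reduction, hierarchical-attack, and importance-sampling refinements are omitted.

```python
import numpy as np

def zoo_step(x, loss_fn, n_coords=8, h=1e-4, lr=0.01):
    """One zeroth-order coordinate-descent step on a random batch of coordinates."""
    for i in np.random.choice(x.size, n_coords, replace=False):
        e = np.zeros_like(x)
        e[i] = h
        g = (loss_fn(x + e) - loss_fn(x - e)) / (2 * h)   # estimate of dL/dx_i
        x[i] -= lr * g
    return x

# Toy black-box loss: in a real attack this would be built from the target
# DNN's confidence scores for the adversarial objective.
target = np.random.rand(3 * 32 * 32)
loss_fn = lambda x: float(np.sum((x - target) ** 2))

x = np.random.rand(3 * 32 * 32)
for _ in range(100):
    x = zoo_step(x, loss_fn)
print("final loss:", loss_fn(x))
```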
A Behavioral Finance Analysis Using Learning Vector Quantization in the Taiwan Stock Market Index Future
There are various types of trading behavior in the stock market, and the buying and selling activities behind many investment strategies are influenced by numerous factors, such as fundamental analysis, macroeconomic analysis, and news analysis. Consequently, these factors are reflected in the market price. The random walk view of financial engineering is not the focus of this paper; instead, this research emphasizes the importance of technical analysis for Taiwan Stock Index Futures.
This paper investigates the information content of the open, high, low, and close prices of the previous trading day, together with the relative high and low points in the period preceding the current trading day, as well as their prices, in analyzing Taiwan Stock Index Futures. The predictive power of the Learning Vector Quantization network can clearly be seen from the empirical results.
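As a concrete reference point for the method, here is a small LVQ1 sketch on synthetic OHLC-style features. The six-feature layout and the up/down labels are illustrative assumptions, not the paper's dataset or exact network configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))            # e.g. [open, high, low, close, recent_high, recent_low]
y = (X[:, 3] > X[:, 0]).astype(int)      # toy label: close above open => "up"

# One prototype per class, then the LVQ1 update rule: pull the nearest
# prototype toward correctly labeled samples, push it away otherwise.
protos = np.array([X[y == c][0] for c in (0, 1)])
labels = np.array([0, 1])
lr = 0.05
for epoch in range(20):
    for xi, yi in zip(X, y):
        j = np.argmin(np.linalg.norm(protos - xi, axis=1))   # nearest prototype
        step = lr * (xi - protos[j])
        protos[j] += step if labels[j] == yi else -step

pred = labels[np.argmin(np.linalg.norm(protos[:, None] - X[None], axis=2), axis=0)]
print("train accuracy:", (pred == y).mean())
```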