Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning
Recent studies have shown that attackers can force deep learning models to
misclassify so-called "adversarial examples": maliciously generated images
formed by making imperceptible modifications to pixel values. With growing
interest in deep learning for security applications, it is important for
security experts and users of machine learning to recognize how learning
systems may be attacked. Due to the complex nature of deep learning, it is
challenging to understand how deep models can be fooled by adversarial
examples. Thus, we present a web-based visualization tool,
Adversarial-Playground, to demonstrate the efficacy of common adversarial
methods against a convolutional neural network (CNN) system.
Adversarial-Playground is educational, modular, and interactive. (1) It
enables non-experts to compare examples visually and to understand why an
adversarial example can fool a CNN-based image classifier. (2) It helps
security experts explore further vulnerabilities of deep learning as a
software module. (3) Building an interactive visualization is challenging in
this domain because of the large feature space of image classification:
generating adversarial examples is slow in general, and visualizing images is
costly. Through multiple novel design choices, our tool provides fast and
accurate responses to user requests. Empirically, we find that our
client-server division strategy reduces the response time by an average of
1.5 seconds per sample. Our other innovation, a faster variant of the JSMA
evasion algorithm, empirically runs twice as fast as JSMA while maintaining a
comparable evasion rate.
Project source code and data from our experiments are available at:
https://github.com/QData/AdversarialDNN-Playground
Comment: 5 pages. ACM classes: I.2.6 (Artificial Intelligence); K.6.5
(Management of Computing and Information Systems: Security and Protection).
arXiv admin note: substantial text overlap with arXiv:1706.0176
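The JSMA attack mentioned above perturbs the few pixels whose Jacobian saliency most favors the target class. The sketch below is a minimal single-pixel-per-iteration version of the standard JSMA (the paper's faster variant is not detailed in the abstract); the PyTorch classifier `model` and the `theta`/`max_iters` parameters are illustrative assumptions.

```python
# A minimal sketch of the standard JSMA attack (Papernot et al.); the
# paper's faster variant is not specified here. `model` is an assumed
# PyTorch classifier mapping an image batch to class logits.
import torch

def jsma_attack(model, x, target, theta=0.1, max_iters=100):
    """Greedily perturb the most salient pixel of `x` toward `target`."""
    x_adv = x.clone().detach()
    for _ in range(max_iters):
        with torch.no_grad():
            if model(x_adv).argmax(dim=1).item() == target:
                break  # evasion succeeded
        # Jacobian of the class scores with respect to every input pixel
        jac = torch.autograd.functional.jacobian(
            lambda inp: model(inp).squeeze(0), x_adv)
        grad_target = jac[target]                   # dF_target / dx
        grad_others = jac.sum(dim=0) - grad_target  # sum over other classes
        # Saliency rewards pixels that raise the target score while
        # lowering the combined score of all other classes
        saliency = torch.where(
            (grad_target > 0) & (grad_others < 0),
            grad_target * grad_others.abs(),
            torch.zeros_like(grad_target))
        idx = saliency.flatten().argmax()
        x_adv.view(-1)[idx] = (x_adv.view(-1)[idx] + theta).clamp(0, 1)
    return x_adv
```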
Mining Frequency of Drug Side Effects Over a Large Twitter Dataset Using Apache Spark
Despite clinical trials by pharmaceutical companies and current FDA reporting systems, there are still drug side effects that have not been caught. One way to find a larger sample of reports is to mine online social media. With its current widespread use, social media such as Twitter has given rise to massive amounts of data that can serve as reports of drug side effects. To process these large datasets, Apache Spark has become popular for fast, distributed batch processing. In this work, we improve on previous sentiment-analysis-based pipelines for mining, processing, and extracting tweets that report drug-caused side effects. We also add a new ensemble classifier that combines sentiment analysis features to increase the accuracy of identifying drug-caused side effects, and we provide frequency counts for the side effects. Furthermore, we implement the same pipeline in Apache Spark, speeding up tweet processing by 2.5 times and supporting larger tweet datasets. As the frequency counts of drug side effects open a wide door for further analysis, we present a preliminary study on this issue, including the side effects of using two drugs simultaneously and the potential danger of using less-common combinations of drugs. We believe the pipeline design and the results presented in this work have important implications for studying drug side effects and for big data analysis in general.
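For a sense of the Spark side of such a pipeline, the sketch below counts side-effect mentions over a tweet corpus with PySpark; the file name `tweets.txt` and the tiny side-effect lexicon are hypothetical stand-ins, and the paper's ensemble classifier and sentiment features are not reproduced.

```python
# A minimal PySpark sketch of the frequency-counting stage; the input
# file (one tweet per line) and the lexicon are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("side-effect-freq").getOrCreate()
sc = spark.sparkContext

lexicon = sc.broadcast({"nausea", "headache", "dizziness", "insomnia"})

counts = (sc.textFile("tweets.txt")
            .flatMap(lambda tweet: tweet.lower().split())
            .map(lambda tok: tok.strip(".,!?#@"))     # crude token cleanup
            .filter(lambda tok: tok in lexicon.value)
            .map(lambda tok: (tok, 1))
            .reduceByKey(lambda a, b: a + b))         # frequency per effect

for effect, freq in counts.collect():
    print(effect, freq)
```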
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals
An electroencephalography (EEG) based Brain Computer Interface (BCI) enables
people to communicate with the outside world by interpreting the EEG signals of
their brains to interact with devices such as wheelchairs and intelligent
robots. More specifically, motor imagery EEG (MI-EEG), which reflects a
subject's active intent, is attracting increasing attention for a variety of BCI
applications. Accurate classification of MI-EEG signals, while essential for
effective operation of BCI systems, is challenging due to the significant noise
inherent in the signals and the lack of informative correlation between the
signals and brain activities. In this paper, we propose a novel deep neural
network based learning framework that affords perceptive insights into the
relationship between the MI-EEG data and brain activities. We design a joint
convolutional recurrent neural network that simultaneously learns robust
high-level feature representations through low-dimensional dense embeddings from
raw MI-EEG signals. We also employ an Autoencoder layer to eliminate various
artifacts such as background activities. The proposed approach has been
evaluated extensively on a large-scale public MI-EEG dataset and a limited but
easy-to-deploy dataset collected in our lab. The results show that our approach
outperforms a series of baselines and the competitive state-of-the-art
methods, yielding a classification accuracy of 95.53%. The applicability of our
proposed approach is further demonstrated with a practical BCI system for
typing.
Comment: 10 pages
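As a rough illustration of the joint convolutional-recurrent idea, the PyTorch model below convolves over time and feeds the resulting sequence to an LSTM; the 64-channel/400-sample input, five imagery classes, and layer sizes are illustrative assumptions, and the paper's autoencoder denoising layer is omitted.

```python
# A sketch of a joint CNN-RNN classifier for MI-EEG; shapes and widths
# are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ConvRecurrentEEG(nn.Module):
    def __init__(self, n_channels=64, n_classes=5, hidden=128):
        super().__init__()
        # 1-D convolution over time extracts local temporal features
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2))
        # The LSTM models longer-range temporal structure of the trial
        self.rnn = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x)                   # (batch, 32, time/2)
        h = h.transpose(1, 2)              # (batch, time/2, 32)
        _, (h_n, _) = self.rnn(h)          # final hidden state
        return self.head(h_n[-1])          # logits: (batch, n_classes)

logits = ConvRecurrentEEG()(torch.randn(8, 64, 400))  # smoke test
```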
Deep Cross-Modal Audio-Visual Generation
Cross-modal audio-visual perception has been a long-lasting topic in
psychology and neurology, and various studies have discovered strong
correlations in human perception of auditory and visual stimuli. Despite work
in computational multimodal modeling, the problem of cross-modal audio-visual
generation has not been systematically studied in the literature. In this
paper, we make the first attempt to solve this cross-modal generation problem
leveraging the power of deep generative adversarial training. Specifically, we
use conditional generative adversarial networks to achieve cross-modal
audio-visual generation of musical performances. We explore different encoding
methods for audio and visual signals, and work on two scenarios:
instrument-oriented generation and pose-oriented generation. Being the first to
explore this new problem, we compose two new datasets with pairs of images and
sounds of musical performances of different instruments. Our experiments using
both classification and human evaluations demonstrate that our model has the
ability to generate one modality, i.e., audio/visual, from the other modality,
i.e., visual/audio, to a good extent. Our experiments on various design choices
along with the datasets will facilitate future research in this new problem
space.
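A minimal conditional-GAN skeleton for the audio-to-image direction might look like the following; the 128-dimensional audio embedding, flattened 64x64 output, and layer widths are illustrative assumptions rather than the paper's architecture.

```python
# A conditional-GAN sketch: both networks are conditioned on an audio
# embedding (a stand-in for an audio encoder's output).
import torch
import torch.nn as nn

AUDIO_DIM, NOISE_DIM, IMG_PIXELS = 128, 100, 64 * 64

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Concatenating noise with the audio condition makes the
        # generated image depend on the sound embedding
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + AUDIO_DIM, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh())

    def forward(self, z, audio_emb):
        return self.net(torch.cat([z, audio_emb], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        # The discriminator also sees the condition, so it scores whether
        # an image is a plausible match for the given audio clip
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + AUDIO_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1))

    def forward(self, img, audio_emb):
        return self.net(torch.cat([img, audio_emb], dim=1))

z, emb = torch.randn(4, NOISE_DIM), torch.randn(4, AUDIO_DIM)
fake = Generator()(z, emb)             # (4, 4096) flattened fake images
score = Discriminator()(fake, emb)     # realism-given-audio scores
```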
Automating biomedical data science through tree-based pipeline optimization
Over the past decade, data science and machine learning have grown from a
mysterious art form to a staple tool across a variety of fields in academia,
business, and government. In this paper, we introduce the concept of tree-based
pipeline optimization for automating one of the most tedious parts of machine
learning---pipeline design. We implement a Tree-based Pipeline Optimization
Tool (TPOT) and demonstrate its effectiveness on a series of simulated and
real-world genetic data sets. In particular, we show that TPOT can build
machine learning pipelines that achieve competitive classification accuracy and
discover novel pipeline operators---such as synthetic feature
constructors---that significantly improve classification accuracy on these data
sets. We also highlight the current challenges to pipeline optimization, such
as the tendency to produce pipelines that overfit the data, and suggest future
research paths to overcome these challenges. As such, this work represents an
early step toward fully automating machine learning pipeline design.
Comment: 16 pages, 5 figures, to appear in the EvoBIO 2016 proceedings
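TPOT is distributed as an open-source Python package, and its documented interface (classic TPOT releases) makes the pipeline search a few lines; `load_digits` below is a stand-in for the genetic datasets used in the paper.

```python
# Minimal TPOT usage following the library's documented API.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Genetic programming evolves a population of scikit-learn pipelines
tpot = TPOTClassifier(generations=5, population_size=20,
                      random_state=42, verbosity=2)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('best_pipeline.py')  # emit the winning pipeline as Python code
```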