Time series classification models have been garnering significant attention
in the research community. However, little research has been done on
generating adversarial samples for these models, and such samples can pose a
security concern. In this paper, we propose utilizing an adversarial
transformation network (ATN) on a distilled model to attack various time series
classification models. The attack uses the distilled model as a surrogate that
mimics the behavior of the attacked classical time series classification model
(a sketch of this pipeline is given below). Our proposed methodology is
applied to 1-Nearest Neighbor Dynamic Time Warping (1-NN DTW) and a Fully
Convolutional Network (FCN), both of which are trained on 42 University of
California Riverside (UCR) datasets. We show that both models were susceptible
to attacks on all 42 datasets. To the best
of our knowledge, such an attack on time series classification models has never
been done before. Finally, we recommend that future researchers who develop
time series classification models incorporate adversarial data samples into
their training data sets to improve resilience against adversarial attacks,
and that they consider model robustness as an evaluative metric.
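
As an illustration of the pipeline described above, the following is a minimal
sketch in PyTorch, not the paper's actual implementation: the network sizes,
loss weights, and names such as SERIES_LEN, N_CLASSES, distill_step, and
atn_step are assumptions made for this example. A surrogate (student) network
is distilled to match the black-box classifier's output probabilities, and the
ATN is then trained against the differentiable surrogate, which stands in for
non-differentiable targets such as 1-NN DTW.

    # A minimal, hypothetical sketch of the distillation + ATN attack;
    # all names and hyperparameters are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    SERIES_LEN, N_CLASSES = 128, 5  # assumed dataset dimensions

    # Student (surrogate): trained to mimic the black-box classifier.
    student = nn.Sequential(
        nn.Linear(SERIES_LEN, 256), nn.ReLU(),
        nn.Linear(256, N_CLASSES),
    )

    # ATN: maps a time series to a perturbation of the same length.
    atn = nn.Sequential(
        nn.Linear(SERIES_LEN, 256), nn.ReLU(),
        nn.Linear(256, SERIES_LEN), nn.Tanh(),
    )

    def distill_step(x, black_box_probs, opt):
        # Knowledge distillation: match the surrogate's predicted
        # distribution to the black-box model's class probabilities.
        opt.zero_grad()
        loss = F.kl_div(F.log_softmax(student(x), dim=1),
                        black_box_probs, reduction="batchmean")
        loss.backward()
        opt.step()
        return loss.item()

    def atn_step(x, target_class, opt, beta=0.1):
        # Train the ATN: keep the adversarial series close to the
        # original while pushing the surrogate toward the target class.
        # The surrogate's gradients stand in for the black-box model.
        opt.zero_grad()
        x_adv = x + beta * atn(x)
        loss = (F.mse_loss(x_adv, x)
                + F.cross_entropy(student(x_adv), target_class))
        loss.backward()
        opt.step()
        return loss.item()

    # Example usage with random stand-in data:
    x = torch.randn(32, SERIES_LEN)
    probs = F.softmax(torch.randn(32, N_CLASSES), dim=1)  # black-box outputs
    targets = torch.randint(N_CLASSES, (32,))
    distill_step(x, probs, torch.optim.Adam(student.parameters()))
    for p in student.parameters():
        p.requires_grad_(False)  # freeze the surrogate before ATN training
    atn_step(x, targets, torch.optim.Adam(atn.parameters()))

Adversarial series produced this way (x + beta * atn(x)) could then be folded
back into the training set, in the spirit of the robustness recommendation
above.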