Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
Machine Learning (ML) algorithms are used to train computers to perform a
variety of complex tasks and improve with experience. Computers learn how to
recognize patterns, make unintended decisions, or react to a dynamic
environment. Certain trained machines may be more effective than others because
they are based on more suitable ML algorithms or because they were trained
through superior training sets. Although ML algorithms are known and publicly
released, training sets may not be reasonably ascertainable and, indeed, may be
guarded as trade secrets. While much research has been performed on the
privacy of the elements of training sets, in this paper we focus our attention
on ML classifiers and on the statistical information that can be unintentionally
or maliciously revealed from them. We show that it is possible to infer
unexpected but useful information from ML classifiers. In particular, we build
a novel meta-classifier and train it to hack other classifiers, obtaining
meaningful information about their training sets. This kind of information
leakage can be exploited, for example, by a vendor to build more effective
classifiers or to simply acquire trade secrets from a competitor's apparatus,
potentially violating its intellectual property rights.
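The attack described above can be sketched as a property-inference pipeline: train many "shadow" classifiers on datasets that do or do not have some property, use each shadow model's learned parameters as a feature vector, and train a meta-classifier to predict the property from those parameters. The sketch below is a toy illustration under assumptions not taken from the paper (synthetic 2-D data, logistic-regression shadow models, and a made-up property "feature 0 is the informative one"), not the authors' actual construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def shadow_model_params(has_property):
    # Hypothetical property P of the training set: feature 0 (rather than
    # feature 1) is the one that separates the two classes.
    n = 200
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2))
    X[:, 0 if has_property else 1] += 2.0 * y  # shift the informative feature
    clf = LogisticRegression().fit(X, y)
    return clf.coef_.ravel()  # the shadow model's parameters = meta-features

# Meta-dataset: one row of classifier parameters per shadow model,
# labeled by whether its training set had property P.
labels = rng.integers(0, 2, 60)
feats = np.array([shadow_model_params(bool(p)) for p in labels])

# The meta-classifier "hacks" unseen classifiers: from parameters alone,
# it infers a statistical property of the data they were trained on.
meta = LogisticRegression().fit(feats[:40], labels[:40])
acc = meta.score(feats[40:], labels[40:])
print(f"meta-classifier accuracy on held-out shadow models: {acc:.2f}")
```

Because the property changes which coefficient is large, the meta-classifier separates the two groups of shadow models easily; real attacks face much higher-dimensional parameter vectors and subtler properties.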
Detection of Lying Electrical Vehicles in Charging Coordination Application Using Deep Learning
The simultaneous charging of many electric vehicles (EVs) stresses the
distribution system and may cause grid instability in severe cases. The best
way to avoid this problem is charging coordination: each EV reports data (such
as the state of charge (SoC) of its battery) to a mechanism that prioritizes
the charging requests, selects the EVs that may charge in the current time
slot, and defers the other requests to future time slots.
However, EVs may lie and send false data to receive high charging priority
illegally. In this paper, we first study this attack to evaluate the gains of
the lying EVs and how their behavior impacts the honest EVs and the performance
of the charging coordination mechanism. Our evaluations indicate that lying EVs
have a greater chance of getting charged compared to honest EVs, and that they
degrade the performance of the charging coordination mechanism. Then, an
anomaly-based detector using deep neural networks (DNNs) is devised to identify
the lying EVs. To do that, we first create an honest dataset for charging
lying EVs. To do that, we first create an honest dataset for charging
coordination application using real driving traces and information revealed by
EV manufacturers; we then propose a number of attacks to create malicious data.
We train and evaluate two models on this dataset, a multi-layer perceptron
(MLP) and a gated recurrent unit (GRU), and the GRU detector gives better
results. Our evaluations indicate that our detector can detect lying EVs with
high accuracy and a low false-positive rate.
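The detection setup described above can be illustrated with a minimal sketch. Everything here is a stand-in assumption, not the paper's method: the SoC traces are synthetic (not the real driving traces or manufacturer data), the "attack" is a simple under-reporting of SoC to gain priority, and scikit-learn's MLPClassifier stands in for the paper's MLP and GRU detectors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def soc_trace(lying, T=24):
    # Hypothetical daily trace: true SoC drifts downward as the EV drives.
    true_soc = np.clip(0.9 - np.cumsum(rng.uniform(0, 0.04, T)), 0.05, 1.0)
    if lying:
        # Attack assumption: the EV under-reports its SoC so the
        # coordination mechanism grants it higher charging priority.
        return np.clip(true_soc - rng.uniform(0.2, 0.4), 0.0, 1.0)
    return true_soc + rng.normal(0, 0.01, T)  # honest report, sensor noise

# Label 1 = lying EV, 0 = honest EV; one 24-step reported-SoC trace each.
y = rng.integers(0, 2, 400)
X = np.array([soc_trace(bool(label)) for label in y])

# Anomaly detector: a small MLP classifying reported traces.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
print(f"detector test accuracy: {acc:.2f}")
```

A recurrent model such as a GRU would consume the trace step by step instead of as a flat vector, which is why it can exploit temporal structure that a plain MLP misses.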