Detection of Lying Electrical Vehicles in Charging Coordination Application Using Deep Learning
The simultaneous charging of many electric vehicles (EVs) stresses the
distribution system and may cause grid instability in severe cases. The best
way to avoid this problem is by charging coordination. The idea is that the EVs
should report data (such as state-of-charge (SoC) of the battery) to run a
mechanism to prioritize the charging requests and select the EVs that should
charge during this time slot and defer other requests to future time slots.
However, EVs may lie and send false data to receive high charging priority
illegally. In this paper, we first study this attack to evaluate the gains of
the lying EVs and how their behavior impacts the honest EVs and the performance
of the charging coordination mechanism. Our evaluations indicate that lying
EVs have a greater chance of being charged than honest EVs and that they
degrade the performance of the charging coordination mechanism. Then, an
anomaly-based detector using deep neural networks (DNNs) is devised to
identify the lying EVs. To do that, we first create an honest dataset for charging
coordination application using real driving traces and information revealed by
EV manufacturers, and then we also propose a number of attacks to create
malicious data. We train and evaluate two models, a multi-layer perceptron
(MLP) and a gated recurrent unit (GRU), on this dataset; the GRU detector
gives better results. Our evaluations indicate that our detector can detect
lying EVs with high accuracy and a low false positive rate.
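The gain available to a lying EV can be seen with a minimal simulation of the coordination step. This sketch assumes a simple priority rule (lowest reported SoC charges first); the rule, the EV names, and the numbers are illustrative, not the paper's exact mechanism:

```python
# Sketch of the false-SoC attack on charging coordination.
# Assumption: the mechanism prioritizes EVs with the lowest *reported*
# state-of-charge; the paper's actual priority rule may differ.

def select_evs_to_charge(reports, capacity):
    """Pick the EVs with the lowest reported SoC.

    reports  -- dict mapping EV id -> reported SoC in [0, 1]
    capacity -- number of EVs that can charge in this time slot
    """
    ranked = sorted(reports, key=lambda ev: reports[ev])
    return set(ranked[:capacity])

# Honest EVs report their true SoC; the lying EV under-reports.
true_soc = {"ev_a": 0.30, "ev_b": 0.45, "ev_liar": 0.90}
reported = dict(true_soc)
reported["ev_liar"] = 0.05          # false report to gain priority

charged = select_evs_to_charge(reported, capacity=2)
print(charged)  # the liar, despite a nearly full battery, displaces an honest EV
```

Under truthful reporting the liar (SoC 0.90) would be last in line; with the false report it is selected first, deferring an honest request to a later slot, which matches the degradation the abstract describes.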
A Secure Federated Data-Driven Evolutionary Multi-objective Optimization Algorithm
Data-driven evolutionary algorithms usually aim to exploit the information
behind a limited amount of data to perform optimization, which have proved to
be successful in solving many complex real-world optimization problems.
However, most data-driven evolutionary algorithms are centralized, causing
privacy and security concerns. Existing federated Bayesian algorithms and
data-driven evolutionary algorithms mainly protect the raw data on each client.
To address this issue, this paper proposes a secure federated data-driven
evolutionary multi-objective optimization algorithm to protect both the raw
data and the newly infilled solutions obtained by optimizing the acquisition
function conducted on the server. We select the query points on a randomly
selected client at each round of surrogate update by calculating the
acquisition function values of the unobserved points on this client, thereby
reducing the risk of leaking the information about the solution to be sampled.
In addition, since the predicted objective values of each client may contain
sensitive information, we mask the objective values with Diffie-Hellman-based
noise, and then send only the masked objective values of other clients to the
selected client via the server. Since the calculation of the acquisition
function also requires both the predicted objective value and the uncertainty
of the prediction, the predicted mean objective and uncertainty are normalized
to reduce the influence of noise. Experimental results on a set of widely used
multi-objective optimization benchmarks show that the proposed algorithm can
protect privacy and enhance security with only negligible sacrifice in the
performance of federated data-driven evolutionary optimization.Comment: This paper has been accepted by IEEE Transactions on Emerging Topics
in Computational Intelligence journa
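The masking idea can be sketched with pairwise noise derived from a Diffie-Hellman shared secret: two clients derive the same secret, one adds the resulting mask to its predicted objective value and the other subtracts it, so the masks cancel on aggregation while individual values stay hidden. The group parameters, the mask derivation, and the cancellation scheme below are illustrative assumptions, not the paper's exact protocol:

```python
# Hedged sketch: pairwise additive masking of predicted objective values
# using a Diffie-Hellman shared secret. Toy parameters -- NOT secure.
import random

P = 2**127 - 1      # illustrative prime modulus
G = 5               # illustrative generator

def dh_keypair():
    sk = random.randrange(2, P - 1)
    return sk, pow(G, sk, P)

def shared_mask(my_sk, their_pk):
    # Both parties compute the same secret G^(sk_i * sk_j) mod P,
    # hence seed the same deterministic mask.
    secret = pow(their_pk, my_sk, P)
    return random.Random(secret).uniform(-1.0, 1.0)

sk_i, pk_i = dh_keypair()
sk_j, pk_j = dh_keypair()
mask = shared_mask(sk_i, pk_j)      # equals shared_mask(sk_j, pk_i)

f_i, f_j = 0.42, 0.58               # clients' predicted objective values
masked_i = f_i + mask               # client i adds the mask
masked_j = f_j - mask               # client j subtracts it
print(masked_i + masked_j)          # masks cancel: approximately f_i + f_j
```

Neither masked value reveals the underlying prediction on its own, yet their sum is unchanged, which is why the abstract's normalization step matters: it keeps the mask magnitude from swamping the acquisition-function calculation.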
Evolutionary tree-based quasi identifier and federated gradient privacy preservations over big healthcare data
Big data has remodeled the way organizations supervise, examine, and leverage data in every industry. To safeguard sensitive data from public breaches, several countries have investigated this issue and deployed privacy protection mechanisms. Quasi-identifiers alone, however, do not preserve privacy to a sufficient extent. This paper proposes a method called evolutionary tree-based quasi-identifier and federated gradient (ETQI-FD) for privacy preservation over big healthcare data. The first step in ETQI-FD is learning quasi-identifiers. Learning quasi-identifiers with an information-loss function applied separately to categorical and numerical attributes achieves both the largest dissimilarities and a partition without an exhaustive search between tuples of features or attributes. Next, with the learnt quasi-identifiers, the privacy of each data item is preserved by applying a federated gradient arbitrary privacy-preservation learning model, which attains an optimal balance between privacy and accuracy. In this model, we evaluate the contribution of each attribute to the outputs. Then, by injecting adaptive Lorentz noise into the data attributes, ETQI-FD significantly minimizes the influence of noise on the final results, thereby contributing to both privacy and accuracy. An experimental evaluation shows that the ETQI-FD method achieves better accuracy and privacy than existing methods.
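The noise-injection step can be sketched as attribute-wise Lorentz (Cauchy) perturbation whose scale adapts to each attribute's estimated contribution to the output. The importance scores, the inverse-importance scale rule, and the record fields below are illustrative assumptions; the paper's adaptive mechanism may differ:

```python
# Minimal sketch of adaptive Lorentz (Cauchy) noise injection.
# Assumption: attributes that contribute more to the output get less noise,
# preserving utility of the final result.
import math
import random

def lorentz_noise(scale, rng):
    # Standard Cauchy (Lorentzian) sample via the inverse CDF, then scaled.
    return scale * math.tan(math.pi * (rng.random() - 0.5))

def perturb_record(record, importance, base_scale=0.05, rng=None):
    """Perturb each attribute with Cauchy noise scaled inversely to its
    importance score (hypothetical rule for illustration)."""
    rng = rng or random.Random(0)
    noisy = {}
    for attr, value in record.items():
        scale = base_scale / max(importance.get(attr, 1e-6), 1e-6)
        noisy[attr] = value + lorentz_noise(scale, rng)
    return noisy

record     = {"age": 42.0, "bmi": 27.5, "glucose": 110.0}
importance = {"age": 0.2, "bmi": 0.3, "glucose": 0.9}
print(perturb_record(record, importance))
```

Because the Cauchy distribution is heavy-tailed, scaling it per attribute is one plausible way to "minimize the influence of noise on the final results" while still perturbing every field.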