Security Enhancement by Identifying Attacks Using Machine Learning for 5G Network
The need for security enhancement in 5G networks has grown over the last decade. Data transmitted over the network must be protected from external attacks, so security during data transmission over 5G networks needs to be strengthened. Various existing security systems focus on identifying attacks, and several machine learning mechanisms have been considered for this purpose, but existing work suffers from limited security and performance, so the security of 5G networks still needs improvement. To achieve this objective, a hybrid mechanism is introduced: threats such as Denial-of-Service, Denial-of-Detection, and unfair use of resources are classified using an enhanced machine learning approach. The proposed work uses an LSTM model to improve accuracy when making decisions and classifying attacks on the 5G network. Accuracy parameters such as recall, precision, and F-score are considered to assess the reliability of the proposed model. Simulation results show that the proposed model provides better accuracy than the conventional model.
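As a rough illustration (not the authors' code), the sketch below trains a small LSTM classifier over windowed traffic features and reports the recall, precision, and F-score metrics the abstract names. The window shape, class set, and synthetic data are assumptions made for the example.

```python
# A minimal sketch, assuming windowed 5G traffic features: an LSTM
# classifier scored with per-class precision/recall/F-score.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

NUM_CLASSES = 4               # e.g. benign, DoS, denial-of-detection, unfair use
TIMESTEPS, FEATURES = 20, 16  # hypothetical flow-window shape

# Placeholder arrays standing in for labeled 5G traffic windows.
X = np.random.rand(1000, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.LSTM(64),  # sequence model over the window
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2, verbose=0)

# Recall, precision, and F-score per attack class, as in the abstract.
y_pred = model.predict(X, verbose=0).argmax(axis=1)
print(classification_report(y, y_pred))
```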
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Present attack methods can make state-of-the-art classification systems based
on deep neural networks misclassify every adversarially modified test example.
The design of general defense strategies against a wide range of such attacks
still remains a challenging problem. In this paper, we draw inspiration from
the fields of cybersecurity and multi-agent systems and propose to leverage the
concept of Moving Target Defense (MTD) in designing a meta-defense for
'boosting' the robustness of an ensemble of deep neural networks (DNNs) for
visual classification tasks against such adversarial attacks. To classify an
input image, a trained network is picked randomly from this set of networks by
formulating the interaction between a Defender (who hosts the classification
networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg
Game (BSG). We empirically show that this approach, MTDeep, reduces
misclassification on perturbed images in various datasets such as MNIST,
FashionMNIST, and ImageNet while maintaining high classification accuracy on
legitimate test images. We then demonstrate that our framework, being the first
meta-defense technique, can be used in conjunction with any existing defense
mechanism to provide more resilience against adversarial attacks than can be
afforded by these defense mechanisms alone. Lastly, to quantify the increase in
robustness of an ensemble-based classification system when we use MTDeep, we
analyze the properties of a set of DNNs and introduce the concept of
differential immunity that formalizes the notion of attack transferability.
Comment: Accepted to the Conference on Decision and Game Theory for Security (GameSec), 201
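To make the serving mechanism concrete, here is a minimal sketch (not the paper's implementation) of how an MTD ensemble answers each query by sampling one trained network from a mixed strategy computed offline. Solving the Bayesian Stackelberg Game for that strategy (a mixed-integer program) is omitted; the toy models and probabilities below are assumptions.

```python
# Sketch: per-query randomization over an ensemble, given a precomputed
# BSG mixed strategy for the Defender.
import numpy as np

class MTDEnsemble:
    def __init__(self, models, strategy):
        # models:   callables mapping an input to a predicted label.
        # strategy: equilibrium probability of picking each model.
        assert len(models) == len(strategy)
        self.models = models
        self.strategy = np.asarray(strategy, dtype=float)
        self.strategy /= self.strategy.sum()

    def classify(self, x, rng=None):
        # Randomizing the network per query denies the attacker a single
        # fixed model against which to tailor a perturbation.
        rng = rng or np.random.default_rng()
        i = rng.choice(len(self.models), p=self.strategy)
        return self.models[i](x)

# Toy stand-ins for trained DNNs (hypothetical).
ensemble = MTDEnsemble([lambda x: "cat", lambda x: "cat", lambda x: "dog"],
                       strategy=[0.5, 0.3, 0.2])
print(ensemble.classify(x=None))
```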
Machine learning and blockchain technologies for cybersecurity in connected vehicles
Future connected and autonomous vehicles (CAVs) must be secured against cyberattacks for their everyday functions on the road so that the safety of passengers and vehicles can be ensured. This article presents a holistic review of cybersecurity attacks on sensors and threats regarding multi-modal sensor fusion. A comprehensive review of cyberattacks on intra-vehicle and inter-vehicle communications is presented afterward. Besides the analysis of conventional cybersecurity threats and countermeasures for CAV systems, a detailed review of modern machine learning, federated learning, and blockchain approaches is also conducted to safeguard CAVs. Machine learning and data mining-aided intrusion detection systems and other countermeasures dealing with these challenges are elaborated at the end of the related section. In the last section, research challenges and future directions are identified.
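As a hedged example of the kind of ML-aided intrusion detection the review surveys, the sketch below fits an unsupervised anomaly detector to hypothetical in-vehicle CAN-bus frame features and flags injected frames. The feature choice and synthetic data are assumptions, not taken from the article.

```python
# Sketch: unsupervised anomaly detection over hypothetical CAN frame
# features (e.g. inter-arrival times, ID statistics, payload bytes).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(5000, 10))  # benign frames
flood = rng.normal(loc=4.0, scale=0.5, size=(50, 10))     # injected frames

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flagged = (detector.predict(flood) == -1).sum()  # -1 marks anomalies
print(f"flagged {flagged} of {len(flood)} injected frames")
```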
Machine Learning Models that Remember Too Much
Machine learning (ML) is becoming a commodity. Numerous ML frameworks and
services are available to data holders who are not ML experts but want to train
predictive models on their data. It is important that ML models trained on
sensitive inputs (e.g., personal images or documents) not leak too much
information about the training data.
We consider a malicious ML provider who supplies model-training code to the
data holder, does not observe the training, but then obtains white- or
black-box access to the resulting model. In this setting, we design and
implement practical algorithms, some of them very similar to standard ML
techniques such as regularization and data augmentation, that "memorize"
information about the training dataset in the model yet the model is as
accurate and predictive as a conventionally trained model. We then explain how
the adversary can extract memorized information from the model.
We evaluate our techniques on standard ML tasks for image classification
(CIFAR10), face recognition (LFW and FaceScrub), and text analysis (20
Newsgroups and IMDB). In all cases, we show how our algorithms create models
that have high predictive power yet allow accurate extraction of subsets of
their training data.
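One of the white-box attacks the paper describes encodes training data directly into the low-order bits of the model's parameters (its LSB-encoding attack). The sketch below, a minimal stand-in rather than the authors' code, illustrates that idea on a toy weight vector; the payload and sizes are assumptions.

```python
# Sketch of LSB encoding: malicious training code hides bytes in the
# low mantissa bits of float32 parameters; a white-box adversary with
# access to the weights reads them back.
import numpy as np

def embed(weights, data, n_bits=8):
    raw = weights.astype(np.float32).view(np.uint32).copy()
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    bits = np.concatenate([bits, np.zeros((-len(bits)) % n_bits, dtype=np.uint8)])
    vals = bits.reshape(-1, n_bits).dot(1 << np.arange(n_bits - 1, -1, -1))
    assert len(vals) <= len(raw), "payload exceeds parameter capacity"
    mask = ~np.uint32((1 << n_bits) - 1)
    raw[:len(vals)] = (raw[:len(vals)] & mask) | vals.astype(np.uint32)
    return raw.view(np.float32)

def extract(weights, n_bytes, n_bits=8):
    vals = weights.view(np.uint32) & ((1 << n_bits) - 1)
    n_params = -(-(n_bytes * 8) // n_bits)  # ceiling division
    bits = (vals[:n_params, None] >> np.arange(n_bits - 1, -1, -1)) & 1
    return np.packbits(bits.astype(np.uint8).ravel()[: n_bytes * 8]).tobytes()

w = np.random.randn(64).astype(np.float32)  # stand-in for trained parameters
secret = b"sensitive training record"
w2 = embed(w, secret)
print(extract(w2, len(secret)))  # recovers the hidden bytes
print(np.abs(w - w2).max())      # distortion is tiny: only mantissa LSBs change
```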