
    Security challenges of small cell as a service in virtualized mobile edge computing environments

    Research on next-generation 5G wireless networks is currently attracting a lot of attention in both academia and industry. While 5G development and standardization activities are still at an early stage, it is widely acknowledged that 5G systems are going to rely extensively on dense small cell deployments, which would exploit infrastructure and network functions virtualization (NFV) and push network intelligence towards the network edge by embracing the concept of mobile edge computing (MEC). As security will be a fundamental enabling factor of small cell as a service (SCaaS) in 5G networks, we present the most prominent threats and vulnerabilities against a broad range of targets. As far as the related work is concerned, to the best of our knowledge, this paper is the first to investigate security challenges at the intersection of SCaaS, NFV, and MEC. It is also the first paper that proposes a set of criteria to facilitate a clear and effective taxonomy of security challenges of the main elements of 5G networks. Our analysis can serve as a starting point towards the development of appropriate 5G security solutions. These will have a crucial effect on legal and regulatory frameworks as well as on decisions of businesses, governments, and end-users.

    Obfuscation of Malicious Behaviors for Thwarting Masquerade Detection Systems Based on Locality Features

    In recent years, dynamic user verification has become one of the basic pillars for insider threat detection. Among these threats, the research presented in this paper focuses on masquerader attacks, a category of insider threat intentionally conducted by persons outside the organization who have somehow managed to impersonate legitimate users. Consequently, it is assumed that masqueraders are unaware of the protected environment within the targeted organization, so they are expected to move in a more erratic manner than legitimate users across the compromised systems. This makes them susceptible to discovery by dynamic user verification methods based on user profiling and anomaly-based intrusion detection. However, these approaches can be evaded through imitation of the normal, legitimate usage of the protected system (mimicry), a tactic widely exploited by intruders. In order to contribute to their understanding, as well as to anticipate their evolution, the conducted research studies mimicry from the standpoint of an uncharted terrain: masquerade detection based on analyzing locality traits. To this end, the problem is broadly stated, and a pair of novel obfuscation methods are introduced: locality-based mimicry by action pruning and locality-based mimicry by noise generation. Their modus operandi, effectiveness, and impact are evaluated using a collection of well-known classifiers typically implemented for masquerade detection. The simplicity and effectiveness demonstrated suggest that they constitute attack vectors that should be taken into consideration for the proper hardening of real organizations.
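    To make the locality-based mimicry idea concrete, the sketch below illustrates action pruning. All session data, the top-k profile, and the anomaly score are illustrative assumptions, not the detectors or datasets evaluated in the paper: an attacker drops actions that stray outside the localities a simple profile-based detector expects, lowering the anomaly score.

```python
# Minimal sketch of "locality-based mimicry by action pruning" (all data,
# thresholds, and the profile/score definitions are illustrative assumptions).
from collections import Counter

def build_locality_profile(legit_sessions, top_k=3):
    """Most frequent localities (e.g. directories) observed for the legitimate user."""
    counts = Counter(loc for session in legit_sessions for loc in session)
    return {loc for loc, _ in counts.most_common(top_k)}

def locality_anomaly_score(session, profile):
    """Fraction of actions that fall outside the user's usual localities."""
    if not session:
        return 0.0
    return sum(loc not in profile for loc in session) / len(session)

def prune_actions(session, profile):
    """Attacker-side obfuscation: drop actions outside the expected localities."""
    return [loc for loc in session if loc in profile]

legit = [["/home/u/docs", "/home/u/docs", "/home/u/mail"],
         ["/home/u/docs", "/home/u/mail", "/home/u/code"]]
profile = build_locality_profile(legit)

intruder = ["/home/u/docs", "/etc/passwd", "/var/log", "/home/u/mail"]
print(locality_anomaly_score(intruder, profile))                           # 0.5
print(locality_anomaly_score(prune_actions(intruder, profile), profile))  # 0.0
```

    In this toy setting the pruned session becomes indistinguishable from normal use for a detector that scores only locality deviations, which illustrates why such vectors deserve consideration when hardening real organizations.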

    Machine Learning Threatens 5G Security

    Machine learning (ML) is expected to solve many challenges in the fifth generation (5G) of mobile networks. However, ML will also open the network to several serious cybersecurity vulnerabilities. Most of the learning in ML happens through data gathered from the environment. Unscrutinized data can have serious consequences for the machines that absorb it to produce actionable intelligence for the network. Scrutinizing the data, on the other hand, opens up privacy challenges. Unfortunately, most ML systems are borrowed from other disciplines, where they provide excellent results in small, closed environments. Deploying such ML systems in 5G can inadvertently open the network to serious security challenges such as unfair use of resources, denial of service, and leakage of private and confidential information. Therefore, in this article we dig into the weaknesses of the most prominent ML systems currently being vigorously researched for deployment in 5G. We further classify and survey solutions for avoiding such pitfalls of ML in 5G systems.

    SemProtector: A Unified Framework for Semantic Protection in Deep Learning-based Semantic Communication Systems

    Recently proliferated semantic communications (SC) aim at effectively transmitting the semantics conveyed by the source and accurately interpreting their meaning at the destination. While this paradigm holds the promise of making wireless communications more intelligent, it also suffers from severe semantic security issues, such as eavesdropping, privacy leakage, and spoofing, due to the open nature of wireless channels and the fragility of neural modules. Previous works focus more on the robustness of SC via offline adversarial training of the whole system, while online semantic protection, a more practical setting in the real world, is still largely under-explored. To this end, we present SemProtector, a unified framework that aims to secure an online SC system with three hot-pluggable semantic protection modules. Specifically, these protection modules encrypt the semantics to be transmitted via an encryption method, mitigate privacy risks from wireless channels via a perturbation mechanism, and calibrate distorted semantics at the destination via a semantic signature generation method. Our framework enables an existing online SC system to dynamically assemble the above three pluggable modules to meet customized semantic protection requirements, facilitating practical deployment in real-world SC systems. Experiments on two public datasets show the effectiveness of our proposed SemProtector, offering insights into how the goals of secrecy, privacy, and integrity of an SC system can be reached. Finally, we discuss some future directions for semantic protection.
    Comment: Accepted by Communications Magazine
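    The following sketch illustrates the hot-pluggable design described in the abstract. Class and function names such as SemanticPacket and ProtectedPipeline are assumptions for illustration, not SemProtector's actual API: encryption, perturbation, and signature modules are assembled at runtime around an existing semantic transmission path.

```python
# Illustrative sketch of a pipeline with three hot-pluggable protection modules
# (names and module internals are assumptions, not the framework's real API).
import hashlib
import random
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SemanticPacket:
    semantics: List[float]
    signature: str = ""

def encrypt(pkt: SemanticPacket) -> SemanticPacket:
    # Placeholder transformation; a real module would use a learned or cryptographic scheme.
    pkt.semantics = [v + 1000.0 for v in pkt.semantics]
    return pkt

def perturb(pkt: SemanticPacket) -> SemanticPacket:
    # Small noise to mask privacy-sensitive structure from channel eavesdroppers.
    pkt.semantics = [v + random.gauss(0.0, 0.01) for v in pkt.semantics]
    return pkt

def sign(pkt: SemanticPacket) -> SemanticPacket:
    # Signature over coarsely quantized semantics, so minor channel noise
    # does not break verification at the destination.
    digest_input = repr([round(v, 1) for v in pkt.semantics]).encode()
    pkt.signature = hashlib.sha256(digest_input).hexdigest()
    return pkt

@dataclass
class ProtectedPipeline:
    modules: List[Callable[[SemanticPacket], SemanticPacket]] = field(default_factory=list)

    def plug(self, module):
        self.modules.append(module)   # hot-pluggable: modules are assembled at runtime
        return self

    def send(self, semantics):
        pkt = SemanticPacket(list(semantics))
        for module in self.modules:
            pkt = module(pkt)
        return pkt

pipeline = ProtectedPipeline().plug(encrypt).plug(perturb).plug(sign)
print(pipeline.send([0.12, -0.53, 0.88]).signature[:16])
```

    The point of the example is the runtime assembly: the same send() path works with any subset of modules plugged in, which mirrors the customized protection requirements described in the abstract.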

    Implementing Responsible AI: Tensions and Trade-Offs Between Ethics Aspects

    Many sets of ethics principles for responsible AI have been proposed to allay concerns about misuse and abuse of AI/ML systems. The underlying aspects of such sets of principles include privacy, accuracy, fairness, robustness, explainability, and transparency. However, there are potential tensions between these aspects that pose difficulties for AI/ML developers seeking to follow these principles. For example, increasing the accuracy of an AI/ML system may reduce its explainability. As part of the ongoing effort to operationalise these principles in practice, in this work we compile and discuss a catalogue of 10 notable tensions, trade-offs, and other interactions between the underlying aspects. We primarily focus on two-sided interactions, drawing on support spread across a diverse literature. This catalogue can help raise awareness of the possible interactions between aspects of ethics principles, as well as facilitate well-supported judgements by the designers and developers of AI/ML systems.

    Identifying Adversarially Attackable and Robust Samples

    Adversarial attacks insert small, imperceptible perturbations into input samples that cause large, undesired changes in the output of deep learning models. Despite extensive research on generating adversarial attacks and building defense systems, there has been limited research on understanding adversarial attacks from an input-data perspective. This work introduces the notion of sample attackability, where we aim to identify the samples most susceptible to adversarial attacks (attackable samples) and, conversely, the least susceptible samples (robust samples). We propose a deep-learning-based method to detect the adversarially attackable and robust samples in an unseen dataset for an unseen target model. Experiments on standard image classification datasets enable us to assess the portability of the deep attackability detector across a range of architectures. We find that the deep attackability detector performs better than simple model-uncertainty-based measures at identifying the attackable/robust samples. This suggests that uncertainty is an inadequate proxy for measuring a sample's distance to a decision boundary. Beyond contributing to a better understanding of adversarial attacks, the ability to identify adversarially attackable and robust samples has implications for improving the efficiency of sample-selection tasks, e.g. active learning in augmentation for adversarial training.
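    The general workflow behind such a detector can be sketched on synthetic data: label each sample as attackable when it lies within a small distance of a target model's decision boundary, train a separate classifier to predict that label from the input alone, and compare it with an uncertainty baseline. The labelling rule, models, and epsilon below are toy assumptions and do not reproduce the paper's method or results.

```python
# Toy sketch of an attackability-detection workflow on synthetic 2-D data
# (labelling rule, models, and epsilon are assumptions, not the paper's setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # labels for the target task

target = LogisticRegression().fit(X, y)           # target model under attack

# Label a sample "attackable" if a perturbation of norm <= eps could push it
# across the target model's decision boundary (distance-to-boundary proxy).
eps = 0.15
w, b = target.coef_[0], target.intercept_[0]
margin = np.abs(X @ w + b) / np.linalg.norm(w)
attackable = (margin <= eps).astype(int)

X_tr, X_te, a_tr, a_te = train_test_split(X, attackable, random_state=0)

# Separate detector that predicts attackability from the input alone.
detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                         random_state=0).fit(X_tr, a_tr)
print("attackability detector accuracy:", detector.score(X_te, a_te))

# Uncertainty baseline: flag the least confident samples as attackable.
confidence = target.predict_proba(X_te).max(axis=1)
threshold = np.quantile(confidence, a_te.mean())  # match the attackable base rate
baseline_pred = (confidence <= threshold).astype(int)
print("uncertainty baseline accuracy:", (baseline_pred == a_te).mean())
```

    In this linear toy the margin and the target model's confidence carry essentially the same information, so the two approaches score similarly; the abstract's finding is that on real image datasets with deep target models the learned detector outperforms the uncertainty proxy.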