121 research outputs found

    Bio-inspired network security for 5G-enabled IoT applications

    The IPv6-enabled devices connected and communicating over the Internet form the Internet of Things (IoT), which is now prevalent in society and daily life. The IoT platform will quickly grow to billions of objects as every electrical appliance, car, and even items of furniture become smart and connected. The 5th generation (5G) and beyond networks will further boost these IoT systems. The massive use of such systems at gigabit-per-second rates raises numerous issues. Owing to the complexity of large-scale IoT deployment, data privacy and security are the most prominent challenges, especially for critical applications such as Industry 4.0, e-healthcare, and the military. Threat agents persistently strive to find and exploit new vulnerabilities. It is therefore essential to adopt security measures that support the running systems without harming or collapsing them. Nature-inspired algorithms can provide autonomous and sustainable defense and healing mechanisms. This paper first surveys 5G network-layer security for IoT applications and lists the network-layer security vulnerabilities and requirements in wireless sensor networks, IoT, and 5G-enabled IoT. Second, a detailed literature review covers current network-layer security methods and bio-inspired techniques for IoT applications exchanging data packets over 5G. Finally, the bio-inspired algorithms are analyzed in the context of providing a secure network layer for IoT applications connected over 5G and beyond networks
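
    As a toy illustration of the kind of nature-inspired defense the survey discusses (not a method from the paper), the sketch below implements a negative-selection detector, a classic artificial-immune-system technique: random detectors that match "self" (normal) traffic profiles are discarded, and the survivors flag anomalous traffic. The features, radius, and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical normalized traffic features per window, e.g.
# [packet rate, mean packet size, destination entropy], each in [0, 1].
self_profiles = rng.uniform(0.3, 0.5, size=(200, 3))   # "normal" behaviour

def train_detectors(self_samples, n_detectors=300, radius=0.15):
    """Negative selection: keep only detectors far from all self samples."""
    detectors = []
    while len(detectors) < n_detectors:
        d = rng.uniform(0, 1, size=self_samples.shape[1])
        if np.min(np.linalg.norm(self_samples - d, axis=1)) > radius:
            detectors.append(d)
    return np.array(detectors)

def is_anomalous(sample, detectors, radius=0.15):
    # Flag a sample if it falls within the matching radius of any detector.
    return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))

detectors = train_detectors(self_profiles)
normal_window = rng.uniform(0.35, 0.45, size=3)   # resembles self traffic
attack_window = np.array([0.95, 0.90, 0.05])      # e.g. flooding-like pattern
print("normal flagged:", is_anomalous(normal_window, detectors))
print("attack flagged:", is_anomalous(attack_window, detectors))
```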

    A simplified optimization for resource management in cognitive radio network-based internet-of-things over 5G networks

    With the continuing evolution of applications and services in the internet-of-things (IoT), there is growing concern about offering superior quality of service to its ever-increasing user base. This demand can be met by harnessing the potential of the cognitive radio network (CRN), through which better accessibility of services and resources can be achieved. However, the existing literature shows that open issues remain in this regard, and the proposed system offers a solution to this problem. This paper presents a model capable of optimizing resources when a CRN is integrated into IoT over a fifth generation (5G) network. The implementation uses analytical modeling to frame the topology construction process for IoT and to optimize resources by introducing a simplified data transmission mechanism in the IoT environment. The study outcome shows that the proposed system achieves better throughput and response time than existing schemes
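
    The abstract keeps the optimization itself at a high level; as a minimal, invented illustration of simplified resource management in a CRN-based IoT deployment, the sketch below greedily maps secondary IoT transmitters onto sensed-idle licensed channels by estimated rate. The channel model, rates, and names are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

n_channels, n_devices = 6, 10
idle = rng.random(n_channels) > 0.4                          # spectrum-sensing result
rate = rng.uniform(1.0, 20.0, size=(n_devices, n_channels))  # estimated Mbps per pair

def greedy_assign(rate, idle):
    """Give each idle licensed channel to at most one IoT device, best rate first."""
    assignment = {}                                          # device -> channel
    candidates = [(rate[d, c], d, c)
                  for d in range(rate.shape[0])
                  for c in range(rate.shape[1]) if idle[c]]
    for r, d, c in sorted(candidates, reverse=True):
        if d not in assignment and c not in assignment.values():
            assignment[d] = c
    return assignment

alloc = greedy_assign(rate, idle)
throughput = sum(rate[d, c] for d, c in alloc.items())
print(f"{len(alloc)} devices served, aggregate throughput ~ {throughput:.1f} Mbps")
```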

    On the Feasibility of 5G Slice Resource Allocation With Spectral Efficiency: A Probabilistic Characterization

    An important concern that 5G networks face is supporting a wide range of services and use cases with heterogeneous requirements. Radio access network (RAN) slices, understood as isolated virtual networks that share a common infrastructure, are a possible answer to this very demanding scenario and enable virtual operators to provide differentiated services over independent logical entities. This article addresses the feasibility of forming 5G slices, answering the question of whether the available capacity (resources) is sufficient to satisfy slice requirements. As spectral efficiency is one of the key metrics in 5G networks, we introduce the minislot-based slicing allocation (MISA) model, a novel 5G slice resource allocation approach that combines the utilization of both complete slots (or physical resource blocks) and mini-slots with an adequate physical layer design and service requirement constraints. We advocate a probabilistic characterization that allows us to estimate feasibility and characterize the behavior of the constraints, whereas an exhaustive search is computationally demanding and methods that merely check feasibility provide no information on the constraints. In such a characterization, the concept of phase transition allows for the identification of a clear frontier between the feasible and infeasible regions. Our method relies on an adaptation of the Wang-Landau algorithm to determine the existence of at least one solution to the problem. The conducted simulations show a significant improvement in spectral efficiency and feasibility of the MISA approach compared to the slot-based formulation, the identification of the phase transition, and valuable results to characterize the satisfiability of the constraints. The work of J. J. Escudero-Garzás was supported in part by the Spanish National Project TERESA-ADA (MINECO/AEI/FEDER, UE) under Grant TEC2017-90093-C3-2-R, and in part by the National Spectrum Consortium, USA, under Project NSC-16-0140
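
    For readers unfamiliar with the Wang-Landau idea used here, the following is a minimal sketch on an invented toy slice-allocation instance far simpler than the MISA formulation: a flat-histogram random walk estimates the (log) number of allocations for each count of violated constraints, and visiting the zero-violation bin indicates that at least one feasible solution exists. Instance sizes, constraints, and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy slice-allocation instance: each of S slices gets some resource blocks;
# constraints are per-slice minimum demand and a shared capacity budget.
S, total_rb = 4, 20
demand = np.array([3, 5, 4, 6])                  # minimum blocks per slice

def violations(x):
    v = int(np.sum(x < demand))                  # unmet per-slice demands
    v += int(x.sum() > total_rb)                 # capacity budget exceeded
    return v

# Wang-Landau flat-histogram walk over the number of violated constraints.
max_v = S + 1
log_g = np.zeros(max_v + 1)                      # log density of states
hist = np.zeros(max_v + 1)
f = 1.0                                          # log modification factor
x = rng.integers(0, total_rb + 1, size=S)
e = violations(x)

while f > 1e-3:
    for _ in range(2000):
        x_new = x.copy()
        x_new[rng.integers(S)] = rng.integers(0, total_rb + 1)
        e_new = violations(x_new)
        if np.log(rng.random()) < log_g[e] - log_g[e_new]:   # WL acceptance rule
            x, e = x_new, e_new
        log_g[e] += f
        hist[e] += 1
    visited = hist[hist > 0]
    if visited.min() > 0.8 * visited.mean():     # histogram "flat enough"
        hist[:] = 0
        f /= 2.0

print("relative log g over violation counts:", np.round(log_g - log_g.min(), 1))
print("feasible allocation exists:", bool(log_g[0] > 0))
```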

    Orchestration in the Cloud-to-Things Compute Continuum: Taxonomy, Survey and Future Directions

    IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains where the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors, and heterogeneous remote, local, as well as multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm raised the quintessential need to extend the orchestration requirements (i.e., the automated deployment and run-time management) of applications from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, considerable attention has been devoted in the last few years to the development of orchestration systems in both industry and academia. This paper is an attempt to gather the research conducted in the orchestration landscape for the Cloud-to-Things continuum and to propose a detailed taxonomy, which is then used to critically review the landscape of existing research work. We finally discuss the key challenges that require further attention and also present a conceptual framework based on the conducted analysis

    Understanding O-RAN: Architecture, Interfaces, Algorithms, Security, and Research Challenges

    The Open Radio Access Network (RAN) and its embodiment through the O-RAN Alliance specifications are poised to revolutionize the telecom ecosystem. O-RAN promotes virtualized RANs where disaggregated components are connected via open interfaces and optimized by intelligent controllers. The result is a new paradigm for RAN design, deployment, and operations: O-RAN networks can be built with multi-vendor, interoperable components and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. Therefore, understanding O-RAN, its architecture, its interfaces, and its workflows is key for researchers and practitioners in the wireless community. In this article, we present the first detailed tutorial on O-RAN. We also discuss the main research challenges and review early research results. We provide a deep dive into the O-RAN specifications, describing the architecture, design principles, and O-RAN interfaces. We then describe how the O-RAN RAN Intelligent Controllers (RICs) can be used to effectively control and manage 3GPP-defined RANs. Based on this, we discuss the innovations and challenges of O-RAN networks, including the Artificial Intelligence (AI) and Machine Learning (ML) workflows that the architecture and interfaces enable, as well as security and standardization issues. Finally, we review experimental research platforms that can be used to design and test O-RAN networks, along with recent research results, and we outline future directions for O-RAN development
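
    To picture the data-driven closed-loop control the RICs enable, here is a toy, invented xApp-style loop; it does not use any real O-RAN SDK or E2 API. It reads a per-slice throughput KPI and proportionally rebalances physical resource block (PRB) shares toward slices missing their targets. Slice names, targets, and the KPI source are assumptions.

```python
import random

# Hypothetical stand-ins for an E2-like interface; a real xApp would use
# the RIC platform's SDK to subscribe to KPIs and send control messages.
def read_kpis(slices):
    return {s: random.uniform(10, 100) for s in slices}    # Mbps per slice

def send_control(prb_share):
    print("new PRB shares:", {s: round(p, 2) for s, p in prb_share.items()})

slices = ["embb", "urllc", "mmtc"]
targets = {"embb": 80.0, "urllc": 30.0, "mmtc": 15.0}       # Mbps targets
prb_share = {s: 1 / len(slices) for s in slices}

for step in range(5):
    kpis = read_kpis(slices)
    # Proportional correction: give more PRBs to slices missing their target.
    deficit = {s: max(targets[s] - kpis[s], 0.0) for s in slices}
    total = sum(deficit.values()) or 1.0
    for s in slices:
        prb_share[s] = 0.8 * prb_share[s] + 0.2 * (deficit[s] / total)
    norm = sum(prb_share.values())
    prb_share = {s: p / norm for s, p in prb_share.items()}
    send_control(prb_share)
```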

    Admission Control Optimisation for QoS and QoE Enhancement in Future Networks

    Recent exponential growth in demand for traffic heterogeneity support and in the number of associated devices has considerably increased demand for network resources and induced numerous challenges for networks, such as bottleneck congestion and inefficient admission control and resource allocation. Such challenges degrade network Quality of Service (QoS) and user-perceived Quality of Experience (QoE). This work studies admission control from various perspectives. Two novel single-objective optimisation-based admission control models, Dynamic Slice Allocation and Admission Control (DSAAC) and Signalling and Admission Control (SAC), are presented to enhance the Grade of Service (GoS) of future limited-capacity networks and to optimise control signalling, respectively. DSAAC is an integrated model in which a cost-estimation function based on user demand and network capacity quantifies resource allocation among users. Moreover, to maximise resource utility, adjustable minimum and maximum slice resource bounds have been derived. For cases where users are blocked from the primary slice due to congestion or resource scarcity, a set of optimisation algorithms for inter-slice admission control, resource allocation, and adaptability of slice elasticity has been proposed. The novel SAC model uses an unsupervised learning technique (ranking-based clustering) to cluster users optimally according to their homogeneous demand characteristics and thereby minimise signalling redundancy in the access network. Reducing redundant signalling relieves the network of unnecessary resource utilisation and computational time. Moreover, dynamically reconfigurable QoE-based slice performance bounds are also derived in the SAC model from multiple demand characteristics for admitting clustered users to the optimal network. A set of optimisation algorithms is also proposed to attain efficient slice allocation and to enhance users' QoE by assessing the capability of slice QoE elasticity. An enhancement of the SAC model is proposed through a novel multi-objective optimisation model named Edge Redundancy Minimisation and Admission Control (E-RMAC). The E-RMAC model considers, for the first time, the issue of redundant signalling between the edge and core networks. It minimises redundant signalling using two classical unsupervised learning algorithms, K-means and ranking-based clustering, and maximises the efficiency of the link (bandwidth resources) between the edge and core networks. For multi-operator environments such as Open-RAN, a novel Forecasting and Admission Control (FAC) model for tenant-aware network selection and configuration is proposed. The model features a dynamic demand-estimation scheme embedded with fuzzy-logic-based optimisation for optimal network selection and admission control. FAC considers, for the first time, the coexistence of various heterogeneous cellular technologies (2G, 3G, 4G, and 5G) and their integration to enhance overall network throughput through efficient resource allocation and utilisation within a multi-operator environment. A QoS/QoE-based service monitoring feature is also presented to update the demand estimates with the support of a forecasting modifier. This service monitoring feature helps allocate resources to tenants close to their actual demand, improving tenant-acquired QoE and overall network performance.
Foremost, a novel and dynamic admission control model named Slice Congestion and Admission Control (SCAC) is also presented in this thesis. SCAC employs machine learning (unsupervised, reinforcement, and transfer learning) and multi-objective optimisation techniques (Non-dominated Sorting Genetic Algorithm II) to minimise bottleneck and intra-slice congestion. Knowledge transfer among requests, in the form of coefficients, is employed for the first time for optimal queuing of slice requests. A unified cost-estimation function is also derived in this model for slice selection to ensure fairness in slice request admission. In view of instantaneous network circumstances and load, a reinforcement learning-based admission control policy is established for taking appropriate action on guaranteed, soft, and best-effort slice request admissions. Intra-slice as well as inter-slice resource allocation, along with adaptability of slice elasticity, is also proposed to maximise the slice acceptance ratio and resource utilisation. Extensive simulation results are obtained and compared with similar models found in the literature. The proposed E-RMAC model is 35% better at reducing redundant signalling between the edge and core networks than recent work. The E-RMAC model reduces the complexity from O(U) to O(R) for service signalling and O(N) for resource signalling, a significant saving in uplink control-plane signalling and link capacity compared with results in the existing literature. Similarly, the SCAC model reduces bottleneck congestion by approximately 56% over the entire load compared to the ground truth and increases the slice acceptance ratio. Inter-slice admission and resource allocation offer admission gains of 25% and 51% over cooperative slice-based and intra-slice-based admission control and resource allocation, respectively. Detailed analysis of the results suggests that the proposed models can efficiently manage future heterogeneous traffic in terms of enhanced throughput, maximal use of network resources, better admission gain, and congestion control
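
    As a concrete, simplified illustration of the clustering-based signalling reduction behind SAC and E-RMAC (not code from the thesis), the sketch below groups users with homogeneous demand profiles using scikit-learn's K-means so that one aggregated request per cluster could stand in for per-user signalling; the demand features and numbers are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Hypothetical per-user demand profile: [throughput (Mbps), latency budget (ms),
# reliability requirement (0-1)] for 60 users drawn from three service mixes.
users = np.vstack([
    rng.normal([100, 20, 0.95], [10, 3, 0.01], size=(20, 3)),    # eMBB-like
    rng.normal([5, 2, 0.999], [1, 0.3, 0.0005], size=(20, 3)),   # URLLC-like
    rng.normal([0.5, 200, 0.9], [0.1, 30, 0.02], size=(20, 3)),  # mMTC-like
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)

# One aggregated admission request per cluster instead of one per user.
for c in range(kmeans.n_clusters):
    members = users[kmeans.labels_ == c]
    agg = members.mean(axis=0)
    print(f"cluster {c}: {len(members)} users -> one request, mean demand {np.round(agg, 3)}")

print(f"signalling messages: {len(users)} per-user vs {kmeans.n_clusters} clustered")
```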

    Deep Learning -Powered Computational Intelligence for Cyber-Attacks Detection and Mitigation in 5G-Enabled Electric Vehicle Charging Station

    An electric vehicle charging station (EVCS) infrastructure is the backbone of transportation electrification. However, the EVCS has various cyber-attack vulnerabilities in software, hardware, the supply chain, and incumbent legacy technologies such as network, communication, and control. Therefore, proactively monitoring, detecting, and defending against these attacks is very important. State-of-the-art approaches are not agile and intelligent enough to detect, mitigate, and defend against the various cyber-physical attacks on the EVCS system. To overcome these limitations, this dissertation primarily designs, develops, implements, and tests data-driven, deep learning-powered computational intelligence to detect and mitigate cyber-physical attacks at the network and physical layers of 5G-enabled EVCS infrastructure. The application of 5G slicing to ensure security and service level agreements (SLAs) in the EVCS ecosystem has also been studied. Various cyber-attacks on the network in a standalone 5G-enabled EVCS environment have been considered, including distributed denial of service (DDoS), false data injection (FDI), advanced persistent threats (APT), and ransomware. Mathematical models for these cyber-attacks have been developed, and their impact on EVCS operation has been analyzed. Various deep learning-powered intrusion detection systems have been proposed to detect attacks using local electrical and network fingerprints. Furthermore, a novel detection framework has been designed and developed to deal with ransomware threats in high-speed, high-dimensional, multimodal data and assets from eccentric stakeholders of the connected automated vehicle (CAV) ecosystem. To mitigate the adverse effects of cyber-attacks on EVCS controllers, novel data-driven digital clones based on Twin Delayed Deep Deterministic Policy Gradient (TD3) Deep Reinforcement Learning (DRL) have been developed. In addition, various brute-force and controller-clone-based methods have been devised and tested to aid in defending against and mitigating the impact of attacks on EVCS operation. The performance of the proposed mitigation method has been compared with that of a benchmark Deep Deterministic Policy Gradient (DDPG)-based digital clone approach. Simulation results obtained from Python, Matlab/Simulink, and NetSim demonstrate that the cyber-attacks are disruptive and detrimental to EVCS operation, and that the proposed detection and mitigation methods are effective and perform better than conventional and benchmark techniques for the 5G-enabled EVCS
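
    As a rough sketch of the kind of deep learning-based intrusion detector described (not the dissertation's models), the snippet below trains a small PyTorch feed-forward classifier on synthetic network/electrical fingerprint features labelled normal versus attack; the features, architecture, and data are assumptions for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic fingerprints: 8 features per window (e.g. packet rate, current
# draw, voltage ripple); attack windows drawn from a shifted distribution.
n = 2000
normal = torch.randn(n, 8)
attack = torch.randn(n, 8) + torch.tensor([2.0, 0, 0, 1.5, 0, 0, 0, -1.0])
x = torch.cat([normal, attack])
y = torch.cat([torch.zeros(n, 1), torch.ones(n, 1)])

model = nn.Sequential(
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),                 # logit: attack vs normal
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = (torch.sigmoid(model(x)) > 0.5).float()
    print(f"training accuracy ~ {(pred == y).float().mean().item():.3f}")
```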

    Orchestration in the Cloud-to-Things compute continuum: taxonomy, survey and future directions

    IoT systems are becoming an essential part of our environment. Smart cities, smart manufacturing, augmented reality, and self-driving cars are just some examples of the wide range of domains where the applicability of such systems has been increasing rapidly. These IoT use cases often require simultaneous access to geographically distributed arrays of sensors, and heterogeneous remote, local, as well as multi-cloud computational resources. This gives birth to the extended Cloud-to-Things computing paradigm. The emergence of this new paradigm raised the quintessential need to extend the orchestration requirements (i.e., the automated deployment and run-time management) of applications from the centralised cloud-only environment to the entire spectrum of resources in the Cloud-to-Things continuum. To cope with this requirement, considerable attention has been devoted in the last few years to the development of orchestration systems in both industry and academia. This paper is an attempt to gather the research conducted in the orchestration landscape for the Cloud-to-Things continuum and to propose a detailed taxonomy, which is then used to critically review the landscape of existing research work. We finally discuss the key challenges that require further attention and also present a conceptual framework based on the conducted analysis

    Quantum Machine Learning for 6G Communication Networks: State-of-the-Art and Vision for the Future

    The upcoming 5th Generation (5G) of wireless networks is expected to lay a foundation of intelligent networks with the provision of some isolated Artificial Intelligence (AI) operations. However, fully-intelligent network orchestration and management for providing innovative services will only be realized in Beyond 5G (B5G) networks. To this end, we envisage that the 6th Generation (6G) of wireless networks will be driven by on-demand self-reconfiguration to ensure a many-fold increase in network performance and service types. The increasingly stringent performance requirements of emerging networks may finally trigger the deployment of some interesting new technologies such as large intelligent surfaces, electromagnetic-orbital angular momentum, visible light communications and cell-free communications, to name a few. Our vision for 6G is a massively connected complex network capable of rapidly responding to the users' service calls through real-time learning of the network state as described by the network-edge (e.g., base-station locations, cache contents, etc.), air interface (e.g., radio spectrum, propagation channel, etc.), and the user-side (e.g., battery-life, locations, etc.). The multi-state, multi-dimensional nature of the network state, requiring real-time knowledge, can be viewed as a quantum uncertainty problem. In this regard, the emerging paradigms of Machine Learning (ML), Quantum Computing (QC), and Quantum ML (QML) and their synergies with communication networks can be considered as core 6G enablers. Considering these potentials, starting with the 5G target services and enabling technologies, we provide a comprehensive review of the related state-of-the-art in the domains of ML (including deep learning), QC and QML, and identify their potential benefits, issues and use cases for their applications in the B5G networks. Subsequently, we propose a novel QC-assisted and QML-based framework for 6G communication networks while articulating its challenges and potential enabling technologies at the network-infrastructure, network-edge, air interface and user-end. Finally, some promising future research directions for the quantum- and QML-assisted B5G networks are identified and discussed
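
    As a toy illustration of the QML building blocks this review covers (not the paper's proposed 6G framework), the sketch below trains a two-qubit variational circuit in PennyLane to map two normalized network-state features to a decision value; the features, labels, and circuit choices are invented for the example.

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(weights, x):
    # Encode two normalized network-state features (e.g. load, channel quality).
    qml.RY(np.pi * x[0], wires=0)
    qml.RY(np.pi * x[1], wires=1)
    # One variational layer: trainable rotations plus entanglement.
    qml.RY(weights[0], wires=0)
    qml.RY(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

# Toy task: +1 for "low-load" network states, -1 for "high-load" states.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]], requires_grad=False)
Y = np.array([1.0, 1.0, -1.0, -1.0], requires_grad=False)

def cost(weights):
    loss = 0.0
    for x, y in zip(X, Y):
        loss = loss + (circuit(weights, x) - y) ** 2
    return loss / len(X)

opt = qml.GradientDescentOptimizer(stepsize=0.3)
weights = np.array([0.01, 0.01], requires_grad=True)
for _ in range(60):
    weights = opt.step(cost, weights)

print("trained weights:", weights, "final cost:", cost(weights))
```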