41 research outputs found

    Security and Privacy for Modern Wireless Communication Systems

    This reprint focuses on the latest protocol research, software/hardware development and implementation, and system architecture design addressing emerging security and privacy issues in modern wireless communication networks. Relevant topics include, but are not limited to, the following: deep-learning-based security and privacy design; covert communications; information-theoretic foundations for advanced security and privacy techniques; lightweight cryptography for power-constrained networks; physical-layer key generation; prototypes and testbeds for security and privacy solutions; encryption and decryption algorithms for low-latency-constrained networks; security protocols for modern wireless communication networks; network intrusion detection; physical-layer design with security considerations; anonymity in data transmission; vulnerabilities in security and privacy in modern wireless communication networks; challenges of security and privacy in node–edge–cloud computation; security and privacy design for low-power wide-area IoT networks; security and privacy design for vehicular networks; and security and privacy design for underwater communication networks.

    Time-slot based architecture for power beam-assisted relay techniques in CR-WSNs with transceiver hardware inadequacies

    Over the past two decades, numerous research projects have concentrated on cognitive radio wireless sensor networks (CR-WSNs) and their benefits. To tackle the problem of energy and spectrum shortfall in CR-WSNs, this research proposes an underlay decode-and-forward (DF) relaying technique. Using the suggested time-slot architecture (TSA), this technique harvests energy from a multi-antenna power beam (PB) and delivers source information to the destination using energy-constrained secondary source and relay nodes. The study considers three proposed relay selection schemes: enhanced hybrid partial relay selection (E-HPRS), conventional opportunistic relay selection (C-ORS), and leading opportunistic relay selection (L-ORS). We demonstrate the viability of the suggested methods by examining the outage probability (OP) and throughput (TPT) under multiple primary users (PUs). These schemes leverage a time-switching (TS) receiver design to increase end-to-end performance while accounting for the maximum interference constraint and transceiver hardware inadequacies. To assess the efficacy of the proposed methods, we derive exact and asymptotic closed-form expressions for OP and TPT and examine how these factors affect overall performance over Rayleigh fading channels. The results show that the OP of the L-ORS protocol is 16% better than C-ORS and 75% better than E-HPRS in terms of transmit SNR. The OP of L-ORS is 30% better than C-ORS and 55% better than E-HPRS in terms of hardware inadequacies at the destination. The L-ORS technique outperforms C-ORS and E-HPRS in terms of TPT by 4% and 11%, respectively.
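The paper derives its own closed-form OP expressions for the specific TSA system; purely as an illustration of the quantity being analyzed, the sketch below estimates the outage probability of a generic two-hop decode-and-forward link over Rayleigh fading by Monte Carlo simulation and checks it against the textbook closed form. The average SNRs, threshold, and single-antenna setup are illustrative assumptions, not the paper's system model.

```python
import random
import math

def outage_probability_df(avg_snr1, avg_snr2, snr_threshold,
                          trials=100_000, seed=1):
    """Monte Carlo estimate of the outage probability of a two-hop
    decode-and-forward link.  Rayleigh fading makes each hop's
    instantaneous SNR exponentially distributed around its average;
    the DF end-to-end SNR is limited by the weaker hop."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        snr1 = rng.expovariate(1.0 / avg_snr1)  # hop 1 instantaneous SNR
        snr2 = rng.expovariate(1.0 / avg_snr2)  # hop 2 instantaneous SNR
        if min(snr1, snr2) < snr_threshold:     # DF: weakest hop dominates
            outages += 1
    return outages / trials

def outage_probability_exact(avg_snr1, avg_snr2, snr_threshold):
    """Closed form: P(min(X, Y) < t) = 1 - exp(-t/avg1) * exp(-t/avg2)."""
    return 1.0 - math.exp(-snr_threshold / avg_snr1
                          - snr_threshold / avg_snr2)
```

The same simulation skeleton extends naturally to relay selection (draw one SNR pair per candidate relay and keep the best), which is how schemes such as those compared above are typically cross-checked against their closed forms.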

    Reinforcement Learning Based Resource Allocation for Energy-Harvesting-Aided D2D Communications in IoT Networks

    It is anticipated that mobile data traffic and the demand for higher data rates will increase dramatically as a result of the explosion of wireless devices, such as the Internet of Things (IoT) and machine-to-machine communication. Numerous location-based peer-to-peer services available today allow mobile users to communicate directly with one another, which can help offload traffic from congested cellular networks. In cellular networks, Device-to-Device (D2D) communication has been introduced to exploit direct links between devices instead of transmitting through the Base Station (BS). However, D2D and IoT communications are heavily hindered by the high energy consumption of mobile and IoT devices, whose battery capacity is restricted. Energy-constrained wireless devices may extend their lifespan by drawing upon reusable external sources of energy, such as solar, wind, vibration, thermoelectric, and radio frequency (RF) energy, to overcome the limited-battery problem. Such approaches are commonly referred to as Energy Harvesting (EH). One promising EH approach is Simultaneous Wireless Information and Power Transfer (SWIPT). As the number of wireless users rises, resource allocation techniques must be implemented in modern wireless networks to facilitate cooperation among users for limited resources, such as time and frequency bands. As well as ensuring an adequate supply of energy for reliable and efficient communication, resource allocation provides a roadmap for each individual user to consume the right amount of energy. In D2D networks with time, frequency, and power constraints, significant computing power is generally required to achieve a joint resource management design.
Thus, the purpose of this study is to develop a resource allocation scheme that is based on spectrum sharing and enables low-cost computation for EH-assisted D2D and IoT communication. Until now, there has been no study examining resource allocation design for EH-enabled IoT networks with SWIPT-enabled D2D schemes that utilizes learning techniques and convex optimization. Most existing works use optimization and iterative approaches with a high level of computational complexity, which is not feasible in many IoT applications. To overcome these obstacles, a learning-based resource allocation mechanism based on the SWIPT scheme in IoT networks is proposed, where users are able to harvest energy from different sources. The system model consists of multiple IoT users, one BS, and multiple D2D pairs in EH-based IoT networks. To develop an energy-efficient system, we consider the SWIPT scheme in which D2D pairs employ the time-switching (TS) method to capture energy from the environment, whereas IoT users employ the power-splitting (PS) method to harvest energy from the BS. A mixed-integer nonlinear programming (MINLP) approach is presented for the solution of the Energy Efficiency (EE) problem by jointly optimizing subchannel allocation, power-splitting factor, power, and time. As part of the optimization approach, the original EE optimization problem is decomposed into three subproblems: (a) subchannel assignment and power-splitting factor, (b) power allocation, and (c) time allocation. To solve the subchannel assignment subproblem, which involves discrete variables, Q-learning is employed. Due to the large size of the overall problem and the continuous nature of certain variables, it is impractical to optimize all variables using the learning technique.
For the continuous-variable subproblems, namely power and time allocation, the original non-convex problem is first transformed into a convex one, and then the Majorization-Minimization (MM) approach is applied together with the Dinkelbach method. The performance of the proposed joint Q-learning and optimization algorithm has been evaluated in detail. In particular, the solution was compared with a linear EH model, as well as two heuristic algorithms, namely the constrained allocation algorithm and the random allocation algorithm. The results indicate that the technique is superior to conventional approaches. For example, for a distance of d = 10 m, our proposed algorithm improves EE compared to the pre-matching, constrained allocation, and random allocation methods by about 5.26%, 110.52%, and 143.90%, respectively. Considering the simulation results, the proposed algorithm is superior to other methods in the literature. Using spectrum sharing and harvesting energy at D2D and IoT devices achieves impressive EE gains. This superior performance can be seen both in terms of the average and sum EEs and in comparison with other baseline schemes.
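The Dinkelbach method mentioned above turns a fractional energy-efficiency objective into a sequence of easier parametric problems. As a minimal, self-contained sketch of that idea (not the paper's actual subproblem), the toy example below maximizes EE(p) = log2(1 + h*p) / (p + p_circuit) over a single transmit power; the channel gain, circuit power, bounds, and grid-search inner solver are illustrative assumptions.

```python
import math

def dinkelbach_ee(h=2.0, p_circuit=0.5, p_max=4.0, tol=1e-6, grid=10_000):
    """Dinkelbach's method for maximizing the ratio
    EE(p) = rate(p) / power(p), with rate(p) = log2(1 + h*p) and
    power(p) = p + p_circuit.  Each iteration solves the parametric
    problem max_p [rate(p) - lam * power(p)] and updates lam to the
    achieved ratio; at the optimum the parametric maximum is zero."""
    powers = [p_max * i / grid for i in range(grid + 1)]
    rate = lambda p: math.log2(1.0 + h * p)
    lam = 0.0
    while True:
        # Inner parametric subproblem, here solved by plain grid search.
        p_best = max(powers, key=lambda p: rate(p) - lam * (p + p_circuit))
        f_val = rate(p_best) - lam * (p_best + p_circuit)
        lam = rate(p_best) / (p_best + p_circuit)  # updated EE estimate
        if f_val < tol:                            # F(lam) ~ 0 => optimal
            return p_best, lam
```

For this toy instance the optimum can be verified analytically: with u = 1 + 2p, EE = 2*log2(u)/u, which peaks at u = e, i.e. p = (e - 1)/2 and EE = 2/(e*ln 2). In the paper's setting the inner subproblem is instead handled by the MM-based convex reformulation.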

    Resource Allocation Challenges and Strategies for RF-Energy Harvesting Networks Supporting QoS

    This paper addresses the resource allocation challenges encountered in wireless sensor networks that incorporate RF energy harvesting capabilities, commonly referred to as RF-energy harvesting networks (RF-EHNs). RF energy harvesting and transmission techniques bring substantial advantages to applications requiring Quality of Service (QoS) support, as they enable proactive replenishment of wireless devices. We commence by providing an overview of RF-EHNs, followed by an in-depth examination of the resource allocation challenges associated with this technology. In addition, we present a case study that focuses on the design of an efficient operating strategy for RF-EHN receivers. Our investigation highlights the critical aspects of service differentiation and QoS support, which have received limited attention in previous research, and we explore previously unexplored areas within these domains.

    Terahertz Communications and Sensing for 6G and Beyond: A Comprehensive View

    The next-generation wireless technologies, commonly referred to as the sixth generation (6G), are envisioned to support extreme communication capacity and, in particular, disruptive network sensing capabilities. The terahertz (THz) band is one potential enabler for these, thanks to its enormous unused frequency bands and the high spatial resolution afforded by short wavelengths and large bandwidths. Unlike earlier surveys, this paper presents a comprehensive treatment and technology survey of THz communications and sensing in terms of advantages, applications, propagation characterization, channel modeling, measurement campaigns, antennas, transceiver devices, beamforming, networking, the integration of communications and sensing, and experimental testbeds. Starting from the motivation and use cases, we survey the development and historical perspective of THz communications and sensing alongside the anticipated 6G requirements. We explore radio propagation, channel modeling, and measurements for the THz band. We then discuss the transceiver requirements, architectures, technological challenges, and approaches, together with means to compensate for the high propagation losses through appropriate antenna and beamforming solutions. We also survey several system technologies required by, or beneficial for, THz systems. The synergistic design of sensing and communications is explored in depth. Practical trials, demonstrations, and experiments are also summarized. The paper gives a holistic view of the current state of the art and highlights the issues and challenges that remain open for further research towards 6G.
    Comment: 55 pages, 10 figures, 8 tables; submitted to IEEE Communications Surveys & Tutorials.

    Multifunction Radios and Interference Suppression for Enhanced Reliability and Security of Wireless Systems

    Wireless connectivity, with its relative ease of over-the-air information sharing, is a key technological enabler that facilitates many essential applications, such as satellite navigation, cellular communication, and media broadcasting, that are nowadays taken for granted. However, that relative ease of over-the-air communications has significant drawbacks too. On one hand, the broadcast nature of wireless communications means that one receiver can receive the superposition of multiple transmitted signals. On the other hand, it means that multiple receivers can receive the same transmitted signal. The former leads to congestion and concerns about reliability because of the limited nature of the electromagnetic spectrum and the vulnerability to interference. The latter means that wirelessly transmitted information is inherently insecure. This thesis aims to provide insights and means for improving the physical-layer reliability and security of wireless communications by, in a sense, combining the two aspects above through simultaneous, same-frequency transmit and receive operation. The ultimate aim is to increase the safety of environments where wireless devices operate or where malicious wirelessly operated devices (e.g., remote-controlled drones) potentially raise safety concerns. Specifically, two closely related research directions are pursued. Firstly, in-band full-duplex (IBFD) radio technology is leveraged to benefit the reliability and security of wireless communications in the form of multifunction IBFD radios. Secondly, the self-interference cancellation (SIC) capabilities of IBFD radios are extended to multiradio platforms to exploit these same concepts on a wider scale. Within the first research direction, a theoretical analysis framework is developed and then used to comprehensively study the benefits and drawbacks of simultaneously combining signal detection and jamming on the same frequency within a single platform.
Also, a practical prototype capable of such operation is implemented and its performance analyzed based on actual measurements. Together, the theoretical and experimental analyses give a concrete understanding of the quantitative benefits of simultaneous same-frequency operation over carrying out the operations in an alternating manner. Simultaneous detection and jamming is specifically shown to somewhat increase the effective range of a smart jammer compared to intermittent detection and jamming, increasing its reliability. Within the second research direction, two interference mitigation methods are proposed that extend SIC capabilities from single-platform IBFD radios to radios that are not physically connected. Such separation brings additional challenges in modeling the interference compared to the SIC problem, which the proposed methods address. These methods then allow multiple radios to intentionally generate and use interference for controlling access to the electromagnetic spectrum. Practical measurement results demonstrate that this effectively allows cooperative jamming to prevent unauthorized nodes from processing any signals of interest, while authorized nodes can use interference mitigation to still access the same signals. This in turn provides security at the physical layer of wireless communications.
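Digital-domain self-interference cancellation of the kind discussed above commonly amounts to estimating the self-interference channel from the radio's own known transmit signal and subtracting the re-created interference from the received signal. The sketch below illustrates this for a single-tap, real-valued channel with a least-squares estimate; the channel gain, signal powers, and noise level are illustrative assumptions, not the thesis's measurement setup.

```python
import random

def digital_sic(tx, rx):
    """Least-squares estimate of a single-tap self-interference channel
    from the known transmit samples, followed by subtraction of the
    re-created self-interference from the received samples."""
    # Scalar LS solution: h_hat = <tx, rx> / <tx, tx>
    h_hat = sum(t * r for t, r in zip(tx, rx)) / sum(t * t for t in tx)
    residual = [r - h_hat * t for t, r in zip(tx, rx)]
    return residual, h_hat

# Toy scenario: strong self-interference (gain 5.0) swamping a weak
# signal of interest plus receiver noise.
rng = random.Random(0)
n = 2000
tx = [rng.gauss(0, 1) for _ in range(n)]            # own transmit signal (known)
soi = [0.1 * rng.gauss(0, 1) for _ in range(n)]     # weak signal of interest
rx = [5.0 * t + s + 0.01 * rng.gauss(0, 1) for t, s in zip(tx, soi)]
residual, h_hat = digital_sic(tx, rx)
```

After cancellation, the residual is dominated by the weak signal of interest rather than the 25x-stronger self-interference, which is the effect that makes simultaneous same-frequency detection and jamming feasible in the first place. Real IBFD chains additionally need analog cancellation and multi-tap/nonlinear digital models.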

    Ultra-Dense Networks in 5G and Beyond: Challenges and Promising Solutions

    Ultra-Dense Networks (UDNs) are one of the promising and leading directions in Fifth Generation and beyond (5GB) networks. In UDNs, Small Cells (SCs) or Small Base Stations (SBSs) such as microcells, picocells, or femtocells are deployed in high densities, with inter-site distances in the range of a few to tens of meters. UDNs also require that SCs are deployed in relatively large densities compared to Human-Type Communication Users (HTCUs) such as smartphones, tablets, and/or laptops. Such SCs are characterized by their low transmission powers, small coverage areas, and low cost. Hence, SCs can be deployed either by the cellular network operators or by the customers themselves within their premises to maintain certain levels of Quality of Service (QoS). However, the randomness of SC deployment along with the small inter-site distances may degrade the achievable performance due to uncontrolled Inter-Cell Interference (ICI). Therefore, idle mode capability is an inevitable feature in the high-density regime of SCs. In idle mode, an SC is switched off to prevent ICI when no user is associated with it. In doing so, we can imagine the UDN as a mobile network that keeps following the users to remain as close as possible to them. In 5G, different use cases must be supported, such as enhanced Mobile Broad-Band (eMBB), Ultra-Reliable and Low-Latency Communication (URLLC), and massive Machine-Type Communication (mMTC). On one hand, the inevitable upcoming era of smart living requires unprecedented advances in enabling technologies to support the main building blocks of this era, namely Internet of Things (IoT) devices. Machine-Type Communication (MTC), the cellular version of Machine-to-Machine (M2M) communication, constitutes the main enabling technology to support communications among such devices with minimal or even no human intervention.
The massive number of these devices, Machine-Type Communication Devices (MTCDs), and the immense amount of traffic they generate require a paradigm shift in cellular and non-cellular wireless technologies to achieve the required connectivity. On the other hand, the sky-rocketing number of data-hungry applications installed on human-held devices (HTCUs), such as video conferencing and virtual reality applications, requires its own advances in the wireless infrastructure in terms of high capacity, enhanced reliability, and reduced latency. Throughout this thesis, we exploit the UDN infrastructure integrated with other 5G resources and enabling technologies to explore the possible opportunities in supporting both HTC and MTC, either solely or simultaneously. Given the shorter distances between transmitters and receivers encountered in UDNs, more realistic path loss models must be adopted, such as the Stretched Exponential Path Loss (SEPL) model. We use tools from stochastic geometry to formulate novel mathematical frameworks that can be used to investigate the achievable performance without having to rely on extensive, time-consuming Monte Carlo simulations. Moreover, the derived analytical expressions can be used to tune system parameters or to propose approaches/techniques for optimizing the performance of the system under certain circumstances. Tackling practical scenarios, the complexity, or sometimes infeasibility, of providing unlimited backhaul capacity for the massive number of SCs must be considered. In this regard, we adopt multiple-association, where each HTCU is allowed to associate with multiple SCs. By doing so, we carefully split the targeted traffic among several backhaul links to mitigate the bottleneck imposed by limited backhaul capacities. It is noteworthy that, for MTCDs coexisting with HTCUs, activating more SCs would allow more MTCDs to be supported without introducing additional ICI towards the HTCUs.
Targeting a different application, multiple-association can also be adopted to tackle computation-intensive HTCU applications. In particular, for applications such as augmented reality and environment recognition that require heavy computations, a task is split and partially offloaded to multiple SCs with integrated Edge Computing Servers (ECSs). The task partitions are then processed in parallel to reduce the end-to-end processing delay. Based on the relative densities of HTCUs and SCs, we use tools from stochastic geometry to develop an offline adaptive task division technique that further reduces the average end-to-end processing delay per user. With the frequent serious data breaches experienced in recent years, securing data has become more of a business risk than an information technology (IT) issue. Hence, we exploit the dense number of SCs found in UDNs along with Physical Layer Security (PLS) protocols to secure data transfer. In particular, we again adopt multiple-association and split the data of HTCUs into multiple streams originating from different SCs to prevent illegitimate receivers from eavesdropping. To support a massive number of MTCDs, we deploy the Non-Orthogonal Multiple-Access (NOMA) technique. Using power-domain NOMA, more than one device can be supported over the same frequency/time resource, and their signals are distinguished at the receiver using Successive Interference Cancellation (SIC). In the same scope, exploiting the available resources in 5G and beyond networks, we investigate an mMTC scenario in a UDN operating in the Millimeter Wave (mmWave) band and supported by wireless backhauling. In doing so, we shed light on the possible gains of utilizing the mmWave band, where the severe penetration losses of mmWave can be exploited to mitigate the significant ICI in UDNs. Also, the vast bandwidth available in the mmWave band helps to allocate more Resource Blocks (RBs) per SC, which corresponds to supporting more MTCDs.
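The power-domain NOMA mechanism described above can be illustrated with a two-user toy example: the receiver decodes the high-power signal first while treating the other as noise, subtracts its re-created contribution, and then decodes the low-power signal from the residual. The BPSK symbols, 0.8/0.2 power split, and noise level below are illustrative assumptions, not the thesis's system model.

```python
import random

def sic_decode(y, p_strong):
    """Successive interference cancellation for two power-domain NOMA
    users with BPSK symbols (+1/-1).  The high-power signal is decoded
    first, re-created at its known amplitude, and subtracted; the
    low-power signal is then decoded from the residual."""
    s_strong = [1 if v >= 0 else -1 for v in y]       # decode strong user
    residual = [v - (p_strong ** 0.5) * s for v, s in zip(y, s_strong)]
    s_weak = [1 if v >= 0 else -1 for v in residual]  # decode weak user
    return s_strong, s_weak

# Toy superposition: the strong user gets 0.8 of the power, the weak 0.2.
rng = random.Random(42)
bits1 = [rng.choice([-1, 1]) for _ in range(1000)]    # strong user's symbols
bits2 = [rng.choice([-1, 1]) for _ in range(1000)]    # weak user's symbols
y = [(0.8 ** 0.5) * b1 + (0.2 ** 0.5) * b2 + 0.05 * rng.gauss(0, 1)
     for b1, b2 in zip(bits1, bits2)]
d1, d2 = sic_decode(y, 0.8)
```

With this power gap and low noise, both symbol streams are recovered exactly, which is the sense in which NOMA lets two MTCDs share one frequency/time resource. In practice the power allocation must respect the users' channel gains, and SIC error propagation limits how many users can be superposed.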

    Spectral Efficiency Enhancement using Hybrid Pre-Coder Based Spectrum Handover Mechanism

    Recently, the use of Millimeter-wave (mm-Wave) technology has expanded immensely across various communication applications due to massive technological developments in wireless communications. Furthermore, mm-Wave offers a high-bandwidth spectrum that can handle large demands for data transmission and internet services. However, previous research has observed high interference at the time of spectrum handover from secondary (unlicensed) users to primary (licensed) users. Thus, interference reduction through high spectral efficiency and a smooth spectrum handoff process with minimum delay is an important research area. Therefore, a Hybrid Pre-coder Design based Spectrum Handoff (HPDSH) Algorithm is proposed in this article to increase spectrum efficiency in Cognitive Radio Networks (CRNs) and to access the large-bandwidth spectrum of the mm-Wave system to meet the high data rate demands of current cellular networks. The HPDSH Algorithm is used to enhance spectral efficiency, take handover decisions, and select backup channels. Different scenarios and parameters are considered to evaluate the performance of the proposed HPDSH Algorithm in terms of spectral efficiency and Signal-to-Noise Ratio (SNR). The proposed HPDSH Algorithm is compared against various traditional spectrum handoff methods, and the performance results clearly show that it outperforms the other spectrum handoff methods.

    Five Facets of 6G: Research Challenges and Opportunities

    Whilst fifth-generation (5G) systems are being rolled out across the globe, researchers have turned their attention to the exploration of radical next-generation solutions. At this early evolutionary stage we survey five main research facets of this field, namely Facet 1: next-generation architectures, spectrum and services; Facet 2: next-generation networking; Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing; and Facet 5: applications of deep learning in 6G networks. In this paper, we provide a critical appraisal of the literature on promising techniques, ranging from the associated architectures and networking to applications and designs. We portray a plethora of heterogeneous architectures relying on cooperative hybrid networks supported by diverse access and transmission mechanisms. The vulnerabilities of these techniques are also addressed and carefully considered in order to highlight the most promising future research directions. Additionally, we list a rich suite of learning-driven optimization techniques. We conclude by observing the evolutionary paradigm shift that has taken place from pure single-component bandwidth-efficiency, power-efficiency or delay optimization towards multi-component designs, as exemplified by the twin-component ultra-reliable low-latency mode of the 5G system. We advocate a further evolutionary step towards multi-component Pareto optimization, which requires the exploration of the entire Pareto front of all optimal solutions, where none of the components of the objective function may be improved without degrading at least one of the other components.
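The multi-component Pareto optimization advocated above rests on the notion of non-dominated solutions: a design is on the Pareto front exactly when no other design is at least as good in every objective and strictly better in at least one. A minimal sketch of extracting that front from a finite set of candidate designs (the two-objective points below, both to be maximized, are hypothetical):

```python
def pareto_front(points):
    """Return the non-dominated points of a multi-objective design space,
    assuming every component is to be maximized."""
    def dominates(a, b):
        # a dominates b: at least as good everywhere, strictly better somewhere
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical trade-off points, e.g. (bandwidth efficiency, power efficiency).
designs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.5, 1.5), (0.5, 2.5)]
front = pareto_front(designs)
```

Here (1.5, 1.5) and (0.5, 2.5) are dominated and drop out, leaving the three-point front. Exhaustive enumeration only works for finite candidate sets; exploring the full front of a continuous design space requires scalarization or evolutionary multi-objective methods.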

    IoT Applications Computing

    The evolution of emerging and innovative technologies based on Industry 4.0 concepts is transforming society and industry into a fully digitized and networked globe. Sensing, communications, and computing embedded with ambient intelligence are at the heart of the Internet of Things (IoT), the Industrial Internet of Things (IIoT), and Industry 4.0 technologies, with expanding applications in manufacturing, transportation, health, building automation, agriculture, and the environment. It is expected that the emerging technology clusters of ambient intelligence computing will not only transform modern industry but also advance societal health and wellness, as well as make the environment more sustainable. This book uses an interdisciplinary approach to explain the complex issue of scientific and technological innovations largely based on intelligent computing.