
    Fortified Anonymous Communication Protocol for Location Privacy in WSN: A Modular Approach

    A wireless sensor network (WSN) consists of many hosts called sensors. These sensors can sense a phenomenon (motion, temperature, humidity, etc.) and represent what they sense, or aggregates of it (average, max, min), in the form of data. WSNs have many applications, including object tracking and monitoring, where in most cases the monitored objects need protection. In these applications, data privacy itself might not be as important as the privacy of the source location. In addition to source location privacy, sink location privacy should also be provided. Providing an efficient end-to-end privacy solution is challenging due to the open nature of the WSN. The key metrics for end-to-end location privacy are anonymity, observability, capture likelihood, and safety period. We extend this work to allow for countermeasures against multi-local and global adversaries. We present a network model protected against a sophisticated threat model: passive/active and local/multi-local/global attacks. This work provides a solution for end-to-end anonymity as well as location privacy, introducing a framework called the fortified anonymous communication (FAC) protocol for WSNs. http://dx.doi.org/10.3390/s15030582
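
The FAC protocol itself is not specified in this abstract, but the safety-period metric it targets can be illustrated in isolation. The following is a minimal simulation sketch, assuming a grid WSN, a hop-by-hop back-tracing local adversary, and phantom (random-walk) routing as a stand-in scheme; none of these choices are taken from the paper.

```python
import random

# Illustrative-only simulation of the "safety period" metric: the number of
# messages a source can send before a local, packet-tracing adversary
# back-tracks hop by hop from the sink to the source. Phantom (random-walk)
# routing is a stand-in scheme here; the FAC protocol is not reproduced.

GRID = 25                      # sensors on a GRID x GRID lattice (assumed)
SOURCE = (2, 2)
SINK = (GRID - 3, GRID - 3)
WALK_LEN = 8                   # random-walk hops before greedy routing (assumed)

def route(src, sink, walk_len):
    """Return the hop sequence of one message: random walk, then greedy."""
    path, pos = [src], src
    for _ in range(walk_len):                      # phantom random-walk phase
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        pos = (min(max(pos[0] + dx, 0), GRID - 1),
               min(max(pos[1] + dy, 0), GRID - 1))
        path.append(pos)
    while pos != sink:                             # greedy routing to the sink
        pos = (pos[0] + (sink[0] > pos[0]) - (sink[0] < pos[0]),
               pos[1] + (sink[1] > pos[1]) - (sink[1] < pos[1]))
        path.append(pos)
    return path

def safety_period(max_msgs=5000):
    """Messages sent until the adversary reaches the source (or gives up).

    The adversary starts at the sink; each time a message passes through its
    current node, it moves one hop upstream along that message's path.
    """
    adversary = SINK
    for msg in range(1, max_msgs + 1):
        path = route(SOURCE, SINK, WALK_LEN)
        if adversary in path:
            i = path.index(adversary)
            if i > 0:
                adversary = path[i - 1]            # back-trace one hop
        if adversary == SOURCE:
            return msg
    return max_msgs                                # never captured in the run

print("safety period (messages):", safety_period())
```

The random-walk phase makes paths near the source differ from message to message, which is exactly what stalls the back-tracing adversary and lengthens the safety period.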

    Machine Learning Empowered Resource Allocation for NOMA Enabled IoT Networks

    The Internet of Things (IoT) is one of the main use cases of ultra-massive machine-type communications (umMTC), which aims to connect large numbers of short-packet sensors and devices in sixth-generation (6G) systems. This rapid increase in connected devices requires efficient utilization of the limited spectrum resources. To this end, non-orthogonal multiple access (NOMA) is considered a promising solution due to its potential for massive connectivity over the same time/frequency resource block (RB). IoT users have distinctive characteristics, such as sporadic transmission, long required battery life, minimum data rate demands, and heterogeneous QoS requirements. Given these characteristics, NOMA-enabled IoT networks must allocate resources appropriately and efficiently. Moreover, conventional optimization approaches lack 1) learning capabilities, 2) scalability, 3) low complexity, and 4) long-term resource optimization, making them unsuitable for IoT networks with time-varying communication channels and dynamic network access. This thesis provides machine learning (ML) based resource allocation methods that optimize long-term resources for IoT users according to their characteristics and the dynamic environment.

First, we design a tractable framework based on model-free reinforcement learning (RL) for downlink NOMA IoT networks to allocate resources dynamically. More specifically, we use actor-critic deep reinforcement learning (ACDRL) to improve the sum rate of IoT users. This model can optimize the resource allocation for different users in a dynamic, multi-cell scenario. The state space in the proposed framework is based on the three-dimensional association among multiple IoT users, multiple base stations (BSs), and multiple sub-channels. To find the optimal resource allocation for the sum-rate maximization problem and to better explore the dynamic environment, this work uses the instantaneous data rate as the reward (sketched below). The proposed ACDRL algorithm is scalable and handles different network loads. The proposed ACDRL-D and ACDRL-C algorithms outperform DRL and RL in terms of convergence speed and data rate by 23.5% and 30.3%, respectively. Additionally, the proposed scheme provides a better sum rate than orthogonal multiple access (OMA).

Second, similar to the sum-rate maximization problem, energy efficiency (EE) is a key concern, especially for applications where battery replacement is costly or difficult, e.g., sensors with different QoS requirements deployed in radioactive areas, hidden in walls, or inside pressurized pipes. For such scenarios, energy cooperation schemes are required. To maximize the EE of different IoT users, i.e., grant-free (GF) and grant-based (GB) users, in an uplink NOMA network, we propose an RL-based semi-centralized optimization framework. In particular, this work applies a proximal policy optimization (PPO) algorithm for GB users, while the EE of GF users is optimized with a multi-agent deep Q-network aided by a relay node. Numerical results demonstrate that the proposed algorithm increases the EE of GB users compared with random and fixed power allocation methods, and show superior EE for GF users over the benchmark scheme (convex optimization). Furthermore, we show that the number of GB users is strongly correlated with the EE of both types of users.
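
As a concrete illustration of the instantaneous-rate reward mentioned above, the following is a minimal sketch of the downlink NOMA sum rate for one two-user sub-channel with successive interference cancellation (SIC). The channel gains, bandwidth, noise level, and power split are assumed values for illustration, not parameters from the thesis.

```python
import numpy as np

# Instantaneous sum-rate reward for two downlink NOMA users sharing one
# sub-channel. All numeric parameters below are illustrative assumptions.

B = 180e3            # sub-channel bandwidth (Hz), assumed
N0 = 1e-17           # noise power spectral density (W/Hz), assumed
P_TOTAL = 1.0        # BS transmit power on this sub-channel (W), assumed

def noma_sum_rate(g_near, g_far, alpha):
    """Sum rate (bit/s) of a two-user downlink NOMA pair.

    g_near, g_far: channel power gains with g_near > g_far
    alpha: fraction of P_TOTAL given to the far (weak) user, 0.5 < alpha < 1
    """
    noise = N0 * B
    p_far, p_near = alpha * P_TOTAL, (1 - alpha) * P_TOTAL
    # The far user decodes its signal treating the near user's as interference.
    sinr_far = p_far * g_far / (p_near * g_far + noise)
    # The near user removes the far user's signal via SIC, then decodes its own.
    sinr_near = p_near * g_near / noise
    return B * (np.log2(1 + sinr_far) + np.log2(1 + sinr_near))

# An RL agent's reward for one step could then be this instantaneous rate:
print(f"reward = {noma_sum_rate(g_near=1e-10, g_far=2e-11, alpha=0.8):.3e} bit/s")
```
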
Third, we develop an efficient model-free backscatter communication (BAC) approach for a simultaneous downlink and uplink NOMA system, jointly optimizing the transmit power of downlink IoT users and the reflection coefficient of uplink backscatter devices using a reinforcement learning algorithm, namely soft actor-critic (SAC). With the advantage of entropy regularization, the SAC agent learns to explore and exploit the dynamic BAC-NOMA network efficiently. Numerical results unveil the superiority of the proposed algorithm over the conventional optimization approach in terms of the average sum rate of uplink backscatter devices. We show that the network with multiple downlink users obtains a higher reward over a large number of iterations. Moreover, the proposed algorithm outperforms the benchmark scheme and BAC with OMA in sum rate across different self-interference coefficients, noise levels, QoS requirements, and cell radii.
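
The quantity the SAC agent would trade off can be sketched in isolation. Below is a minimal, illustrative computation of the downlink-user rate and the backscattered uplink rate as functions of the downlink transmit power and the reflection coefficient; the channel model, gains, and the treatment of residual self-interference are simplifying assumptions, not the thesis's system model.

```python
import numpy as np

# Illustrative BAC-NOMA objective: a backscatter device reflects the BS's
# downlink signal with reflection coefficient beta, creating an uplink link
# whose rate is tuned jointly with the downlink transmit power.
# All gains and parameters below are assumptions for this sketch.

B, N0 = 180e3, 1e-17                  # bandwidth (Hz), noise PSD (W/Hz)

def bac_noma_rates(p_dl, beta, g_dl, g_bs_bd, g_bd_rx, g_si):
    """Return (downlink rate, backscatter uplink rate) in bit/s.

    p_dl    : BS downlink transmit power (W)
    beta    : reflection coefficient of the backscatter device, 0 <= beta <= 1
    g_dl    : BS -> downlink-user channel power gain
    g_bs_bd : BS -> backscatter-device gain; g_bd_rx: device -> receiver gain
    g_si    : residual self-interference gain at the full-duplex receiver
    """
    noise = N0 * B
    # Backscatter uplink: the reflected power is beta * p_dl * g_bs_bd.
    sinr_bd = beta * p_dl * g_bs_bd * g_bd_rx / (p_dl * g_si + noise)
    # The downlink user sees the (weak) reflected signal as interference.
    sinr_dl = p_dl * g_dl / (beta * p_dl * g_bs_bd * g_bd_rx + noise)
    return B * np.log2(1 + sinr_dl), B * np.log2(1 + sinr_bd)

r_dl, r_ul = bac_noma_rates(p_dl=1.0, beta=0.4, g_dl=1e-10,
                            g_bs_bd=1e-9, g_bd_rx=1e-3, g_si=1e-13)
print(f"downlink {r_dl:.3e} bit/s, backscatter uplink {r_ul:.3e} bit/s")
```

Raising beta strengthens the backscatter uplink but adds interference at the downlink user, which is exactly the coupling that makes joint optimization (here, by SAC) necessary.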

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. After highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we present the key features and channel access mechanisms in the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of the existing and emerging solutions towards addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of the low-complexity Q-learning approach in mMTC scenarios, a minimal sketch of which is given below. Finally, we discuss some open research challenges and promising future research directions.

Comment: 37 pages, 8 figures, 7 tables; submitted for possible publication in IEEE Communications Surveys and Tutorials.
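
As an illustration of the low-complexity Q-learning approach highlighted above, the following is a minimal sketch in which each MTC device keeps a stateless Q-table over random-access slots and learns, from collision feedback alone, to settle on a slot not used by its neighbors. Device counts, rewards, and learning parameters are assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of low-complexity Q-learning for mMTC random access:
# each device holds one Q-value per RACH slot and learns from collision
# feedback only. All parameters below are illustrative assumptions.

N_DEVICES, N_SLOTS, EPISODES = 20, 20, 3000
ALPHA, EPSILON = 0.1, 0.1            # learning rate, exploration probability

Q = np.zeros((N_DEVICES, N_SLOTS))   # stateless (single-state) Q-table

for _ in range(EPISODES):
    # epsilon-greedy slot selection per device
    greedy = Q.argmax(axis=1)
    explore = rng.random(N_DEVICES) < EPSILON
    slots = np.where(explore, rng.integers(0, N_SLOTS, N_DEVICES), greedy)
    # reward +1 if a device's slot was collision-free, -1 otherwise
    counts = np.bincount(slots, minlength=N_SLOTS)
    reward = np.where(counts[slots] == 1, 1.0, -1.0)
    idx = np.arange(N_DEVICES)
    Q[idx, slots] += ALPHA * (reward - Q[idx, slots])

occupancy = np.bincount(Q.argmax(axis=1), minlength=N_SLOTS)
print("devices that learned a private slot:",
      int((occupancy == 1).sum()), "/", N_DEVICES)
```

The appeal of this scheme in the mMTC setting is its footprint: each device stores only N_SLOTS values and needs no signalling beyond the collision feedback it already observes.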

    Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last 5 Years

    Timing and carrier synchronization is a fundamental requirement for any wireless communication system to work properly. Timing synchronization is the process by which a receiver node determines the correct instants in time at which to sample the incoming signal. Carrier synchronization is the process by which a receiver aligns the frequency and phase of its local carrier oscillator with those of the received signal. In this paper, we survey the literature of the last five years (2010–2014) and present a comprehensive review and classification of recent research progress in achieving timing and carrier synchronization in single-input single-output (SISO), multiple-input multiple-output (MIMO), cooperative relaying, and multiuser/multicell interference networks. Considering both single-carrier and multi-carrier communication systems, we survey and categorize the proposed timing and carrier synchronization techniques, focusing on the system model assumptions for synchronization, the synchronization challenges, and the state-of-the-art synchronization solutions and their limitations. Finally, we envision some future research directions.
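
As a worked example of carrier synchronization (not drawn from any one surveyed paper), the following sketch estimates a carrier frequency offset with the classic repeated-preamble correlation of the Moose/Schmidl-Cox family; the signal parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Repeated-preamble CFO estimation: when a training half-symbol is sent
# twice, a carrier frequency offset appears as a fixed phase rotation
# between the two received halves. Parameters are assumed for illustration.

N = 64                        # samples per training-symbol half
cfo = 0.003                   # true CFO, in cycles per sample
snr_db = 15

half = np.exp(1j * 2 * np.pi * rng.random(N))   # unit-modulus training half
tx = np.tile(half, 2)                           # transmit it twice
n = np.arange(2 * N)
rx = tx * np.exp(1j * 2 * np.pi * cfo * n)      # apply the CFO
rx += (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N)) \
      * 10 ** (-snr_db / 20) / np.sqrt(2)       # add AWGN

# Correlate the two halves; the angle of the sum reveals the CFO, since
# rx[k + N] = rx[k] * exp(j * 2*pi * cfo * N) in the noiseless case.
corr = np.sum(np.conj(rx[:N]) * rx[N:])
cfo_hat = np.angle(corr) / (2 * np.pi * N)
print(f"true CFO {cfo:.5f}, estimated {cfo_hat:.5f} cycles/sample")
```

The estimator is unambiguous only while |cfo| < 1/(2N) cycles per sample, which is why practical preambles trade off half-length against acquisition range.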

    Energy and Spectral Efficient Wireless Communications

    Energy and spectrum are two precious commodities for wireless communications, and improving energy and spectrum efficiency has become a critical issue in the design of wireless communication systems. This dissertation is devoted to the development of energy- and spectral-efficient wireless communications. The developed techniques can be applied to a wide range of wireless communication systems, such as wireless sensor networks (WSNs) designed for structural health monitoring (SHM), medium access control (MAC) for multi-user systems, and cooperative spectrum sensing in cognitive radio systems.

First, to improve the energy efficiency of SHM WSNs, a new ultra-low-power (ULP) WSN is proposed to monitor the vibration properties of structures such as buildings, bridges, and the wings and bodies of aircraft. The new scheme integrates energy harvesting, data sensing, and wireless communication into a unified process, and it achieves significant energy savings compared to existing WSNs.

Second, a cross-layer collision-tolerant (CT) MAC scheme is proposed to improve energy and spectral efficiency in a multi-user system with a shared medium. When two users transmit simultaneously over a shared medium, a collision happens at the receiver. Conventional MAC schemes discard the collided signals, which wastes precious energy and spectrum resources. In the proposed CT-MAC scheme, each user transmits multiple weighted replicas of a packet in randomly selected data slots of a frame, and the indices of the selected slots are transmitted in a special collision-free position slot at the beginning of each frame. Collisions in the data slots at the MAC layer are resolved using multiuser detection (MUD) in the PHY layer; a toy sketch of this idea is given below. Compared to existing schemes, the proposed CT-MAC scheme can support more simultaneous users with a higher throughput.

Third, a new cooperative spectrum sensing scheme is proposed to improve the energy and spectral efficiency of a cognitive radio network: a Slepian-Wolf coded cooperation scheme for a network with two secondary users (SUs) performing cooperative spectrum sensing through a fusion center (FC). The proposed scheme achieves significant performance gains compared to existing schemes.
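
The CT-MAC collision-resolution step can be illustrated with a toy model, as referenced above: since the position slot tells the receiver which users occupied which data slots and with what weights, the superposed (collided) slots form a linear system that a multiuser detector can solve. The sizes, weights, and noiseless channel below are simplifying assumptions, not the dissertation's PHY model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy CT-MAC collision resolution: each user places weighted replicas of
# one symbol in random data slots; the receiver, knowing the slot indices
# and weights from the position slot, recovers all packets jointly.

N_USERS, N_SLOTS, N_REPLICAS = 3, 8, 3

# Each user's packet, reduced to a single complex symbol for the sketch.
x = rng.standard_normal(N_USERS) + 1j * rng.standard_normal(N_USERS)

# Build the slot/weight matrix A that the position slot would announce.
A = np.zeros((N_SLOTS, N_USERS), dtype=complex)
for u in range(N_USERS):
    slots = rng.choice(N_SLOTS, size=N_REPLICAS, replace=False)
    A[slots, u] = rng.standard_normal(N_REPLICAS) \
                  + 1j * rng.standard_normal(N_REPLICAS)

y = A @ x                    # received frame: collided slots superpose

# MUD step: least-squares recovery using the known matrix A.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("max recovery error:", np.max(np.abs(x_hat - x)))
```

Recovery succeeds whenever the announced slot/weight matrix has full column rank, which the random slot choices make overwhelmingly likely; this is why collided slots need not be discarded.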