
    Statistical priority-based uplink scheduling for M2M communications

    Worldwide, major efforts are under way to transform the network from an Internet of humans only into the Internet of Things (IoT). Machine Type Communication Devices (MTCDs) are expected to overwhelm cellular networks with the large volumes of data they collect from their environments and send to remote MTCDs for processing, forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, the pillars required to support the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE nevertheless faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic: a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is also mostly in the uplink direction, i.e. from the MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, chief among them the need to deliver a huge amount of traffic within strict deadlines using limited radio resources. This is the main motivation behind this thesis. We introduce a novel M2M scheduling scheme that uses what we term “statistical priority” to determine the importance of the information carried by data packets. Statistical priority is calculated from statistical features of the data such as value similarity, trend similarity and auto-correlation. These calculations are performed by the MTCDs and reported to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending the limited radio resources on redundant or repetitive data, a common situation in M2M communications. To validate the technique, we perform a simulation-based comparison between the main existing scheduling techniques and the proposed statistical priority-based scheduler, conducted in a network that includes different types of MTCDs such as environmental monitoring sensors, surveillance cameras and alarms. The results show that the proposed scheduler outperforms the others, with the lowest loss of alarm packets and the highest rate of delivering critical packets that carry non-redundant information for both environmental monitoring and video traffic. This indicates that the proposed technique makes the most efficient use of the limited radio resources compared to the other techniques.
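    The abstract does not give the exact formula for statistical priority; the sketch below is a minimal Python illustration, assuming the score is a weighted combination of novelty terms derived from the three named features (value similarity, trend similarity and lag-1 auto-correlation). The function names, weights and window layout are hypothetical.

```python
import numpy as np

def lag1_autocorrelation(x):
    """Lag-1 autocorrelation of a 1-D sample window."""
    x = np.asarray(x, dtype=float)
    denom = np.sum((x - x.mean()) ** 2)
    if denom == 0:
        return 0.0
    return float(np.sum((x[:-1] - x.mean()) * (x[1:] - x.mean())) / denom)

def value_similarity(current, previous):
    """Similarity of raw values between the current and last reported window."""
    current, previous = np.asarray(current, dtype=float), np.asarray(previous, dtype=float)
    span = max(np.ptp(previous), 1e-9)
    return float(1.0 - np.clip(np.mean(np.abs(current - previous)) / span, 0.0, 1.0))

def trend_similarity(current, previous):
    """Similarity of first differences (trend) between the two windows, mapped to [0, 1]."""
    dc, dp = np.diff(current), np.diff(previous)
    if np.std(dc) == 0 or np.std(dp) == 0:
        return 1.0 if np.allclose(dc, dp) else 0.0
    return float((np.corrcoef(dc, dp)[0, 1] + 1.0) / 2.0)

def statistical_priority(current, previous, weights=(0.4, 0.3, 0.3)):
    """Assumed scoring rule: higher priority when the new window is dissimilar to the
    last report and weakly autocorrelated, i.e. likely to carry non-redundant information."""
    w_val, w_trend, w_ac = weights
    novelty_value = 1.0 - value_similarity(current, previous)
    novelty_trend = 1.0 - trend_similarity(current, previous)
    novelty_ac = 1.0 - abs(lag1_autocorrelation(current))
    return w_val * novelty_value + w_trend * novelty_trend + w_ac * novelty_ac

# Illustrative sensor windows: a near-duplicate reading scores lower than a sudden change.
prev = [20.1, 20.2, 20.1, 20.3, 20.2]
near_duplicate = [20.1, 20.2, 20.2, 20.3, 20.2]
sudden_change = [20.2, 24.9, 31.5, 30.8, 29.7]
print(statistical_priority(near_duplicate, prev))  # lower score -> likely redundant
print(statistical_priority(sudden_change, prev))   # higher score -> statistically important
```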

    GPS Anomaly Detection And Machine Learning Models For Precise Unmanned Aerial Systems

    The rapid development and deployment of 5G/6G networks have brought numerous benefits such as faster speeds, enhanced capacity, improved reliability, lower latency, greater network efficiency, and the enablement of new applications. Emerging 5G applications, which affect billions of devices and embedded electronics, also introduce cyber security vulnerabilities. This thesis focuses on the development of Global Positioning System (GPS) based anomaly detection and corresponding algorithms for Unmanned Aerial Systems (UAS). Chapter 1 provides an overview of the thesis background and its objectives. Chapter 2 presents an overview of 5G architectures, their advantages, and potential cyber threat types. Chapter 3 addresses the issue of GPS dropouts using the Dallas-Fort Worth (DFW) airport as a case study: data from surveillance drones in the DFW area are analyzed to examine message frequency and statistics on the time differences between GPS messages. Chapter 4 focuses on modeling and detecting false data injection (FDI) attacks on GPS. Specifically, three scenarios are modeled: Gaussian noise injection, data duplication, and data manipulation. Multiple detection schemes, based on clustering and reinforcement learning, are then deployed and their detection accuracy is investigated. Chapter 5 presents the results of Chapters 3 and 4. Overall, this research provides a categorization of GPS anomalies and possible outlier detection methods to minimize GPS interference for UAS, enhancing the security and reliability of UAS operations.
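    The chapter summaries do not specify the exact models or detectors used; the following is a minimal Python sketch of one FDI scenario (Gaussian noise injection into drone position fixes) paired with a clustering-based detector (DBSCAN flagging off-track fixes as outliers). The coordinates, noise levels and DBSCAN parameters are purely illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(seed=7)

# Nominal track: a drone drifting slowly near DFW (illustrative coordinates).
n = 300
lat = 32.90 + np.cumsum(rng.normal(0.0, 1e-5, n))
lon = -97.04 + np.cumsum(rng.normal(0.0, 1e-5, n))
track = np.column_stack([lat, lon])

# FDI scenario 1 (Gaussian noise injection): corrupt a random subset of fixes.
attacked = track.copy()
idx = rng.choice(n, size=15, replace=False)
attacked[idx] += rng.normal(0.0, 5e-3, size=(15, 2))  # noise far above sensor jitter

# Clustering-based detection: DBSCAN labels sparse, off-track fixes as outliers (-1).
# eps and min_samples are illustrative and would need tuning on real GPS/ADS-B data.
labels = DBSCAN(eps=1e-3, min_samples=5).fit_predict(attacked)
flagged = np.where(labels == -1)[0]

detected = np.intersect1d(flagged, idx)
print(f"injected {len(idx)} false fixes, flagged {len(flagged)}, "
      f"correctly detected {len(detected)}")
```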

    PERFORMANCE STUDY FOR CAPILLARY MACHINE-TO-MACHINE NETWORKS

    Wireless machine-to-machine (M2M) communication is becoming widely and rapidly pervasive, emerging as the means of transferring data among devices without human intervention. Capillary M2M networks represent a candidate for providing reliable M2M connectivity. In this thesis, we propose a wireless network architecture that aims at supporting a wide range of M2M applications (either real-time or non-real-time) with an acceptable quality-of-service (QoS) level. The architecture uses capillary gateways to reduce the number of devices communicating directly with a cellular network such as LTE. Moreover, the proposed architecture reduces the traffic load on the cellular network by providing the capillary gateways with dual wireless interfaces: one interface connects to the cellular network, whereas the other communicates with the intended destination via a WiFi-based mesh backbone for cost-effectiveness. We study the performance of the proposed architecture with the aid of the ns-2 simulator. An M2M capillary network is simulated in different scenarios by varying multiple factors that affect system performance. The simulations measure average packet delay and packet loss to evaluate the QoS of the proposed architecture. Our results reveal that the proposed architecture can satisfy the required level of QoS with a low traffic load on the cellular network, and that it outperforms both a cellular-based and a WiFi-based capillary M2M network. This implies a low cost of operation for the service provider while meeting a high-bandwidth service level agreement. In addition, we investigate how the proposed architecture behaves under different factors such as the number of capillary gateways, the application traffic rates, the number of backbone routers with different routing protocols, the number of destination servers, and the data rates provided by the LTE and WiFi technologies. Furthermore, the simulation results show that the proposed architecture remains reliable in terms of packet delay and packet loss even with a large number of nodes and high application traffic rates.
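    As a small illustration of the QoS metrics used in the evaluation, the sketch below computes average end-to-end delay and packet-loss ratio from per-packet send/receive records. It is a generic post-processing example under assumed record layout, not the actual ns-2 trace format used in the thesis.

```python
# Packet records: (packet_id, send_time_s, recv_time_s or None if the packet was lost).
records = [
    (1, 0.010, 0.042),
    (2, 0.020, 0.055),
    (3, 0.030, None),   # dropped somewhere between gateway and destination
    (4, 0.040, 0.081),
]

delays = [rx - tx for _, tx, rx in records if rx is not None]
avg_delay_ms = 1000 * sum(delays) / len(delays)
loss_ratio = sum(1 for _, _, rx in records if rx is None) / len(records)

print(f"average end-to-end delay: {avg_delay_ms:.1f} ms, packet loss: {loss_ratio:.0%}")
```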

    Will SDN be part of 5G?

    For many, this is no longer a valid question: the case is considered settled, with SDN/NFV (Software Defined Networking/Network Function Virtualization) providing the inevitable innovation enablers that will solve many outstanding management issues regarding 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner and some companies have already started unveiling their 5G equipment, there is a very realistic concern that we may only see point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all important obstacles in the way and looks at the state of the art of the relevant solutions. This survey differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on the fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPP), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focus issues above. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology. Comment: 33 pages, 10 figures.

    Energy efficiency and interference management in long term evolution-advanced networks.

    Doctoral Degree. University of KwaZulu-Natal, Durban. Cellular networks are continuously undergoing fast, extraordinary evolution to overcome technological challenges. Fourth generation (4G), or Long Term Evolution-Advanced (LTE-Advanced), networks offer performance improvements through increased network density, while allowing self-organisation and self-healing. The LTE-Advanced architecture is heterogeneous, consisting of different radio access technologies (RATs), such as macrocells, smallcells and cooperative relay nodes (RNs), with various capabilities, coexisting in the same geographical coverage area. These network improvements come with challenges that affect users’ quality of service (QoS) and network performance, including interference management, high energy consumption and poor coverage of marginal users. Developing mitigation schemes for these identified challenges is the focus of this thesis. The exponential growth of mobile broadband data usage and poor network performance along the cell edges result in a large increase in energy consumption for both base stations (BSs) and users. This is due to improper RN placement or deployment, which creates severe inter-cell and intra-cell interference in the network. It is therefore necessary to investigate appropriate RN placement techniques that offer efficient coverage extension while reducing energy consumption and mitigating interference in LTE-Advanced femtocell networks. This work proposes an energy efficient and optimal RN placement (EEORNP) algorithm, based on a greedy algorithm, to assure improved and effective coverage extension. The performance of the proposed algorithm is investigated in terms of coverage percentage and the number of RNs needed to cover marginalised users, and it is found to outperform other RN placement schemes. Transceiver design has gained importance as one of the effective tools of interference management. Centralised transceiver design techniques have been used to improve the performance of LTE-Advanced networks in terms of mean square error (MSE), bit error rate (BER) and sum-rate, but they are neither effective nor computationally feasible for distributed cooperative heterogeneous networks, the systems considered in this thesis. This work therefore proposes decentralised transceiver designs based on least-squares (LS) and minimum MSE (MMSE) pilot-aided channel estimation for interference management in uplink LTE-Advanced femtocell networks. The decentralised transceiver algorithms are designed for the femtocells, the macrocell user equipments (MUEs), the RNs and the cell-edge macrocell UEs (CUEs) in half-duplex cooperative relaying systems. The BER performance of the proposed algorithms, including the effect of channel estimation, is investigated. Finally, energy efficiency (EE) optimisation is investigated in half-duplex multi-user multiple-input multiple-output (MU-MIMO) relay systems. The EE optimisation is divided into sub-problems owing to the distributed architecture of the MU-MIMO relay systems, and the decentralised approach is employed to design the transceivers of the MUEs, CUEs, RN and femtocells for the different sub-problems. The EE objective functions are formulated as convex optimisation problems subject to QoS and transmit power constraints in the case of perfect channel state information (CSI). The non-convexity of the formulated EE optimisation problems is surmounted by introducing an EE-parameter subtractive function into each proposed algorithm, and these EE parameters are updated using Dinkelbach’s algorithm. The EE optimisation of the proposed algorithms is achieved after finding the optimal transceivers, where the unknown interference terms in the transmit signals are designed under the zero-forcing (ZF) assumption and estimation errors are incorporated to improve the EE performance. Simulation results evaluate the performance of the proposed decentralised schemes in terms of average EE and show that they outperform existing algorithms.
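    The abstract states that the EE parameters are updated with Dinkelbach's algorithm; the sketch below shows the generic Dinkelbach iteration for a single-link rate-over-power ratio, not the thesis's MU-MIMO relay formulation. The channel gain, bandwidth, circuit power and power limit are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy single-link model: rate(p) = B*log2(1 + g*p/N0B), consumed power = p + Pc.
B, g, N0B, Pc = 1e6, 1e-3, 1e-9, 0.1   # bandwidth, channel gain, noise power, circuit power
p_max = 1.0                              # transmit power limit in watts

rate = lambda p: B * np.log2(1.0 + g * p / N0B)
power = lambda p: p + Pc

def dinkelbach(tol=1e-6, max_iter=50):
    """Dinkelbach iteration: repeatedly maximise rate(p) - lam*power(p), then update
    lam = rate(p*)/power(p*) until the subtractive objective is (nearly) zero."""
    lam, p_opt = 0.0, p_max
    for _ in range(max_iter):
        res = minimize_scalar(lambda p: -(rate(p) - lam * power(p)),
                              bounds=(1e-6, p_max), method="bounded")
        p_opt = res.x
        gap = rate(p_opt) - lam * power(p_opt)
        lam = rate(p_opt) / power(p_opt)
        if gap <= tol * rate(p_opt):
            break
    return p_opt, lam

p_star, ee_star = dinkelbach()
print(f"optimal transmit power {p_star:.3f} W, energy efficiency {ee_star / 1e6:.2f} Mbit/J")
```

    At convergence the subtractive objective rate(p) − λ·power(p) is approximately zero, which is the standard optimality condition of Dinkelbach's method for fractional programs.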

    Distributed Cognitive RAT Selection in 5G Heterogeneous Networks: A Machine Learning Approach

    The leading role of the HetNet (Heterogeneous Networks) strategy as the key Radio Access Network (RAN) architecture for future 5G networks poses serious challenges to the cell selection mechanisms currently used in cellular networks. The max-SINR algorithm, although historically effective at performing this most essential networking function, is inefficient at best and obsolete at worst in 5G HetNets. The foreseen embarrassment of riches, together with the diversified propagation characteristics of network attachment points spanning multiple Radio Access Technologies (RATs), requires novel and creative context-aware system designs. The association and routing decisions, in the context of single-RAT or multi-RAT connections, need to be optimized to efficiently exploit the benefits of the architecture. However, the high computational complexity required for multi-parametric optimization of utility functions, the difficulty of modeling and solving Markov Decision Processes, the lack of stability guarantees for Game Theory algorithms, and the rigidity of simpler methods such as Cell Range Expansion and operator policies managed by the Access Network Discovery and Selection Function (ANDSF) make none of these state-of-the-art approaches a clear favorite. This Thesis proposes a framework that relies on Machine Learning techniques at the terminal-device level for Cognitive RAT Selection. The use of cognition allows the terminal device to learn both a multi-parametric state model and effective decision policies, based on the experience of the device itself. This implies that a terminal, after observing its environment during a learning period, may formulate a system characterization and optimize its own association decisions without any external intervention. In our proposal, this is achieved through clustering of appropriately defined feature vectors to build a system state model, supervised classification to obtain the current system state, and reinforcement learning to learn good policies. This Thesis describes the framework in detail and recommends adaptations based on experimentation with the X-means, k-Nearest Neighbors, and Q-learning algorithms, the building blocks of the solution. The network performance of the proposed framework is evaluated in a multi-agent environment implemented in MATLAB, where it is compared with alternative RAT selection mechanisms.
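    As a toy illustration of the reinforcement-learning building block, the sketch below runs tabular Q-learning over a handful of hand-picked load states and candidate RATs. In the thesis the states come from X-means clustering and k-Nearest-Neighbors classification; the state names, RAT list and reward model (noisy normalized QoS values) used here are purely assumptions.

```python
import random
from collections import defaultdict

RATS = ["macro_LTE", "small_cell", "WiFi"]          # candidate attachment points (illustrative)
STATES = ["low_load", "medium_load", "high_load"]   # stand-ins for learned cluster labels

# Toy environment: mean normalized QoS per (state, RAT), unknown to the learning terminal.
MEAN_QOS = {
    ("low_load", "macro_LTE"): 0.6,    ("low_load", "small_cell"): 0.8,    ("low_load", "WiFi"): 0.9,
    ("medium_load", "macro_LTE"): 0.5, ("medium_load", "small_cell"): 0.7, ("medium_load", "WiFi"): 0.6,
    ("high_load", "macro_LTE"): 0.4,   ("high_load", "small_cell"): 0.3,   ("high_load", "WiFi"): 0.2,
}

def step(state, rat):
    """Noisy reward and random next state: a stand-in for the QoS the terminal measures."""
    reward = random.gauss(MEAN_QOS[(state, rat)], 0.05)
    return reward, random.choice(STATES)

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # learning rate, discount factor, exploration rate
Q = defaultdict(float)                  # tabular Q(state, RAT)

state = random.choice(STATES)
for _ in range(20000):
    if random.random() < epsilon:
        rat = random.choice(RATS)                      # explore
    else:
        rat = max(RATS, key=lambda a: Q[(state, a)])   # exploit current policy
    reward, next_state = step(state, rat)
    best_next = max(Q[(next_state, a)] for a in RATS)
    Q[(state, rat)] += alpha * (reward + gamma * best_next - Q[(state, rat)])
    state = next_state

for s in STATES:
    print(s, "->", max(RATS, key=lambda a: Q[(s, a)]))  # learned RAT preference per state
```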

    Channel Access Management for Massive Cellular IoT Applications

    As part of the drive to improve quality of life, many everyday activities as well as technological advancements rely more and more on smart devices. In the future, every electric device is expected to be a smart device that can be connected to the Internet. This gives rise to the new network paradigm known as massive cellular IoT, in which a large number of simple, battery-powered, heterogeneous devices work collectively for the betterment of humanity in all aspects. However, unlike traditional cellular communication networks, IoT applications produce uplink-heavy data traffic composed of a large number of small data packets with different quality of service (QoS) requirements. These unique characteristics challenge the current cellular channel access process and, hence, new and revolutionary access mechanisms are much needed. These access mechanisms need to be cost-effective, scalable, practical, energy and radio resource efficient, and able to support a massive number of devices. Furthermore, because the devices have low computational capabilities, they cannot handle heavy networking intelligence; the designed channel access should therefore be simple and lightweight. Accordingly, in this research, we evaluate the suitability of the current channel access mechanism for massive applications and propose an energy efficient and resource preserving clustering and data aggregation solution tailored to the needs of future IoT applications. First, we recognize that for many anticipated cellular IoT applications, providing energy efficient and delay-aware access is crucial. However, in cellular networks, before devices transmit their data they use a contention-based association protocol, known as the random access channel (RACH) procedure, which introduces extensive access delays and energy wastage as the number of contending devices increases. Modeling the performance of the RACH protocol is challenging due to the complexity of uplink transmission, which exhibits a wide range of interference components; nonetheless, it is an essential step that helps determine the applicability of the cellular IoT communication paradigm and sheds light on the main challenges. Consequently, we develop a novel mathematical framework based on stochastic geometry to evaluate the RACH protocol and identify its limitations in the context of cellular IoT applications with a massive number of devices. To do so, we study the traditional cellular association process and establish a mathematical model for its association success probability. The model accounts for device density, the spatial characteristics of the network, the power control employed, and mutual interference among the devices. Our analysis and results highlight the shortcomings of the RACH protocol and give insight into the potential gains of employing power control techniques. Second, based on the analysis of the RACH procedure, we determine that, as the number of devices increases, contention over the limited network radio resources increases, leading to network congestion. Accordingly, to avoid network congestion while supporting a large number of devices, we propose to use node clustering and data aggregation.
    As the number of supported devices increases and their QoS requirements become more varied, optimizing the node clustering and data aggregation processes becomes critical to handle the many trade-offs that arise among different network performance metrics. Furthermore, for cost-effectiveness, we propose that the data aggregator nodes be cellular devices; it is thus desirable to keep the number of aggregators to a minimum, so as to avoid congesting the RACH, while maximizing the number of successfully supported devices. Consequently, to tackle these issues, we explore combining data aggregation with non-orthogonal multiple access (NOMA) and propose a novel two-hop NOMA-enabled network architecture. Concepts from queuing theory and stochastic geometry are jointly exploited to derive mathematical expressions for different network performance metrics such as coverage probability, two-hop access delay, and the number of served devices per transmission frame. The established models characterize the relations among the various network metrics and hence facilitate the design of the two-stage transmission architecture. Numerical results demonstrate that the proposed solution improves the overall access delay and energy efficiency compared to traditional OMA-based clustered networks. Last, we recognize that, under the proposed two-hop network architecture, devices are subject to access point association decisions: which access point a device associates with plays a major role in determining the overall network performance and the service perceived by the devices. Accordingly, in the third part of the work, we optimize the two-hop network from the point of view of user association such that the number of QoS-satisfied devices is maximized while the overall device energy consumption is minimized. We formulate the problem as a joint access point association, resource utilization, and energy-efficient communication optimization problem that takes into account various networking factors such as the number of devices, the number of data aggregators, the number of available resource units, interference, the transmission power limitations of the devices, aggregator transmission performance, and channel conditions. The objective is to show the usefulness of data aggregation and shed light on the importance of network design when the number of devices is massive. We propose a coalition game theory based algorithm, PAUSE, to transform the optimization problem into a simpler form that can be solved in polynomial time. Different network scenarios are simulated to showcase the effectiveness of PAUSE and to draw observations on cost-effective, data-aggregation-enabled two-hop network design.
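    The thesis's stochastic-geometry framework accounts for device density, power control and mutual interference; the sketch below is a far simpler, collision-only Monte Carlo estimate of RACH success probability (a device succeeds only if no other device in the same cell picked the same preamble), meant solely to illustrate how contention degrades as device density grows. The preamble count and densities are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def rach_success_probability(mean_devices_per_cell, n_preambles=54, n_trials=2000):
    """Monte Carlo estimate of the probability that a random device picks a preamble
    no other device in the cell chose (simplified collision-only RACH model)."""
    successes, attempts = 0, 0
    for _ in range(n_trials):
        n_dev = rng.poisson(mean_devices_per_cell)   # contending devices in this frame
        if n_dev == 0:
            continue
        preambles = rng.integers(0, n_preambles, size=n_dev)
        counts = np.bincount(preambles, minlength=n_preambles)
        successes += int(np.sum(counts == 1))        # devices whose preamble was unique
        attempts += n_dev
    return successes / attempts

for lam in (10, 50, 200, 500):
    print(f"{lam:4d} devices/cell on average -> P(success) ~ {rach_success_probability(lam):.2f}")
```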

    Low-resolution ADC receiver design, MIMO interference cancellation prototyping, and PHY secrecy analysis.

    This dissertation studies three independent research topics in the general field of wireless communications. The first topic focuses on new receiver designs with low-resolution analog-to-digital converters (ADCs). In future massive multiple-input multiple-output (MIMO) systems, multiple high-speed, high-resolution ADCs will become a bottleneck for practical applications because of their hardware complexity and power consumption. One solution to this problem is to adopt low-cost, low-precision ADCs instead. In Chapter II, MU-MIMO-OFDM systems equipped only with low-precision ADCs are considered, and a new turbo receiver structure is proposed to improve overall system performance. Meanwhile, ultra-low-cost communication devices can enable massive deployment of disposable wireless relays. In Chapter III, the feasibility of using a one-bit relay cluster to help a power-constrained transmitter communicate over long distances is investigated, and nonlinear estimators are applied to enable effective decoding. The second topic focuses on the prototyping and verification of an LTE and WiFi coexistence system, where the operation of LTE in unlicensed spectrum (LTE-U) is discussed. LTE-U extends the benefits of LTE and LTE-Advanced to unlicensed spectrum, enabling mobile operators to offload data traffic onto unlicensed frequencies more efficiently and effectively. With LTE-U, operators can offer consumers a more robust and seamless mobile broadband experience with better coverage and higher download speeds. As the coexistence leads to considerable performance instability in both LTE and WiFi transmissions, LTE and WiFi receivers with a MIMO interference canceller are designed and prototyped in Chapter IV to support the coexistence. The third topic focuses on the theoretical analysis of physical-layer secrecy with finite blocklength. Unlike upper-layer security approaches, physical-layer communication security can guarantee information-theoretic secrecy. Current studies on physical-layer secrecy are all based on infinite blocklength; however, these asymptotic studies are unrealistic, and the finite-blocklength effect is crucial for practical secrecy communication. In Chapter V, a practical analysis of secure lattice codes is provided.
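    As a toy illustration of the one-bit relay idea in Chapter III, the sketch below estimates a transmitted amplitude from many independent one-bit (sign-only) observations by inverting the Gaussian CDF. It is not the dissertation's receiver or estimator, only a minimal example of why nonlinear estimation makes heavily quantized measurements useful; the amplitude, noise level and relay counts are assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def one_bit_amplitude_estimate(amplitude, noise_std, n_relays):
    """Estimate a transmitted amplitude from n_relays independent 1-bit observations
    sign(amplitude + noise). The fraction of positive signs approximates
    Phi(amplitude / noise_std), so inverting the Gaussian CDF (a nonlinear estimator)
    recovers the amplitude despite each relay forwarding only one bit."""
    noise = noise_std * rng.normal(size=n_relays)
    signs = np.sign(amplitude + noise)
    p_hat = np.clip(np.mean(signs > 0), 1.0 / n_relays, 1.0 - 1.0 / n_relays)
    return noise_std * norm.ppf(p_hat)

true_amplitude, sigma = 0.3, 1.0
for n in (10, 100, 10_000):
    est = one_bit_amplitude_estimate(true_amplitude, sigma, n)
    print(f"{n:6d} one-bit relays -> estimated amplitude {est:+.3f} (true {true_amplitude:+.3f})")
```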