
    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    Get PDF
    Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes, ranging from very small sensory readings to comparatively large video frames. Such massive amounts of data, as in the case of sensory networks, are also continuously captured at varying rates and increase the load on the network itself, which can hinder transmission efficiency. At the same time, these large volumes open up possibilities to exploit correlations in the transmitted data. Reductions based on such correlations help networks keep up with the new wave of big-data-driven communications by favoring techniques that use the resources of the communication system efficiently. One class of solutions for tackling erroneous data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby eroding the benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. To cope with the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions, deterministic shifting, reduces the overall number of transmitted packets. Moreover, the RaSOR scheme encodes by XORing shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is sensory data known to be highly correlated, where compressed sensing is a potential approach to reduce the overall number of transmissions; in such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as the ones advocated by Industry 4.0. This design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing with network coding.
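    The padding problem and the shifted-XOR alternative are easy to illustrate in a few lines. The sketch below is only a toy illustration under our own assumptions, not the paper's actual schemes: the first routine shows why mixing unequal-sized packets with conventional RLNC over GF(2) pads every coded packet up to the longest length, while the second combines offset packets with plain XORs and no coding coefficients, in the spirit of the RaSOR idea described above. All function names, shifts and packet contents are illustrative.

```python
import random

def rlnc_zero_pad(packets, num_coded, rng):
    """Conventional RLNC over GF(2): every packet is zero-padded to the longest
    length before random mixing, so short packets drag padding into every
    coded packet."""
    max_len = max(len(p) for p in packets)
    padded = [p + bytes(max_len - len(p)) for p in packets]
    coded = []
    for _ in range(num_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]      # random GF(2) coefficients
        mix = bytearray(max_len)
        for c, p in zip(coeffs, padded):
            if c:
                for i, b in enumerate(p):
                    mix[i] ^= b
        coded.append((coeffs, bytes(mix)))
    return coded

def shifted_xor(packets, shifts):
    """Shifted-XOR combining: packets are offset by deterministic shifts and
    XORed together, with no coding coefficients."""
    length = max(len(p) + s for p, s in zip(packets, shifts))
    mix = bytearray(length)
    for p, s in zip(packets, shifts):
        for i, b in enumerate(p):
            mix[s + i] ^= b
    return bytes(mix)

# Toy usage: three unequal-sized packets (a sensor reading, a video frame, an ack).
pkts = [b"sensor", b"a much longer video frame payload", b"ack"]
rlnc = rlnc_zero_pad(pkts, num_coded=2, rng=random.Random(0))
print(len(rlnc[0][1]))                              # every RLNC coded packet = longest length
print(len(shifted_xor(pkts, shifts=[0, 1, 2])))     # single combined packet, modest growth
```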

    Coded Random Access Technique Based on Repetition Codes for Prioritizing Emergency Communication

    Get PDF
    This research uses repetition codes based on Coded Random Access (CRA) to prioritize emergency communications over general Internet of Things (IoT) traffic in super-dense networks. Degree distributions for the emergency group and the general group are obtained with extrinsic information transfer (EXIT) analysis to achieve small error performance, shown by the very small gap between the emergency-group and general-group EXIT curves. The performance is also evaluated by observing the throughput and packet-loss rate (PLR) of each group. In terms of PLR, the offered traffic for emergency-group users is G = 0.7 packet/slot without fading and G = 0.65 packet/slot with fading, while for the general group it is G = 0.699 packet/slot without fading and G = 0.42 packet/slot with fading. The peak throughput for the emergency group is G = 0.737 packet/slot without fading and G = 0.729 packet/slot with fading, and for the general group it is G = 0.699 packet/slot without fading and G = 0.685 packet/slot with fading. The throughput values of the emergency group are higher than those of the general group, indicating that the prioritization of the emergency group is successful.
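    A toy simulation makes the repetition-and-SIC mechanism behind CRA concrete. The sketch below is our own minimal model, not the paper's: it ignores fading and capture, uses fixed repetition degrees (3 for the emergency group, 2 for the general group) instead of the EXIT-optimized degree distributions, and decodes purely by peeling singleton slots.

```python
import random

def simulate_frame(group_sizes, group_degrees, num_slots, rng):
    """One frame of repetition-based coded random access with per-group
    repetition degrees. Decoding is iterative SIC: a slot holding exactly one
    undecoded replica yields a decoded user, whose other replicas are removed."""
    slots = [[] for _ in range(num_slots)]
    user_group = []
    uid = 0
    for g, (size, degree) in enumerate(zip(group_sizes, group_degrees)):
        for _ in range(size):
            for s in rng.sample(range(num_slots), degree):
                slots[s].append(uid)
            user_group.append(g)
            uid += 1
    decoded, progress = set(), True
    while progress:
        progress = False
        for slot in slots:
            remaining = [u for u in slot if u not in decoded]
            if len(remaining) == 1:
                decoded.add(remaining[0])
                progress = True
    per_group = [0] * len(group_sizes)
    for u in decoded:
        per_group[user_group[u]] += 1
    return per_group

rng = random.Random(1)
num_slots, emergency, general = 200, 40, 100
dec = simulate_frame([emergency, general], [3, 2], num_slots, rng)
print(f"emergency PLR: {1 - dec[0] / emergency:.3f}, general PLR: {1 - dec[1] / general:.3f}")
```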

    Robust QUIC: integrating practical coding in a low latency transport protocol

    Get PDF
    We introduce rQUIC, an integration of the QUIC protocol and a coding module. rQUIC has been designed to support different coding/decoding schemes and is implemented in the Go language. We conducted an extensive measurement campaign to provide a thorough characterization of the proposed solution. We compared the performance of rQUIC with that of the original QUIC protocol for different underlying network conditions as well as different traffic patterns. Our results show that rQUIC not only yields a relevant performance gain (shorter delays), especially when network conditions worsen, but also ensures a more predictable behavior. For bulk transfer (long flows), the delay reduction almost reached 70% when the frame error rate was 5%, while under similar conditions the gain for short flows (web navigation) was approximately 55%. In the case of video streaming, the QoE gain (P.1203 metric) was approximately 50%. This work was supported in part by the Basque Government through the Elkartek Program under the Hodei-x Project under Agreement KK-2021/00049; in part by the Spanish Government through the Ministerio de Economía y Competitividad, Fondo Europeo de Desarrollo Regional (FEDER), through the Future Internet Enabled Resilient smart CitiEs (FIERCE) project under Grant RTI2018-093475-AI00; and in part by the Industrial Doctorates Program of the University of Cantabria under Grant Call 2019.
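    To give an idea of what a coding module on top of a transport protocol buys, the sketch below shows the simplest possible per-window XOR repair scheme. This is a hedged illustration of the general mechanism only: rQUIC itself is written in Go and supports several coding/decoding schemes, and none of the function names or the window size below are taken from its API.

```python
def xor_repair(window):
    """Build one XOR repair symbol over a coding window of packets
    (shorter packets are implicitly zero-padded to the longest length)."""
    length = max(len(p) for p in window)
    repair = bytearray(length)
    for p in window:
        for i, b in enumerate(p.ljust(length, b"\x00")):
            repair[i] ^= b
    return bytes(repair)

def recover_single_loss(received, repair):
    """Recover a single lost packet in the window by XORing the repair symbol
    with every packet that did arrive (padding is stripped for this toy)."""
    missing = bytearray(repair)
    for p in received:
        for i, b in enumerate(p.ljust(len(repair), b"\x00")):
            missing[i] ^= b
    return bytes(missing).rstrip(b"\x00")

window = [b"frame-1", b"frame-2", b"frame-3", b"frame-4"]
repair = xor_repair(window)
# If frame-3 is lost, the receiver rebuilds it without waiting for a
# retransmission, which is where the latency gain over pure ARQ comes from.
assert recover_single_loss([window[0], window[1], window[3]], repair) == b"frame-3"
print("recovered:", recover_single_loss([window[0], window[1], window[3]], repair))
```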

    Modern Random Access for Satellite Communications

    Full text link
    The present PhD dissertation focuses on modern random access (RA) techniques. In the first part, a slot- and frame-asynchronous RA scheme adopting replicas, successive interference cancellation and combining techniques is presented and its performance analysed. A comparison of slot-synchronous and asynchronous RA at the higher layers follows. Next, the optimization procedure for slot-synchronous RA with irregular repetitions is extended to the Rayleigh block fading channel. Finally, random access with multiple receivers is considered. Comment: PhD thesis, 196 pages

    Designing Flexible, Energy Efficient and Secure Wireless Solutions for the Internet of Things

    Full text link
    The Internet of Things (IoT) is an emerging concept where ubiquitous physical objects (things), consisting of sensors, transceivers, processing hardware and software, are interconnected via the Internet. The information collected by individual IoT nodes is shared among other, often heterogeneous, devices and over the Internet. This dissertation presents flexible, energy-efficient and secure wireless solutions in the IoT application domain. System and architecture designs are discussed, envisioning a near-future world where wireless communication among heterogeneous IoT devices is seamlessly enabled. Firstly, an energy-autonomous wireless communication system for ultra-small, ultra-low-power IoT platforms is presented. To achieve orders-of-magnitude improvements in energy efficiency, a comprehensive system-level framework that jointly optimizes various system parameters is developed. A new synchronization protocol and modulation schemes are specified for energy-scarce ultra-small IoT nodes. Dynamic link adaptation is proposed to guarantee that the ultra-small node always operates in its most energy-efficient mode for a given operating scenario. The outcome is a truly energy-optimized wireless communication system that enables various new applications such as implanted smart-dust devices. Secondly, a configurable Software Defined Radio (SDR) baseband processor is designed and shown to be an efficient platform on which to execute several IoT wireless standards. It features a custom SIMD execution model coupled with a scalar unit and several architectural optimizations: streaming registers, variable bitwidth, dedicated ALUs, and an optimized reduction network. Voltage scaling and clock gating are employed to further reduce the power, with more than a 100% time margin reserved for reliable operation in the near-threshold region. Two upper-bound systems are evaluated. A comprehensive power/area estimation indicates that the overhead of realizing SDR flexibility is insignificant. The benefit of baseband SDR is quantified and evaluated. To further augment the benefits of a flexible baseband solution and to address the security issues of IoT connectivity, a light-weight Galois Field (GF) processor is proposed. This processor enables both energy-efficient block coding and symmetric/asymmetric cryptography kernel processing for a wide range of GF sizes (2^m, m = 2, 3, ..., 233) and arbitrary irreducible polynomials. Program-directed connections among primitive GF arithmetic units enable dynamically configured parallelism to efficiently perform either four-way SIMD GF operations, including multiplicative inverse, or a long-bit-width GF product in a single cycle. This demonstrates the feasibility of a unified architecture that enables error correction coding flexibility and secure wireless communication in the low-power IoT domain. PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137164/1/yajchen_1.pd
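    The core primitive such a GF processor accelerates is multiplication in GF(2^m) under an arbitrary irreducible polynomial. A minimal software sketch of that arithmetic (carry-less multiplication followed by polynomial reduction, plus an inverse via exponentiation) is shown below purely to illustrate the operations involved; it says nothing about the processor's actual datapath, and the GF(2^8)/AES polynomial in the check is just a familiar test case.

```python
def gf_mul(a, b, m, poly):
    """Multiply a and b in GF(2^m) defined by the irreducible polynomial `poly`
    (given including its x^m term), via carry-less multiplication and reduction."""
    # Carry-less (polynomial) multiplication over GF(2).
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the irreducible polynomial.
    for bit in range(prod.bit_length() - 1, m - 1, -1):
        if prod & (1 << bit):
            prod ^= poly << (bit - m)
    return prod

def gf_inv(a, m, poly):
    """Multiplicative inverse via exponentiation: a^(2^m - 2) in GF(2^m)."""
    result, base, exp = 1, a, (1 << m) - 2
    while exp:
        if exp & 1:
            result = gf_mul(result, base, m, poly)
        base = gf_mul(base, base, m, poly)
        exp >>= 1
    return result

# Familiar check in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B):
# 0x53 and 0xCA are multiplicative inverses of each other.
assert gf_mul(0x53, 0xCA, 8, 0x11B) == 0x01
assert gf_inv(0x53, 8, 0x11B) == 0xCA
```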

    IoT for measurements and measurements for IoT

    Get PDF
    The thesis is framed within the broad strand of the Internet of Things and follows two parallel paths. On one hand, it deals with the identification of operational scenarios in which the IoT paradigm could be innovative and preferable to pre-existing solutions, discussing a couple of applications in detail. On the other hand, it presents methodologies to assess the performance of technologies and related enabling protocols for IoT systems, focusing mainly on metrics and parameters related to the functioning of the physical layer.

    Towards Massive Machine Type Communications in Ultra-Dense Cellular IoT Networks: Current Issues and Machine Learning-Assisted Solutions

    Get PDF
    The ever-increasing number of resource-constrained Machine-Type Communication (MTC) devices is leading to the critical challenge of fulfilling diverse communication requirements in dynamic and ultra-dense wireless environments. Among the different application scenarios that the upcoming 5G and beyond cellular networks are expected to support, such as eMBB, mMTC and URLLC, mMTC brings the unique technical challenge of supporting a huge number of MTC devices, which is the main focus of this paper. The related challenges include QoS provisioning, handling highly dynamic and sporadic MTC traffic, huge signalling overhead and Radio Access Network (RAN) congestion. In this regard, this paper aims to identify and analyze the involved technical issues, to review recent advances, to highlight potential solutions and to propose new research directions. First, starting with an overview of mMTC features and QoS provisioning issues, we present the key enablers for mMTC in cellular networks. Along with highlighting the inefficiency of the legacy Random Access (RA) procedure in the mMTC scenario, we then present the key features and channel access mechanisms of the emerging cellular IoT standards, namely LTE-M and NB-IoT. Subsequently, we present a framework for the performance analysis of transmission scheduling with QoS support, along with the issues involved in short data packet transmission. Next, we provide a detailed overview of existing and emerging solutions for addressing the RAN congestion problem, and then identify potential advantages, challenges and use cases for the application of emerging Machine Learning (ML) techniques in ultra-dense cellular networks. Out of several ML techniques, we focus on the application of a low-complexity Q-learning approach in mMTC scenarios. Finally, we discuss some open research challenges and promising future research directions. Comment: 37 pages, 8 figures, 7 tables, submitted for possible future publication in IEEE Communications Surveys and Tutorials
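    To make the low-complexity Q-learning idea concrete, the sketch below lets each device keep a one-row Q-table over the RA slots of a frame and learn, by trial and error, a slot it can use without colliding. This is a generic toy formulation under our own assumptions (stateless Q-learning, reward +1 for a collision-free transmission and -1 otherwise), not the exact scheme of any work surveyed in the paper.

```python
import random

class QDevice:
    """Stateless Q-learning over random-access slots: one Q-value per slot,
    epsilon-greedy slot choice, reward +1 on success and -1 on collision."""
    def __init__(self, num_slots, alpha=0.1, epsilon=0.1, rng=None):
        self.q = [0.0] * num_slots
        self.alpha, self.epsilon = alpha, epsilon
        self.rng = rng or random.Random()

    def choose(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda s: self.q[s])

    def update(self, slot, reward):
        self.q[slot] += self.alpha * (reward - self.q[slot])

num_slots, num_devices = 20, 20
devices = [QDevice(num_slots, rng=random.Random(i)) for i in range(num_devices)]
for frame in range(500):
    picks = [d.choose() for d in devices]
    for d, s in zip(devices, picks):
        collided = picks.count(s) > 1
        d.update(s, -1.0 if collided else 1.0)
successes = sum(1 for s in picks if picks.count(s) == 1)
print("collision-free devices in final frame:", successes, "of", num_devices)
```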

    Multiple Access for Massive Machine Type Communications

    Get PDF
    The internet we have known thus far has been an internet of people, as it has connected people with one another. However, these connections are forecast to occupy only a minuscule fraction of future communications. The internet of tomorrow is, indeed, the internet of things. The Internet of Things (IoT) promises to improve all aspects of life by connecting everything to everything. An enormous amount of effort is being exerted to turn these visions into a reality. Sensors and actuators will communicate and operate in an automated fashion with no or minimal human intervention. In the current literature, these sensors and actuators are referred to as machines, and the communication amongst them is referred to as Machine-to-Machine (M2M) communication or Machine-Type Communication (MTC). As IoT requires a seamless mode of communication that is available anywhere and anytime, wireless communications will be one of the key enabling technologies for IoT. In existing wireless cellular networks, users with data to transmit first need to request channel access. All access requests are processed by a central unit that in return either grants or denies the access request. Once granted access, users' data transmissions are non-overlapping and interference-free. However, as the number of IoT devices is forecast to be in the order of hundreds of millions, if not billions, in the near future, the access channels of existing cellular networks are predicted to suffer from severe congestion and, thus, incur unpredictable latencies in the system. In random access, on the other hand, users with data to transmit access the channel in an uncoordinated and probabilistic fashion, thus requiring little or no signalling overhead. This reduction in overhead, however, comes at the expense of reliability and efficiency due to the interference caused by contending users. In most existing random access schemes, packets are lost when they experience interference from other packets transmitted over the same resources. Moreover, most existing random access schemes are best-effort schemes with almost no Quality of Service (QoS) guarantees. In this thesis, we investigate the performance of different random access schemes in different settings to resolve the problem of the massive access of IoT devices with diverse QoS guarantees. First, we take a step towards re-designing existing random access protocols to make them more practical and more efficient. For many years, researchers have adopted the collision channel model in random access schemes: a collision is the event of two or more users transmitting over the same time-frequency resources. In the event of a collision, all the involved data is lost, and users need to retransmit their information. In practice, however, data can be recovered even in the presence of interference, provided that the power of the signal is sufficiently larger than the combined power of the noise and the interference. Based on this, we re-define a collision as the event of the interference power exceeding a pre-determined threshold. We propose a new analytical framework, inspired by error control codes on graphs, to compute the probability of packet recovery failure, and we optimize the random access parameters using evolution strategies. Our results show a significant improvement in performance in terms of reliability and efficiency.
Next, we focus on supporting heterogeneous IoT applications and accommodating their diverse latency and reliability requirements in a unified access scheme. We propose a multi-stage approach in which each group of applications transmits in different stages with different probabilities. We propose a new analytical framework to compute the probability of packet recovery failure for each group in each stage, and again optimize the random access parameters using evolution strategies. Our results show that the proposed scheme can outperform the coordinated access schemes of existing cellular networks when the number of users is very large. Finally, we investigate random non-orthogonal multiple access schemes, which are known to achieve higher spectral efficiency and to support higher loads. In our proposed scheme, user detection and channel estimation are carried out via pilot sequences that are transmitted simultaneously with the users' data. Here, a collision is defined as the event of two or more users selecting the same pilot sequence, and all collisions are regarded as interference to the remaining users. We first study the distribution of the interference power and derive its expression. We then use this expression to derive simple yet accurate analytical bounds on the throughput and outage probability of the proposed scheme, considering both joint decoding and successive interference cancellation. We show that the proposed scheme is especially useful in the case of short packet transmission.
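    The threshold-based collision model from the first contribution above is straightforward to simulate. The sketch below is our own toy version with illustrative parameter values (Rayleigh fading with unit mean power, a 3 dB SINR threshold, fixed noise power): a packet in a slot is recovered whenever its SINR exceeds the threshold, so overlapping transmissions are no longer automatically lost.

```python
import random

def frame_outcome(num_users, num_slots, sinr_threshold_db, noise_power, rng):
    """Threshold-based collision model: in each slot, a packet is recovered if
    its SINR (own power over co-slot interference plus noise) exceeds the
    threshold; mere overlap is no longer an automatic loss."""
    threshold = 10 ** (sinr_threshold_db / 10)
    slots = [[] for _ in range(num_slots)]
    for _ in range(num_users):
        # Rayleigh-faded received power (unit mean), uniform random slot choice.
        power = rng.expovariate(1.0)
        slots[rng.randrange(num_slots)].append(power)
    recovered = 0
    for slot in slots:
        total = sum(slot)
        for p in slot:
            interference = total - p
            if p / (interference + noise_power) >= threshold:
                recovered += 1
    return recovered

rng = random.Random(3)
users, slots = 150, 100
rec = frame_outcome(users, slots, sinr_threshold_db=3.0, noise_power=0.1, rng=rng)
print(f"offered load G = {users / slots:.2f}, recovered fraction = {rec / users:.3f}")
```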

    Feasibility study of 5G low-latency packet radio communications without preambles

    Get PDF
    This thesis deals with the feasibility of achieving lower latency for radio communication of short packets, which constitute the major share of traffic in the fifth generation (5G) of cellular systems. We examine the possibility of using turbo synchronization instead of the long preamble that Data-Aided (DA) synchronization requires. The motivation is that short packets are typical of low-latency applications, the overhead of preambles is very significant for short packets, and turbo synchronization allows working with short or even no preambles. Simulations are run for a turbo synchronizer implemented according to the Expectation-Maximization (EM) formulation of the problem. The results show that the implemented turbo synchronizer matches or outperforms the DA synchronizer in terms of reliability, accuracy and acquisition range for carrier-phase synchronization, which suggests that eliminating the preamble from short packets is practical. The only downside is a limit on how small the packet can be for the turbo synchronizer to work effectively: simulations indicate that the number of transmitted symbols should be higher than 128 coded symbols.
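    For carrier-phase recovery, the EM formulation boils down to alternating between soft symbol estimates under the current phase hypothesis (E-step) and re-estimating the phase from those soft symbols (M-step). The sketch below is a bare-bones version of that loop for an uncoded BPSK packet, under our own assumptions; the actual turbo synchronizer in the thesis iterates jointly with the channel decoder, which this toy omits.

```python
import cmath, math, random

def em_phase_estimate(received, noise_var, iterations=10):
    """EM carrier-phase estimation for BPSK without a preamble.
    E-step: soft symbols tanh(2 * Re(r * e^{-j*theta}) / noise_var).
    M-step: theta = arg(sum_k r_k * soft_k)."""
    theta = 0.0
    for _ in range(iterations):
        soft = [math.tanh(2.0 * (r * cmath.exp(-1j * theta)).real / noise_var)
                for r in received]
        theta = cmath.phase(sum(r * s for r, s in zip(received, soft)))
    return theta

rng = random.Random(4)
true_phase, noise_var, n_sym = 0.6, 0.2, 128   # 128 symbols: around the packet-size limit mentioned above
symbols = [rng.choice([-1.0, 1.0]) for _ in range(n_sym)]
rx = [s * cmath.exp(1j * true_phase) +
      complex(rng.gauss(0, math.sqrt(noise_var / 2)), rng.gauss(0, math.sqrt(noise_var / 2)))
      for s in symbols]
print(f"true phase {true_phase:.3f} rad, EM estimate {em_phase_estimate(rx, noise_var):.3f} rad")
```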