
    Performance characterization and transmission schemes for instantly decodable network coding in wireless broadcast

    We consider broadcasting a block of packets to multiple wireless receivers under random packet erasures using instantly decodable network coding (IDNC). The sender first broadcasts each packet uncoded once, then generates coded packets according to receivers' feedback about their missing packets. We focus on strict IDNC (S-IDNC), where each coded packet includes at most one missing packet of every receiver, and we also study its relation to generalized IDNC (G-IDNC), where this condition is relaxed. We characterize two fundamental performance limits of S-IDNC: (1) the number of transmissions needed to complete the broadcast, which measures throughput, and (2) the average packet decoding delay, which measures how fast each packet is decoded at each receiver on average. We derive a closed-form expression for the expected minimum number of transmissions in terms of the number of packets, the number of receivers, and the erasure probability. We prove that it is NP-hard to minimize the average packet decoding delay of S-IDNC. We also prove that the graph models of S- and G-IDNC share the same chromatic number. Next, we design efficient S-IDNC transmission schemes and coding algorithms with full/intermittent receiver feedback. We present simulation results to corroborate the developed theory and compare our schemes with existing ones.
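    As a rough illustration of the S-IDNC constraint described above, the sketch below (a hypothetical greedy selector, not the paper's algorithm) checks that an XOR combination contains at most one missing packet of every receiver and grows such a combination from assumed feedback sets.

```python
# Hypothetical greedy S-IDNC selector, assuming "wants" maps receiver id ->
# set of missing packet indices. Not the paper's algorithm.

def is_strict_idnc(combo, wants):
    """A combination is S-IDNC-valid if every receiver misses at most one of its packets."""
    return all(len(combo & missing) <= 1 for missing in wants.values())

def greedy_sidnc_combination(wants):
    """Greedily add missing packets to the XOR combination while the constraint holds."""
    combo = set()
    all_missing = set().union(*wants.values()) if wants else set()
    for p in sorted(all_missing):
        if is_strict_idnc(combo | {p}, wants):
            combo.add(p)
    return combo  # indices of packets whose XOR forms the next coded transmission

if __name__ == "__main__":
    feedback = {"r1": {0, 3}, "r2": {1, 3}, "r3": {2}}  # hypothetical feedback
    print(greedy_sidnc_combination(feedback))           # -> {0, 1, 2}
```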

    Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications

    From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity have both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of missing packets by exchanging network-coded packets. Although performance improvements generally come at the cost of increased computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.

    Throughput and Delay Optimization of Linear Network Coding in Wireless Broadcast

    Linear network coding (LNC) can achieve the optimal throughput of packet-level wireless broadcast, where a sender wishes to broadcast a set of data packets to a set of receivers within its transmission range through lossy wireless links. The price, however, is a large delay in the recovery of individual data packets due to network decoding, which may undermine the benefits of LNC. Packet decoding delay minimization and its relation to throughput maximization have not been well understood in the network coding literature. Motivated by this fact, in this thesis we present a comprehensive study on the joint optimization of throughput and average packet decoding delay (APDD) for LNC in wireless broadcast. To this end, we reveal the fundamental performance limits of LNC and study the performance of three major classes of LNC techniques: instantly decodable network coding (IDNC), generation-based LNC, and throughput-optimal LNC (including random linear network coding (RLNC)). Various approaches are taken to accomplish the study, including 1) deriving performance bounds, 2) establishing and modelling optimization problems, 3) studying the hardness of the optimization problems and their approximation, and 4) developing new optimal and heuristic techniques that take into account practical concerns such as receiver feedback frequency and computational complexity. Key contributions of this thesis include:
    - a necessary and sufficient condition for LNC to achieve the optimal throughput of wireless broadcast;
    - the NP-hardness of APDD minimization;
    - lower bounds on the expected APDD of LNC under random packet erasures;
    - the APDD-approximation ratio of throughput-optimal LNC, which lies between 4/3 and 2; in particular, the ratio of RLNC is exactly 2;
    - a novel throughput-optimal, APDD-approximating, and implementation-friendly LNC technique;
    - an optimal implementation of strict IDNC that is robust to packet erasures;
    - a novel generation-based LNC technique that generalizes some of the existing LNC techniques and enables tunable throughput-delay tradeoffs.
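    For concreteness, the following sketch shows one way APDD could be measured in a simulation: record, for every (receiver, packet) pair, the transmission index at which that packet becomes decodable, then average over all pairs. The decode_time structure is a hypothetical simulation output, not part of the thesis.

```python
# Minimal APDD measurement sketch. decode_time[r][p] is the (assumed)
# transmission index at which receiver r first decodes packet p.

def average_packet_decoding_delay(decode_time):
    delays = [t for per_rx in decode_time.values() for t in per_rx.values()]
    return sum(delays) / len(delays)

if __name__ == "__main__":
    decode_time = {
        "r1": {0: 1, 1: 2, 2: 4},   # packets decoded one by one
        "r2": {0: 3, 1: 3, 2: 3},   # block decoding: all packets decoded at once
    }
    print(average_packet_decoding_delay(decode_time))  # 2.666...
```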

    Network Coding Channel Virtualization Schemes for Satellite Multicast Communications

    In this paper, we propose two novel schemes to solve the problem of finding a quasi-optimal number of coded packets to multicast to a set of independent wireless receivers experiencing different channel conditions. In particular, we propose two network channel virtualization schemes that allow the set of intended receivers in a multicast group to be represented as a single virtual receiver. Such an approach enables a transmission scheme adapted not only to per-receiver channel variation over time, but also to the network-virtualized channel representing all receivers in the multicast group. The first scheme relies on a maximum erasure criterion, introduced via the creation of a virtual worst-case per-receiver, per-slot reference channel for the network. The second scheme relies on a maximum completion time criterion, using the worst-performing receiver's channel as the virtual reference for the network. We apply these schemes to a GEO satellite scenario and demonstrate their benefits by comparing them with a per-receiver point-to-point adaptive strategy.
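    One possible reading of the two virtualization criteria is sketched below, assuming per-slot Boolean erasure traces as input (the actual channel model in the paper may differ): the first helper marks a slot as erased on the virtual channel whenever any receiver erased it, and the second simply reuses the trace of the worst-performing receiver.

```python
# erasures[r] is an assumed per-slot list where True means receiver r lost
# that slot; both helpers return a virtual reference trace for the group.

def virtual_worst_per_slot(erasures):
    """Max-erasure criterion: a slot is erased if any receiver erased it."""
    return [any(slot) for slot in zip(*erasures.values())]

def worst_receiver_reference(erasures):
    """Max-completion-time criterion: reuse the worst-performing receiver's trace."""
    worst = max(erasures, key=lambda r: sum(erasures[r]))
    return list(erasures[worst])

if __name__ == "__main__":
    traces = {  # hypothetical per-slot erasure patterns
        "r1": [False, True, False, False],
        "r2": [False, False, True, False],
    }
    print(virtual_worst_per_slot(traces))      # [False, True, True, False]
    print(worst_receiver_reference(traces))    # trace of the worse receiver
```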

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    The ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This diversity of communication produces network packets of various sizes, ranging from very small sensory readings to comparatively large video frames. The massive amount of data, as in the case of sensor networks, is also continuously captured at varying rates and increases the load on the network, which can hinder transmission efficiency. At the same time, the sheer number of transmissions opens up possibilities to exploit correlations in the transmitted data. Reductions based on these correlations enable networks to keep up with the new wave of big-data-driven communications by investing in techniques that efficiently utilize the resources of the communication system. One solution to error-prone data transmission employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby eroding the pervasive benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets. Moreover, the RaSOR scheme encodes by XORing shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is sensory data known to be highly correlated, where compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. This design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
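    The padding problem that motivates this work can be made concrete with a toy example (an illustration only, not one of the proposed schemes): conventional RLNC pads every packet in a generation to the longest length, so mixing short sensor readings with large frames wastes many bytes.

```python
# Toy illustration of RLNC zero-padding overhead for unequal packet sizes.

def zero_pad_generation(packets):
    """Pad all packets (bytes) to the longest length with zero bytes."""
    max_len = max(len(p) for p in packets)
    padded = [p + bytes(max_len - len(p)) for p in packets]
    overhead = sum(max_len - len(p) for p in packets)
    return padded, overhead

if __name__ == "__main__":
    generation = [b"\x01\x02", b"\x10" * 1500, b"\xaa\xbb\xcc"]  # mixed sizes
    _, overhead = zero_pad_generation(generation)
    print(f"padding overhead: {overhead} bytes for {len(generation)} packets")  # 2995 bytes
```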

    Design and analysis of network coding schemes for efficient fronthaul offloading of fog-radio access networks

    In the era of the Internet of Things (IoT), everything will be connected. Smart homes and cities, connected cars, smart agriculture, wearable technologies, smart healthcare, and smart sport and fitness are all becoming a reality. However, the current cloud architecture cannot manage the tremendous number of connected devices and skyrocketing data traffic while providing the speeds promised by 5G and beyond. Centralised cloud data centres are physically too far from where the data originate (the edge of the network), inevitably leading to data transmission speeds that are too slow for delay-sensitive applications. Thus, researchers have proposed fog architecture as a solution to the ever-increasing number of connected devices and data traffic. The main idea of fog architecture is to bring content physically closer to end users, thus reducing data transmission times. This thesis considers a type of fog architecture in which smart end devices have storage and processing capabilities and can communicate and collaborate with each other. The major goal of this thesis is to develop methods of efficiently governing communication and collaboration between smart end devices so that their requests to upper network layers are minimised. This is achieved by incorporating principles from graph theory, network coding and machine learning to model the problem and design efficient network-coded scheduling algorithms that further enhance the achieved performance. By maximising end users' self-sufficiency, the load on the system is decreased and its capacity increased. This allows the central processing unit to manage more devices, which is vital given that more than 29 billion devices will connect to the infrastructure by 2023 \cite{Cisco1823}. Specifically, because the limitations of the smart end devices and the system in general lead to various communication conflicts, a novel network coding graph is developed that takes into account all possible conflicts and enables the search for an efficient feasible solution. The thesis designs heuristic algorithms that search for solutions over this network coding graph, investigates the complexity of the proposed algorithms, and shows the asymptotic optimality of the offloading strategy. Although the main aim of this work is to decrease the involvement of upper fog layers in serving smart end devices, it also takes into account how much energy end devices would use during collaborations. Unfortunately, higher system capacity comes at the price of more energy spent by smart end devices; thus, the interests of service providers and end users are conflicting. Finally, this thesis investigates how multihop communication between end devices influences the offloading of upper fog layers. Smart end devices are equipped with machine learning capabilities that allow them to find efficient paths to their peers, further improving offloading. In conclusion, the work in this thesis shows that by smartly designing and scheduling communication between end devices, it is possible to significantly reduce the load on the system, increase its capacity and achieve fast transmissions between end devices, allowing them to run latency-critical applications.
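    As a hedged sketch of what searching a coding/conflict graph might look like (the graph model and weights here are assumptions, not the thesis' construction), the snippet below greedily grows a clique of mutually compatible candidate transmissions, heaviest vertices first.

```python
# adjacency maps a candidate transmission to the set of candidates it does
# NOT conflict with; weight scores candidates (both are assumed inputs).

def greedy_clique(adjacency, weight):
    """Grow a clique greedily, considering heavier candidates first."""
    clique = []
    for v in sorted(adjacency, key=weight, reverse=True):
        if all(v in adjacency[u] for u in clique):  # compatible with all chosen so far
            clique.append(v)
    return clique

if __name__ == "__main__":
    adjacency = {  # hypothetical compatibility graph
        "a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"},
    }
    # score a candidate by how many others it is compatible with
    print(greedy_clique(adjacency, weight=lambda v: len(adjacency[v])))  # ['a', 'c']
```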

    Analysis and optimization of sparse random linear network coding for reliable multicast services

    Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper considers a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the more the user's computational overhead grows and, consequently, the faster the battery of the mobile device drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterise the performance of users targeted by ultra-reliable layered multicast services. The proposed modelling allows us to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that minimises the complexity of the RLNC decoder by jointly optimising the transmission parameters and the sparsity of the code. The designed optimisation framework also ensures service guarantees for predetermined fractions of users. The performance of the proposed optimisation framework is then investigated in an LTE-A eMBMS network multicasting H.264/SVC video services.
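    The quantity being optimised, the number of coded packets a user must collect before decoding, can be estimated with a small Monte Carlo sketch under assumed parameters (GF(2) arithmetic and Bernoulli sparsity, which need not match the paper's model):

```python
import random

# Monte Carlo estimate of the average number of sparse RLNC coded packets a
# receiver must collect to decode k source packets (GF(2), Bernoulli sparsity).

def transmissions_to_decode(k, sparsity, rng):
    pivots = {}   # leading-bit position -> reduced coding vector (bitmask)
    sent = 0
    while len(pivots) < k:
        sent += 1
        v = sum(1 << i for i in range(k) if rng.random() < sparsity)
        while v:  # Gaussian-elimination style insertion into the GF(2) basis
            lead = v.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = v
                break
            v ^= pivots[lead]
    return sent

if __name__ == "__main__":
    rng = random.Random(1)
    k, sparsity, runs = 16, 0.3, 200
    avg = sum(transmissions_to_decode(k, sparsity, rng) for _ in range(runs)) / runs
    print(f"average coded transmissions to decode {k} packets: {avg:.2f}")
```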

    Completion Delay of Random Linear Network Coding in Full-Duplex Relay Networks

    As next-generation wireless networks evolve, full-duplex and relaying techniques are being combined to improve network performance. Random linear network coding (RLNC) is another popular technique for enhancing the efficiency and reliability of wireless communications. In this paper, in order to explore the potential of RLNC in full-duplex relay networks, we investigate two fundamental perfect RLNC schemes and theoretically analyze their completion delay performance. The first scheme is a straightforward application of conventional perfect RLNC as studied in wireless broadcast, so it involves no additional processing at the relay. Its performance serves as an upper bound among all perfect RLNC schemes. The other scheme allows a sufficiently large buffer and unconstrained linear coding at the relay. It attains the optimal performance and serves as a lower bound among all RLNC schemes. For both schemes, closed-form formulae are derived that characterize the expected completion delay at a single receiver as well as for the whole system. Numerical results are presented to corroborate the theoretical characterizations and to compare the two new schemes with the existing one.
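    As a back-of-the-envelope companion to the analysis above, the sketch below simulates the baseline broadcast case (no relay, an assumption for illustration only): with perfect RLNC every successfully received packet is innovative, so the system completion delay is the number of transmissions until the slowest receiver has caught k packets.

```python
import random

# Simulated completion delay of perfect RLNC broadcast: every packet a
# receiver catches is innovative, so receiver i finishes after k successful
# receptions; the system finishes when the slowest receiver does.

def completion_delay(k, erasure_probs, rng):
    received = [0] * len(erasure_probs)
    t = 0
    while min(received) < k:
        t += 1
        for i, eps in enumerate(erasure_probs):
            if received[i] < k and rng.random() > eps:
                received[i] += 1
    return t

if __name__ == "__main__":
    rng = random.Random(7)
    runs = 1000
    avg = sum(completion_delay(32, [0.1, 0.2, 0.3], rng) for _ in range(runs)) / runs
    print(f"average system completion delay: {avg:.2f} transmissions")
```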

    Optimized protection of streaming media authenticity

    Ph.D. thesis (Doctor of Philosophy).