
    On modelling network coded ARQ-based channels

    Network coding (NC) has been an attractive research topic in recent years as a means of improving throughput, especially in multicast scenarios. The throughput gain is achieved by introducing an algebraic method for combining, at an intermediate node, multiple input streams of packets addressed to the same output port. We present a practical implementation of network coding in conjunction with error control schemes, namely the Stop-and-Wait (SW) and Selective Repeat (SR) protocols. We propose a modified NC scheme and apply it at an intermediate SW ARQ-based link to reduce the ARQ control signalling at each transmission. We further extend this work to investigate the usefulness of NC in the Butterfly multicast network, which adopts the SR ARQ protocol as its error control scheme. We validate our throughput analysis using the discrete-event simulator SimEvents®. The results show that the proposed scheme offers a throughput advantage of at least 50% over traditional SW ARQ, and that this is particularly noticeable in the presence of high error rates. In the multicast network, however, simulation results show that, compared with the traditional scheme, NC-SR ARQ can achieve a throughput gain of between 2% and 96% in a low-bandwidth channel and of up to 19% in a high-bandwidth channel with errors.
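    To make the combining step concrete, here is a minimal sketch of XOR-based packet combining at an intermediate node, in the spirit of the scheme described above. The packet contents, the two-source setup, and the assumption that each sink already holds one of the original packets as side information are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of XOR-based network coding at an intermediate node.
# The packet contents and the assumption that each sink already holds one
# of the originals (butterfly-style side information) are illustrative.

def xor_combine(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length packets (the coded packet)."""
    assert len(pkt_a) == len(pkt_b), "pad packets to a common length first"
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

# Intermediate node: instead of forwarding pkt_a and pkt_b separately
# (two transmissions, two ACK exchanges), it forwards one coded packet.
pkt_a = b"HELLO_FROM_A"
pkt_b = b"HELLO_FROM_B"
coded = xor_combine(pkt_a, pkt_b)

# Sink 1 already has pkt_a and recovers pkt_b; sink 2 does the reverse.
recovered_b = xor_combine(coded, pkt_a)
recovered_a = xor_combine(coded, pkt_b)

assert recovered_a == pkt_a and recovered_b == pkt_b
print("both sinks decoded; one coded transmission replaced two")
```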

    Network Coding Over SATCOM: Lessons Learned

    Satellite networks present unique challenges that can restrict users' quality of service. For example, high packet erasure rates and large latencies can cause significant disruptions to applications such as video streaming or voice-over-IP. Network coding is one promising technique that has been shown to help improve performance, especially in these environments. However, implementing any form of network code can be challenging. This paper will use an example of a generation-based network code and a sliding-window network code to highlight the benefits and drawbacks of using one over the other. In-order packet delivery delay, as well as network efficiency, will be used as metrics to differentiate between the two approaches. Furthermore, lessons learned during the course of our research will be provided to help the reader understand when and where network coding provides its benefits. Comment: Accepted to WiSATS 201
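    As a rough illustration of the distinction discussed above, the sketch below contrasts which source packets a generation-based code and a sliding-window code may combine at any point in time. The generation size, window size and bookkeeping are illustrative assumptions, not the codes evaluated in the paper.

```python
# Hedged sketch contrasting the coding windows of a generation-based code
# and a sliding-window code. GEN_SIZE and WINDOW are arbitrary illustrative
# values, not parameters from the paper.

GEN_SIZE = 4   # generation-based: packets are grouped in blocks of 4
WINDOW = 4     # sliding-window: combine the last 4 packets seen so far

def generation_window(next_pkt_index: int) -> range:
    """Packets eligible for coding when packet `next_pkt_index` arrives,
    under a block (generation) code: only packets of the current generation."""
    start = (next_pkt_index // GEN_SIZE) * GEN_SIZE
    return range(start, next_pkt_index + 1)

def sliding_window(next_pkt_index: int) -> range:
    """Under a sliding-window code: the most recent WINDOW packets,
    regardless of block boundaries."""
    return range(max(0, next_pkt_index - WINDOW + 1), next_pkt_index + 1)

for i in range(8):
    print(i, list(generation_window(i)), list(sliding_window(i)))
# A coded repair packet is a (random) linear combination over the listed set;
# the sliding window lets repair information cover fresh packets immediately,
# which is what drives the difference in in-order delivery delay.
```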

    Efficient ARQ retransmission schemes for two-way relay networks.

    In this paper, we investigate different practical automatic repeat request (ARQ) retransmission protocols for two-way wireless relay networks based on network coding (NC). The idea of NC is applied to increase the achievable throughput for the exchange of information between two terminals through one relay. Using NC, throughput efficiency is significantly improved due to the reduction in the number of retransmissions. In particular, two improved NC-based ARQ schemes are designed based on the Go-Back-N and Selective Repeat (SR) protocols. An analysis of throughput efficiency is then carried out to find the best retransmission strategy for different scenarios. It is shown that combining the improved NC-based SR ARQ scheme in the broadcast phase with the traditional SR ARQ scheme in the multiple-access phase achieves the highest throughput efficiency compared to the other combinations of ARQ schemes. Finally, simulation results are provided to verify the theoretical analysis.
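    The core NC step in the broadcast phase can be pictured with a short sketch: the relay XORs the two terminals' packets and broadcasts a single coded packet, which each terminal decodes using its own transmission as side information. The packet contents and the error-free broadcast are simplifications for illustration, not the schemes analysed in the paper.

```python
# Minimal sketch of the two-way relay exchange with XOR network coding.
# Packet contents and the error-free channel are illustrative simplifications.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Multiple-access phase: both terminals send their packets to the relay.
x_a = b"packet-from-A!"
x_b = b"packet-from-B!"

# Broadcast phase: the relay sends a single coded packet instead of
# forwarding x_a and x_b in two separate slots.
coded = xor_bytes(x_a, x_b)

# Each terminal cancels its own packet to recover the other one, so one
# (re)transmission from the relay can satisfy both terminals at once.
assert xor_bytes(coded, x_a) == x_b   # terminal A recovers B's packet
assert xor_bytes(coded, x_b) == x_a   # terminal B recovers A's packet
```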

    On Fault Resilient Network-on-Chip for Many Core Systems

    Rapid scaling of transistor gate sizes has increased the density of on-chip integration and paved the way for heterogeneous many-core systems-on-chip, significantly improving the speed of on-chip processing. The design of the interconnection network of these complex systems is a challenging one, and the network-on-chip (NoC) is now the accepted scalable and bandwidth-efficient interconnect for multi-processor systems-on-chip (MPSoCs). However, the performance enhancements of technology scaling come at the cost of reliability, as on-chip components, particularly the network-on-chip, become increasingly prone to faults. In this thesis, we focus on approaches to deal with the errors caused by such faults. The results of these approaches are obtained not only via time-consuming cycle-accurate simulations but also through analytical models, allowing for faster yet accurate evaluations, especially for larger networks.

    Redundancy is the general approach to dealing with faults, and its form varies according to the type of fault. For the NoC, faults are classified as transient, intermittent or permanent. Transient faults appear randomly for a few cycles and may be caused by particle radiation. Intermittent faults are similar to transient faults but differ in that they occur repeatedly at the same location, eventually leading to a permanent fault. Permanent faults, by definition, are caused by wires and transistors being permanently short or open. Generally, spatial redundancy, i.e. the use of redundant components, is used for dealing with permanent faults. Temporal redundancy deals with failures by re-execution or by retransmission of data, while information redundancy adds redundant information to the data packets, allowing for error detection and correction. Temporal and information redundancy methods are useful when dealing with transient and intermittent faults.

    In this dissertation, we begin with permanent faults in the NoC in the form of faulty links and routers. Our approach to spatial redundancy adds redundant links in the diagonal direction to the standard rectangular mesh topology, resulting in hexagonal and octagonal NoCs. In addition to redundant links, adaptive routing must be used to bypass faulty components. We develop novel fault-tolerant, deadlock-free adaptive routing algorithms for these topologies based on the turn model, without the use of virtual channels. Our results show that the hexagonal and octagonal NoCs can tolerate all 2-router and 3-router faults, respectively, while the mesh has been shown to tolerate all 1-router faults. To simplify the restricted-turn selection process for achieving deadlock freedom, we devise an approach based on the channel dependency matrix instead of the state-of-the-art method of Duato, which inspects the channel dependency graph for cycles. The approach is general and can be used for the turn selection process in any regular topology. We further use algebraic manipulations of the channel dependency matrix to analytically assess the fault resilience of the adaptive routing algorithms when affected by permanent faults. We present and validate this method for the 2D mesh and hexagonal NoC topologies, achieving very high accuracy with a maximum error of 1%. The approach is very general and allows for faster evaluations compared to the generally used cycle-accurate simulations. In comparison, existing works usually assume a limited number of faults in order to be able to analytically assess network reliability. We apply the approach to evaluate the fault resilience of larger NoCs, demonstrating its usefulness especially compared to cycle-accurate simulations.

    Finally, we concentrate on temporal and information redundancy techniques to deal with transient and intermittent faults in the router, which result in the dropping and hence loss of packets. Temporal redundancy is applied in the form of ARQ and retransmission of lost packets. Information redundancy is applied by generating and transmitting redundant linear combinations of packets, known as random linear network coding. We develop an analytic model for flexible evaluation of these approaches, determining network performance parameters such as residual error rates and the increased network load. The analytic model allows us to evaluate larger NoCs and different topologies and to investigate the advantage of network coding compared to uncoded transmissions. We further extend the work with a brief insight into the problem of secure communication over the NoC. Assuming large heterogeneous MPSoCs with components from third parties, the communication is subject to active attacks in the form of packet modification and packet drops in the NoC routers. Devising approaches to resolve these issues, we again formulate analytic models for their flexible and accurate evaluation, with a maximum estimation error of 7%.
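    The channel-dependency-matrix idea mentioned above can be illustrated with a small sketch: a set of turn restrictions is deadlock-free when the channel dependency graph is acyclic, which can be tested algebraically by checking that no power of the dependency matrix has a nonzero trace. The tiny three-channel example below is invented for illustration and is not a dependency matrix from the thesis.

```python
# Hedged sketch: acyclicity test on a channel dependency matrix.
import numpy as np

def has_cycle(dep: np.ndarray) -> bool:
    """dep[i, j] = 1 if a packet holding channel i may next request channel j.
    The dependency graph has a cycle iff some power A^k (k <= n) has a
    nonzero trace, i.e. some channel can reach itself."""
    a = (dep > 0).astype(np.int64)
    power = a.copy()
    for _ in range(a.shape[0]):
        if np.trace(power) > 0:
            return True
        power = np.clip(power @ a, 0, 1)   # next power, kept 0/1 to avoid overflow
    return False

# Three channels: c0 -> c1 -> c2 is acyclic; adding c2 -> c0 closes a cycle.
acyclic = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [0, 0, 0]])
cyclic = acyclic.copy()
cyclic[2, 0] = 1

print(has_cycle(acyclic))   # False -> these turn restrictions are deadlock-free
print(has_cycle(cyclic))    # True  -> this dependency set can deadlock
```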

    Network coding for computer networking

    Conventional communication networks route data packets in a store-and-forward mode: a router buffers received packets and forwards them intact towards their intended destination. Network Coding (NC), however, generalises this method by allowing the router to perform algebraic operations on the packets before forwarding them. The purpose of NC is to improve network performance towards its maximum capacity, also known as the max-flow min-cut bound. NC has become very well established in the field of information theory; however, its practical implementation in real-world networks is yet to be fully explored. In this thesis, new implementations of NC are brought forward. The effect of NC on flow and error control protocols and on queuing over computer networks is investigated by designing a mathematical and simulation framework. One goal of this investigation is to understand how NC can reduce the number of packets required to acknowledge the reception of those sent over the network when error-control schemes are employed. Another goal is to control queuing stability by reducing the number of packets required to convey a given set of information. A custom-built simulator based on SimEvents® has been developed to model several scenarios within this approach.

    The work in this thesis is divided into two key parts. The objective of the first part is to study the performance of communication networks employing error control protocols when NC is adopted. In particular, two main Automatic Repeat reQuest (ARQ) schemes are invoked, namely Stop-and-Wait (SW) and Selective Repeat (SR) ARQ. Results show that in unicast point-to-point communication, the proposed NC scheme offers a throughput increase over traditional SW ARQ of between 2.5% and 50.5% at each link, with negligible decoding delay. Additionally, in a Butterfly network, SR ARQ employing NC achieves a throughput gain of between 22% and 44% over traditional SR ARQ when the number of incoming links to the intermediate node varies between 2 and 5. Moreover, in an extended Butterfly network, NC offers a throughput increase of up to 48% in an error-free scenario and 50% in the presence of errors.

    Despite the extensive research on synchronous NC performance in various fields, little has been said about its queuing behaviour. One assumption is that packets are served according to a Poisson process. The packets from different streams are coded prior to being served and then exit through only one stream. This study determines the arrival distribution that coded packets follow at the serving node, which leads, in general, to the study of queuing systems of type G/M/1. Hence, the objective of the second part of this study is twofold: to determine the distribution of the coded packets and to estimate the waiting time coded packets face before being fully served. Results show that NC brings a new solution for queuing stability, as evidenced by the small waiting time coded packets spend in the intermediate node's queue before being served. This work is further enhanced by studying server utilization in the traditional routing and NC scenarios. An NC-based M/M/1 queue with finite capacity K is also analysed to investigate the packet loss probability for both scenarios. Based on the results achieved, the use of NC in error-prone and long-propagation-delay networks is recommended. Additionally, since the work provides an insightful prediction of particular networks' queuing behaviour, employing synchronous NC can bring a solution for system stability with packet-controlled sources and limited input buffers.
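    For the finite-capacity case mentioned above, the standard M/M/1/K blocking probability gives a feel for the packet-loss comparison. Treating network coding as roughly halving the packet arrival rate at the intermediate node (two incoming packets combined into one coded packet) is an illustrative assumption here, not a result taken from the thesis.

```python
# Hedged sketch: packet-loss probability of an M/M/1/K queue, used to compare
# a traditional store-and-forward node with an NC node. Halving the arrival
# rate under coding is an illustrative assumption.

def mm1k_blocking(lam: float, mu: float, K: int) -> float:
    """Probability that an arriving packet finds the system (capacity K) full."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1)
    return (1 - rho) * rho**K / (1 - rho**(K + 1))

lam, mu, K = 0.9, 1.0, 10                        # arrival rate, service rate, capacity
p_loss_plain = mm1k_blocking(lam, mu, K)         # store-and-forward routing
p_loss_coded = mm1k_blocking(lam / 2, mu, K)     # coded: fewer packets queued
print(f"loss without NC: {p_loss_plain:.4f}, with NC: {p_loss_coded:.6f}")
```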

    Collaborative HARQ Schemes for Cooperative Diversity Communications in Wireless Networks

    Wireless technology is experiencing spectacular developments, driven by the emergence of interactive and digital multimedia applications as well as rapid advances in highly integrated systems. For next-generation mobile communication systems, one can expect wireless connectivity between any devices, at any time and anywhere, with a range of multimedia content. A key requirement in such systems is the availability of high-speed and robust communication links. Unfortunately, communication over wireless channels inherently suffers from a number of fundamental physical limitations, such as multipath fading, scarce radio spectrum, and limited battery power for mobile devices. Cooperative diversity (CD) technology is a promising solution for future wireless communication systems to achieve broader coverage and to mitigate wireless channel impairments without the need for high transmit power.

    In general, cooperative relaying systems have a source node multicasting a message to a number of cooperative relays, which in turn resend a processed version of the message to an intended destination node. The destination node combines the signals received from the relays, taking into account the source's original signal, to decode the message. CD communication systems exploit two fundamental features of the wireless medium: its broadcast nature and its ability to achieve diversity through independent channels. A variety of relaying protocols have been considered and utilized in cooperative wireless networks. Amplify-and-forward (AAF) and decode-and-forward (DAF) are two popular protocols frequently used in cooperative systems. In the AAF mode, the relay amplifies the received signal prior to retransmission. In the DAF mode, the relay fully decodes the received signal, re-encodes it and forwards it to the destination. Because it retransmits without decoding, AAF has the shortcoming that noise accumulated in the received signal is amplified along with it. DAF suffers from decoding errors that can lead to severe error propagation. To further enhance the quality of service (QoS) of CD communication systems, hybrid Automatic Repeat-reQuest (HARQ) protocols have been proposed: if the destination requires an ARQ retransmission, it can come from one of the relays rather than from the source node.

    This thesis proposes an improved HARQ scheme with an adaptive relaying protocol (ARP). Focusing on HARQ as the central theme, we start by introducing the concept of the ARP. We then use it as the basis for designing three types of HARQ schemes, denoted HARQ I-ARP, HARQ II-ARP and HARQ III-ARP. We describe the relaying protocols (both AAF and DAF) and their operation, including channel access between the source and relay, the feedback scheme, and the combining methods at the receivers. To investigate the benefits of the proposed HARQ scheme, we analyze its frame error rate (FER) and throughput performance over a quasi-static fading channel. We compare these with two reference methods: HARQ with AAF (HARQ-AAF) and HARQ with perfect distributed turbo codes (HARQ-perfect DTC), for which correct decoding is always assumed at the relay. It is shown that the proposed HARQ-ARP scheme always performs better than the HARQ-AAF scheme. As the signal-to-noise ratio (SNR) of the channel between the source and relay increases, the performance of the proposed HARQ-ARP scheme approaches that of the HARQ-perfect DTC scheme.
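    The adaptive relaying idea can be sketched as a per-round decision at the relay: forward a cleanly re-encoded packet when its own link permitted correct decoding (DAF), otherwise forward an amplified copy (AAF). The BPSK signalling, the SNR threshold and the fixed amplifier gain below are illustrative placeholders, not parameters from the thesis.

```python
# Conceptual sketch of an adaptive relaying decision for one HARQ round.
# Threshold, gain and signalling are assumed values for illustration only.
import random

THRESHOLD_DB = 5.0   # assumed: relay decoding succeeds above this SNR

def relay_forward(observed, source_relay_snr_db):
    """Choose the relaying mode adaptively for one HARQ retransmission."""
    if source_relay_snr_db >= THRESHOLD_DB:
        # DAF: hard-decide each BPSK symbol and retransmit a clean copy.
        return "DAF", [1.0 if s >= 0 else -1.0 for s in observed]
    # AAF: retransmit a scaled copy; the relay noise is scaled with it.
    gain = 2.0
    return "AAF", [gain * s for s in observed]

# One round: the destination NACKs, so the retransmission comes from the
# relay rather than the source, and the destination combines both copies.
tx = [1.0, -1.0, 1.0, -1.0]                       # BPSK codeword symbols
noisy_at_relay = [s + random.gauss(0, 0.3) for s in tx]
mode, fwd = relay_forward(noisy_at_relay, source_relay_snr_db=8.0)
print(mode, [round(v, 2) for v in fwd])
```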

    Recent Trends and Considerations for High Speed Data in Chips and System Interconnects

    This paper discusses key issues related to the design of chip architectures with large processing volumes and of high-speed system interconnects. Design methodologies and techniques are reviewed, and recent trends and considerations are highlighted.