30 research outputs found

    On the use of erasure codes in unreliable data networks

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 62-64).
    Modern data networks are approaching the state where a large number of independent and heterogeneous paths are available between a source node and a destination node. In this work, we explore the case where each path has an independent level of reliability characterized by a probability of path failure. Instead of simply repeating the message across all the paths, we use the path diversity to achieve reliable transmission of messages with a coding technique known as an erasure correcting code. We develop a model of the network and present an analysis of the system that invokes the Central Limit Theorem to approximate the total number of bits received from all the paths. We then optimize the number of bits to send over each path in order to maximize the probability of receiving enough bits at the destination to reconstruct the message using the erasure correcting code. Three cases are investigated: when the paths are very reliable, when the paths are very unreliable, and when the paths have a probability of failure within an interval around 0.5. We present an overview of the mechanics of an erasure coding process applicable to packet-based transactions. Finally, as avenues for further research, we discuss several applications of erasure coding in networks that have only a single path between source and destination: latency reduction in interactive web sessions; a transport layer for critical messaging; and an application-layer protocol for high-bandwidth networks.
    by Salil Parikh. S.M.
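    The bit-allocation problem in this abstract can be sketched numerically. Below is a minimal sketch that approximates the probability of receiving enough bits via the Central Limit Theorem, assuming an all-or-nothing path model (each path delivers its whole allocation or nothing); the function name and the model details are illustrative, not the thesis's exact formulation.

```python
import math

def success_probability(bits, fail_probs, message_bits):
    """Normal (CLT) approximation of P(total received bits >= message_bits).

    Each path i independently delivers bits[i] bits with probability
    1 - fail_probs[i], or nothing if the path fails.
    """
    mean = sum(b * (1 - p) for b, p in zip(bits, fail_probs))
    var = sum((b ** 2) * p * (1 - p) for b, p in zip(bits, fail_probs))
    if var == 0:
        return 1.0 if mean >= message_bits else 0.0
    z = (mean - message_bits) / math.sqrt(var)
    # Standard normal CDF expressed through the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))
```

    With this in hand, one can compare allocations: sending 300 bits over each of four paths with failure probability 0.1 yields a higher estimated success probability for a 1000-bit message than sending only 260 bits per path.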

    Utilization of forward error correction (FEC) techniques with extensible markup language (XML) schema-based binary compression (XSBC) technology

    To integrate current open-source, open-standard Java programming technology into the building blocks of the US Navy's ForceNet, stove-piped systems must first be made extensible to other pertinent applications; adopting extensible, cross-platform open technologies can then begin to bridge the gaps between old and new weapons systems. A real-time battle-space picture, with as much or as little detail as needed, is now a vital requirement, and access to this information via wireless laptop technology is available today. Transmission of data to increase the resolution of that battle-space snapshot will invariably pass through noisy links, such as those found in the shallow-water littoral regions where Autonomous Underwater and Unmanned Underwater Vehicles (AUVs/UUVs) gather intelligence for the sea warrior who needs it. The battle-space picture built from data transmitted within these noisy and unpredictable acoustic regions demands efficiency and reliability features that are abstracted from the user. To realize this efficiency, Extensible Markup Language (XML) Schema-based Binary Compression (XSBC), in combination with Vandermonde-based Forward Error Correction (FEC) erasure codes, offers efficient streaming of plain-text XML documents in a highly compressed form, together with a data self-healing capability should data be lost during transmission over unpredictable media. Both the XSBC and FEC libraries detailed in this thesis are open-source Java Application Program Interfaces (APIs) that can be readily adapted for extensible, cross-platform applications, adding functional capability to ForceNet that the sea warrior can access on demand, at sea and in real time.
    These features will be presented in the Autonomous Underwater Vehicle (AUV) Workbench (AUVW), a Java-based application that will become a valuable tool for warriors involved with Undersea Warfare (UW).
    http://archive.org/details/utilizationoffor109451247
    Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
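    A Vandermonde-based erasure code of the kind cited above can be illustrated over a small prime field. This is a hedged sketch, not the actual FEC library API: one byte per symbol, arithmetic mod the prime 257, and evaluation points 1..n are all assumptions made for brevity.

```python
P = 257  # prime field just large enough to hold one byte per symbol

def encode(data, n):
    """Encode k data symbols into n coded symbols with a Vandermonde
    matrix over GF(P): coded[j] = sum_i data[i] * (j+1)^i mod P."""
    return [sum(d * pow(j + 1, i, P) for i, d in enumerate(data)) % P
            for j in range(n)]

def decode(received, k):
    """Recover the k data symbols from ANY k (index, value) pairs by
    Gaussian elimination on the matching Vandermonde rows mod P."""
    rows = [[pow(j + 1, i, P) for i in range(k)] + [v]
            for j, v in received[:k]]
    for col in range(k):
        # Find a pivot row and normalise it (the matrix is invertible
        # because the evaluation points are distinct).
        piv = next(r for r in range(col, k) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        inv = pow(rows[col][col], P - 2, P)  # modular inverse via Fermat
        rows[col] = [x * inv % P for x in rows[col]]
        # Eliminate this column from every other row.
        for r in range(k):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]
```

    Any k of the n coded symbols suffice to rebuild the message, which is exactly the self-healing property the abstract describes.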

    Frame-based multiple-description video coding with extended orthogonal filter banks

    We propose a frame-based multiple-description video coder. The analysis filter bank is the extension of an orthogonal filter bank which computes the spatial polyphase components of the original video frames. The output of the filter bank is a set of video sequences that can be compressed with a standard coder. The filter bank design takes into account two important requirements for video coding: that the dual synthesis filter bank is FIR, and that loss recovery does not enhance the quantization error. We give explicit results on the required properties of the redundant channel filter and on the reconstruction error bounds in case of packet losses. We show that the proposed scheme has good robustness to losses and good performance, in terms of both objective and visual quality, when compared to single-description and other multiple-description video coders based on spatial subsampling. PSNR gains of 5 dB or more are typical for packet loss probabilities as low as 5%.
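    The spatial-polyphase splitting underlying this coder can be illustrated in one dimension. This is a simplified two-description sketch: plain even/odd subsampling with neighbour averaging stands in for the paper's extended orthogonal filter banks, and the helper names are assumptions.

```python
def split_polyphase(frame_row):
    """Split one row of pixels into two spatial polyphase components
    (even and odd samples) -- one per description/packet."""
    return frame_row[0::2], frame_row[1::2]

def reconstruct_from_even(even, length):
    """If the odd description is lost, estimate each missing odd sample
    as the average of its surviving even-indexed neighbours."""
    row = [0] * length
    row[0::2] = even
    for i in range(1, length, 2):
        left = row[i - 1]
        right = row[i + 1] if i + 1 < length else left
        row[i] = (left + right) // 2
    return row
```

    When both descriptions arrive, interleaving them restores the row exactly; when one is lost, interpolation gives a lower-quality but complete frame, which is the essence of multiple-description coding.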

    Adaptive delay-constrained internet media transport

    Reliable transport-layer Internet protocols do not satisfy the requirements of packetized, real-time multimedia streams. This thesis motivates and defines predictable reliability as a novel, capacity-approaching transport paradigm that supports an application-specific level of reliability under a strict delay constraint. The paradigm is implemented in a new protocol design -- the Predictably Reliable Real-time Transport protocol (PRRT). To predictably achieve the desired level of reliability, proactive and reactive error control must be optimized under the application's delay constraint. Predictably reliable error control therefore relies on stochastic modeling of the protocol response to the modeled packet-loss behavior of the network path. The result of the joint modeling is periodically evaluated by a reliability-control policy that validates the protocol configuration under the application constraints and under consideration of the available network bandwidth. The adaptation of the protocol parameters is formulated as a combinatorial optimization problem that is solved by a fast search algorithm incorporating explicit knowledge about the search space. Experimental evaluation of PRRT in real Internet scenarios demonstrates that predictably reliable transport meets the strict QoS constraints of high-quality, audio-visual streaming applications.
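    The proactive side of such error control can be sketched as choosing block-code redundancy against a loss model. This is an illustrative stand-in for PRRT's reliability-control search: it assumes i.i.d. packet loss and ignores the delay constraint and bandwidth validation that the real policy enforces.

```python
from math import comb

def residual_loss(n, k, p):
    """Probability that a block is unrecoverable with an (n, k) erasure
    code under i.i.d. loss rate p: more than n - k of n packets lost."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def pick_redundancy(k, p, target, max_parity):
    """Smallest number of parity packets whose residual loss meets the
    target -- a toy version of a reliability-control search."""
    for r in range(max_parity + 1):
        if residual_loss(k + r, k, p) <= target:
            return r
    return None  # target unreachable within the parity budget
```

    For example, protecting 10 data packets against 5% loss down to a residual loss of 10^-3 requires only a handful of parity packets, while pure repetition would cost far more.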

    A random access MAC protocol for MPR satellite networks

    Dissertation submitted for the degree of Master in Electrical and Computer Engineering at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.
    Random access approaches for Low Earth Orbit (LEO) satellite networks are usually incompatible with the Quality of Service (QoS) requirements of multimedia traffic, especially when hand-held devices must operate with very low power. Cross-layered optimization architectures, combined with Multipacket Reception (MPR) schemes, are a good choice to enhance the overall performance of a wireless system. The Hybrid Network-assisted Diversity Multiple Access (H-NDMA) protocol exhibits high energy efficiency with MPR capability, but its use with satellites is limited by the high round-trip time. This protocol was adapted to satellites in Satellite-NDMA, but it required a pre-reservation mechanism that introduces a significant delay. This dissertation proposes a random access protocol that uses H-NDMA for Low Earth Orbit (LEO) satellite networks, named Satellite Random-NDMA (SR-NDMA). The protocol addresses the problems inherent to satellite networks (large round-trip time and significant energy consumption) by defining a hybrid approach with an initial random access plus possible additional scheduled retransmissions. An MPR receiver combines the multiple copies received, gradually reducing the error rate. Analytical performance models are proposed for the throughput, delay, jitter and energy efficiency considering finite queues at the terminals. Energy-efficiency optimization is also addressed, where the system parameters are calculated to guarantee the QoS requirements. The proposed system's performance is evaluated for a Single-Carrier with Frequency Domain Equalization (SC-FDE) receiver. Results show that the proposed system is energy efficient and can provide sufficient QoS to support services such as video telephony.
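    The benefit of combining multiple received copies can be sketched with a simple expected-cost calculation. The cumulative decoding probabilities below are illustrative numbers standing in for the MPR receiver's combining gain, not the dissertation's analytical model.

```python
def expected_transmissions(cum_success, cap):
    """Expected number of transmitted copies until the receiver decodes,
    where cum_success[r] = P(decoded after combining the first r+1
    copies) (nondecreasing). The sender gives up after `cap` copies."""
    exp, prev = 0.0, 0.0
    for r, q in enumerate(cum_success[:cap], start=1):
        exp += r * (q - prev)  # first success exactly at copy r
        prev = q
    return exp + cap * (1 - prev)  # undecodable case still costs cap copies
```

    A receiver whose combining raises the cumulative decoding probability faster needs fewer transmissions on average, which is where the energy savings of SR-NDMA's scheduled retransmissions come from.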

    Packet Loss in Terrestrial Wireless and Hybrid Networks

    The presence of both a geostationary satellite link and a terrestrial local wireless link on the same path of a given network connection is becoming increasingly common, thanks to the popularity of the IEEE 802.11 protocol. The most common situation where a hybrid network comes into play is a Wi-Fi link at the network edge and a satellite link somewhere in the network core. Examples of scenarios where this can happen include ships or airplanes where the on-board Internet connection is provided through a Wi-Fi access point and a geostationary satellite link; a small office located in a remote or isolated area without cabled Internet access; and a rescue team using a mobile ad hoc Wi-Fi network connected to the Internet or to a command centre through a mobile gateway with a satellite link. The serialisation of terrestrial and satellite wireless links is problematic from the point of view of a number of applications, be they based on video streaming, interactive audio or TCP. The reason is the combination of high latency, caused by the geostationary satellite link, and frequent, correlated packet losses caused by the local terrestrial wireless link. In fact, GEO satellites are placed in equatorial orbit at 36,000 km altitude, so the radio signal takes about 250 ms to travel up and down. Satellite systems exhibit low packet loss most of the time, with typical project constraints of a 10^-8 bit error rate 99% of the time, which translates into a packet error rate of 10^-4, except for a few days a year. Wi-Fi links, on the other hand, have quite different characteristics. While the delay introduced by the MAC level is on the order of milliseconds, and is consequently too small to affect most applications, their packet loss characteristics are generally far from negligible.
    In fact, multipath fading, interference and collisions affect most environments, causing correlated packet losses: often more than one packet at a time is lost during a single fading event.
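    Correlated losses of this kind are commonly captured by a two-state Gilbert-Elliott model. A minimal simulator sketch follows; the transition probabilities and loss rates are illustrative, not measurements from the text.

```python
import random

def gilbert_losses(n, p_gb, p_bg, loss_good=0.0, loss_bad=1.0, seed=1):
    """Simulate n packet slots with a two-state Gilbert-Elliott model.

    p_gb = P(good -> bad), p_bg = P(bad -> good). Long stays in the
    "bad" state produce the bursty, correlated losses typical of Wi-Fi,
    unlike the rare independent losses of the satellite segment.
    """
    rng = random.Random(seed)
    bad, out = False, []
    for _ in range(n):
        bad = (rng.random() >= p_bg) if bad else (rng.random() < p_gb)
        out.append(rng.random() < (loss_bad if bad else loss_good))
    return out
```

    With p_gb = 0.01 and p_bg = 0.19 the long-run loss rate is about p_gb / (p_gb + p_bg) = 5%, but the losses cluster in bursts averaging roughly five packets, which is what defeats schemes tuned for independent losses.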

    Efficient and Effective Schemes for Streaming Media Delivery

    The rapid expansion of the Internet and the increasingly wide deployment of wireless networks provide opportunities to deliver streaming media content to users anywhere, anytime. To ensure a good user experience, it is important to combat adverse effects such as delay, loss and jitter. In this thesis, we first study efficient loss-recovery schemes that require only XOR operations. In particular, we propose a novel scheme capable of recovering up to 3 packet losses, with the lowest complexity among all known schemes. We also propose an efficient algorithm for array-code decoding, which achieves significant throughput gains and energy savings over conventional codes. We believe these schemes are applicable to streaming applications, especially in wireless environments. We then study quality-adaptation schemes for client buffer management. Our control-theoretic approach results in an efficient online rate-control algorithm with analytically tractable performance. Extensive experimental results show that three goals are achieved: fast startup, continuous playback in the face of severe congestion, and maximal quality and smoothness over the entire streaming session. The scheme is later extended to streaming with limited quality levels, making it directly applicable to existing systems.
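    The XOR-only recovery idea can be illustrated with its simplest instance: a single parity packet repairing one loss. The thesis's schemes extend this idea to up to three losses; the helper names below are assumptions for the sketch.

```python
def xor_parity(packets):
    """Byte-wise XOR parity over equal-length packets -- the basic
    building block of XOR-only loss-recovery schemes."""
    out = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            out[i] ^= b
    return bytes(out)

def recover(received, parity):
    """Rebuild one missing packet: XOR the parity with all survivors.
    Every surviving packet cancels out, leaving the lost one."""
    return xor_parity(list(received) + [parity])
```

    Because encoding and decoding are pure XOR passes over the data, the per-byte cost is minimal, which is what makes such schemes attractive on energy-constrained wireless receivers.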

    Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission

    Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes, ranging from very small sensory readings to comparatively enormous video frames. This massive amount of data, as in the case of sensor networks, is also continuously captured at varying rates and increases the load on the network itself, which can hinder transmission efficiency. However, the sheer number of transmissions also opens up possibilities to exploit correlations in the transmitted data. Such reductions also enable networks to keep up with the new wave of big-data-driven communications by promoting select techniques that efficiently utilize the resources of the communication system. One solution to the erroneous transmission of data employs linear coding techniques, which are ill-equipped to handle packets of differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome these issues while also reducing decoding delays. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of the packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting, which reduces the overall number of transmitted packets.
    Moreover, the RaSOR scheme employs coding via XOR operations on shifted packets, without the need for coding coefficients, thus achieving linear encoding and decoding complexities. Another facet of IoT applications is sensory data known to be highly correlated, where compressed sensing is a potential approach to reduce the overall number of transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. The design focuses on one-step decoding to reduce the computational complexity and delay of the reconstruction process at the receiver, and investigates the effectiveness of combining compressed sensing and network coding.
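    The padding overhead that motivates macro-symbols can be quantified with a short sketch. The first-fit grouping below is an illustrative stand-in for the macro-symbol idea, not the schemes actually proposed in the thesis.

```python
def rlnc_padding_overhead(sizes):
    """Fraction of transmitted payload that is zero padding when RLNC
    pads every packet in a generation up to the largest size."""
    padded = max(sizes) * len(sizes)
    return (padded - sum(sizes)) / padded

def macro_symbol_overhead(sizes):
    """Overhead after greedily packing packets into macro-symbols no
    larger than the biggest packet (first-fit decreasing)."""
    target, bins = max(sizes), []
    for s in sorted(sizes, reverse=True):
        for b in range(len(bins)):
            if bins[b] + s <= target:
                bins[b] += s
                break
        else:
            bins.append(s)  # open a new macro-symbol
    padded = len(bins) * target
    return (padded - sum(sizes)) / padded
```

    For a generation mixing two 100-byte sensor readings with a 1500-byte video packet, plain RLNC padding wastes over 60% of the transmitted bytes, while grouping the small packets into one macro-symbol cuts that substantially.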

    Investigation of Cooperation Technologies in Heterogeneous Wireless Networks

    Heterogeneous wireless networks based on a variety of radio access technologies (RATs) and standards will coexist in the future. In order to exploit this potential multiaccess gain, different RATs must be managed in a cooperative fashion. This paper proposes two advanced functional architectures supporting interworking between WiMAX and 3GPP networks as a specific case: Radio Control Server (RCS)-based and Access Point (AP)-based centralized architectures. The key technologies supporting the interworking are then investigated, including the proposed Generic Link Layer (GLL) and multiradio resource management (MRRM) mechanisms. The paper elaborates on these topics, and the corresponding solutions are proposed with preliminary results.