19 research outputs found

    An information-theoretic view of network management

    We present an information-theoretic framework for network management for recovery from nonergodic link failures. Building on recent work in the field of network coding, we describe the input-output relations of network nodes in terms of network codes. This very general concept of network behavior as a code provides a way to quantify essential management information as that needed to switch among different codes (behaviors) for different failure scenarios. We compare two types of recovery schemes, receiver-based and network-wide, and consider two formulations for quantifying network management. The first is a centralized formulation, where network behavior is described by an overall code determining the behavior of every node, and the management requirement is taken as the logarithm of the number of such codes that the network may switch among. For this formulation, we give bounds, many of which are tight, on management requirements for various network connection problems in terms of basic parameters such as the number of source processes and the number of links in a minimum source-receiver cut. Our results include a lower bound for arbitrary connections and an upper bound for multitransmitter multicast connections, for linear receiver-based and network-wide recovery from all single link failures. The second is a node-based formulation, where the management requirement is taken as the sum over all nodes of the logarithm of the number of different behaviors for each node. We show that the minimum node-based requirement for failures of links adjacent to a single receiver is achieved with receiver-based schemes.
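The two formulations above reduce to simple logarithm counts. As a toy illustration (the numbers below are hypothetical, not from the paper): switching among N overall codes costs log2(N) bits in the centralized formulation, while the node-based formulation sums the per-node logarithms.

```python
import math

def management_bits(num_codes: int) -> float:
    # Centralized formulation: log of the number of overall network codes
    # the network may switch among.
    return math.log2(num_codes)

def node_based_bits(behaviors_per_node) -> float:
    # Node-based formulation: sum over nodes of the log of the number of
    # distinct behaviors each node may exhibit.
    return sum(math.log2(b) for b in behaviors_per_node)

# Hypothetical example: 8 overall codes -> 3 bits of management information.
print(management_bits(8))            # -> 3.0
# Three nodes with 2, 2, and 4 behaviors -> 1 + 1 + 2 = 4 bits.
print(node_based_bits([2, 2, 4]))    # -> 4.0
```

Note that the node-based total can exceed the centralized requirement, since the centralized count only distinguishes overall codes, not every combination of node behaviors.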

    Network monitoring in multicast networks using network coding

    In this paper, we show how information contained in robust network codes can be used for passive inference of possible locations of link failures or losses in a network. For distributed randomized network coding, we bound the probability of being able to distinguish among a given set of failure events, and give some experimental results for one and two link failures in randomly generated networks. We also bound the required field size and complexity for designing a robust network code that distinguishes among a given set of failure events.
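The inference idea can be sketched on a toy topology (entirely hypothetical, much smaller than anything in the paper): two source symbols are sent over three parallel links, each carrying a linear combination described by a global coding vector. A link failure removes that link's vector from what the receiver observes, so two failure events are distinguishable exactly when they leave different observations.

```python
# Hypothetical toy topology: 2 source symbols over 3 parallel links. In
# randomized coding these global coding vectors are drawn at random over a
# large field; they are fixed here so the example is deterministic.
coding_vectors = [(1, 2), (3, 1), (2, 5)]

def observation(failed_link):
    # The receiver sees only the coding vectors of the surviving links.
    return frozenset(v for i, v in enumerate(coding_vectors) if i != failed_link)

# Two single-link failures are distinguishable iff their observations differ.
obs = [observation(i) for i in range(3)]
print(len(set(obs)) == len(obs))  # -> True: all three failures distinguishable
```

With random coefficients, distinct vectors (and hence distinguishable observations) occur with high probability as the field size grows, which is what the paper's field-size bound quantifies.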

    Capacity of wireless erasure networks

    In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks where all information transmission is packetized and where some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. The capacity under the assumption that erasure locations on all the links of the network are provided to the destinations is obtained. It turns out that the capacity region has a nice max-flow min-cut interpretation. The definition of cut-capacity in these networks incorporates the broadcast property of the wireless medium. It is further shown that linear coding at nodes in the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed.
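The broadcast-aware cut-capacity can be brute-forced on small graphs. In the sketch below (a hypothetical 4-node relay network, assuming independent erasures, though the paper also allows correlated ones), each node on the source side of a cut contributes 1 minus the probability that all of its cut-crossing links erase simultaneously, and the capacity is the minimum over all source-side cuts.

```python
from itertools import chain, combinations
from math import prod

# Hypothetical 4-node relay network: directed edges with erasure probabilities.
eps = {(0, 1): 0.2, (0, 2): 0.2, (1, 3): 0.1, (2, 3): 0.5}
src, dst, nodes = 0, 3, {0, 1, 2, 3}

def cut_capacity(S):
    # Broadcast constraint: node u's contribution across the cut is
    # 1 - P(all of u's crossing links erase at once).
    total = 0.0
    for u in S:
        crossing = [eps[(u, v)] for v in nodes - S if (u, v) in eps]
        if crossing:
            total += 1 - prod(crossing)
    return total

# Enumerate every cut containing the source and excluding the destination.
cuts = [set(S) | {src} for S in chain.from_iterable(
    combinations(nodes - {src, dst}, r) for r in range(3))]
cap = min(cut_capacity(S) for S in cuts)
print(cap)  # -> 0.96, achieved by the cut {0}: 1 - 0.2 * 0.2
```

Here the bottleneck is the source's own broadcast: both outgoing links must erase together for the transmission to be lost, which is why the cut {0} contributes 0.96 rather than 1.6.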

    Optimization Based Rate Control for Multicast with Network Coding

    Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, we address the problem of rate control at end-systems for network-coding-based multicast flows. We develop two adaptive rate control algorithms, for networks with and without given coding subgraphs, respectively. With random network coding, both algorithms can be implemented in a distributed manner, working at the transport layer to adjust source rates and at the network layer to carry out network coding. We prove that the proposed algorithms converge to the globally optimal solutions for intrasession network coding. Some related issues are discussed, and numerical examples are provided to complement our theoretical analysis.
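Distributed rate control of this kind typically follows the dual-decomposition pattern: links maintain prices driven by congestion, and sources choose rates that maximize utility minus the price they pay. The sketch below is a minimal standard network-utility-maximization example (two sessions sharing one unit-capacity link, log utilities), not the paper's exact algorithm.

```python
def rate_control(capacity=1.0, sessions=2, step=0.01, iters=5000):
    # Dual subgradient sketch: q is the link price; each session picks the
    # rate maximizing log(x) - q*x, i.e. x = 1/q, and the price rises when
    # demand exceeds capacity.
    q = 1.0
    for _ in range(iters):
        rates = [1.0 / q] * sessions
        q = max(1e-6, q + step * (sum(rates) - capacity))
    return rates

rates = rate_control()
print(rates)  # converges toward the fair allocation [0.5, 0.5]
```

With intrasession network coding, the per-link constraint for a multicast session is the maximum (not the sum) of the flows to its receivers, which is what distinguishes the paper's formulation from plain unicast NUM.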

    Combined Network and Erasure Coding Proactive Protection for Multiple Source Multicast Sessions

    Due to the increase in the density and multitude of online real-time applications involving group communication that require a certain degree of resiliency, proactive protection of multicast sessions in a network has become prominent. The backbone network providing such a service hence needs to be robust and resilient to failures. In this thesis, we consider the problem of providing fault-tolerant operation for such multicast networks that have multiple sources and need to deliver data to a pre-defined set of destinations. The data is assumed to be delivered in a highly connected environment with many nodes, e.g., a dense wireless network. The advent of network coding has enabled us to look at novel ways of providing proactive protection. Our algorithm combines network and erasure coding to present a scheme that can tolerate a predefined number of failures in the paths from the sources to the destinations. This is accomplished by introducing redundancy in the data sent through the various paths; network coding seeks to optimize resource usage in the process. For sources that cannot meet the constraints of the scheme, protection is provided at the cost of reduced throughput.
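The erasure-coding layer of such a scheme can be sketched with a Reed-Solomon-style construction (this is an illustrative stand-in, not the thesis's exact code): k source packets are expanded into n coded packets sent over disjoint paths, and any k surviving packets recover the data, tolerating n - k path failures. GF(257) is used here only to keep the sketch short; real packet systems use GF(256), since a GF(257) symbol can take the value 256, which does not fit in a byte.

```python
P = 257  # prime field for the sketch

def lagrange_eval(pts, x):
    # Evaluate the unique polynomial through pts = [(xi, yi), ...] at x, mod P.
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num, den = 1, 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

def encode(msg, n):
    # n coded symbols; the first len(msg) equal the message (systematic code).
    pts = list(enumerate(msg))
    return [lagrange_eval(pts, x) for x in range(n)]

def decode(received, k):
    # Any k (index, value) pairs reconstruct the k message symbols.
    return [lagrange_eval(received, x) for x in range(k)]

shares = encode([3, 1, 4], 5)                                  # 3 symbols, 5 paths
survivors = [(1, shares[1]), (3, shares[3]), (4, shares[4])]   # 2 paths failed
print(decode(survivors, 3))                                    # -> [3, 1, 4]
```

In the thesis's setting, network coding then mixes such redundant packets inside the network so that the redundancy is carried efficiently rather than duplicated per path.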


    Edge-Cut Bounds on Network Coding Rates

    Active networks are network architectures with processors that are capable of executing code carried by the packets passing through them. A critical network management concern is the optimization of such networks, and tight bounds on their performance serve as useful design benchmarks. A new bound on communication rates is developed that applies to network coding, a promising active network application in which processors transmit packets that are general functions, for example a bit-wise XOR, of selected received packets. The bound generalizes an edge-cut bound on routing rates by progressively removing edges from the network graph and checking whether certain strengthened d-separation conditions are satisfied. The bound improves on the cut-set bound, and its efficacy is demonstrated by showing that routing is rate-optimal for some commonly cited examples in the networking literature.
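On small unit-capacity graphs, the classical routing edge-cut bound that the paper strengthens can be computed by brute force: find the smallest set of edges whose removal disconnects the source from the sink. The graph below is a made-up example for illustration.

```python
from itertools import combinations

# Hypothetical unit-capacity digraph from source 's' to sink 't'.
edges = [('s', 'a'), ('s', 'b'), ('a', 'c'), ('b', 'c'),
         ('a', 't'), ('c', 't'), ('b', 't')]

def reachable(es, start):
    # Nodes reachable from `start` following directed edges in `es`.
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for (x, y) in es:
            if x == u and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def min_edge_cut(edges, s='s', t='t'):
    # Smallest number of edges whose removal disconnects t from s.
    for r in range(len(edges) + 1):
        for cut in combinations(edges, r):
            if t not in reachable([e for e in edges if e not in cut], s):
                return r
    return len(edges)

print(min_edge_cut(edges))  # -> 2: removing ('s','a') and ('s','b') isolates s
```

The paper's contribution is that, for network coding, simply counting crossing edges is not always tight; the strengthened d-separation check can certify smaller rate bounds than the raw cut-set bound on some graphs.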

    Energy-aware network coding circuit and system design

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 73-78). Network Coding (NC) has been shown to provide several advantages in communication networks in terms of throughput, data robustness, and security. However, its applicability to networks with resource-constrained nodes, like Body Area Networks (BANs), has been questioned due to its complexity requirements. Proposed NC implementations are based on high-end CPUs and GPUs, consuming hundreds of Watts, without providing enough insight into its energy requirements. As more and more mobile devices, sensors, and other low-power systems are used in modern communication protocols, a highly efficient and optimized implementation of NC is required. In this work, an effort is made to bridge NC theory with ultra-low-power applications. For this reason, an energy-scalable, low-power accelerator is designed in order to explore the minimum energy requirements of NC. Based on post-layout simulation results using a TSMC 65nm process, the proposed encoder consumes 22.15 µW at 0.4 V, achieving a processing throughput of 80 MB/s. These numbers reveal that NC can indeed be incorporated into resource-constrained networks with battery-operated or even energy-scavenging nodes. Apart from the hardware design, a new partial packet recovery mechanism based on NC, called PPRNC, is proposed. PPRNC exploits information contained in partial packets, similarly to existing Hybrid-ARQ schemes, but with a PHY-agnostic approach. Minimization of the number of retransmitted packets saves transmission energy and results in higher total network throughput, making PPRNC an attractive candidate for energy-constrained networks, such as BANs, as well as modern, high-speed wireless mesh networks.
The proposed mechanism is analyzed and implemented using commercial development boards, validating its ability to extract information contained in partial packets. By Georgios Angelopoulos, S.M.
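The arithmetic kernel such an NC encoder accelerates is multiplication in GF(2^8), which in software reduces to shift-and-XOR with modular reduction. The sketch below is a reference model of that kernel, not the thesis's hardware design; the reducing polynomial 0x11D is an assumption (implementations vary, e.g. AES uses 0x11B).

```python
def gf_mul(a, b, poly=0x11D):
    # Carry-less "Russian peasant" multiply in GF(2^8), reduced mod poly.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def encode(packets, coeffs):
    # One coded packet: byte-wise GF(2^8) linear combination of the inputs.
    out = bytearray(len(packets[0]))
    for c, pkt in zip(coeffs, packets):
        for i, byte in enumerate(pkt):
            out[i] ^= gf_mul(c, byte)  # XOR is addition in GF(2^8)
    return bytes(out)

coded = encode([b'\x01\x02', b'\x03\x04'], [1, 2])
print(coded.hex())  # -> '070a'
```

A hardware encoder parallelizes exactly this inner loop, which is why energy per coded byte, rather than raw throughput, is the figure of merit for BAN-class nodes.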

    Improving Network Reliability: Analysis, Methodology, and Algorithms

    Get PDF
    The reliability of networking and communication systems is vital for the nation's economy and security. Optical and cellular networks have become a critical infrastructure and are indispensable in emergency situations. This dissertation outlines methods for analyzing such infrastructures in the presence of catastrophic failures, such as a hurricane, as well as accidental failures of one or more components. Additionally, it presents a method for protecting against the loss of a single link in a multicast network, along with a technique that enables wireless clients to efficiently recover lost data sent by their source through collaborative information exchange. Analysis of a network's reliability during a natural disaster can be assessed by simulating the conditions in which it is expected to perform. This dissertation conducts the analysis of a cellular infrastructure in the aftermath of a hurricane through Monte-Carlo sampling and presents alternative topologies which reduce the resulting loss of calls. While previous research on restoration mechanisms for large-scale networks has mostly focused on handling the failures of single network elements, this dissertation examines the sampling methods used for simulating multiple failures. We present a quick method of finding a lower bound on a network's data loss through enumeration of possible cuts, as well as an efficient method of finding a tighter lower bound through genetic algorithms leveraging the niching technique. Mitigation of data losses in a multicast network can be achieved by adding redundancy and employing advanced coding techniques. By using Maximum Rank Distance (MRD) codes at the source, a provider can create a parity packet which is effectively linearly independent from the source packets, such that all packets may be transmitted through the network using the network coding technique. This allows all sinks to recover all of the original data even with the failure of an edge within the network.
    Furthermore, this dissertation presents a method that allows a group of wireless clients to cooperatively recover from erasures (e.g., due to failures) by using index coding techniques.
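The cooperative-recovery idea rests on a classic index-coding observation, sketched below in its simplest form (illustrative only, not the dissertation's scheme): if each client holds all packets except one, a single XOR broadcast lets every client recover its missing packet from its side information.

```python
# Hypothetical data: three one-byte packets; client i is missing packet i
# but holds the other two as side information.
packets = [b'\x10', b'\x22', b'\x35']

# One coded broadcast: the XOR of all packets.
broadcast = bytes(a ^ b ^ c for a, b, c in zip(*packets))

def recover(missing_idx):
    # XOR the broadcast with the packets the client already has; everything
    # cancels except the missing packet.
    have = [p for i, p in enumerate(packets) if i != missing_idx]
    return bytes(x ^ y ^ z for x, y, z in zip(broadcast, *have))

print([recover(i) == packets[i] for i in range(3)])  # -> [True, True, True]
```

One transmission thus replaces three retransmissions; with more general side-information patterns, choosing the fewest coded broadcasts is the index-coding problem proper.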