
    End-to-End Error-Correcting Codes on Networks with Worst-Case Symbol Errors

    The problem of coding for networks experiencing worst-case symbol errors is considered. We argue that this is a reasonable model for highly dynamic wireless network transmissions. We demonstrate that in this setup prior network error-correcting schemes can be arbitrarily far from achieving the optimal network throughput. A new transform metric for errors under the considered model is proposed. Using this metric, we replicate many of the classical results from coding theory. Specifically, we prove new Hamming-type, Plotkin-type, and Elias-Bassalygo-type upper bounds on the network capacity. A commensurate lower bound is shown based on Gilbert-Varshamov-type codes for error correction. The GV codes used to attain the lower bound can be non-coherent, that is, they do not require prior knowledge of the network topology. We also propose a computationally efficient concatenation scheme. The rate achieved by our concatenated codes is characterized by a Zyablov-type lower bound. We provide a generalized minimum-distance decoding algorithm which decodes up to half the minimum distance of the concatenated codes. The end-to-end nature of our design enables our codes to be overlaid on the classical distributed random linear network codes [1]. Furthermore, our design makes the potentially intensive computation for link-by-link error correction at internal nodes unnecessary. Comment: Submitted for publication. arXiv admin note: substantial text overlap with arXiv:1108.239
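
    As context for the bounds listed above: their classical (point-to-point) counterparts relate the best achievable rate $R$ of a code to its relative minimum distance $\delta$. Below is a minimal sketch of the classical q-ary Gilbert-Varshamov lower bound, $R \ge 1 - H_q(\delta)$; the paper's network versions replace Hamming distance with the proposed transform metric, which this sketch does not model.

    ```python
    import math

    def entropy_q(x: float, q: int = 2) -> float:
        """q-ary entropy H_q(x), defined for 0 <= x <= 1 - 1/q."""
        if x == 0:
            return 0.0
        return (x * math.log(q - 1, q)
                - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    def gv_rate(delta: float, q: int = 2) -> float:
        """Classical Gilbert-Varshamov lower bound: R >= 1 - H_q(delta)."""
        return max(0.0, 1.0 - entropy_q(delta, q))

    # Binary codes with relative distance delta (correcting ~delta/2 errors):
    for delta in (0.05, 0.1, 0.2):
        print(f"delta={delta}: some code of rate >= {gv_rate(delta):.3f} exists")
    ```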

    Performance analysis of queueing networks via robust optimization

    Performance analysis of queueing networks is one of the most challenging areas of queueing theory. Barring very specialized models such as product-form queueing networks, there exist very few results that provide provable nonasymptotic upper and lower bounds on key performance measures. In this paper we propose a new performance analysis method based on robust optimization. The basic premise of our approach is as follows: rather than assuming that the stochastic primitives of a queueing model satisfy certain probability laws, such as i.i.d. interarrival and service time distributions, we assume that the underlying primitives are deterministic and satisfy the implications of such probability laws. These implications take the form of simple linear constraints, namely, those motivated by the law of the iterated logarithm (LIL). Using this approach we are able to obtain bounds on some key performance measures. Furthermore, these bounds imply similar bounds in the underlying stochastic queueing models. We demonstrate our approach on two types of queueing networks: (a) a tandem single-class (TSC) queueing network and (b) a multiclass single-server queueing network. In both cases, using the proposed robust optimization approach, we are able to obtain explicit upper bounds on some steady-state performance measures. For example, for the TSC system we obtain a bound of the form $C(1-\rho)^{-1}\ln\ln((1-\rho)^{-1})$ on the expected steady-state sojourn time, where $C$ is an explicit constant and $\rho$ is the bottleneck traffic intensity. This qualitatively agrees with the correct heavy-traffic scaling of this performance measure up to the $\ln\ln((1-\rho)^{-1})$ correction factor. National Science Foundation (U.S.) (Grant DMI-0556106); National Science Foundation (U.S.) (Grant CMMI-0726733)
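
    For concreteness, the probabilistic law behind those linear constraints is the law of the iterated logarithm; a sketch of the deterministic constraint it motivates is below (the paper's exact uncertainty set, including the budget parameter written here as $\Gamma$, may differ).

    ```latex
    % Law of the iterated logarithm for i.i.d. X_1, X_2, \dots
    % with mean \mu and finite variance \sigma^2:
    \limsup_{n \to \infty}
      \frac{\sum_{i=1}^{n} (X_i - \mu)}{\sigma \sqrt{2 n \ln \ln n}} = 1
      \quad \text{almost surely.}
    % Robust-optimization counterpart: model interarrival or service
    % times as deterministic sequences satisfying, for all large n,
    \Bigl| \sum_{i=1}^{n} (X_i - \mu) \Bigr|
      \le \Gamma \sigma \sqrt{2 n \ln \ln n},
    % with \Gamma an adjustable budget (an illustrative choice).
    ```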

    Certainty Closure: Reliable Constraint Reasoning with Incomplete or Erroneous Data

    Constraint Programming (CP) has proved an effective paradigm for modelling and solving difficult combinatorial satisfaction and optimisation problems from disparate domains. Many such problems arising from the commercial world are permeated by data uncertainty. Existing CP approaches that accommodate uncertainty are less suited to uncertainty arising from incomplete and erroneous data, because they do not build reliable models and solutions guaranteed to address the user's genuine problem as she perceives it. Other fields such as reliable computation offer combinations of models and associated methods to handle these types of uncertain data, but lack an expressive framework characterising the resolution methodology independently of the model. We present a unifying framework that extends the CP formalism in both model and solutions to tackle ill-defined combinatorial problems with incomplete or erroneous data. The certainty closure framework brings together modelling and solving methodologies from different fields into the CP paradigm to provide reliable and efficient approaches to uncertain constraint problems. We demonstrate the applicability of the framework on a case study in network diagnosis. We define resolution forms that give generic templates, and their associated operational semantics, to derive practical solution methods for reliable solutions. Comment: Revised version
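
    To make the reliability requirement concrete, here is a toy sketch in the spirit of the framework: under interval-valued data, a solution is retained only if it remains feasible for every realization of the uncertain coefficient. The constraint, names, and brute-force enumeration are all illustrative; they are not the paper's resolution forms.

    ```python
    # Toy "reliable solution" check: keep x only if a*x <= b holds for
    # EVERY realization of the uncertain coefficient a in [a_lo, a_hi].
    # (Illustrative only; the certainty closure framework defines
    # resolution forms over full constraint models, not this one-liner.)

    def reliable_solutions(domain, a_lo, a_hi, b):
        # a*x is linear (hence monotone) in a, so checking the two
        # interval endpoints suffices.
        return [x for x in domain if a_lo * x <= b and a_hi * x <= b]

    # Example: x in 0..10, a known only to lie in [2, 4], constraint a*x <= 12.
    print(reliable_solutions(range(11), 2, 4, 12))  # -> [0, 1, 2, 3]
    ```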

    Network Coding for Error Correction

    In this thesis, network error correction is considered from both theoretical and practical viewpoints. Theoretical parameters such as network structure and type of connection (multicast vs. nonmulticast) have a profound effect on network error-correction capability. This work is also motivated by the practical network issues that arise in wireless ad-hoc networks, networks with limited computational power (e.g., sensor networks), and real-time data streaming systems (e.g., video/audio conferencing or media streaming). Firstly, multicast network scenarios with probabilistic error and erasure occurrence are considered. In particular, it is shown that in networks with both random packet erasures and errors, increasing the relative occurrence of erasures compared to errors favors network coding over forwarding at network nodes, and vice versa. Also, fountain-like error-correcting codes, for which redundancy is incrementally added until decoding succeeds, are constructed. These codes are appropriate for use in scenarios where the upper bound on the number of errors is unknown a priori. Secondly, network error correction in multisource multicast and nonmulticast network scenarios is discussed. Capacity regions for multisource multicast network error correction with both known and unknown topologies (coherent and noncoherent network coding) are derived. Several approaches to lower- and upper-bounding the error-correction capacity regions of general nonmulticast networks are given. For 3-layer two-sink and nested-demand nonmulticast network topologies, some of the given lower and upper bounds match. For these network topologies, code constructions that employ only intrasession coding are designed. These designs can be applied to streaming erasure-correction code constructions.
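
    As a minimal illustration of the fountain-like idea (redundancy added incrementally until decoding succeeds), the sketch below counts how many random GF(2)-coded symbols a receiver needs before their coefficient vectors span all k source packets. It is a toy single-link model, not the thesis's network constructions.

    ```python
    import random

    def symbols_needed(k: int, seed: int = 0) -> int:
        """Draw random GF(2) coefficient vectors (one per coded symbol,
        encoded as k-bit masks) until k of them are linearly independent,
        i.e. until Gaussian elimination could recover all k packets."""
        rng = random.Random(seed)
        basis = {}                    # leading-bit position -> reduced row
        received = 0
        while len(basis) < k:         # rank < k: decoding would still fail,
            received += 1             # ... so request one more coded symbol
            row = rng.getrandbits(k)  # random combination of the k packets
            while row:                # Gaussian elimination over GF(2)
                lead = row.bit_length() - 1
                if lead not in basis:
                    basis[lead] = row # new pivot found: rank grows by one
                    break
                row ^= basis[lead]    # cancel the current leading coefficient
        return received

    # Fountain-like behaviour: slightly more than k symbols usually suffice
    # (for random binary combinations the expected overhead is < 2 symbols).
    print([symbols_needed(16, seed=s) for s in range(5)])
    ```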

    Network error correction with unequal link capacities

    This paper studies the capacity of single-source single-sink noiseless networks under adversarial or arbitrary errors on no more than z edges. Unlike prior work, which assumes equal capacities on all links, arbitrary link capacities are considered. Results include new upper bounds, network error-correction coding strategies, and examples of network families where our bounds are tight. An example is provided of a network where the capacity is 50% greater than the best rate that can be achieved with linear coding. While coding at the source and sink suffices in networks with equal link capacities, in networks with unequal link capacities it is shown that intermediate nodes may have to perform coding, nonlinear error detection, or error correction in order to achieve the network error-correction capacity.
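
    For the equal-capacity baseline that this paper generalizes, the capacity under at most z adversarial edge errors is given by the network Singleton bound of Cai and Yeung, sketched below; the abstract's point is that with unequal link capacities, coding only at the source and sink need not suffice.

    ```latex
    % Network Singleton bound (equal/unit link capacities, min-cut m,
    % at most z adversarially corrupted edges):
    C \;=\; m - 2z .
    % Example: m = 6 and z = 1 give C = 4, achievable with coding only
    % at the source and sink. In the unequal-capacity networks studied
    % here, intermediate nodes may additionally need to code or to
    % detect/correct errors to reach capacity.
    ```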

    Erasure Correction for Noisy Radio Networks

    The radio network model is a well-studied model of wireless, multi-hop networks. However, radio networks make the strong assumption that messages are delivered deterministically. The recently introduced noisy radio network model relaxes this assumption by dropping messages independently at random. In this work we quantify the relative computational power of noisy radio networks and classic radio networks. In particular, given a non-adaptive protocol for a fixed radio network, we show how to reliably simulate this protocol if noise is introduced, with a multiplicative cost of $\mathrm{poly}(\log\Delta, \log\log n)$ rounds, where $n$ is the number of nodes in the network and $\Delta$ is the maximum degree. Moreover, we demonstrate that even if the simulated protocol is not non-adaptive, it can be simulated with a multiplicative $O(\Delta\log^2\Delta)$ cost in the number of rounds. Lastly, we argue that simulations with a multiplicative overhead of $o(\log\Delta)$ are unlikely to exist by proving that an $\Omega(\log\Delta)$ multiplicative round overhead is necessary under certain natural assumptions.
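
    As rough intuition for why such overheads are logarithmic (this is not the paper's simulation protocol, which must also handle collisions): if each transmission is dropped independently with probability p, then r repetitions all fail with probability p^r, so logarithmically many repetitions drive the failure probability below any target eps.

    ```python
    import math

    def repetitions_needed(p: float, eps: float) -> int:
        """Independent drops with probability p: all r copies are lost
        with probability p**r, so p**r <= eps as soon as
        r >= ln(1/eps) / ln(1/p). Illustrative helper only."""
        return math.ceil(math.log(1 / eps) / math.log(1 / p))

    # Target per-message reliability 1 - 1/n**2 with n = 1024 nodes and
    # half of all transmissions dropped; a union bound over n messages
    # then leaves only O(1/n) total failure probability.
    n, p = 1024, 0.5
    print(repetitions_needed(p, eps=1 / n**2))  # -> 20
    ```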