
    On the utility of network coding in dynamic environments

    Many wireless applications, such as ad-hoc networks and sensor networks, require decentralized operation in dynamically varying environments. We consider a distributed randomized network coding approach that enables efficient decentralized operation of multi-source multicast networks. We show that this approach provides substantial benefits over traditional routing methods in dynamically varying environments. We present a set of empirical trials measuring the performance of network coding versus an approximate online Steiner tree routing approach when connections vary dynamically. The results show that network coding achieves superior performance in a significant fraction of our randomly generated network examples. Such dynamic settings thus represent a substantially broader class of networking problems than previously recognized for which network coding promises significant practical benefits over routing.
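
    To make the idea of distributed randomized network coding concrete, the following is a minimal sketch of random linear coding over GF(2): each coded packet carries random coefficients together with the XOR of the selected source packets, and a receiver can decode once it has collected K linearly independent combinations. The packet sizes and field choice are illustrative assumptions, not the setup used in the paper's trials.

        import random

        K = 4                       # number of source packets per generation
        PKT_LEN = 8                 # payload length in bytes (assumed)
        source = [bytes(random.randrange(256) for _ in range(PKT_LEN)) for _ in range(K)]

        def encode():
            """A node draws random GF(2) coefficients and XORs the selected packets."""
            coeffs = [random.randint(0, 1) for _ in range(K)]
            payload = bytearray(PKT_LEN)
            for c, pkt in zip(coeffs, source):
                if c:
                    payload = bytearray(a ^ b for a, b in zip(payload, pkt))
            return coeffs, bytes(payload)

        def rank_gf2(rows):
            """Rank of the coefficient matrix over GF(2); rank K means decodable."""
            bits = [sum(b << i for i, b in enumerate(r)) for r in rows]
            rank = 0
            for col in range(K):
                pivot = next((i for i in range(rank, len(bits)) if (bits[i] >> col) & 1), None)
                if pivot is None:
                    continue
                bits[pivot], bits[rank] = bits[rank], bits[pivot]
                for i in range(len(bits)):
                    if i != rank and (bits[i] >> col) & 1:
                        bits[i] ^= bits[rank]
                rank += 1
            return rank

        received = [encode() for _ in range(6)]    # a receiver collects a few coded packets
        print("decodable:", rank_gf2([c for c, _ in received]) == K)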

    Nested turbo codes for the Costa problem

    Driven by applications in data-hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimizing this tradeoff leads to a code design that outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits gaps of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
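
    As background for the binning idea behind Costa ("dirty paper") coding, the following is a one-dimensional modulo-precoding sketch in the Tomlinson-Harashima style: interference known at the transmitter is pre-subtracted and folded back into a lattice bin, so the receiver recovers the message without ever knowing the interference. This is only a scalar illustration under assumed constellation and noise parameters, not the paper's nested TCQ/TTCM construction.

        import random

        DELTA = 8.0                        # coarse lattice spacing (one bin per period)
        LEVELS = [-3.0, -1.0, 1.0, 3.0]    # 4-PAM message points inside one bin

        def mod_delta(x):
            """Fold x into the fundamental interval [-DELTA/2, DELTA/2)."""
            return (x + DELTA / 2) % DELTA - DELTA / 2

        def transmit(msg_idx, s):
            """Pre-subtract the known interference s, then fold back into the bin."""
            return mod_delta(LEVELS[msg_idx] - s)

        def receive(y):
            """Fold the channel output and pick the nearest message point."""
            z = mod_delta(y)
            return min(range(len(LEVELS)), key=lambda i: abs(z - LEVELS[i]))

        errors = 0
        for _ in range(10000):
            msg = random.randrange(4)
            s = random.uniform(-50, 50)       # strong interference, known only at the transmitter
            x = transmit(msg, s)              # transmit power stays bounded regardless of s
            y = x + s + random.gauss(0, 0.2)  # channel adds s back plus mild noise
            errors += (receive(y) != msg)
        print("symbol error rate:", errors / 10000)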

    Coded Caching for Delay-Sensitive Content

    Coded caching is a recently proposed technique that achieves significant performance gains for cache networks compared to uncoded caching schemes. However, this substantial coding gain is attained at the cost of a large delivery delay, which is not tolerable in delay-sensitive applications such as video streaming. In this paper, we identify and investigate the tradeoff between the performance gain of coded caching and the delivery delay. We propose a computationally efficient caching algorithm that provides the gains of coding while respecting delay constraints. The proposed algorithm achieves the optimum performance for large delays and still offers major gains for small delays. These gains are demonstrated in a practical setting with a video-streaming prototype.
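
    For readers unfamiliar with the coding gain being traded off here, the following is the textbook two-user, two-file coded-caching example: after the placement phase, a single XOR transmission serves both users' distinct requests. It is a generic illustration with made-up file contents, not the paper's delay-constrained algorithm.

        def halves(data):
            mid = len(data) // 2
            return data[:mid], data[mid:]

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        files = {"A": b"AAAAAAAA", "B": b"BBBBBBBB"}   # placeholder contents
        A1, A2 = halves(files["A"])
        B1, B2 = halves(files["B"])

        # Placement (off-peak): each user caches a different half of every file.
        cache = {1: {"A": A1, "B": B1}, 2: {"A": A2, "B": B2}}

        # Delivery: user 1 requests A, user 2 requests B.  Uncoded delivery would
        # send A2 and B1 separately; coded delivery sends a single XOR instead.
        coded = xor(A2, B1)

        # Each user removes the half it already caches from the coded transmission.
        user1_gets_A = cache[1]["A"] + xor(coded, cache[1]["B"])   # A1 | (A2^B1)^B1
        user2_gets_B = xor(coded, cache[2]["A"]) + cache[2]["B"]   # (A2^B1)^A2 | B2

        assert user1_gets_A == files["A"] and user2_gets_B == files["B"]
        print("both requests served with one coded half-file transmission")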

    A unary error correction code for the near-capacity joint source and channel coding of symbol values from an infinite set

    A novel Joint Source and Channel Code (JSCC) is proposed, which we refer to as the Unary Error Correction (UEC) code. Unlike existing JSCCs, the UEC code facilitates the practical encoding of symbol values that are selected from a set having an infinite cardinality. Conventionally, such symbols are conveyed using Separate Source and Channel Codes (SSCCs), but we demonstrate that the residual redundancy retained after source coding results in a capacity loss, which amounts to 1.11 dB in a particular practical scenario. By contrast, the proposed UEC code can eliminate this capacity loss, or reduce it to an infinitesimally small value. Furthermore, the UEC code has only moderate complexity, facilitating its employment in practical low-complexity applications.
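
    The unary mapping that the UEC code builds on is what makes an infinite symbol set practical to encode: any non-negative integer has a codeword. The sketch below shows only that source mapping (assuming the usual "x ones followed by a zero" convention), not the error-correction trellis of the actual UEC code.

        def unary_encode(x: int) -> str:
            """Map a non-negative integer x to x ones terminated by a zero."""
            return "1" * x + "0"

        def unary_decode(bits: str):
            """Split a concatenated unary stream back into integers."""
            out, run = [], 0
            for b in bits:
                if b == "1":
                    run += 1
                else:            # a zero terminates the current codeword
                    out.append(run)
                    run = 0
            return out

        stream = "".join(unary_encode(x) for x in [0, 3, 1, 5])
        assert unary_decode(stream) == [0, 3, 1, 5]
        print(stream)   # 0 1110 10 111110 (shown here with spaces for readability)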

    Network Codes for Real-Time Applications

    We consider the scenario of broadcasting for real-time applications and loss recovery via instantly decodable network coding. Past work focused on minimizing the completion delay, which is not the right objective for real-time applications that have strict deadlines. In this work, we are interested in finding a code that is instantly decodable by the maximum number of users. First, we prove that this problem is NP-hard in the general case. Then we consider the practical probabilistic scenario, where users have i.i.d. loss probabilities and the number of packets is linear or polynomial in the number of users. In this scenario, we provide a polynomial-time (in the number of users) algorithm that finds the optimal coded packet. The proposed algorithm is evaluated using both simulations and real network traces of a real-time Android application. Both sets of results show that the proposed coding scheme significantly outperforms the state-of-the-art baselines: an optimal repetition code and a COPE-like greedy scheme.
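
    As a point of reference, the COPE-like greedy baseline mentioned above can be sketched as follows: a coded (XOR) packet is instantly decodable for a user who already holds all but one of its constituents, and the greedy rule keeps adding the packet that serves the most such users. The reception patterns below are made-up example data, and this is the baseline heuristic, not the paper's polynomial-time optimal algorithm.

        def served(S, has):
            """Users for whom exactly one packet of S is missing (instantly decodable)."""
            return sum(1 for h in has if len(S - h) == 1)

        def greedy_idnc(has, n_packets):
            S = set()
            while True:
                best, best_gain = None, served(S, has)
                for p in range(n_packets):
                    if p in S:
                        continue
                    gain = served(S | {p}, has)
                    if gain > best_gain:
                        best, best_gain = p, gain
                if best is None:
                    return S
                S.add(best)

        # Example: 4 packets in the frame, 3 users with different reception patterns.
        # We assume each user wants every packet it has not yet received.
        has = [{0, 1, 2}, {0, 1, 3}, {1, 2, 3}]     # packets each user already holds
        S = greedy_idnc(has, 4)
        print("XOR of packets", sorted(S), "serves", served(S, has), "users")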

    Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding

    Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them delivers consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas are highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may bring convenience to the application of information theory to many practical and theoretical problems.
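
    For orientation, the quantity being approximated is the ordinary Shannon mutual information between discrete stimuli and responses, which for a small toy model can be computed directly as below. The conditional distribution is a made-up example, and the paper's KL/Rényi-based approximation formulas are not implemented here.

        import numpy as np

        p_x = np.array([0.25, 0.25, 0.5])            # prior over 3 discrete stimuli
        # p(r | x): rows are stimuli, columns are 4 discrete response levels
        p_r_given_x = np.array([
            [0.70, 0.20, 0.05, 0.05],
            [0.10, 0.60, 0.20, 0.10],
            [0.05, 0.10, 0.25, 0.60],
        ])

        p_xr = p_x[:, None] * p_r_given_x            # joint distribution p(x, r)
        p_r = p_xr.sum(axis=0)                       # response marginal p(r)

        # I(X;R) = sum_{x,r} p(x,r) * log2[ p(x,r) / (p(x) p(r)) ]
        mi_bits = float(np.sum(p_xr * np.log2(p_xr / (p_x[:, None] * p_r[None, :]))))
        print(f"I(X;R) = {mi_bits:.4f} bits")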