
    On adversarial joint source channel coding

    In a joint source-channel coding scheme, a single mapping is used to perform both the tasks of data compression and channel coding in a combined way, rather than performing them separately. Usually, for simple i.i.d. sources and channels, separating the two tasks is information-theoretically optimal. In an adversarial joint source-channel coding scenario, instead of a stochastic channel, an adversary introduces a bounded number of errors/erasures. It has been shown recently that, even in the simplest cases of such adversarial models, separation is suboptimal and characterizing the fundamental limits is difficult. In this paper, we study several properties of such adversarial joint source-channel schemes. We show optimality of separation in some situations, provide simple joint schemes that beat separation in others, and give new bounds on the rate of such coding.
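    The worst-case flavor of this model can be made concrete with a small brute-force check. The sketch below is a toy construction of mine, not a scheme from the paper: it fixes a tiny joint encoder/decoder pair and measures the worst Hamming distortion an adversary with a budget of E bit flips can force. The parameters K, N, E and the repetition-style mapping are illustrative assumptions.

```python
# Toy brute-force evaluation of the adversarial JSCC setting: a fixed
# encoder/decoder pair is judged by its worst-case Hamming distortion over
# every source word and every adversarial pattern of at most E bit flips.
from itertools import combinations, product

K, N, E = 4, 8, 1                      # source bits, channel bits, adversary's flip budget

def encode(u):
    # toy joint mapping: repeat each source bit twice (no explicit compression step)
    return tuple(b for bit in u for b in (bit, bit))

def decode(y):
    # naive decoder: read the first bit of each repeated pair
    return tuple(y[2 * i] for i in range(K))

def worst_case_distortion():
    worst = 0
    for u in product((0, 1), repeat=K):
        x = encode(u)
        for flips in range(E + 1):
            for positions in combinations(range(N), flips):
                y = list(x)
                for i in positions:
                    y[i] ^= 1          # adversarial bit flip
                d = sum(a != b for a, b in zip(u, decode(y)))
                worst = max(worst, d)
    return worst / K                   # normalized Hamming distortion

print("worst-case distortion:", worst_case_distortion())
```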

    Infomax Neural Joint Source-Channel Coding via Adversarial Bit Flip

    Although Shannon theory states that it is asymptotically optimal to separate source and channel coding into two independent processes, in many practical communication scenarios this decomposition is limited by the finite bit length and the computational power available for decoding. Recently, neural joint source-channel coding (NECST) was proposed to sidestep this problem. While it leverages advances in amortized inference and deep learning to improve the encoding and decoding process, it still cannot always achieve compelling compression and error-correction performance, owing to the limited robustness of its learned coding networks. In this paper, motivated by the inherent connections between neural joint source-channel coding and discrete representation learning, we propose a novel regularization method called Infomax Adversarial-Bit-Flip (IABF) to improve the stability and robustness of the neural joint source-channel coding scheme. More specifically, on the encoder side, we explicitly maximize the mutual information between the codeword and the data, while on the decoder side, the amortized reconstruction is regularized within an adversarial framework. Extensive experiments on various real-world datasets show that IABF achieves state-of-the-art performance on both compression and error-correction benchmarks and outperforms the baselines by a significant margin. Comment: AAAI202
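    To make the decoder-side regularization concrete, here is a minimal PyTorch-style sketch of an adversarial bit-flip step: sample a binary codeword, find the code bits whose flips most increase the reconstruction loss to first order, and penalize the decoder on the flipped codeword. The architecture, the loss weight, the greedy flip selection, and the omission of the encoder-side infomax term are my own simplifying assumptions, not the IABF reference implementation.

```python
# Hypothetical sketch of adversarial bit-flip regularization for a neural
# joint source-channel autoencoder (names and architecture are assumptions).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, d_in=784, n_bits=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 256), nn.ReLU(), nn.Linear(256, n_bits))
    def forward(self, x):
        probs = torch.sigmoid(self.net(x))
        bits = (torch.rand_like(probs) < probs).float()
        return bits + probs - probs.detach()     # straight-through hard bits

class Decoder(nn.Module):
    def __init__(self, d_out=784, n_bits=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_bits, 256), nn.ReLU(), nn.Linear(256, d_out))
    def forward(self, b):
        return self.net(b)

def adversarial_bit_flip(bits, decoder, x, k=5):
    """Flip the k code bits whose flips most increase reconstruction loss."""
    bits = bits.detach().clone().requires_grad_(True)
    loss = nn.functional.mse_loss(decoder(bits), x)
    grad, = torch.autograd.grad(loss, bits)
    # flipping bit b moves it by (1 - 2b); first-order loss change is grad*(1-2b)
    gain = grad * (1.0 - 2.0 * bits)
    idx = gain.topk(k, dim=1).indices
    flipped = bits.detach().clone()
    flipped.scatter_(1, idx, 1.0 - flipped.gather(1, idx))
    return flipped

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
x = torch.rand(32, 784)                 # stand-in batch; real data would be e.g. binarized images
opt.zero_grad()
bits = enc(x)
recon_loss = nn.functional.mse_loss(dec(bits), x)
adv_bits = adversarial_bit_flip(bits, dec, x)
adv_loss = nn.functional.mse_loss(dec(adv_bits), x)   # robustness regularizer (decoder side)
(recon_loss + 0.1 * adv_loss).backward()
opt.step()
```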

    Upper Bounds on the Capacity of Binary Channels with Causal Adversaries

    In this work we consider the communication of information in the presence of a causal adversarial jammer. In the setting under study, a sender wishes to communicate a message to a receiver by transmitting a codeword (x_1, ..., x_n) bit by bit over a communication channel. The sender and the receiver do not share common randomness. The adversarial jammer can view the transmitted bits x_i one at a time and can change up to a p-fraction of them. However, the jammer's decisions must be made in a causal manner: for each bit x_i, the decision on whether to corrupt it or not may depend only on x_j for j ≤ i. This is in contrast to the "classical" adversarial jamming settings in which the jammer either has no knowledge of (x_1, ..., x_n) or knows (x_1, ..., x_n) completely. In this work, we present upper bounds on the capacity that hold for both deterministic and stochastic encoding schemes, under both the average and maximal probability of error criteria. Comment: To appear in the IEEE Transactions on Information Theory; shortened version appeared at ISIT 201
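    The causality constraint has a simple operational reading: the jammer must commit to flipping or keeping each bit before seeing the rest of the codeword. A small sketch of such a jammer is given below; the greedy strategy and the parameters are purely illustrative and not taken from the paper.

```python
# Toy causal jammer: sees each bit as it is transmitted and must decide
# immediately whether to flip it, subject to a total budget of p*n flips.
import random

def causal_jammer(codeword, p, decide):
    """decide(prefix, budget_left) -> bool, using only causally available bits."""
    n = len(codeword)
    budget = int(p * n)
    out = []
    for i, bit in enumerate(codeword):
        prefix = codeword[: i + 1]          # the jammer has seen x_1 .. x_i only
        if budget > 0 and decide(prefix, budget):
            out.append(1 - bit)
            budget -= 1
        else:
            out.append(bit)
    return out

# example causal strategy: flip whenever the observed prefix has more ones than zeros
greedy = lambda prefix, budget: 2 * sum(prefix) > len(prefix)
x = [random.randint(0, 1) for _ in range(20)]
y = causal_jammer(x, p=0.25, decide=greedy)
print(sum(a != b for a, b in zip(x, y)), "bits flipped")
```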

    Resilient Network Coding in the Presence of Byzantine Adversaries

    Network coding substantially increases network throughput. But since it involves mixing of information inside the network, a single corrupted packet generated by a malicious node can end up contaminating all the information reaching a destination, preventing decoding. This paper introduces distributed polynomial-time rate-optimal network codes that work in the presence of Byzantine nodes. We present algorithms that target adversaries with different attacking capabilities. When the adversary can eavesdrop on all links and jam z_O links, our first algorithm achieves a rate of C - 2z_O, where C is the network capacity. In contrast, when the adversary has limited eavesdropping capabilities, we provide algorithms that achieve the higher rate of C - z_O. Our algorithms attain the optimal rate given the strength of the adversary. They are information-theoretically secure. They operate in a distributed manner, assume no knowledge of the topology, and can be designed and implemented in polynomial time. Furthermore, only the source and destination need to be modified; non-malicious nodes inside the network are oblivious to the presence of adversaries and implement a classical distributed network code. Finally, our algorithms work over wired and wireless networks.
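    As a quick numeric reading of the rates quoted above, consider an illustrative network (the numbers are my own example, not figures from the paper) with capacity C = 4 packets per generation and an adversary able to jam z_O = 1 link; the two adversary models then give the following achievable rates.

```latex
% Illustrative instantiation of the rates quoted in the abstract.
% C = network capacity, z_O = number of links the adversary can jam.
\[
  R_{\text{omniscient}} = C - 2 z_O = 4 - 2 \cdot 1 = 2,
  \qquad
  R_{\text{limited}} = C - z_O = 4 - 1 = 3 .
\]
```
    In other words, limiting the adversary's eavesdropping ability buys back z_O units of rate relative to the omniscient case.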

    Dynamic information and constraints in source and channel coding

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 237-251).
    This thesis explores dynamics in source coding and channel coding. We begin by introducing the idea of distortion side information, which does not directly depend on the source but instead affects the distortion measure. Such distortion side information is not only useful at the encoder; under certain conditions, knowing it at the encoder is optimal and knowing it at the decoder is useless. Thus distortion side information is a natural complement to Wyner-Ziv side information and may be useful in exploiting properties of the human perceptual system as well as in sensor or control applications. In addition to developing the theoretical limits of source coding with distortion side information, we also construct practical quantizers based on lattices and codes on graphs. Our use of codes on graphs is also of independent interest since it highlights some issues in translating the success of turbo and LDPC codes into the realm of source coding. Finally, to explore the dynamics of side information correlated with the source, we consider fixed-lag side information at the decoder. We focus on the special case of perfect side information with unit lag, corresponding to source coding with feedforward (the dual of channel coding with feedback). Using duality, we develop a linear-complexity algorithm which exploits the feedforward information to achieve the rate-distortion bound.
    The second part of the thesis focuses on channel dynamics in communication by introducing a new system model to study delay in streaming applications. We first consider an adversarial channel model where at any time the channel may suffer a burst of degraded performance (e.g., due to signal fading, interference, or congestion) and prove a coding theorem for the minimum decoding delay required to recover from such a burst. Our coding theorem illustrates the relationship between the structure of a code, the dynamics of the channel, and the resulting decoding delay. We also consider more general channel dynamics. Specifically, we prove a coding theorem establishing that, for certain collections of channel ensembles, delay-universal codes exist that simultaneously achieve the best delay for any channel in the collection. Practical constructions with low encoding and decoding complexity are described for both cases.
    Finally, we also consider architectures consisting of both source and channel coding which deal with channel dynamics by spreading information over space, frequency, multiple antennas, or alternate transmission paths in a network to avoid coding delays. Specifically, we explore whether the inherent diversity in such parallel channels should be exploited at the application layer via multiple description source coding, at the physical layer via parallel channel coding, or through some combination of joint source-channel coding. For on-off channel models, application-layer diversity architectures achieve better performance, while for channels with a continuous range of reception quality (e.g., additive Gaussian noise channels with Rayleigh fading), the reverse is true. Joint source-channel coding achieves the best of both by performing as well as application-layer diversity for on-off channels and as well as physical-layer diversity for continuous channels.
    by Emin Martinian. Ph.D.
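    The on-off comparison in the last paragraph can be illustrated with a toy Monte Carlo. The sketch below is my own construction, with made-up distortion values and channel statistics rather than numbers from the thesis: it contrasts an application-layer multiple-description scheme, where each of two on/off channels carries a standalone description, with a single physical-layer code spread across both channels that is lost whenever either channel is off.

```python
# Toy Monte Carlo comparing application-layer diversity (multiple descriptions)
# with a physical-layer code spread across two parallel on/off channels.
# Distortion values and the on-probability are illustrative assumptions.
import random

P_ON = 0.9                                        # probability each channel is "on"
D = {"both": 0.05, "one": 0.20, "none": 1.00}     # illustrative distortion levels

def trial():
    on = [random.random() < P_ON for _ in range(2)]
    k = sum(on)
    # multiple descriptions: each channel carries a standalone description
    md = D["both"] if k == 2 else D["one"] if k == 1 else D["none"]
    # one codeword spread across both channels: assumed lost unless both are on
    pc = D["both"] if k == 2 else D["none"]
    return md, pc

N = 100_000
md_avg, pc_avg = (sum(v) / N for v in zip(*(trial() for _ in range(N))))
print(f"multiple description: {md_avg:.3f}   parallel channel code: {pc_avg:.3f}")
```
    Under these assumed numbers the multiple-description scheme averages a lower distortion, matching the qualitative claim above for on-off channels; with smoothly varying channel quality the comparison can reverse.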

    Connecting Multiple-unicast and Network Error Correction: Reduction and Unachievability

    We show that solving a multiple-unicast network coding problem can be reduced to solving a single-unicast network error correction problem, where an adversary may jam at most a single edge in the network. Specifically, we present an efficient reduction that maps a multiple-unicast network coding instance to a network error correction instance while preserving feasibility. The reduction holds for both the zero probability of error model and the vanishing probability of error model. Previous reductions are restricted to the zero-error case. As an application of the reduction, we present a constructive example showing that the single-unicast network error correction capacity may not be achievable, a result of separate interest. Comment: ISIT 2015. arXiv admin note: text overlap with arXiv:1410.190