
    Distortion Exponent in MIMO Fading Channels with Time-Varying Source Side Information

    Transmission of a Gaussian source over a time-varying multiple-input multiple-output (MIMO) channel is studied under strict delay constraints. Correlated side information is assumed to be available at the receiver, whose quality, i.e., its correlation with the source signal, also varies over time. A block-fading model is adopted for the states of both the time-varying channel and the time-varying side information; perfect state information is assumed at the receiver, while the transmitter knows only the statistics. The high-SNR performance of this joint source-channel coding problem, characterized by the distortion exponent, is studied. An upper bound is derived and compared with lower bounds based on list decoding, hybrid digital-analog transmission, and multi-layer schemes that transmit successive refinements of the source, relying on progressive and superposed transmission with list decoding. The optimal distortion exponent is characterized for the single-input multiple-output (SIMO) and multiple-input single-output (MISO) scenarios by showing that the distortion exponent achieved by multi-layer superposition encoding with joint decoding meets the proposed upper bound. In the MIMO scenario, the optimal distortion exponent is characterized in the low bandwidth-ratio regime, and multi-layer superposition encoding is shown to perform very close to the upper bound in the high bandwidth-expansion regime.
    Comment: Submitted to IEEE Transactions on Information Theory
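    For reference, the distortion exponent used above is conventionally defined as the high-SNR decay rate of the end-to-end average distortion; in the standard formalization (the notation D(SNR) for the expected distortion is ours, not the paper's),

        \Delta \;=\; -\lim_{\mathrm{SNR}\to\infty} \frac{\log D(\mathrm{SNR})}{\log \mathrm{SNR}},
        \qquad \text{i.e.,} \qquad
        D(\mathrm{SNR}) \doteq \mathrm{SNR}^{-\Delta}.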

    Error exponents of typical random codes

    We define the error exponent of the typical random code as the long-block limit of the negative normalized expectation of the logarithm of the error probability of the random code, as opposed to the traditional random coding error exponent, which is the limit of the negative normalized logarithm of the expectation of the error probability. For the ensemble of uniformly randomly drawn fixed-composition codes, we provide exact error exponents of typical random codes for a general discrete memoryless channel (DMC) and a wide class of (stochastic) decoders, collectively referred to as the generalized likelihood decoder (GLD). This ensemble of fixed-composition codes is shown to be no worse than any other ensemble of independent codewords drawn under a permutation-invariant distribution (e.g., i.i.d. codewords). We also present relationships between the error exponent of the typical random code and the ordinary random coding error exponent, as well as the expurgated exponent, for the GLD. Finally, we demonstrate that our analysis technique also applies to more general communication scenarios, such as list decoding (for fixed-size lists) and decoding with an erasure/list option in Forney's sense.
    Comment: 26 pages, submitted for publication
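    To make the opening contrast concrete, a hedged formalization (our notation: \mathcal{C}_n is the random code at blocklength n and P_e(\mathcal{C}_n) its error probability):

        E_{\mathrm{trc}}(R) \;=\; \lim_{n\to\infty} -\frac{1}{n}\, \mathbb{E}\!\left[\log P_e(\mathcal{C}_n)\right],
        \qquad
        E_{\mathrm{r}}(R) \;=\; \lim_{n\to\infty} -\frac{1}{n}\, \log \mathbb{E}\!\left[P_e(\mathcal{C}_n)\right].

    By Jensen's inequality, E_{\mathrm{trc}}(R) \ge E_{\mathrm{r}}(R): the typical code performs at least as well as the ensemble-average error probability suggests.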

    Source-Channel Diversity for Parallel Channels

    We consider transmitting a source across a pair of independent, non-ergodic channels with random states (e.g., slow-fading channels) so as to minimize the average distortion. The general problem is unsolved. Hence, we focus on comparing two commonly used source and channel encoding systems, which exploit diversity either at the physical layer, through parallel channel coding, or at the application layer, through multiple description source coding. For on-off channel models, source coding diversity offers better performance. For channels with a continuous range of reception quality, we show the reverse is true. Specifically, we introduce a new figure of merit called the distortion exponent, which measures how fast the average distortion decays with SNR. For continuous-state models such as additive white Gaussian noise channels with multiplicative Rayleigh fading, optimal channel coding diversity at the physical layer is more efficient than source coding diversity at the application layer, in that the former achieves a better distortion exponent. Finally, we consider a third architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, while maintaining the advantages of multiple description systems for on-off channels. Thus, among the three architectures considered, the multiple description system with joint decoding achieves the best performance on both continuous-state and on-off channels.
    Comment: 48 pages, 14 figures
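    As a toy illustration of this figure of merit (not a reproduction of the paper's analysis), the exponent can be read off as the negative slope of average distortion versus SNR on a log-log scale. All constants below are hypothetical:

        import numpy as np

        # Synthetic measurements following D(SNR) = c * SNR^(-Delta);
        # the constant c and the exponent Delta are made up for illustration.
        true_delta = 1.5
        snr_db = np.arange(10, 41, 5)            # SNR grid, in dB
        snr = 10.0 ** (snr_db / 10)              # linear SNR
        distortion = 2.0 * snr ** (-true_delta)  # average distortion D(SNR)

        # The distortion exponent is the negative log-log slope of D vs SNR.
        slope, _ = np.polyfit(np.log(snr), np.log(distortion), 1)
        print(f"estimated distortion exponent: {-slope:.3f}")  # ~ 1.5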

    Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities

    This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory in which we depart from the requirement that the error probability decays asymptotically with the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant, and the spotlight is shone on achievable coding rates as functions of the growing blocklength. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. Finally, in Part III, we consider network information theory problems for which the second-order asymptotics are known. These include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. We conclude by discussing avenues for further research.
    Comment: Further comments welcome
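    The flavor of the expansions in Part II can be sketched with the Gaussian approximation that traces back to Strassen's work: for a DMC with capacity C and dispersion V, the maximum code size M*(n, ε) at blocklength n and error probability ε satisfies

        \log M^*(n,\varepsilon) \;=\; nC \;-\; \sqrt{nV}\, Q^{-1}(\varepsilon) \;+\; O(\log n),

    where Q^{-1} is the inverse of the complementary Gaussian CDF. We state this standard form only as context for the abstract.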

    Centralized vs Decentralized Multi-Agent Guesswork

    We study a notion of guesswork in which multiple agents launch a coordinated brute-force attack to find a single binary secret string, and each agent has access to side information generated through either a binary erasure channel (BEC) or a binary symmetric channel (BSC). The average number of trials required to find the secret string grows exponentially with the length of the string, and the rate of this growth is called the guesswork exponent. We compute the guesswork exponent for several multi-agent attacks. We show that a multi-agent attack reduces the guesswork exponent compared to a single agent, even when the agents do not exchange information to coordinate their attack and instead individually guess the secret string using a predetermined scheme in a decentralized fashion. Further, we show that the guesswork exponent of two agents who do coordinate their attack is strictly smaller than that of any finite number of agents individually performing decentralized guesswork.
    Comment: Accepted at IEEE International Symposium on Information Theory (ISIT) 201
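    For reference, the guesswork exponent mentioned above is conventionally defined through the exponential growth of the expected number of guesses; in our (assumed) notation, with X^n the n-bit secret, Y^n an agent's side information, and G(X^n | Y^n) the number of queries an optimal strategy makes before hitting X^n,

        E \;=\; \lim_{n\to\infty} \frac{1}{n} \log_2 \mathbb{E}\!\left[G(X^n \mid Y^n)\right],

    where the optimal strategy queries candidate strings in decreasing order of posterior probability given Y^n.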