
    Joint Source-Channel Codes for MIMO Block Fading Channels

    We consider transmission of a continuous-amplitude source over an $L$-block Rayleigh fading $M_t \times M_r$ MIMO channel when the channel state information is available only at the receiver. Since the channel is not ergodic, Shannon's source-channel separation theorem does not apply, and the optimal performance requires a joint source-channel approach. Our goal is to minimize the expected end-to-end distortion, particularly in the high-SNR regime. The figure of merit is the distortion exponent, defined as the exponential decay rate of the expected distortion with increasing SNR. We provide an upper bound and lower bounds for the distortion exponent as a function of the bandwidth ratio between the channel and source bandwidths. For the lower bounds, we analyze three strategies based on layered source coding concatenated with progressive, superposition, or hybrid digital/analog transmission. In each case, by adjusting the system parameters, we optimize the distortion exponent as a function of the bandwidth ratio. We prove that the distortion exponent upper bound can be achieved when the channel has only one degree of freedom, that is, $L=1$ and $\min\{M_t, M_r\}=1$. When there are more degrees of freedom, our achievable distortion exponents meet the upper bound only for certain ranges of the bandwidth ratio. We demonstrate that our results, derived for a complex Gaussian source, extend to more general source distributions as well.
    Comment: 36 pages, 11 figures
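    The distortion exponent can be illustrated numerically. The sketch below is not one of the paper's coding schemes: it assumes an idealized single-antenna ($L=1$, $M_t=M_r=1$) Rayleigh fading link in which the instantaneous distortion tracks the instantaneous capacity, $D = (1+\mathrm{SNR}\,|h|^2)^{-b}$ for bandwidth ratio $b$; the negative slope of $\log E[D]$ versus $\log \mathrm{SNR}$ then approximates the distortion exponent. All parameters are illustrative.

```python
import numpy as np

def empirical_distortion_exponent(b, snrs_db, n_trials=200_000, seed=0):
    """Estimate the distortion exponent for an idealized SISO Rayleigh link.

    Assumes the instantaneous distortion tracks capacity:
        D = (1 + SNR * |h|^2) ** (-b),   |h|^2 ~ Exp(1)  (Rayleigh fading)
    where b is the bandwidth ratio.  The exponent is minus the slope of
    log E[D] versus log SNR at high SNR.
    """
    rng = np.random.default_rng(seed)
    gains = rng.exponential(1.0, n_trials)           # |h|^2 samples
    snrs = 10.0 ** (np.asarray(snrs_db) / 10.0)
    avg_dist = [np.mean((1.0 + s * gains) ** (-b)) for s in snrs]
    # Least-squares slope of log E[D] vs log SNR gives -exponent.
    slope = np.polyfit(np.log(snrs), np.log(avg_dist), 1)[0]
    return -slope

if __name__ == "__main__":
    for b in (0.5, 1.0, 2.0):
        d_hat = empirical_distortion_exponent(b, snrs_db=[30, 35, 40, 45, 50])
        print(f"bandwidth ratio b={b}: estimated exponent ~ {d_hat:.2f}")
        # For this idealized benchmark the exponent approaches min(b, 1).
```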

    Joint Source-Channel Coding with Time-Varying Channel and Side-Information

    Transmission of a Gaussian source over a time-varying Gaussian channel is studied in the presence of time-varying correlated side information at the receiver. A block fading model is considered for both the channel and the side information, whose states are assumed to be known only at the receiver. The optimality of separate source and channel coding in terms of average end-to-end distortion is shown when the channel is static while the side information state follows a discrete or a continuous and quasiconcave distribution. When both the channel and side information states are time-varying, separate source and channel coding is suboptimal in general. A partially informed encoder lower bound is studied by providing the channel state information to the encoder. Several achievable transmission schemes are proposed based on uncoded transmission, separate source and channel coding, joint decoding, as well as hybrid digital-analog transmission. Uncoded transmission is shown to be optimal for a class of continuous and quasiconcave side information state distributions, while the channel gain may follow an arbitrary distribution. To the best of our knowledge, this is the first example in which uncoded transmission achieves the optimal performance thanks to the time-varying nature of the states, while it is suboptimal in the static version of the same problem. The optimal \emph{distortion exponent}, which quantifies the exponential decay rate of the expected distortion in the high-SNR regime, is then characterized for Nakagami distributed channel and side information states, and it is shown to be achieved by hybrid digital-analog and joint decoding schemes in certain cases, illustrating the suboptimality of purely digital or analog transmission in general.
    Comment: Submitted to IEEE Transactions on Information Theory
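    As a toy illustration of uncoded (analog) transmission with receiver side information, the sketch below, which is not from the paper, sends a Gaussian source sample directly over a static Gaussian channel and lets the receiver combine the channel output with correlated side information through a linear MMSE estimate; the SNR and correlation values are illustrative.

```python
import numpy as np

def uncoded_with_side_info(snr=10.0, rho=0.8, n=1_000_000, seed=1):
    """Uncoded transmission of S ~ N(0,1) with receiver side information.

    Channel output:    Z = sqrt(snr) * S + N,               N ~ N(0,1)
    Side information:  Y = rho * S + sqrt(1 - rho^2) * W,   W ~ N(0,1)
    The receiver forms the linear MMSE estimate of S from (Z, Y).
    """
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(n)
    z = np.sqrt(snr) * s + rng.standard_normal(n)
    y = rho * s + np.sqrt(1 - rho**2) * rng.standard_normal(n)

    # LMMSE weights: w = C_xx^{-1} c_sx, with x = (Z, Y).
    c_sx = np.array([np.sqrt(snr), rho])                   # E[S*Z], E[S*Y]
    c_xx = np.array([[snr + 1.0, np.sqrt(snr) * rho],
                     [np.sqrt(snr) * rho, 1.0]])           # Cov of (Z, Y)
    w = np.linalg.solve(c_xx, c_sx)
    s_hat = w[0] * z + w[1] * y
    return np.mean((s - s_hat) ** 2)

if __name__ == "__main__":
    print("empirical MSE:", uncoded_with_side_info())
    # Closed form: D = 1 - c_sx @ inv(C_xx) @ c_sx for these parameters.
```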

    Source Broadcasting to the Masses: Separation has a Bounded Loss

    This work discusses the source broadcasting problem, i.e., transmitting a source to many receivers over a broadcast channel. The optimal rate-distortion region for this problem is unknown. The separation approach divides the problem into two complementary problems: source successive refinement and broadcast channel transmission. We provide bounds on the loss incurred by applying time-sharing and separation in source broadcasting. If the broadcast channel is degraded, it turns out that separation-based time-sharing achieves at least a constant factor of the optimal joint source-channel rate, and this factor has a positive limit even as the number of receivers grows to infinity. For the AWGN broadcast channel a tighter bound is introduced, implying that all achievable joint source-channel schemes have a rate within one bit of the separation-based achievable rate region for two receivers, or within $\log_2 T$ bits for $T$ receivers.
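    To make the separation-based benchmark concrete, the sketch below, which is not taken from the paper, evaluates a standard separation scheme for a unit-variance Gaussian source over a two-user degraded AWGN broadcast channel: superposition channel coding with a power split, followed by successive-refinement source coding at the resulting rates. The power, noise, and split values are illustrative.

```python
import numpy as np

def separation_distortions(P=10.0, N1=1.0, N2=4.0, alpha=0.3):
    """Separation over a two-user degraded AWGN broadcast channel (N1 < N2).

    Superposition channel coding: a fraction alpha of the power carries the
    refinement layer for the strong user; the rest carries the base layer.
    Successive-refinement source coding: the weak user decodes the base
    layer, the strong user decodes both, and a Gaussian source gives
    D = 2**(-2R) at total rate R (one channel use per source sample).
    """
    R_base = 0.5 * np.log2(1 + (1 - alpha) * P / (alpha * P + N2))  # weak user
    R_ref = 0.5 * np.log2(1 + alpha * P / N1)                       # strong-user extra
    D_weak = 2.0 ** (-2 * R_base)
    D_strong = 2.0 ** (-2 * (R_base + R_ref))
    return D_weak, D_strong

if __name__ == "__main__":
    dw, ds = separation_distortions()
    print(f"weak-user distortion   ~ {dw:.3f}")
    print(f"strong-user distortion ~ {ds:.3f}")
```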

    On the Design of a Novel Joint Network-Channel Coding Scheme for the Multiple Access Relay Channel

    This paper proposes a novel joint non-binary network-channel code for the Time-Division Decode-and-Forward Multiple Access Relay Channel (TD-DF-MARC), where the relay linearly combines -- over a non-binary finite field -- the coded sequences from the source nodes. A method based on an EXIT chart analysis is derived for selecting the best coefficients of the linear combination. Moreover, it is shown that for different setups of the system, different coefficients should be chosen in order to improve the performance. This conclusion contrasts with previous works, where a random selection was considered. Monte Carlo simulations show that the proposed scheme outperforms, in terms of its gap to the outage probabilities, previously published joint network-channel coding approaches. In addition, this gain is achieved using very short codewords, which makes the scheme particularly attractive for low-latency applications.
    Comment: 28 pages, 9 figures; Submitted to IEEE Journal on Selected Areas in Communications - Special Issue on Theories and Methods for Advanced Wireless Relays, 201
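    As a toy illustration of the relay operation, the sketch below, which is not drawn from the paper, linearly combines two source codewords over the prime field GF(5). The field size and the coefficients are placeholders; in the paper the coefficients are selected via an EXIT chart analysis over a general non-binary field.

```python
import numpy as np

Q = 5  # toy field size GF(5); the paper works over a general non-binary field

def relay_combine(c1, c2, a1, a2, q=Q):
    """Relay's network-coded sequence: a1*c1 + a2*c2 over GF(q), q prime."""
    c1, c2 = np.asarray(c1), np.asarray(c2)
    return (a1 * c1 + a2 * c2) % q

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    c1 = rng.integers(0, Q, size=10)     # coded symbols from source node 1
    c2 = rng.integers(0, Q, size=10)     # coded symbols from source node 2
    x_relay = relay_combine(c1, c2, a1=2, a2=3)
    print("c1    :", c1)
    print("c2    :", c2)
    print("relay :", x_relay)
    # A receiver that recovers one source sequence plus the relay sequence
    # can solve for the other, since a1 and a2 are invertible modulo q.
```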

    Source-Channel Diversity for Parallel Channels

    We consider transmitting a source across a pair of independent, non-ergodic channels with random states (e.g., slow fading channels) so as to minimize the average distortion. The general problem is unsolved. Hence, we focus on comparing two commonly used source and channel encoding systems, which correspond to exploiting diversity either at the physical layer through parallel channel coding or at the application layer through multiple description source coding. For on-off channel models, source coding diversity offers better performance. For channels with a continuous range of reception quality, we show the reverse is true. Specifically, we introduce a new figure of merit called the distortion exponent, which measures how fast the average distortion decays with SNR. For continuous-state models such as additive white Gaussian noise channels with multiplicative Rayleigh fading, optimal channel coding diversity at the physical layer is more efficient than source coding diversity at the application layer, in that the former achieves a better distortion exponent. Finally, we consider a third architecture: multiple description encoding with joint source-channel decoding. We show that this architecture achieves the same distortion exponent as systems with optimal channel coding diversity for continuous-state channels, and maintains the advantages of multiple description systems for on-off channels. Thus, the multiple description system with joint decoding achieves the best performance among the three architectures considered, on both continuous-state and on-off channels.
    Comment: 48 pages, 14 figures
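    As a toy numeric illustration of the on-off case, the sketch below, which is not from the paper, compares expected distortion when each of two independent channels is erased with probability p: a multiple-description scheme that still delivers a coarse description when either channel survives, versus a stylized channel-layer alternative in which a single codeword spans both channels and is assumed to decode only when both are on. The distortion values are placeholders.

```python
def expected_distortions(p=0.1, d_full=0.01, d_coarse=0.2):
    """Toy expected-distortion comparison over two independent on-off channels.

    p        : probability that a single channel is off (erased)
    d_full   : distortion when everything is decoded
    d_coarse : distortion when only one description is decoded
    Distortion is 1 (the source variance) when nothing is decoded.
    """
    p_both = (1 - p) ** 2
    p_one = 2 * p * (1 - p)

    # Multiple-description source coding: one description per channel.
    d_md = p_both * d_full + p_one * d_coarse + p ** 2 * 1.0

    # Stylized channel-layer alternative: one codeword spans both channels
    # and is assumed to decode only when both channels are on.
    d_cc = p_both * d_full + (1 - p_both) * 1.0
    return d_md, d_cc

if __name__ == "__main__":
    d_md, d_cc = expected_distortions()
    print(f"multiple descriptions : {d_md:.4f}")
    print(f"stylized joint code   : {d_cc:.4f}")
```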

    Asymptotic Estimates in Information Theory with Non-Vanishing Error Probabilities

    This monograph presents a unified treatment of single- and multi-user problems in Shannon's information theory where we depart from the requirement that the error probability decays asymptotically in the blocklength. Instead, the error probabilities for various problems are bounded above by a non-vanishing constant, and the spotlight is shone on achievable coding rates as functions of the growing blocklengths. This represents the study of asymptotic estimates with non-vanishing error probabilities. In Part I, after reviewing the fundamentals of information theory, we discuss Strassen's seminal result for binary hypothesis testing, where the type-I error probability is non-vanishing and the rate of decay of the type-II error probability with a growing number of independent observations is characterized. In Part II, we use this basic hypothesis testing result to develop second- and, sometimes, even third-order asymptotic expansions for point-to-point communication. Finally, in Part III, we consider network information theory problems for which the second-order asymptotics are known. These problems include some classes of channels with random state, the multiple-encoder distributed lossless source coding (Slepian-Wolf) problem, and special cases of the Gaussian interference and multiple-access channels. Finally, we discuss avenues for further research.
    Comment: Further comments welcome
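    A central object in this line of work is the normal (second-order) approximation, $\log M^*(n,\epsilon) \approx nC - \sqrt{nV}\,Q^{-1}(\epsilon)$, where $C$ is the capacity and $V$ the channel dispersion. The sketch below, which is not taken from the monograph, evaluates this approximation for a binary symmetric channel with illustrative parameters.

```python
import numpy as np
from scipy.stats import norm

def bsc_normal_approximation(p=0.11, n=1000, eps=1e-3):
    """Second-order (normal) approximation of the maximal rate for a BSC(p).

    C = 1 - h(p)                        capacity (bits/channel use)
    V = p(1-p) * (log2((1-p)/p))**2     channel dispersion
    R ~ C - sqrt(V/n) * Q^{-1}(eps)
    """
    h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)   # binary entropy
    C = 1.0 - h
    V = p * (1 - p) * (np.log2((1 - p) / p)) ** 2
    R = C - np.sqrt(V / n) * norm.isf(eps)           # norm.isf(eps) = Q^{-1}(eps)
    return C, R

if __name__ == "__main__":
    C, R = bsc_normal_approximation()
    print(f"capacity            : {C:.4f} bits/use")
    print(f"normal approx. rate : {R:.4f} bits/use at n=1000, eps=1e-3")
```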

    Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks

    A model, called the linear transform network (LTN), is proposed to analyze the compression and estimation of correlated signals transmitted over directed acyclic graphs (DAGs). An LTN is a DAG network with multiple source and receiver nodes. Source nodes transmit subspace projections of random correlated signals by applying reduced-dimension linear transforms. The subspace projections are linearly processed by multiple relays and routed to intended receivers. Each receiver applies a linear estimator to approximate a subset of the sources with minimum mean squared error (MSE) distortion. The model is extended to include noisy networks with power constraints on transmitters. A key task is to compute all local compression matrices and linear estimators in the network so as to minimize the end-to-end distortion. The non-convex problem is solved iteratively within an optimization framework using constrained quadratic programs (QPs). The proposed algorithm recovers, as special cases, the regular and distributed Karhunen-Loeve transforms (KLTs). Cut-set lower bounds on the distortion region of multi-source, multi-receiver networks are given for linear coding, based on convex relaxations. Cut-set lower bounds are also given for any coding strategy, based on information theory. The distortion region and compression-estimation tradeoffs are illustrated for different communication demands (e.g., multiple unicast) and graph structures.
    Comment: 33 pages, 7 figures; to appear in IEEE Transactions on Signal Processing
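    As a minimal illustration of reduced-dimension linear transform coding at a single source node, the sketch below, which is not from the paper, projects a correlated Gaussian signal onto its top-k KLT (eigenvector) directions and reconstructs with the linear MMSE estimate; the covariance and dimensions are illustrative.

```python
import numpy as np

def klt_reduced_dimension(cov, k):
    """Return the k-dimensional KLT projection and its MSE for covariance cov.

    The optimal rank-k linear transform keeps the k leading eigenvectors of
    cov; the resulting MSE equals the sum of the discarded eigenvalues.
    """
    eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    T = eigvecs[:, order[:k]].T                      # k x n projection matrix
    mse = eigvals[order[k:]].sum()
    return T, mse

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    cov = A @ A.T + np.eye(4)                        # illustrative covariance
    T, mse = klt_reduced_dimension(cov, k=2)
    x = rng.multivariate_normal(np.zeros(4), cov, size=100_000)
    y = x @ T.T                                      # reduced-dimension projection
    x_hat = y @ T                                    # MMSE reconstruction (orthonormal rows)
    print("theoretical MSE :", mse)
    print("empirical MSE   :", np.mean(np.sum((x - x_hat) ** 2, axis=1)))
```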