2,082 research outputs found

    Capacity Approximation of Continuous Channels by Discrete Inputs

    Get PDF
    In this paper, discrete approximations of the capacity are introduced where the input distribution is constrained to be discrete, in addition to any other constraints on the input. For point-to-point memoryless additive noise channels, rates of convergence to the capacity of the original channel are established for a wide range of channels for which the capacity is finite. These results are obtained by viewing discrete approximation as a capacity sensitivity problem, in which the capacity loss is studied when the parameters describing the channel are perturbed. In particular, it is shown that the discrete approximation approaches the channel capacity at rate O(∆), where ∆ is the discretization level of the approximation. Examples of channels for which this rate of convergence holds are also given, including additive Cauchy and inverse Gaussian noise channels.
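
    A rough numerical sketch of the idea (my own illustration, not the paper's construction): constrain the input of a unit-SNR AWGN channel to a ∆-spaced grid, place a sampled Gaussian pmf on the grid points, and watch the gap between the resulting mutual information and the unconstrained capacity shrink as ∆ decreases. The grid span, the choice of pmf, and the integration settings are assumptions made for this example.

        # Mutual information of an AWGN channel Y = X + N with a delta-spaced
        # discrete input, compared against the unconstrained capacity.
        import numpy as np

        def discrete_input_rate(delta, P=1.0, sigma=1.0, span=6.0):
            """I(X;Y) in nats for a delta-spaced input with a sampled Gaussian pmf."""
            x = np.arange(-span, span + delta / 2, delta)        # discrete input grid
            p = np.exp(-x**2 / (2 * P))
            p /= p.sum()                                         # sampled Gaussian pmf
            y = np.linspace(-span - 6 * sigma, span + 6 * sigma, 4001)
            dy = y[1] - y[0]
            # The output density is a Gaussian mixture centred on the grid points.
            fy = (p[:, None] * np.exp(-(y[None, :] - x[:, None])**2 / (2 * sigma**2))
                  / np.sqrt(2 * np.pi * sigma**2)).sum(axis=0)
            fy = np.clip(fy, 1e-300, None)
            h_y = -np.sum(fy * np.log(fy)) * dy                  # differential entropy h(Y)
            h_n = 0.5 * np.log(2 * np.pi * np.e * sigma**2)      # noise entropy h(N)
            return h_y - h_n                                     # I(X;Y) = h(Y) - h(N)

        capacity = 0.5 * np.log(1 + 1.0)                         # AWGN capacity, P = sigma^2 = 1
        for delta in (2.0, 1.0, 0.5, 0.25):
            gap = capacity - discrete_input_rate(delta)
            print(f"delta = {delta:4.2f}   gap to capacity = {gap:.4f} nats")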

    Information capacity in the weak-signal approximation

    Full text link
    We derive an approximate expression for the mutual information of a broad class of discrete-time stationary channels with continuous input, under the constraint of vanishing input amplitude or power. The approximation describes the input by its covariance matrix, while the channel properties are described by the Fisher information matrix. This separation of input and channel properties allows us to analyze the optimality conditions in a convenient way. We show that input correlations in memoryless channels do not affect channel capacity, since their effect decreases fast with vanishing input amplitude or power. On the other hand, for channels with memory, properly matching the input covariance to the dependence structure of the noise may lead to almost noiseless information transfer, even for intermediate values of the noise correlation. Since many model systems described in mathematical neuroscience and biophysics operate in the high-noise regime and under weak-signal conditions, we believe that the described results are also of potential interest to researchers in these areas.
    Comment: 11 pages, 4 figures; accepted for publication in Physical Review
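
    A minimal sketch of the separation the abstract describes, in the Gaussian special case (my assumption; the paper treats a much broader channel class): for Y = X + Z with correlated Gaussian noise, the Fisher information matrix is J = Σ_Z⁻¹, the exact mutual information is ½ log det(I + J Σ_X), and the weak-signal approximation keeps only the first-order term ½ tr(J Σ_X). The gap between the two vanishes as the input power ε → 0; the dimensions, noise model, and input "shape" below are arbitrary choices.

        # Exact Gaussian mutual information vs. its weak-signal (first-order)
        # approximation built from the input covariance and the Fisher matrix.
        import numpy as np

        n, rho = 8, 0.7
        idx = np.arange(n)
        Sigma_Z = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1)-style noise covariance
        J = np.linalg.inv(Sigma_Z)                             # Fisher information matrix

        rng = np.random.default_rng(0)
        A = rng.standard_normal((n, n))
        C = A @ A.T
        C *= n / np.trace(C)                                   # unit-power input "shape"

        for eps in (1.0, 0.1, 0.01, 0.001):
            Sigma_X = eps * C                                  # input covariance at power eps
            exact = 0.5 * np.linalg.slogdet(np.eye(n) + J @ Sigma_X)[1]
            approx = 0.5 * np.trace(J @ Sigma_X)
            print(f"eps = {eps:6.3f}   exact = {exact:.5f}   weak-signal = {approx:.5f}")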

    Posterior Matching Scheme for Gaussian Multiple Access Channel with Feedback

    Full text link
    Posterior matching is a method proposed by Ofer Shayevitz and Meir Feder for designing capacity-achieving coding schemes for general point-to-point memoryless channels with feedback. In this paper, we present a way to extend posterior-matching-based encoding and variable-rate decoding to the Gaussian MAC with feedback, referred to as the time-varying posterior matching scheme, and analyze the achievable rate region and error probabilities of the extended encoding-decoding scheme. The time-varying posterior matching scheme generalizes Shayevitz and Feder's posterior matching scheme to the case where the posterior distributions of the input messages given the output are not fixed over transmission time slots. It turns out that Ozarow's well-known encoding scheme, which achieves the capacity of the two-user Gaussian channel with feedback, is a special case of our extended posterior matching framework, just as the Schalkwijk-Kailath scheme is a special case of the point-to-point posterior matching mentioned above. Furthermore, the proposed posterior matching scheme also achieves the linear-feedback sum-capacity of the symmetric multiuser Gaussian MAC. In addition, the encoding scheme in this paper achieves this performance for the real Gaussian MAC, in contrast to previous approaches in which encoding schemes were designed for the complex Gaussian MAC. More importantly, this paper shows the potential of posterior matching for designing optimal coding schemes for multiuser channels with feedback.
    Comment: submitted to the IEEE Transactions on Information Theory. A shorter version has been accepted to IEEE Information Theory Workshop 201
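
    A deterministic moment-recursion sketch of an Ozarow-style linear-feedback strategy for the symmetric two-user Gaussian MAC (my own simplification, not the paper's time-varying posterior matching decoder): each user sends a scaled copy of the receiver's current estimation error of its own message, user 2 modulated by the sign of the error correlation, and the receiver applies LMMSE updates. Tracking only second moments shows the correlation magnitude settling at a value ρ* whose induced sum rate matches the linear-feedback sum capacity ½ log(1 + 2P(1 + ρ*)/N). The powers, noise variance, and iteration count are arbitrary choices.

        # Second-moment recursion of an Ozarow-style linear-feedback scheme
        # for the symmetric two-user Gaussian MAC with noiseless feedback.
        import numpy as np

        P, N = 1.0, 1.0              # per-user power and noise variance
        rho = 0.0                    # correlation between the two estimation errors
        for _ in range(200):
            s = 1.0 if rho >= 0 else -1.0
            # Fraction of each error variance removed by one LMMSE update.
            beta = P * (1 + abs(rho))**2 / (2 * P * (1 + abs(rho)) + N)
            rho = (rho - s * beta) / (1 - beta)        # correlation after the update

        per_user = -0.5 * np.log(1 - beta)                      # steady-state rate per user (nats/use)
        sum_cap = 0.5 * np.log(1 + 2 * P * (1 + abs(rho)) / N)  # linear-feedback sum capacity
        no_fb = 0.5 * np.log(1 + 2 * P / N)                     # sum capacity without feedback
        print(f"rho* = {abs(rho):.4f}")
        print(f"sum rate = {2 * per_user:.4f} nats/use "
              f"(formula: {sum_cap:.4f}, no feedback: {no_fb:.4f})")

    At the fixed point the two printed sum-rate values coincide and exceed the no-feedback sum capacity, which is the coherent-combining gain that feedback provides in this setting.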

    Optimal Feedback Communication via Posterior Matching

    Full text link
    In this paper we introduce a fundamental principle for optimal communication over general memoryless channels in the presence of noiseless feedback, termed posterior matching. Using this principle, we devise a (simple, sequential) generic feedback transmission scheme suitable for a large class of memoryless channels and input distributions, achieving any rate below the corresponding mutual information. This provides a unified framework for optimal feedback communication in which the Horstein scheme (BSC) and the Schalkwijk-Kailath scheme (AWGN channel) are special cases. Thus, as a corollary, we prove that the Horstein scheme indeed attains the BSC capacity, settling a longstanding conjecture. We further provide closed-form expressions for the error probability of the scheme over a range of rates, and derive the achievable rates in a mismatch setting where the scheme is designed according to the wrong channel model. Several illustrative examples of the posterior matching scheme for specific channels are given, and the corresponding error probability expressions are evaluated. The proof techniques employed utilize novel relations between information rates and contraction properties of iterated function systems.
    Comment: IEEE Transactions on Information Theory
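
    A grid-based sketch of the Horstein / posterior matching rule for a BSC(p) (my own illustration; the crossover probability, grid resolution, and run lengths are arbitrary): the message is a point θ in [0,1), the transmitter sends 1 exactly when θ lies above the receiver's posterior median, which it knows through the noiseless feedback, and the receiver reweights the two halves of its posterior by the channel likelihoods. The log of the posterior density at the true θ should then grow at roughly the capacity C = 1 − h(p) bits per channel use, with h the binary entropy.

        # Horstein / posterior matching over a BSC(p), posterior kept on a grid.
        import numpy as np

        p = 0.1                                              # crossover probability
        C = 1 + p * np.log2(p) + (1 - p) * np.log2(1 - p)    # BSC capacity, bits/use
        M = 1 << 22                                          # grid cells over [0, 1)
        n_uses, n_runs = 22, 10
        rng = np.random.default_rng(1)

        rates = []
        for _ in range(n_runs):
            theta_idx = rng.integers(M)                # grid cell of the message point
            f = np.full(M, 1.0 / M)                    # uniform prior over the grid
            for _ in range(n_uses):
                med = int(np.searchsorted(np.cumsum(f), 0.5))  # posterior median cell
                x = 1 if theta_idx >= med else 0               # Horstein / posterior matching rule
                y = x ^ int(rng.random() < p)                  # BSC output
                w_hi = 1 - p if y == 1 else p                  # likelihood of the upper half
                f[:med] *= 1 - w_hi
                f[med:] *= w_hi
                f /= f.sum()
            rates.append(np.log2(f[theta_idx] * M) / n_uses)   # growth rate of density at theta

        print(f"empirical growth rate ~ {np.mean(rates):.3f} bits/use (capacity C = {C:.3f})")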