
    Capacity of a Simple Intercellular Signal Transduction Channel

    We model biochemical signal transduction, based on a ligand-receptor binding mechanism, as a discrete-time finite-state Markov channel, which we call the BIND channel. We show how to obtain the capacity of this channel for the case of binary output, binary channel state, and arbitrary finite input alphabets. We show that the capacity-achieving input distribution is IID. Further, we show that feedback does not increase the capacity of this channel. We show how the capacity of the discrete-time channel approaches the capacity of Kabanov's Poisson channel in the limit of short time steps and rapid ligand release. Comment: Accepted for publication in IEEE Transactions on Information Theory.
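    As a rough illustration of the kind of model described above, the sketch below simulates a two-state (bound/unbound) receptor driven by a binary ligand-concentration input, with the receptor state read out as the channel output. The binding and release probabilities, the binary input alphabet, and all other parameters are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_bind_channel(inputs, p_bind_high=0.6, p_bind_low=0.1, p_release=0.3):
    """Toy two-state (bound/unbound) receptor channel.

    inputs: sequence of 0/1 ligand-concentration symbols.
    When unbound, the receptor binds with a probability that depends on the input
    symbol; when bound, it releases with probability p_release. The channel
    output at each time step is the receptor state (1 = bound).
    """
    state = 0  # 0 = unbound, 1 = bound
    outputs = []
    for x in inputs:
        if state == 0:
            p = p_bind_high if x == 1 else p_bind_low
            state = 1 if rng.random() < p else 0
        else:
            state = 0 if rng.random() < p_release else 1
        outputs.append(state)
    return np.array(outputs)

x = rng.integers(0, 2, size=20)   # i.i.d. binary input (the paper shows IID inputs are capacity-achieving)
y = simulate_bind_channel(x)
print(x)
print(y)
```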

    On the Capacity of the Discrete-Time Poisson Channel


    Information-theoretic analysis of a family of additive energy channels

    This dissertation studies a new family of channel models for non-coherent communications, the additive energy channels. By construction, the additive energy channels occupy an intermediate region between two widely used channel models: the discrete-time Gaussian channel, used to represent coherent communication systems operating at radio and microwave frequencies, and the discrete-time Poisson channel, which often appears in the analysis of intensity-modulated systems working at optical frequencies. The additive energy channels share with the Gaussian channel the additivity between a useful signal and a noise component. However, the signal and noise components are not complex-valued quadrature amplitudes but, as in the Poisson channel, non-negative real numbers, the energy or squared modulus of the complex amplitude.

    The additive energy channels come in two variants, depending on whether the channel output is discrete or continuous. In the former case, the energy is a multiple of a fundamental unit, the quantum of energy, whereas in the second the value of the energy can take on any non-negative real number. For continuous output the additive noise has an exponential density, as for the energy of a sample of complex Gaussian noise. For discrete, or quantized, energy the signal component is randomly distributed according to a Poisson distribution whose mean is the signal energy of the corresponding Gaussian channel; part of the total noise at the channel output is thus a signal-dependent, Poisson noise component. Moreover, the additive noise has a geometric distribution, the discrete counterpart of the exponential density.

    Contrary to the common engineering wisdom that not using the quadrature amplitude incurs a significant performance penalty, it is shown in this dissertation that the capacity of the additive energy channels essentially coincides with that of a coherent Gaussian model under a broad set of circumstances. Moreover, common modulation and coding techniques for the Gaussian channel often admit a natural extension to the additive energy channels, and their performance frequently parallels that of the Gaussian channel methods.

    Four information-theoretic quantities, covering both theoretical and practical aspects of the reliable transmission of information, are studied: the channel capacity, the minimum energy per bit, the constrained capacity when a given digital modulation format is used, and the pairwise error probability. Of these quantities, the channel capacity sets a fundamental limit on the transmission capabilities of the channel but is sometimes difficult to determine. The minimum energy per bit (or its inverse, the capacity per unit cost), on the other hand, turns out to be easier to determine, and may be used to analyze the performance of systems operating at low levels of signal energy. Closer to a practical figure of merit is the constrained capacity, which estimates the largest amount of information which can be transmitted by using a specific digital modulation format. Its study is complemented by the computation of the pairwise error probability, an effective tool to estimate the performance of practical coded communication systems.

    Regarding the channel capacity, the capacity of the continuous additive energy channel is found to coincide with that of a Gaussian channel with identical signal-to-noise ratio. Also, an upper bound, the tightest known, to the capacity of the discrete-time Poisson channel is derived. The capacity of the quantized additive energy channel is shown to have two distinct functional forms: if additive noise is dominant, the capacity is close to that of the continuous channel with the same energy and noise levels; when Poisson noise prevails, the capacity is similar to that of a discrete-time Poisson channel with no additive noise. An analogy with radiation channels of arbitrary frequency, for which the quanta of energy are photons, is presented. Additive noise is found to be dominant when the frequency is low and, simultaneously, the signal-to-noise ratio lies below a threshold; the value of this threshold is well approximated by the expected number of quanta of additive noise.

    As for the minimum energy per nat (1 nat is log2 e bits, or about 1.4427 bits), it equals the average energy of the additive noise component for all the studied channel models. A similar result was previously known to hold for two particular cases, namely the discrete-time Gaussian and Poisson channels.

    An extension of digital modulation methods from the Gaussian channel to the additive energy channels is presented, and their constrained capacity determined. Special attention is paid to the asymptotic form of the capacity at low and high levels of signal energy. In contrast to the behaviour in the Gaussian channel, arbitrary modulation formats do not achieve the minimum energy per bit at low signal energy. Analytic expressions for the constrained capacity at low signal energy levels are provided. In the high-energy limit, simple pulse-energy modulations, which achieve a larger constrained capacity than their counterparts for the Gaussian channel, are presented.

    As a final element, the error probability of binary channel codes in the additive energy channels is studied by analyzing the pairwise error probability, the probability of a wrong decision between two alternative binary codewords. Saddlepoint approximations to the pairwise error probability are given, both for binary modulation and for bit-interleaved coded modulation, a simple and efficient method to use binary codes with non-binary modulations. The methods yield new simple approximations to the error probability in the fading Gaussian channel. The error rates in the continuous additive energy channel are close to those of coherent transmission at identical signal-to-noise ratio. Constellations minimizing the pairwise error probability in the additive energy channels are presented, and their form is compared to that of the constellations which maximize the constrained capacity at high signal energy levels.
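    A minimal sampling sketch of the two channel variants described above: the continuous one (signal energy plus exponential additive noise) and the quantized one (Poisson-distributed signal component plus geometric additive noise). The unit-mean noise level and the signal energy used here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def continuous_additive_energy(x, noise_mean=1.0, size=None):
    # Output energy = signal energy + exponential noise (energy of a complex Gaussian noise sample).
    return x + rng.exponential(noise_mean, size=size)

def quantized_additive_energy(x, noise_mean=1.0, size=None):
    # Signal part: Poisson with mean equal to the signal energy (in quanta).
    # Additive part: geometric noise, the discrete counterpart of the exponential density.
    p = 1.0 / (1.0 + noise_mean)              # success prob.; the shifted geometric below then has mean noise_mean
    signal = rng.poisson(x, size=size)
    noise = rng.geometric(p, size=size) - 1   # numpy's geometric has support {1,2,...}; shift to {0,1,...}
    return signal + noise

x = 4.0   # signal energy (illustrative)
print(continuous_additive_energy(x, size=5))
print(quantized_additive_energy(x, size=5))
```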

    Poisson noise channel with dark current: Numerical computation of the optimal input distribution

    This paper considers a discrete-time Poisson noise channel which is used to model pulse-amplitude modulated optical communication with a direct-detection receiver. The goal of this paper is to obtain insights into the capacity and the structure of the capacity-achieving distribution for the channel under the amplitude constraint A and in the presence of dark current λ. Using recent theoretical progress on the structure of the capacity-achieving distribution, this paper develops a numerical algorithm, based on the gradient ascent and Blahut-Arimoto algorithms, for computing the capacity and the capacity-achieving distribution. The algorithm is used to perform extensive numerical simulations for various regimes of A and λ. Comment: Submitted to IEEE ICC 2022. This is a companion paper of: A. Dytso, L. Barletta and S. Shamai Shitz, "Properties of the Support of the Capacity-Achieving Distribution of the Amplitude-Constrained Poisson Noise Channel," in IEEE Transactions on Information Theory, vol. 67, no. 11, pp. 7050-7066, Nov. 2021.
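    The following is a minimal sketch in the spirit of the numerical approach described above: a plain Blahut-Arimoto iteration over a fixed grid of candidate amplitudes in [0, A] for a Poisson channel with dark current λ. The paper's algorithm additionally uses gradient ascent to move the mass-point locations; here the grid, the truncation of the output alphabet, and the values of A and λ are illustrative assumptions:

```python
import numpy as np
from scipy.stats import poisson

A, lam = 10.0, 1.0                    # amplitude constraint and dark current (illustrative values)
x = np.linspace(0.0, A, 41)           # candidate input mass points on [0, A]
y = np.arange(0, 80)                  # truncated output alphabet
W = poisson.pmf(y[None, :], x[:, None] + lam)   # channel law W(y|x) = Poisson(x + lambda)
W /= W.sum(axis=1, keepdims=True)     # renormalize after truncation

p = np.full(len(x), 1.0 / len(x))     # start from the uniform input distribution
for _ in range(500):                  # Blahut-Arimoto iterations (probabilities only, grid fixed)
    q = p @ W                                          # induced output distribution
    D = np.sum(W * np.log(W / q[None, :]), axis=1)     # KL divergence D(W(.|x) || q) per mass point
    p *= np.exp(D)
    p /= p.sum()

capacity_nats = float(p @ np.sum(W * np.log(W / (p @ W)[None, :]), axis=1))
print("estimated capacity: %.4f nats/use" % capacity_nats)
print("support (mass > 1e-3):", x[p > 1e-3])
```

    The printed support points illustrate the discreteness of the optimal input distribution that the paper studies; with gradient ascent on the point locations, as in the paper, the support would be refined beyond this fixed grid.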

    On the performance of machine-type communications networks under Markovian arrival sources

    Abstract. This thesis evaluates the reliability and latency performance of machine-type communication networks composed of a single transmitter and receiver in the presence of a Rayleigh fading channel. The source's traffic arrivals are modeled as Markovian processes, namely the Discrete-Time Markov process, the Fluid Markov process, the Discrete-Time Markov Modulated Poisson process and the Continuous-Time Markov Modulated Poisson process, and delay/buffer overflow constraints are imposed. Our approach is based on the reliability and latency outage probability: since the transmitter does not know the channel condition, it transmits information at a fixed rate. The fixed-rate transmission is modeled as a two-state Discrete-Time Markov process, which captures the reliability level of the wireless transmission. Using effective bandwidth and effective capacity theories, we evaluate the trade-off between reliability and latency and identify the QoS requirements. The impact of different source traffic originating from MTC devices under QoS constraints on the effective transmission rate is investigated.
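    A minimal sketch of the fixed-rate/Rayleigh-fading ingredient of such a setup: a transmission succeeds when the instantaneous capacity exceeds the fixed rate, and the effective capacity of the resulting ON-OFF service process is evaluated for a given QoS exponent. The i.i.d. block-fading simplification, the unit-mean channel gain, and all numeric values are illustrative assumptions, not the thesis's exact model:

```python
import numpy as np

snr = 10.0            # average SNR (linear), illustrative
r = 2.0               # fixed transmission rate, bits per block
theta = 0.5           # QoS exponent (larger = stricter delay/buffer constraint)

# Rayleigh fading: channel power gain gamma ~ Exp(1).
# Decoding fails (outage) when log2(1 + gamma*snr) < r.
eps = 1.0 - np.exp(-(2.0**r - 1.0) / snr)

# Effective capacity of the resulting ON-OFF service process, assuming i.i.d. block fading
# (the two-state chain then has i.i.d. states):
#   EC(theta) = -(1/theta) * ln( eps + (1 - eps) * exp(-theta*r) )
ec = -np.log(eps + (1.0 - eps) * np.exp(-theta * r)) / theta

print("outage probability:  %.4f" % eps)
print("effective capacity:  %.4f bits per block" % ec)
```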

    A refined analysis of the Poisson channel in the high-photon-efficiency regime

    We study the discrete-time Poisson channel under the constraint that its average input power (in photons per channel use) must not exceed some constant E. We consider the wideband, high-photon-efficiency extreme where E approaches zero, and where the channel's "dark current" approaches zero proportionally with E. Improving over a previously obtained first-order capacity approximation, we derive a refined approximation, which includes the exact characterization of the second-order term, as well as an asymptotic characterization of the third-order term with respect to the dark current. We also show that pulse-position modulation is nearly optimal in this regime. Comment: Revised version to appear in IEEE Transactions on Information Theory.
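    As a rough numerical illustration of this regime, the sketch below evaluates the mutual information of on-off signalling (a simple stand-in for pulse-position modulation) on the discrete-time Poisson channel and compares it with the first-order term E log(1/E). The zero dark current, the grid over pulse amplitudes, and the output truncation are illustrative assumptions; the slowly closing gap to the first-order term is the kind of behaviour the refined higher-order analysis quantifies:

```python
import numpy as np
from scipy.stats import poisson

def mi_onoff(E, A, y_max=200):
    """Mutual information (nats/use) of on-off signalling over the discrete-time
    Poisson channel with zero dark current: X = A with prob. E/A, else X = 0."""
    p = E / A
    y = np.arange(y_max)
    w_on = poisson.pmf(y, A)
    w_off = np.zeros(y_max)
    w_off[0] = 1.0                           # Poisson(0): all mass at y = 0
    q = p * w_on + (1.0 - p) * w_off         # induced output distribution
    def rel_ent(w):
        m = w > 0
        return float(np.sum(w[m] * np.log(w[m] / q[m])))
    return p * rel_ent(w_on) + (1.0 - p) * rel_ent(w_off)

for E in (1e-2, 1e-3, 1e-4, 1e-5):
    best = max(mi_onoff(E, A) for A in np.linspace(0.05, 5.0, 100))   # coarse search over pulse amplitude
    first_order = E * np.log(1.0 / E)
    print("E=%.0e  best on-off I=%.3e  E*log(1/E)=%.3e  ratio=%.2f"
          % (E, best, first_order, best / first_order))
```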

    Identification Capacity of the Discrete-Time Poisson Channel

    Numerous applications in the field of molecular communications (MC), such as healthcare systems, are often event-driven. The conventional Shannon capacity may not be the appropriate metric for assessing performance in such cases. We propose the identification (ID) capacity as an alternative metric. In particular, we consider randomized identification (RI) over the discrete-time Poisson channel (DTPC), which is typically used as a model for MC systems that employ molecule-counting receivers. In the ID paradigm, the receiver is not interested in decoding the transmitted message; rather, it wants to determine whether a message of particular significance to it has been sent or not. In contrast to Shannon transmission codes, the size of ID codes for a Discrete Memoryless Channel (DMC) grows doubly exponentially fast with the blocklength if randomized encoding is used. In this paper, we derive the capacity formula for RI over the DTPC subject to peak and average power constraints. Furthermore, we analyze the case of the state-dependent DTPC.
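    To put the doubly exponential growth mentioned above in perspective, the toy computation below compares the number of messages of a Shannon transmission code (about 2^(nR)) with the size of a randomized ID code (about 2^(2^(nR))), reported via its number of decimal digits; the blocklengths and the rate are illustrative:

```python
from math import log10

R = 0.5                                    # illustrative rate, bits per channel use
for n in (10, 20, 40):
    shannon = 2 ** (n * R)                 # transmission codes: about 2^(nR) messages
    id_digits = (2 ** (n * R)) * log10(2)  # ID codes: about 2^(2^(nR)) messages, i.e. this many decimal digits
    print("n=%2d  Shannon messages ~ %.0f,  ID messages ~ 10^%.0f" % (n, shannon, id_digits))
```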