
    Kinetic model based on molecular mechanism for action potential

    The Hodgkin-Huxley model for action potentials has been widely used, but it was not built on a microscopic description of the neuronal membrane. Through molecular dynamics simulations, the molecular mechanism of the channel currents is becoming clear. However, the quantitative link between the molecular mechanism and the action potential remains to be elucidated. Here, a kinetic model for the action potential based on the molecular mechanism of the channel currents is proposed. Using it, the experimental observations of the action potential are reproduced quantitatively and explained in terms of the molecular mechanism. We find that the accumulation of Na+ ions near the exit of the selectivity filter is the dominant event causing the refractory period of the Na+ channel, and that the type of the channel current depends on the channel's rate constants. The channel inductance represents the inertia of the channel in retaining a certain ion-binding state, while the channel resistances include those against state transition and charge transfer.
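
    The abstract does not give the model's state diagram or rate constants, so the following is only a minimal sketch of the general idea in Python: a hypothetical three-state kinetic (Markov) scheme for a Na+ channel coupled to a membrane voltage equation. All states, rate expressions, and parameter values below are illustrative placeholders, not the authors' model.

        import numpy as np

        # Hypothetical three-state kinetic scheme: C (closed) <-> O (open) -> I (inactivated) -> C.
        # The voltage-dependent rates are illustrative placeholders, not fitted values.
        def rates(V):
            a_co = 0.5 * np.exp(V / 20.0)   # C -> O opening rate (1/ms)
            a_oc = 0.1 * np.exp(-V / 20.0)  # O -> C closing rate (1/ms)
            a_oi = 0.3                      # O -> I inactivation rate (1/ms)
            a_ic = 0.05                     # I -> C recovery rate (1/ms)
            return a_co, a_oc, a_oi, a_ic

        C_m, g_Na, E_Na, g_L, E_L = 1.0, 120.0, 50.0, 0.3, -54.4  # membrane parameters
        dt, T = 0.005, 30.0                                       # time step and duration (ms)
        V, pC, pO, pI = -65.0, 1.0, 0.0, 0.0                      # initial voltage and state occupancies

        for step in range(int(T / dt)):
            I_ext = 10.0 if 5.0 <= step * dt <= 6.0 else 0.0      # brief stimulus pulse (uA/cm^2)
            a_co, a_oc, a_oi, a_ic = rates(V)
            # Master-equation update of the channel-state occupancies.
            dC = a_oc * pO + a_ic * pI - a_co * pC
            dO = a_co * pC - (a_oc + a_oi) * pO
            dI = a_oi * pO - a_ic * pI
            # Membrane equation: only the open state carries Na+ current.
            dV = (I_ext - g_Na * pO * (V - E_Na) - g_L * (V - E_L)) / C_m
            pC, pO, pI, V = pC + dt * dC, pO + dt * dO, pI + dt * dI, V + dt * dV

        print(f"final V = {V:.1f} mV, open fraction = {pO:.3f}")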

    Non-symmetric Jacobi and Wilson type polynomials

    Consider a root system of type $BC_1$ on the real line $\mathbb{R}$ with general positive multiplicities. The Cherednik-Opdam transform defines a unitary operator from an $L^2$-space on $\mathbb{R}$ to an $L^2$-space of $\mathbb{C}^2$-valued functions on $\mathbb{R}^+$ with the Harish-Chandra measure $|c(\lambda)|^{-2}d\lambda$. By introducing a weight function of the form $\cosh^{-\sigma}(t)\tanh^{2k}(t)$ on $\mathbb{R}$, we find an orthogonal basis for the $L^2$-space on $\mathbb{R}$ consisting of even and odd functions expressed in terms of the Jacobi polynomials (for each fixed $\sigma$ and $k$). We find a Rodrigues-type formula for these functions in terms of the Cherednik operator and compute their Cherednik-Opdam transforms explicitly. We thus discover a new family of $\mathbb{C}^2$-valued orthogonal polynomials. In the special case $k=0$ the even polynomials become Wilson polynomials, and the corresponding result was proved earlier by Koornwinder.
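
    For background on the building blocks named above (textbook material, not a result of the paper), the classical Jacobi polynomials satisfy the orthogonality relation

        \[
          \int_{-1}^{1} P_m^{(\alpha,\beta)}(x)\, P_n^{(\alpha,\beta)}(x)\,(1-x)^{\alpha}(1+x)^{\beta}\,dx
          \;=\; \frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}\,
                \frac{\Gamma(n+\alpha+1)\,\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)\,n!}\;\delta_{mn},
          \qquad \alpha,\beta > -1,
        \]

    and the even and odd basis functions described in the abstract are built from such polynomials under the $\cosh^{-\sigma}(t)\tanh^{2k}(t)$ weight on $\mathbb{R}$.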

    A Coordinate System for Gaussian Networks

    This paper studies network information theory problems where the external noise is Gaussian distributed. In particular, the Gaussian broadcast channel with coherent fading and the Gaussian interference channel are investigated. It is shown that in these problems, non-Gaussian code ensembles can achieve higher rates than Gaussian ones. It is also shown that the strong Shamai-Laroia conjecture on the Gaussian ISI channel does not hold. In order to analyze non-Gaussian code ensembles over Gaussian networks, a geometrical tool using the Hermite polynomials is proposed. This tool provides a coordinate system to analyze a class of non-Gaussian input distributions that are invariant over Gaussian networks.
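
    A minimal numerical sketch of the kind of Hermite "coordinate system" this suggests (the paper's exact construction may differ): the probabilists' Hermite polynomials He_n are orthogonal under the standard Gaussian law, with E[He_m(X) He_n(X)] = n! when m = n and 0 otherwise, so a small non-Gaussian perturbation of a Gaussian input can be expanded in this basis and read off coordinate by coordinate. The perturbation used below is hypothetical.

        import numpy as np
        from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials He_n
        from math import factorial

        rng = np.random.default_rng(0)
        X = rng.standard_normal(1_000_000)             # samples from the reference Gaussian N(0, 1)

        def he(n, x):
            c = np.zeros(n + 1); c[n] = 1.0
            return He.hermeval(x, c)                   # evaluate He_n at x

        # Monte Carlo check of orthogonality: E[He_m(X) He_n(X)] = n! * delta_{mn}.
        for m in range(4):
            row = [np.mean(he(m, X) * he(n, X)) for n in range(4)]
            print([f"{v:7.3f}" for v in row], " expected:", [factorial(n) if n == m else 0 for n in range(4)])

        # Hermite "coordinates" of a slightly non-Gaussian input Y = X + eps*(X^2 - 1):
        eps = 0.05
        Y = X + eps * (X**2 - 1.0)
        coords = [np.mean(he(n, Y)) / factorial(n) for n in range(1, 5)]
        print("Hermite coordinates of the perturbed input:", [f"{c:.4f}" for c in coords])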

    Achievability of Nonlinear Degrees of Freedom in Correlatively Changing Fading Channels

    A new approach toward noncoherent communication over time-varying fading channels is presented. In this approach, the relationship between the input signal space and the output signal space of a correlatively changing fading channel is shown to be a nonlinear mapping between manifolds of different dimensions. By studying this mapping, it is shown that using nonlinear decoding algorithms for single-input multiple-output (SIMO) and multiple-input multiple-output (MIMO) systems, extra degrees of freedom (DOF) become available. We call these the nonlinear degrees of freedom.

    Writing on Fading Paper and Causal Transmitter CSI

    A wideband fading channel is considered with causal channel state information (CSI) at the transmitter and no receiver CSI. A simple orthogonal code with an energy detection rule at the receiver (similar to [6]) is shown to achieve the capacity of this channel in the limit of large bandwidth. This code transmits energy only when the channel gain is large enough. In this limit, the capacity without any receiver CSI is the same as the capacity with full receiver CSI, a phenomenon that also holds for dirty paper coding. For Rayleigh fading, this capacity (per unit time) is proportional to the logarithm of the bandwidth. Our coding scheme is motivated by Gel'fand-Pinsker coding [2,3] and dirty paper coding [4]. Nonetheless, in our case only causal CSI is required at the transmitter, in contrast with dirty paper coding and Gel'fand-Pinsker coding, where non-causal CSI is required. We then consider a general discrete channel with i.i.d. states. Each input has an associated cost, and a zero-cost input "0" exists. The channel state is assumed to be known at the transmitter in a causal manner. The capacity per unit cost is found for this channel, and a simple orthogonal code is shown to achieve it. Later, a novel orthogonal coding scheme is proposed for the case of causal transmitter CSI, and a condition for equivalence of the capacity per unit cost under causal and non-causal transmitter CSI is derived. Finally, some connections are made to the case of non-causal transmitter CSI in [8].
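
    A toy simulation in the spirit of the scheme described above, not the paper's actual code or parameters: the transmitter, knowing the fading gain causally, spends its energy in its assigned slot only when the instantaneous gain exceeds a threshold, and the receiver, with no CSI, simply picks the slot with the largest received energy. The slot count, threshold, and energy below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)
        n_slots, n_trials = 64, 2000        # orthogonal slots per codeword; Monte Carlo trials
        threshold, energy = 0.5, 40.0       # hypothetical gain threshold and transmit energy

        errors, transmitted = 0, 0
        for _ in range(n_trials):
            msg = rng.integers(n_slots)                  # message = index of the "on" slot
            h = rng.rayleigh(scale=1.0, size=n_slots)    # per-slot Rayleigh fading gains
            x = np.zeros(n_slots)
            # Causal transmitter CSI: spend energy only if the gain in the message slot is large enough.
            if h[msg] > threshold:
                x[msg] = np.sqrt(energy)
                transmitted += 1
            y = h * x + rng.standard_normal(n_slots)     # received signal: fading + unit-variance noise
            if np.argmax(y**2) != msg:                   # energy-detection rule: pick the strongest slot
                errors += 1

        print(f"transmitted fraction: {transmitted / n_trials:.2f}, slot error rate: {errors / n_trials:.2f}")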

    The Linear Information Coupling Problems

    Many network information theory problems face a similar difficulty of single-letterization. We argue that this is due to the lack of a geometric structure on the space of probability distributions. In this paper, we develop such a structure by assuming that the distributions of interest are close to each other. Under this assumption, the K-L divergence reduces to the squared Euclidean metric in a Euclidean space. In addition, we construct notions of coordinates and inner products, which facilitate solving communication problems. We present the application of this approach to the point-to-point channel, the general broadcast channel, and the multiple access channel (MAC) with a common source. It can be shown that with this approach, information theory problems such as single-letterization can be reduced to linear algebra problems. Moreover, we show that for the general broadcast channel, transmitting the common message to the receivers can be formulated as a trade-off between linear systems. We also provide an example to visualize this trade-off in a geometric way. Finally, for the MAC with a common source, we observe a coherent combining gain due to the cooperation between transmitters, and this gain can be quantified by applying our technique.
    Comment: 27 pages, submitted to IEEE Transactions on Information Theory
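
    A quick numerical illustration of the local geometry invoked here, with a generic distribution rather than any example from the paper: for Q = P + eps*phi close to P (with the entries of phi summing to zero), the K-L divergence satisfies D(Q || P) ≈ (eps^2/2) * sum_i phi_i^2 / P_i to second order, i.e. a squared Euclidean norm in the rescaled coordinates phi_i / sqrt(P_i).

        import numpy as np

        rng = np.random.default_rng(2)
        P = rng.dirichlet(np.ones(6))              # a generic reference distribution on 6 symbols
        phi = rng.standard_normal(6)
        phi -= phi.mean()                          # perturbation direction must sum to zero
        phi *= 0.5 * P.min() / np.abs(phi).max()   # scale so that P + eps*phi stays a valid distribution

        def kl(q, p):                              # D(q || p) in nats
            return float(np.sum(q * np.log(q / p)))

        # As eps -> 0 the ratio of the KL divergence to the quadratic approximation tends to 1.
        for eps in [1.0, 0.5, 0.1, 0.01]:
            Q = P + eps * phi
            approx = 0.5 * eps**2 * np.sum(phi**2 / P)
            print(f"eps={eps:5.2f}  KL={kl(Q, P):.3e}  quadratic={approx:.3e}  ratio={kl(Q, P) / approx:.4f}")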

    Linear Information Coupling Problems

    Many network information theory problems face a similar difficulty of single-letterization. We argue that this is due to the lack of a geometric structure on the space of probability distributions. In this paper, we develop such a structure by assuming that the distributions of interest are close to each other. Under this assumption, the K-L divergence reduces to the squared Euclidean metric in a Euclidean space. Moreover, we construct notions of coordinates and inner products, which facilitate solving communication problems. We also present the application of this approach to the point-to-point channel and the general broadcast channel, which demonstrates how our technique simplifies information theory problems.
    Comment: To appear, IEEE International Symposium on Information Theory, July, 201

    Fundamental Limits of Communication with Low Probability of Detection

    This paper considers the problem of communication over a discrete memoryless channel (DMC) or an additive white Gaussian noise (AWGN) channel subject to the constraint that the probability that an adversary who observes the channel outputs can detect the communication is low. Specifically, the relative entropy between the output distributions when a codeword is transmitted and when no input is provided to the channel must be sufficiently small. For a DMC whose output distribution induced by the "off" input symbol is not a mixture of the output distributions induced by the other input symbols, it is shown that the maximum amount of information that can be transmitted under this criterion scales like the square root of the blocklength. The same is true for the AWGN channel. Exact expressions for the scaling constant are also derived.
    Comment: Version to appear in IEEE Transactions on Information Theory; minor typos in v2 corrected. Part of this work was presented at ISIT 2015 in Hong Kong.
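
    A back-of-the-envelope numerical illustration of the square-root scaling for the AWGN case, using Gaussian signaling as a heuristic rather than the paper's proof or exact constant: with average per-symbol power P over n uses of a unit-noise AWGN channel, the adversary's total relative entropy is n*D(N(0,1+P) || N(0,1)) ≈ n*P^2/4 for small P, so keeping it below a fixed budget forces P to shrink like 1/sqrt(n), and the achievable information (n/2)*log(1+P) then grows only like sqrt(n). The budget value below is arbitrary.

        import numpy as np

        delta = 0.1   # hypothetical relative-entropy budget for the adversary, in nats

        def kl_gauss(p):
            return 0.5 * (p - np.log1p(p))         # D( N(0, 1+p) || N(0, 1) ) in nats

        for n in [10**3, 10**4, 10**5, 10**6]:
            # Largest per-symbol power P with n * D(N(0,1+P) || N(0,1)) <= delta, found by bisection.
            lo, hi = 0.0, 10.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if n * kl_gauss(mid) <= delta else (lo, mid)
            P = lo
            nats = 0.5 * n * np.log1p(P)           # Gaussian-input mutual information over n uses
            print(f"n={n:>8}  P={P:.2e}  total nats={nats:8.1f}  nats/sqrt(n)={nats / np.sqrt(n):.3f}")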