
    Finite-State Channels with Feedback and State Known at the Encoder

    We consider finite-state channels (FSCs) with feedback and state information known causally at the encoder. This setting is quite general and includes, as special cases, a memoryless channel with i.i.d. state (the Shannon strategy), Markovian states with look-ahead (LA) access to the state, and energy harvesting. We characterize the feedback capacity of the general setting as the directed information between auxiliary random variables with memory to the channel outputs. We also propose two methods for computing the feedback capacity: (i) formulating an infinite-horizon average-reward dynamic program; and (ii) a single-letter lower bound based on auxiliary directed graphs called Q-graphs. We demonstrate these computation methods on several examples. In the first example, we introduce a channel with LA and derive a closed-form, analytic lower bound on its feedback capacity. Furthermore, we show that these methods achieve the feedback capacity of known unifilar FSCs such as the trapdoor channel, the Ising channel, and the input-constrained erasure channel. Finally, we analyze the feedback capacity of a channel whose state depends stochastically on the input.
    Comment: 39 pages, 10 figures. The material in this paper was presented in part at the 56th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, October 2018, and at the IEEE International Symposium on Information Theory, Los Angeles, CA, USA, June 2020.
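
    As a concrete illustration of the memoryless special case named above (the Shannon strategy), here is a minimal sketch, not the paper's algorithm: with i.i.d. state known causally at the encoder, coding over strategy letters t: state -> input reduces the problem to an ordinary discrete memoryless channel, whose capacity Blahut-Arimoto computes. The state-dependent BSC and the parameters q and eps are illustrative assumptions.

```python
import numpy as np
from itertools import product

q = 0.5                      # P(S = 1), assumed for illustration
eps = [0.1, 0.4]             # BSC crossover probability in states 0, 1

def W(y, x, s):              # channel law P(y | x, s)
    return 1 - eps[s] if y == x else eps[s]

# Enumerate strategy letters t = (t(0), t(1)) and the induced DMC:
# P(y | t) = sum_s P(s) * W(y | t(s), s).
strategies = list(product([0, 1], repeat=2))
P = np.array([[sum((q if s else 1 - q) * W(y, t[s], s) for s in (0, 1))
               for y in (0, 1)] for t in strategies])    # P[t, y]

# Blahut-Arimoto on the induced DMC over strategy letters.
p = np.full(len(strategies), 1 / len(strategies))
for _ in range(2000):
    qy = p @ P                                   # output distribution
    D = np.sum(P * np.log(P / qy), axis=1)       # KL(P(.|t) || qy)
    p *= np.exp(D)
    p /= p.sum()

qy = p @ P
D = np.sum(P * np.log(P / qy), axis=1)
print("Shannon-strategy capacity ~= %.4f nats" % (p @ D))
```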

    Upper Bounds on the Capacities of Noncontrollable Finite-State Channels with/without Feedback

    Noncontrollable finite-state channels (FSCs) are FSCs in which the channel inputs have no influence on the channel states; that is, the channel states evolve freely. Since single-letter formulae for the channel capacities are rarely available for general noncontrollable FSCs, computable bounds are usually used to bound the capacities numerically. In this paper, we take the delayed channel state as part of the channel input and then define the directed information rate from the new channel input sequence (comprising the source and the delayed channel state) to the channel output sequence. With this technique, we derive a series of upper bounds on the capacities of noncontrollable FSCs with and without feedback. These upper bounds can be achieved by conditional Markov sources and computed by solving an average-reward-per-stage stochastic control problem (ARSCP) with a compact state space and a compact action space. By showing that the ARSCP has a uniformly continuous reward function, we transform the original ARSCP into a finite-state, finite-action ARSCP that can be solved by a value iteration method. Under a mild assumption, the value iteration algorithm is convergent and delivers a near-optimal stationary policy and a numerical upper bound.
    Comment: 15 pages, two columns, 6 figures; appears in IEEE Transactions on Information Theory.
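
    At its core, the value-iteration step invoked above is relative value iteration on a finite-state, finite-action average-reward problem. Below is a minimal generic sketch of that solver; the random kernel P and reward r are placeholders, not the channel-derived model from the paper, and convergence assumes the usual unichain/aperiodicity conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 6, 3
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'], placeholder
r = rng.uniform(size=(nS, nA))                   # r[s, a], placeholder

h = np.zeros(nS)                                 # relative value function
for _ in range(10_000):
    Q = r + P @ h                                # one-stage lookahead, Q[s, a]
    h_new = Q.max(axis=1)
    rho = h_new[0]                               # average-reward estimate
    h_new -= rho                                 # normalize at reference state 0
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new

policy = Q.argmax(axis=1)                        # near-optimal stationary policy
print("average reward rho ~= %.6f" % rho, "policy:", policy)
```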

    Computable Lower Bounds for Capacities of Input-Driven Finite-State Channels

    This paper studies the capacities of input-driven finite-state channels, i.e., channels whose current state is a time-invariant deterministic function of the previous state and the current input. We lower bound the capacity of such a channel using a dynamic programming formulation of a bound on the maximum reverse directed information rate. We show that the dynamic programming-based bounds can be simplified by solving the corresponding Bellman equation explicitly. In particular, we provide analytical lower bounds on the capacities of (d, k)-runlength-limited input-constrained binary symmetric and binary erasure channels. Furthermore, we provide a single-letter lower bound based on a class of input distributions with memory.
    Comment: 9 pages, 8 figures, submitted to the International Symposium on Information Theory, 202
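
    For orientation, the (d, k)-RLL constraint itself has a well-known noiseless capacity: log2 of the Perron eigenvalue of the constraint graph. Any lower bound on the constrained BSC/BEC capacity must approach this value as the channel noise vanishes. The sketch below is the standard textbook computation, not the paper's dynamic programming bound.

```python
import numpy as np

def rll_capacity(d, k):
    # State s in {0, ..., k} counts consecutive 0s since the last 1.
    n = k + 1
    A = np.zeros((n, n))
    for s in range(n):
        if s < k:
            A[s, s + 1] = 1          # emit a 0 (run may still grow)
        if s >= d:
            A[s, 0] = 1              # emit a 1 (run length is in [d, k])
    lam = max(abs(np.linalg.eigvals(A)))   # Perron eigenvalue
    return np.log2(lam)

print("noiseless C(d=1, k=7) = %.4f bits" % rll_capacity(1, 7))  # ~0.6793
print("noiseless C(d=2, k=7) = %.4f bits" % rll_capacity(2, 7))  # ~0.5174
```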

    Above and Beyond the Landauer Bound: Thermodynamics of Modularity

    Information processing typically occurs via the composition of modular units, such as universal logic gates. The benefit of modular information processing, in contrast to globally integrated information processing, is that complex global computations are more easily and flexibly implemented via a series of simpler, localized operations that only control and change local degrees of freedom. We show that, despite these benefits, there are unavoidable thermodynamic costs to modularity: costs that arise directly from the operation of localized processing and that go beyond Landauer's dissipation bound for erasing information. Integrated computations can achieve Landauer's bound, however, when they globally coordinate the control of all of an information reservoir's degrees of freedom. Unfortunately, global correlations among the information-bearing degrees of freedom are easily lost by modular implementations. This is costly, since such correlations are a thermodynamic fuel. We quantify the minimum irretrievable dissipation of modular computations in terms of the difference between the change in global nonequilibrium free energy, which captures these global correlations, and the local (marginal) change in nonequilibrium free energy, which bounds modular work production. This modularity dissipation is proportional to the amount of additional work required to perform the computational task modularly. It has immediate consequences for physically embedded transducers, known as information ratchets. We show how to circumvent modularity dissipation by designing internal ratchet states that capture the global correlations and patterns in the ratchet's information reservoir. Designed in this way, information ratchets match the optimum thermodynamic efficiency of globally integrated computations.
    Comment: 17 pages, 9 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/idolip.ht
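
    A minimal numerical illustration of the quantity being bounded (our own toy example, not the paper's): erasing two perfectly correlated bits one at a time discards their mutual information, so the modular Landauer cost exceeds the global one by exactly kT times I(X;Y).

```python
import numpy as np

kT = 1.0                                   # work in units of kT

def H(p):                                  # Shannon entropy in nats
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Perfectly correlated bit pair: X = Y = 0 or 1, each with prob 1/2.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])
Hx, Hy, Hxy = H(joint.sum(1)), H(joint.sum(0)), H(joint)
I = Hx + Hy - Hxy                          # mutual information (nats)

# Minimum Landauer cost of erasure to a standard state:
W_global  = kT * Hxy                       # global protocol uses the correlation
W_modular = kT * (Hx + Hy)                 # bit-by-bit sees only marginals
print("modularity dissipation = %.4f kT (= kT * I = %.4f kT)"
      % (W_modular - W_global, kT * I))
```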

    Computational Mechanics of Input-Output Processes: Structured transformations and the ε-transducer

    Computational mechanics quantifies structure in a stochastic process via its causal states, leading to the process's minimal, optimal predictor: the ε-machine. We extend computational mechanics to communication channels between two processes, obtaining an analogous optimal model, the ε-transducer, of the stochastic mapping between them. Here, we lay the foundation of a structural analysis of communication channels, treating joint processes and processes with input. The result is a principled structural analysis of the mechanisms that support information flow between processes. It is the first in a series on the structural information theory of memoryful channels, channel composition, and allied conditional information measures.
    Comment: 30 pages, 19 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/et1.htm; updated to conform to the published version, plus additional corrections and updates.
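
    To make the object concrete, here is a toy sketch in our own conventions, not the paper's ε-transducer construction: a channel with memory represented by labeled transition matrices T[(x, y)][s, s'] giving P(y, s' | x, s), instantiated for the unit-delay channel y_t = x_{t-1}, whose minimal representation needs one internal state per remembered input symbol.

```python
import numpy as np

states = [0, 1]                  # internal state = last input symbol
T = {}
for x in (0, 1):
    for y in (0, 1):
        M = np.zeros((2, 2))
        for s in states:
            if y == s:           # output the remembered symbol...
                M[s, x] = 1.0    # ...and remember the new input x
        T[(x, y)] = M            # P(y, s' | x, s)

def run(xs, s=0):
    """Drive the transducer with inputs xs; here the channel is
    deterministic, so the most likely branch is the only branch."""
    ys = []
    for x in xs:
        probs = [T[(x, y)][s].sum() for y in (0, 1)]   # P(y | x, s)
        y = int(np.argmax(probs))
        s = int(np.argmax(T[(x, y)][s]))               # next state
        ys.append(y)
    return ys

print(run([1, 0, 1, 1, 0]))      # -> [0, 1, 0, 1, 1]
```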