    On row-by-row coding for 2-D constraints

    A constant-rate encoder--decoder pair is presented for a fairly large family of two-dimensional (2-D) constraints. Encoding and decoding are done in a row-by-row manner, and the code is sliding-block decodable. Essentially, the 2-D constraint is turned into a set of independent and relatively simple one-dimensional (1-D) constraints; this is done by dividing the array into fixed-width vertical strips. Each row in a strip is seen as a symbol, and a graph presentation of the respective 1-D constraint is constructed. The maxentropic stationary Markov chain on this graph is next considered: a perturbed version of the corresponding probability distribution on the edges of the graph is used to build an encoder which operates in parallel on the strips. This perturbation is found by means of a network flow, with upper and lower bounds on the flow through the edges. A key part of the encoder is an enumerative coder for constant-weight binary words. A fast realization of this coder is shown, using floating-point arithmetic.
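
    The enumerative coder mentioned above can be illustrated with the classical lexicographic-ranking scheme for constant-weight words. Below is a minimal Python sketch using exact integer binomials; the paper's fast realization instead approximates the binomials in floating point, and the function names here are illustrative, not taken from the paper.

        from math import comb

        def cw_encode(index, n, w):
            # Map an integer in [0, C(n, w)) to the length-n, weight-w
            # binary word with that lexicographic rank.
            bits = []
            for i in range(n):
                c = comb(n - 1 - i, w)  # words with a 0 in position i
                if index >= c:
                    bits.append(1)      # skip past those words, place a 1
                    index -= c
                    w -= 1
                else:
                    bits.append(0)
            return bits

        def cw_decode(bits):
            # Inverse map: recover the lexicographic rank of a word.
            n, w = len(bits), sum(bits)
            index = 0
            for i, b in enumerate(bits):
                if b:
                    index += comb(n - 1 - i, w)
                    w -= 1
            return index

        assert cw_encode(5, 8, 3) == [0, 0, 0, 1, 0, 1, 0, 1]
        assert cw_decode(cw_encode(5, 8, 3)) == 5

    Because the map is a bijection between indices and constant-weight words, such a pair can serve as a fixed-rate coding component of the kind the encoder uses.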

    Modulation codes

    Algorithms for sliding block codes - An application of symbolic dynamics to information theory

    Optimal block-type-decodable encoders for constrained systems

    Finite-State Channels with Feedback and State Known at the Encoder

    We consider finite-state channels (FSCs) with feedback and state information known causally at the encoder. This setting is quite general and includes a memoryless channel with i.i.d. state (the Shannon strategy), Markovian states with look-ahead (LA) access to the state, and energy harvesting. We characterize the feedback capacity of the general setting as the directed information from auxiliary random variables with memory to the channel outputs. We also propose two methods for computing the feedback capacity: (i) formulating an infinite-horizon average-reward dynamic program, and (ii) deriving a single-letter lower bound based on auxiliary directed graphs called Q-graphs. We demonstrate our computation methods on several examples. In the first example, we introduce a channel with LA and derive a closed-form analytic lower bound on its feedback capacity. Furthermore, we show that these methods achieve the feedback capacity of known unifilar FSCs such as the trapdoor channel, the Ising channel, and the input-constrained erasure channel. Finally, we analyze the feedback capacity of a channel whose state is stochastically dependent on the input.
    (39 pages, 10 figures. The material in this paper was presented in part at the 56th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, October 2018, and at the IEEE International Symposium on Information Theory, Los Angeles, CA, USA, June 2020.)
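
    Computation method (i) can be sketched with generic relative value iteration for an infinite-horizon average-reward dynamic program. In the paper's setting the state would be a (suitably quantized) belief about the channel state and the per-step reward a directed-information term; the sketch below works on an abstract finite MDP, and all names are illustrative.

        import numpy as np

        def relative_value_iteration(P, R, iters=10000, tol=1e-10):
            # P[a, s, t] = Pr(next state t | current state s, action a)
            # R[a, s]    = per-step reward of action a in state s
            # Returns (g, h): the optimal average reward and relative values.
            n_a, n_s, _ = P.shape
            h = np.zeros(n_s)
            g = 0.0
            for _ in range(iters):
                q = R + P @ h          # q[a, s] = r(s, a) + E[h(next state)]
                v = q.max(axis=0)      # Bellman backup over actions
                g, v = v[0], v - v[0]  # normalize at reference state 0
                if np.max(np.abs(v - h)) < tol:
                    h = v
                    break
                h = v
            return g, h

        # Tiny two-state, two-action MDP with made-up numbers:
        P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                      [[0.5, 0.5], [0.6, 0.4]]])
        R = np.array([[1.0, 0.0],
                      [0.5, 0.7]])
        g, h = relative_value_iteration(P, R)

    For feedback-capacity problems the transition kernel and reward are induced by the channel law and the input strategy, so the (continuous) belief state typically has to be discretized before a solver like this applies.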

    Information Nonanticipative Rate Distortion Function and Its Applications

    This paper investigates applications of the nonanticipative Rate Distortion Function (RDF) in (a) zero-delay Joint Source-Channel Coding (JSCC) design based on average and excess distortion probability, and (b) bounding the Optimal Performance Theoretically Attainable (OPTA) by noncausal and causal codes and computing the Rate Loss (RL) of zero-delay and causal codes with respect to noncausal codes. These applications are described using two running examples: the Binary Symmetric Markov Source with parameter p (BSMS(p)) and the multidimensional partially observed Gaussian-Markov source. For the multidimensional Gaussian-Markov source with squared-error distortion, the solution of the nonanticipative RDF is derived, its operational meaning for JSCC design is shown via a noisy coding theorem by providing the optimal encoding-decoding scheme over a vector Gaussian channel, and the RL of causal and zero-delay codes with respect to noncausal codes is computed. For the BSMS(p) with Hamming distortion, the solution of the nonanticipative RDF is derived, the RL of causal codes with respect to noncausal codes is computed, and an uncoded noisy coding theorem based on excess distortion probability is shown. The information nonanticipative RDF is shown to be equivalent to the nonanticipatory epsilon-entropy, which corresponds to the classical RDF with an additional causality (nonanticipation) condition imposed on the optimal reproduction conditional distribution.
    (34 pages, 12 figures. Part of this paper was accepted for publication at the IEEE International Symposium on Information Theory (ISIT), 2014, and in the book Coordination Control of Distributed Systems, in the series Lecture Notes in Control and Information Sciences, 2015.)
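
    Schematically, and following the abstract's description (the notation here is assumed, not quoted from the paper), the nonanticipative RDF is the classical RDF with the reproduction kernel restricted to causal factors:

        R^{na}(D) = \lim_{n \to \infty} \frac{1}{n}
          \inf \Big\{ I(X^n; Y^n) \,:\,
            P_{Y^n \mid X^n} = \prod_{i=1}^{n} P_{Y_i \mid Y^{i-1}, X^i},\;
            \mathbf{E}\Big[ \frac{1}{n} \sum_{i=1}^{n} d(X_i, Y_i) \Big] \le D \Big\}

    The product constraint says that each reproduction symbol Y_i may depend only on past reproductions Y^{i-1} and on source symbols X^i up to the present; dropping it recovers the classical (noncausal) RDF, and the gap between the two underlies the rate-loss comparisons above.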

    Efficient Online Processing with Deep Neural Networks

    The capabilities and adoption of deep neural networks (DNNs) grow at an exhilarating pace: vision models accurately classify human actions in videos and identify cancerous tissue in medical scans as precisely as human experts; large language models answer wide-ranging questions, generate code, and write prose, becoming the topic of everyday dinner-table conversations. Impressive as these capabilities are, the continually increasing model sizes and computational complexities have a dark side. The economic cost and negative environmental externalities of training and serving models are in evident disharmony with financial viability and climate action goals. Instead of pursuing yet another increase in predictive performance, this dissertation is dedicated to the improvement of neural network efficiency. Specifically, a core contribution addresses efficiency during online inference. Here, the concept of Continual Inference Networks (CINs) is proposed and explored across four publications. CINs extend prior state-of-the-art methods developed for offline processing of spatio-temporal data and reuse their pre-trained weights, improving their online processing efficiency by an order of magnitude. These advances are attained through a bottom-up computational reorganization and judicious architectural modifications. The benefit to online inference is demonstrated by reformulating several widely used network architectures into CINs, including 3D CNNs, ST-GCNs, and Transformer Encoders. An orthogonal contribution tackles the concurrent adaptation and computational acceleration of a large source model into multiple lightweight derived models. Drawing on fusible adapter networks and structured pruning, Structured Pruning Adapters achieve superior predictive accuracy under aggressive pruning using significantly fewer learned weights compared to fine-tuning with pruning.
    (PhD dissertation.)
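
    The computational reorganization behind CINs can be illustrated on a single temporal convolution: rather than re-running the kernel over an entire clip whenever a new frame arrives, a streaming layer caches the last k-1 feature frames and emits one output per incoming frame, reusing the pre-trained weights unchanged. A minimal NumPy sketch follows; the class and variable names are illustrative, not the dissertation's library API.

        import numpy as np

        class ContinualTemporalConv:
            # Streaming equivalent of a temporal convolution with
            # kernel size k over d-dimensional feature frames.
            def __init__(self, weights):  # weights: (k, d) pre-trained kernel
                self.w = weights
                self.buf = np.zeros((weights.shape[0] - 1, weights.shape[1]))

            def step(self, frame):  # frame: (d,) feature vector
                window = np.vstack([self.buf, frame])  # (k, d) sliding window
                y = float((window * self.w).sum())     # one convolution output
                self.buf = window[1:]                  # slide the cache
                return y

        # Per-frame outputs match an offline convolution over the clip:
        k, d, T = 3, 4, 10
        w = np.random.randn(k, d)
        clip = np.random.randn(T, d)
        layer = ContinualTemporalConv(w)
        online = [layer.step(f) for f in clip]
        offline = [(clip[t:t + k] * w).sum() for t in range(T - k + 1)]
        assert np.allclose(online[k - 1:], offline)

    Re-running the offline layer for every new frame would redo the whole window's work each step, while the continual version touches each frame once; scaled up to full networks, this is the source of the order-of-magnitude gains the dissertation reports.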