
    Applications of Derandomization Theory in Coding

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, and in particular, certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.] Comment: EPFL PhD Thesis
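The noiseless variant of the group testing setup described above can be sketched with random pools and the simple COMP ("combinatorial orthogonal matching pursuit") decoding rule. This is a standard textbook scheme, not the randomness-condenser construction from the thesis; all names and parameters below are illustrative:

```python
import random

def run_group_testing(n_items=100, n_defective=2, n_tests=30, seed=0):
    """Toy nonadaptive group testing: each test reports whether its pool
    contains at least one defective item; COMP then rules out every item
    that appears in some negative test."""
    rng = random.Random(seed)
    defective = set(rng.sample(range(n_items), n_defective))
    # Random pooling design: each item joins each test independently.
    pools = [{i for i in range(n_items) if rng.random() < 0.3}
             for _ in range(n_tests)]
    outcomes = [bool(pool & defective) for pool in pools]
    # COMP: an item in any negative pool cannot be defective.
    candidates = set(range(n_items))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool
    return defective, candidates
```

Because a pool containing a defective item always tests positive, COMP never discards a true defective; with enough tests the surviving candidate set shrinks to exactly the defective set.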

    Feedback in the Non-Asymptotic Regime

    Without feedback, the backoff from capacity due to non-asymptotic blocklength can be quite substantial for blocklengths and error probabilities of interest in many practical applications. In this paper, novel achievability bounds are used to demonstrate that in the non-asymptotic regime, the maximal achievable rate improves dramatically thanks to variable-length coding and feedback. For example, for the binary symmetric channel with capacity 1/2, the blocklength required to achieve 90% of the capacity is smaller than 200, compared to at least 3100 for the best fixed-blocklength code (even with noiseless feedback). Virtually all the advantages of noiseless feedback are shown to be achievable, even if the feedback link is used only to send a single signal informing the encoder to terminate the transmission (stop-feedback). It is demonstrated that the non-asymptotic behavior of the fundamental limit depends crucially on the particular model chosen for the “end-of-packet” control signal. Fixed-blocklength codes and related questions concerning communicating with a guaranteed delay are discussed, in which situation feedback is demonstrated to be almost useless even non-asymptotically.

    National Science Foundation (U.S.) (Grant CCF-06-35154); National Science Foundation (U.S.) (Grant CCF-10-16625); National Science Foundation (U.S.) (Grant CNS-09-05398); National Science Foundation (U.S.) (Center for Science of Information, Grant CCF-0939370)
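The stop-feedback idea can be illustrated with a toy single-bit scheme over a BSC: the encoder repeats the bit, the decoder runs a sequential log-likelihood-ratio test, and a single "stop" signal ends transmission once the decoder is confident. This is a minimal sketch, not the paper's achievability scheme, and every parameter here is an illustrative assumption:

```python
import math
import random

def stop_feedback_bit(bit, p=0.11, llr_threshold=5.0, seed=0, max_uses=10_000):
    """Toy variable-length transmission of one bit over a BSC(p) with
    stop-feedback: the decoder accumulates a log-likelihood ratio and
    signals 'stop' once its magnitude crosses a confidence threshold.
    Returns (decision, number_of_channel_uses)."""
    rng = random.Random(seed)
    step = math.log((1 - p) / p)              # per-symbol LLR magnitude
    llr, n = 0.0, 0
    while abs(llr) < llr_threshold and n < max_uses:
        received = bit ^ (rng.random() < p)   # channel flips with prob. p
        llr += step if received == 1 else -step
        n += 1                                # one more channel use
    return (1 if llr > 0 else 0), n
```

The blocklength is now a random variable: confident runs stop after a handful of channel uses, which is the mechanism behind the dramatic average-blocklength savings the paper quantifies.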

    A Survey on Quantum Channel Capacities

    Quantum information processing exploits the quantum nature of information. It offers fundamentally new solutions in the field of computer science and extends the possibilities to a level that cannot be imagined in classical communication systems. For quantum communication channels, many new capacity definitions were developed in comparison to classical counterparts. A quantum channel can be used to realize classical information transmission or to deliver quantum information, such as quantum entanglement. Here we review the properties of the quantum communication channel, the various capacity measures and the fundamental differences between the classical and quantum channels. Comment: 58 pages; Journal-ref: IEEE Communications Surveys and Tutorials (2018) (updated & improved version of arXiv:1208.1270)

    Unreliable and resource-constrained decoding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from student-submitted PDF version of thesis. Includes bibliographical references (p. 185-213).

    Traditional information theory and communication theory assume that decoders are noiseless and operate without transient or permanent faults. Decoders are also traditionally assumed to be unconstrained in physical resources like material, memory, and energy. This thesis studies how constraining reliability and resources in the decoder limits the performance of communication systems. Five communication problems are investigated. Broadly speaking, these are communication using decoders that are wiring-cost-limited, that are memory-limited, that are noisy, that fail catastrophically, and that simultaneously harvest information and energy. For each of these problems, fundamental trade-offs between communication system performance and reliability or resource consumption are established.

    For decoding repetition codes using consensus decoding circuits, the optimal trade-off between decoding speed and quadratic wiring cost is defined and established. Designing optimal circuits is shown to be NP-complete, but is carried out for small circuit sizes. The natural relaxation of the integer circuit design problem is shown to be a reverse convex program. Random circuit topologies are also investigated. Uncoded transmission is investigated when a population of heterogeneous sources must be categorized due to decoder memory constraints. Quantizers that are optimal for mean Bayes risk error, a novel fidelity criterion, are designed. Human decision making in segregated populations is also studied with this framework. The ratio between the costs of false alarms and missed detections is also shown to fundamentally affect the essential nature of discrimination.

    The effect of noise on iterative message-passing decoders for low-density parity-check (LDPC) codes is studied. Concentration of decoding performance around its average is shown to hold. Density evolution equations for noisy decoders are derived. Decoding thresholds degrade smoothly as decoder noise increases, and in certain cases, arbitrarily small final error probability is achievable despite decoder noisiness. Precise information storage capacity results for reliable memory systems constructed from unreliable components are also provided. Limits to communicating over systems that fail at random times are established. Communication with arbitrarily small probability of error is not possible, but schemes that optimize transmission volume communicated at fixed maximum message error probabilities are determined. System state feedback is shown not to improve performance. For optimal communication with decoders that simultaneously harvest information and energy, a coding theorem that establishes the fundamental trade-off between the rates at which energy and reliable information can be transmitted over a single line is proven. The capacity-power function is computed for several channels; it is non-increasing and concave.

    by Lav R. Varshney. Ph.D.
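Density evolution on the binary erasure channel gives a concrete feel for how decoder noise shifts decoding thresholds. The sketch below uses the standard (3,6)-regular BEC recursion and models decoder noise crudely as an independent erasure of each message with probability `delta` — an illustrative assumption for this example, not the thesis's noise model:

```python
def de_fixed_point(eps, delta=0.0, dv=3, dc=6, iters=2000):
    """Density evolution for a (dv, dc)-regular LDPC ensemble on the
    BEC(eps). delta models decoder noise as an independent probability
    that any message is erased inside the decoder (crude toy model)."""
    x = eps  # erasure probability of variable-to-check messages
    for _ in range(iters):
        x = delta + (1 - delta) * eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

def threshold(delta=0.0, tol=1e-4, target=1e-3):
    """Largest channel erasure rate eps whose DE fixed point stays within
    `target` of the noise floor delta, found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if de_fixed_point(mid, delta) <= delta + target:
            lo = mid
        else:
            hi = mid
    return lo
```

For `delta = 0` the recursion recovers the classical (3,6) BEC threshold near 0.429; with `delta > 0` the message erasure rate can never fall below the noise floor, mirroring the smooth threshold degradation described above.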

    Local to global geometric methods in information theory

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 201-203).

    This thesis treats several information theoretic problems with a unified geometric approach. The development of this approach was motivated by the challenges encountered while working on these problems, and in turn, testing the initial tools on these problems suggested numerous refinements and improvements to the geometric methods. In ergodic probabilistic settings, Sanov's theorem gives asymptotic estimates on the probabilities of very rare events. The theorem also characterizes the exponential decay of the probabilities as the sample size grows, and the exponential rate is given by the minimization of a certain divergence expression. In his seminal paper, A Mathematical Theory of Communication, Shannon introduced two influential ideas to simplify the complex task of evaluating the performance of a coding scheme: the asymptotic perspective (in the number of channel uses) and the random coding argument. In this setting, Sanov's theorem can be used to analyze ergodic information theoretic problems, and the performance of a coding scheme can be estimated by expressions involving the divergence. One would then like to use a geometric intuition to solve these problems, but the divergence is not a distance and our naive geometric intuition may lead to incorrect conclusions. In information geometry, a specific differential geometric structure is introduced by means of "dual affine connections". The approach we take in this thesis is slightly different and is based on introducing additional asymptotic regimes to analyze the divergence expressions. The following two properties play an important role. The divergence may not be a distance, but locally (i.e., when its arguments are "close to each other"), the divergence behaves like a squared distance. Moreover, globally (i.e., when its arguments have no local restriction), it also preserves certain properties satisfied by squared distances. Therefore, we develop the Very Noisy and Hermite transformations, as techniques to map our global information theoretic problems into local ones. Through this localization, our global divergence expressions reduce in the limit to expressions defined in an inner product space. This provides us with a valuable geometric insight into the global problems, as well as a strong tool to find counter-examples. Finally, in certain cases, we have been able to "lift" results proven locally to results proven globally.

    We consider the following three problems. First, we address the problem of finding good linear decoders (maximizing additive metrics) for compound discrete memoryless channels. Known universal decoders are not linear and most of them heavily depend on the finite alphabet assumption. We show that by using a finite number of additive metrics, we can construct decoders that are universal (capacity achieving) on most compound sets. We then consider additive Gaussian noise channels. For a given perturbation of a Gaussian input distribution, we define an operator that measures how much variation is induced in the output entropy. We found that the singular functions of this operator are the Hermite polynomials, and the singular values are the powers of a signal-to-noise ratio. We show, in particular, how to use this structure on a Gaussian interference channel to characterize a regime where interference should not be treated as noise. Finally, we consider multi-input multi-output channels and discuss the properties of the optimal input distributions, for various random fading matrix ensembles. In particular, we prove Telatar's conjecture on the covariance structure minimizing the outage probability for output dimension one and input dimensions less than one hundred.

    by Emmanuel Auguste Abbe. Ph.D.
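The local behavior of the divergence described above can be checked numerically: for nearby distributions, D(p||q) is close to half the chi-square distance (a squared-distance-like quantity), while globally it is not even symmetric. A minimal sketch with illustrative distributions:

```python
import math

def kl(p, q):
    """KL divergence D(p||q) in nats between finite distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def chi2_half(p, q):
    """Local quadratic approximation: half the chi-square distance,
    (1/2) * sum((p_i - q_i)^2 / p_i)."""
    return 0.5 * sum((pi - qi) ** 2 / pi for pi, qi in zip(p, q))
```

For a small perturbation the ratio of `kl` to `chi2_half` is close to 1, while for well-separated distributions `kl(p, q)` and `kl(q, p)` differ markedly — exactly the local/global dichotomy the thesis exploits.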