
    Sparse graph codes for compression, sensing, and secrecy

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2010. Cataloged from student PDF version of thesis. Includes bibliographical references (p. 201-212).

    Sparse graph codes were first introduced by Gallager over 40 years ago. Over the last two decades, such codes have been the subject of intense research, and capacity-approaching sparse graph codes with low-complexity encoding and decoding algorithms have been designed for many channels. Motivated by the success of sparse graph codes for channel coding, we explore the use of sparse graph codes for four other problems related to compression, sensing, and security. First, we construct locally encodable and decodable source codes for a simple class of sources. Local encodability refers to the property that when the original source data changes slightly, the compression produced by the source code can be updated easily. Local decodability refers to the property that a single source symbol can be recovered without having to decode the entire source block. Second, we analyze a simple message-passing algorithm for compressed sensing recovery, and show that our algorithm provides a nontrivial ℓ1/ℓ1 guarantee. We also show that very sparse matrices, and matrices whose entries must be either 0 or 1, have poor performance with respect to the restricted isometry property for the ℓ2 norm. Third, we analyze the performance of a special class of sparse graph codes, LDPC codes, for the problem of quantizing a uniformly random bit string under Hamming distortion. We show that LDPC codes can come arbitrarily close to the rate-distortion bound using an optimal quantizer. This is a special case of a general result showing a duality between lossy source coding and channel coding: if we ignore computational complexity, then good channel codes are automatically good lossy source codes. We also prove a lower bound on the average degree of vertices in an LDPC code as a function of the gap to the rate-distortion bound. Finally, we construct efficient, capacity-achieving codes for the wiretap channel, a model of communication that allows one to provide information-theoretic, rather than computational, security guarantees. Our main results include the introduction of a new security criterion which is an information-theoretic analog of semantic security, the construction of capacity-achieving codes possessing strong security with nearly linear time encoding and decoding algorithms for any degraded wiretap channel, and the construction of capacity-achieving codes possessing semantic security with linear time encoding and decoding algorithms for erasure wiretap channels. Our analysis relies on a relatively small set of tools. One tool is density evolution, a powerful method for analyzing the behavior of message-passing algorithms on long, random sparse graph codes. Another concept we use extensively is the notion of an expander graph. Expander graphs have powerful properties that allow us to prove adversarial, rather than probabilistic, guarantees for message-passing algorithms. Expander graphs are also useful in the context of the wiretap channel because they provide a method for constructing randomness extractors. Finally, we use several well-known isoperimetric inequalities (Harper's inequality, Azuma's inequality, and the Gaussian isoperimetric inequality) in our analysis of the duality between lossy source coding and channel coding.

    by Venkat Bala Chandar. Ph.D.
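    For readers outside the area, the ℓ1/ℓ1 guarantee mentioned above is usually stated in the following textbook form (this is the generic shape of such bounds, not necessarily the exact constants established in the thesis):

        \|x - \hat{x}\|_1 \;\le\; C\,\sigma_k(x)_1,
        \qquad
        \sigma_k(x)_1 \;:=\; \min_{\|z\|_0 \le k} \|x - z\|_1,

    where x is the unknown signal, \hat{x} is the output of the recovery algorithm run on the measurements y = Ax, and \sigma_k(x)_1 is the error of the best k-sparse approximation of x in the ℓ1 norm; in particular, exactly k-sparse signals are recovered exactly.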

    Victorian Noon

    Originally published in 1979. Carl Dawson looks at the year 1850, an extraordinary year in English literary history, to study both the great and the forgotten writers, to survey journals and novels, poems and magazines, and to ask questions about dominant influences and ideas. His primary aim is descriptive: How was Wordsworth's Prelude received by his contemporaries on its publication in 1850? How did reviewers respond to new tendencies in poetry and fiction? Who were the prominent literary models? But Dawson's descriptions also lead to broader, theoretical questions about such issues as the status of the imagination in an age obsessed by mechanical invention, the public role of the writer, the appeal to nature, and the use of myth and memory. To express the Victorians' estimation of poetry, for example, Dawson presents the contrasting views held by two eminent Victorians, Macaulay and Carlyle. In Macaulay's opinion, the advance of civilization led to the decline of poetry; Carlyle, on the other hand, saw the poet as a spiritual liberator in a world of materialists. The fusion of the poet's personal and public roles is witnessed in a discussion of the two mid-Victorian Poet Laureates, Wordsworth and his successor, Tennyson. In analyzing the relationship between the two writers' works, Dawson also highlights the extent of the Victorians' admiration for Dante. To give a wider perspective on the status of literature during this time, Dawson examines reviews, prefaces, and other remarks. Critics, he shows, made a clear distinction between poetry and fiction. Thus, in 1850, a comparison between, say, Wordsworth and Dickens would not have been made. Dawson, however, does compare the two, by focusing on their uses of autobiography. Dickens surfaces again in a discussion of Victorian periodical publishing. Here, Dawson compares the Pre-Raphaelites' short-lived journal The Germ with Dickens' enormously popular Household Words and a radical paper, The Red Republican, which printed the first English version of "The Communist Manifesto" in 1850. In bringing together materials that have often been seen as disparate and unrelated, and by suggesting new literary and ideological relationships, Carl Dawson has written a book to inform almost any reader, whether scholar of Victorian literature or lover of Dickens' novels.

    On performance analysis and implementation issues of iterative decoding for graph based codes

    There is no doubt that long random-like codes have the potential to achieve good performance because of their excellent distance spectra. However, such codes remained useless in practical applications due to the lack of decoders offering good performance at an acceptable complexity. The invention of turbo codes marks a milestone in channel coding theory in that they achieve near-Shannon-limit performance using an elegant iterative decoding algorithm. This great success stimulated intensive research on long compound codes sharing the same decoding mechanism. Among these long codes are low-density parity-check (LDPC) codes and product codes, which deliver excellent performance. In this work, iterative decoding algorithms for LDPC codes and product codes are studied in the context of belief propagation.

    A large part of this work concerns LDPC codes. First, the concept of iterative decoding capacity is established in the context of density evolution, and two simulation-based methods approximating decoding capacity are applied to LDPC codes and their effectiveness evaluated. A suboptimal iterative decoder, the Max-Log-MAP algorithm, is also investigated; it has been studied intensively for turbo codes but seems to have been neglected for LDPC codes. The specific density evolution procedure for Max-Log-MAP decoding is developed, and the performance of LDPC codes with infinite block length is well predicted using this procedure. Two implementation issues in iterative decoding of LDPC codes are studied: the design of a quantized decoder, and the influence of a mismatched signal-to-noise ratio (SNR) level on decoding performance. The theoretical capacities of the quantized LDPC decoder, under the Log-MAP and Max-Log-MAP algorithms, are derived through discretized density evolution. It is shown that the key point in designing a quantized decoder is to pick a proper dynamic range: quantization loss in terms of bit error rate (BER) performance can be kept remarkably low, provided that the dynamic range is chosen wisely. The decoding capacity under a fixed SNR offset is obtained, and the robustness of LDPC codes of practical length is evaluated through simulations. It is found that the amount of SNR offset that can be tolerated depends on the code length.

    The remaining part of this dissertation deals with iterative decoding of product codes. Two issues in iterative decoding of product codes are investigated. One is improving BER performance by mitigating cycle effects. The other is a parallel decoding structure, which is conceptually superior to serial decoding and yields lower decoding latency.
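    As a concrete illustration of the density evolution technique referred to above, the sketch below tracks the erasure probability of a regular (dv, dc) LDPC ensemble on the binary erasure channel; this is a minimal example of the general method under the stated assumptions, not the Gaussian-channel or quantized procedures developed in the dissertation.

        # Density evolution for a regular (dv, dc) LDPC ensemble on the binary
        # erasure channel (BEC). x is the probability that a variable-to-check
        # message is still erased; one iteration applies
        #   x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)
        def bec_density_evolution(eps, dv=3, dc=6, iters=1000):
            """Return the residual erasure probability after `iters` iterations."""
            x = eps
            for _ in range(iters):
                x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            return x

        def bec_threshold(dv=3, dc=6, tol=1e-4):
            """Bisect for the largest channel erasure probability that density
            evolution drives to (numerically) zero: the iterative decoding
            capacity of the ensemble on the BEC."""
            lo, hi = 0.0, 1.0
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if bec_density_evolution(mid, dv, dc) < 1e-9:
                    lo = mid
                else:
                    hi = mid
            return lo

        if __name__ == "__main__":
            # For the (3, 6) ensemble this comes out close to the known
            # threshold of about 0.429, versus the Shannon limit of 0.5
            # for rate-1/2 codes on the BEC.
            print(bec_threshold(3, 6))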

    A study of diffusion in polymers using C-14 labelled molecules

    A novel method of measuring the self-diffusion coefficient of a penetrant molecule in various polymers was devised. This method makes use of a permeation experiment in which radioactively labelled molecules exchange with chemically identical but non-radioactive molecules through the polymer membrane specimen. The rate of permeation is measured through the excitation of the non-radioactive molecules, which act as the solvent in a liquid scintillation mixture, with subsequent excitation and fluorescence of the dissolved scintillator solutes. By maintaining the concentrations of the radioactive and non-radioactive molecules in the vapour phase at the same level, a self-diffusion coefficient D* at one precise penetrant concentration level can be determined. The diffusion coefficients measured were compared with those obtained from conventional sorption-desorption methods, and the comparison was discussed in terms of the basic definitions of the different diffusion coefficients. The diffusional behaviour in silicone rubber and the S-B-S block copolymer was discussed mainly with reference to the free volume theory and the activated zone theory, with particular emphasis on chain mobility. Dynamic mechanical measurements were also made on these polymers and related to the diffusion characteristics through chain mobility and free volume concepts. Diffusion in filled systems and in the two-phase S-B-S copolymers was discussed with the help of mathematical models derived for analogous electrical conduction through heterogeneous media, and a certain order in the polystyrene domain distribution was indicated in the latter. The silica-silicone rubber interaction was considered and some conclusions were drawn.
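    The self-diffusion coefficient extracted from such a permeation experiment is related to the measured tracer flux by Fick's first law; under the usual simplifying assumptions (steady state, a concentration-independent D*, and a membrane of uniform thickness l), the working relation takes the standard textbook form

        J^{*} = -D^{*}\,\frac{\partial c^{*}}{\partial x}
        \quad\Longrightarrow\quad
        J^{*} = \frac{D^{*}\,\Delta c^{*}}{l}
        \quad\Longrightarrow\quad
        D^{*} = \frac{J^{*}\, l}{\Delta c^{*}},

    where J* is the steady-state tracer flux through the membrane and Δc* is the tracer concentration difference maintained across it. These are the generic relations behind such a method, not the specific working equations of the thesis.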

    Historical and Conceptual Foundations of Information Physics

    The main objective of this dissertation is to philosophically assess how the use of informational concepts in the field of classical thermostatistical physics has historically evolved from the late 1940s to the present day. I first analyze in depth the main notions that form the conceptual basis on which 'informational physics' historically unfolded, encompassing (i) different entropy, probability, and information notions, (ii) their multiple interpretative variations, and (iii) the formal, numerical, and semantic-interpretative relationships among them. I then assess the history of informational thermophysics during the second half of the twentieth century. First, I analyze the intellectual factors that gave rise to this current in the late forties (i.e., the popularization of Shannon's theory, interest in a naturalized epistemology of science, etc.), then study its consolidation in the Brillouinian and Jaynesian programs, and finally show how Carnap (1977) and his disciples tried to criticize this tendency within the scientific community. Next, I evaluate how informational physics became a predominant intellectual current in the scientific community in the nineties, made possible by the convergence of Jaynesianism and Brillouinism in proposals such as those of Tribus and McIrvine (1971) or Bekenstein (1973) and by the application of algorithmic information theory to the thermophysical domain. As a sign of its radicality at this historical stage, I explore the main proposals to include information as part of our physical reality, such as Wheeler's (1990), Stonier's (1990), or Landauer's (1991), detailing the main philosophical arguments (e.g., Timpson, 2013; Lombardi et al., 2016a) against those inflationary attitudes towards information. Following this historical assessment, I systematically analyze whether the descriptive exploitation of informational concepts has historically contributed to providing us with knowledge of thermophysical reality by (i) explaining thermal processes such as the approach to equilibrium, (ii) advantageously predicting thermal phenomena, or (iii) enabling understanding of thermal properties such as thermodynamic entropy. I argue that these epistemic shortcomings make it impossible to draw ontological conclusions in a justified way about the physical nature of information. In conclusion, I argue that the historical exploitation of informational concepts has not contributed significantly to the epistemic progress of thermophysics. This would lead to characterizing informational proposals as 'degenerate science' (à la Lakatos 1978a) with respect to classical thermostatistical physics, or as theoretically underdeveloped with respect to the study of the cognitive dynamics of scientists in this physical domain.
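    For reference, the formal parallel at the centre of these debates is the one between the Shannon and Gibbs expressions, stated here in their standard textbook form:

        H(p) = -\sum_i p_i \log_2 p_i,
        \qquad
        S_G = -k_B \sum_i p_i \ln p_i,
        \qquad\text{so that}\qquad
        S_G = (k_B \ln 2)\, H(p)

    whenever the same probability distribution p is inserted into both formulas; whether this numerical proportionality supports any deeper semantic or ontological identification of the two quantities is one of the interpretative questions at issue above.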

    Proceedings of the Workshop on Applications of Distributed System Theory to the Control of Large Space Structures

    Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification

    Iterative decoding for error resilient wireless data transmission

    Turbo codes and LDPC codes form two new classes of codes that offer energy efficiencies close to the theoretical limit predicted by Claude Shannon. The features of turbo codes include parallel code concatenation, recursive convolutional encoders, punctured convolutional codes, and an associated decoding algorithm. The features of LDPC codes include the code construction, the encoding algorithm, and an associated decoding algorithm. This dissertation describes the process of encoding and decoding for both turbo and LDPC codes and presents a performance comparison between these two codes in terms of several performance factors. In addition, a more general discussion of iterative decoding is presented. One significant contribution of this dissertation is a study of the major factors that strongly influence the performance of both turbo codes and LDPC codes, including bit error rate (BER), latency, code rate, and computational resources. Simulation results show the performance of turbo codes and LDPC codes under these different factors.
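    The theoretical limit predicted by Claude Shannon that is invoked above is usually quoted as a lower bound on the energy per information bit; in its standard textbook form for the band-limited AWGN channel,

        \frac{E_b}{N_0} \;\ge\; \frac{2^{\eta} - 1}{\eta},
        \qquad \eta = \frac{R_b}{W}\ \ \text{(spectral efficiency in bits/s/Hz)},

    and as η → 0 the bound tends to ln 2, about -1.59 dB, the ultimate Shannon limit. At any fixed code rate, well-designed turbo and LDPC codes with iterative decoding operate within fractions of a decibel of the corresponding bound.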

    An investigation of perception without awareness

    Following the work of Dixon (1971), some experiments were performed to investigate two aspects of perception without awareness: the handling of emotional stimuli and the relation between subliminal perception and selective attention. Apparatus was designed which utilized the phenomenon of binocular rivalry, so that an image (above identification level when presented alone) could be masked by a brighter image presented to the other eye and thus be perceived without awareness. An experiment of Smith et al. (1959) was replicated with improved controls. It was shown that responses to words presented outside of awareness tended to be meaning-related, the same words yielding structure-related responses when presented supraliminally. Spence (1967) proposed an explanation of perceptual defence in terms of the interaction of arousal and memory, and some experimental support for this idea was obtained. Further experiments on the handling of emotive stimuli led to the conclusion that individual differences in perception are an important factor to be controlled. Similarly, further to Brown (1965, 1971), it was concluded that the stimulus characteristics of emotive words used as experimental stimuli need to be better controlled. An explanation of word association phenomena in terms of the interaction of arousal and attention was discussed, and the perceptual defence and word association test (WAT) situations were contrasted. Finally, two brief experiments illustrated aspects of the selective attention paradigm relevant to perception without awareness: pre-attentive processes (Neisser, 1967) and incidental stimulation (Eagle et al., 1966). Following a review of selective attention experiments, including evidence of unattended-channel processing, some tentative proposals were made which might encompass the material presented. Utilizing a model suggested by MacKay (1972), it was proposed that the phenomena of perception without awareness represent the functioning of an early stage in the normal perceptual process, essential both to the handling of emotive stimuli and to the selection of inputs to awareness.