8 research outputs found

    Information theoretic analysis of watermarking systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 185-193). This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections.

    Watermarking models a copyright protection mechanism where an original data sequence is modified before distribution to the public in order to embed some extra information. The embedding should be transparent (i.e., the modified data should be similar to the original data) and robust (i.e., the information should be recoverable even if the data is modified further). In this thesis, we describe the information-theoretic capacity of such a system as a function of the statistics of the data to be watermarked and the desired level of transparency and robustness. That is, we view watermarking from a communication perspective and describe the maximum bit-rate that can be reliably transmitted from encoder to decoder. We make the conservative assumption that there is a malicious attacker who knows how the watermarking system works and who attempts to design a forgery that is similar to the original data but that does not contain the watermark. Conversely, the watermarking system must meet its performance criteria for any feasible attacker and would like to force the attacker to effectively destroy the data in order to remove the watermark. Watermarking can thus be viewed as a dynamic game between these two players, who are trying to minimize and maximize, respectively, the amount of information that can be reliably embedded. We compute the capacity for several scenarios, focusing largely on Gaussian data and a squared difference similarity measure.

    In contrast to many suggested watermarking techniques that view the original data as interference, we find that the capacity increases with the uncertainty in the original data. Indeed, we find that out of all distributions with the same variance, a Gaussian distribution on the original data results in the highest capacity. Furthermore, for Gaussian data, the capacity increases with its variance. One surprising result is that with Gaussian data the capacity does not increase if the original data can be used to decode the watermark. This is reminiscent of a similar model, Costa's "writing on dirty paper", in which the attacker simply adds independent Gaussian noise. Unlike with a more sophisticated attacker, we show that the capacity does not change for Costa's model if the original data is not Gaussian.

    by Aaron Seth Cohen. Ph.D.
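
    As context for the dirty-paper remark above, Costa's (1983) result can be written in Gel'fand-Pinsker form; the sketch below uses generic symbols (P for embedding power, N for attack-noise variance, S for the original data), not the thesis's own notation.

        % Dirty-paper sketch: host S known to the encoder only, attacker adds
        % independent Gaussian noise of variance N, embedding power P.
        \[
          C \;=\; \max_{p(u,x\mid s)} \bigl[\, I(U;Y) - I(U;S) \,\bigr]
            \;=\; \tfrac{1}{2}\log\!\Bigl(1 + \frac{P}{N}\Bigr),
        \]
        % achieved by the auxiliary variable U = X + \alpha S with
        % \alpha = P/(P+N); the capacity does not depend on the power of S,
        % consistent with the observation that access to the original data
        % at the decoder does not raise capacity.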

    Digital watermarking, information embedding, and data hiding systems

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 139-142).

    Digital watermarking, information embedding, and data hiding systems embed information, sometimes called a digital watermark, inside a host signal, which is typically an image, audio signal, or video signal. The host signal is not degraded unacceptably in the process, and one can recover the watermark even if the composite host and watermark signal undergo a variety of corruptions and attacks, as long as these corruptions do not unacceptably degrade the host signal. These systems play an important role in meeting at least three major challenges that result from the widespread use of digital communication networks to disseminate multimedia content: (1) the relative ease with which one can generate perfect copies of digital signals creates a need for copyright protection mechanisms, (2) the relative ease with which one can alter digital signals creates a need for authentication and tamper-detection methods, and (3) the increase in sheer volume of transmitted data creates a demand for bandwidth-efficient methods to either backwards-compatibly increase the capacities of existing legacy networks or deploy new networks backwards-compatibly with legacy networks.

    We introduce a framework within which to design and analyze digital watermarking and information embedding systems. In this framework performance is characterized by achievable rate-distortion-robustness trade-offs, and the framework leads quite naturally to a new class of embedding methods called quantization index modulation (QIM). These QIM methods, especially when combined with postprocessing called distortion compensation, achieve provably better rate-distortion-robustness performance than previously proposed classes of methods, such as spread spectrum methods and generalized low-bit modulation methods, in a number of different scenarios, which include both intentional and unintentional attacks. Indeed, we show that distortion-compensated QIM methods can achieve capacity, the information-theoretically best possible rate-distortion-robustness performance, against both additive Gaussian noise attacks and arbitrary squared-error-distortion-constrained attacks. These results also have implications for the problem of communicating over broadcast channels. We also present practical implementations of QIM methods called dither modulation and demonstrate their performance both analytically and through empirical simulations.

    by Brian Chen. Ph.D.
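
    Since the abstract only names QIM, here is a minimal sketch of binary scalar QIM (dither modulation with two quantizer cosets). The step size delta and the toy signals are illustrative assumptions, not Chen's implementation.

        import numpy as np

        def qim_embed(x, bits, delta):
            # bit b selects the quantizer coset offset by b*delta/2;
            # per-sample embedding distortion is at most delta/2
            offset = bits * delta / 2.0
            return delta * np.round((x - offset) / delta) + offset

        def qim_decode(y, delta):
            # minimum-distance decoding: pick the coset whose nearest
            # quantizer point is closest to the received sample
            d0 = np.abs(y - delta * np.round(y / delta))
            q1 = delta * np.round((y - delta / 2.0) / delta) + delta / 2.0
            d1 = np.abs(y - q1)
            return (d1 < d0).astype(int)

        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 10.0, 1000)            # toy host signal
        bits = rng.integers(0, 2, 1000)            # watermark bits
        y = qim_embed(x, bits, 1.0) + rng.normal(0.0, 0.05, 1000)  # noise attack
        assert np.mean(qim_decode(y, 1.0) == bits) > 0.99

    Decoding stays reliable as long as the per-sample perturbation stays below delta/4; distortion compensation (not shown) feeds a scaled copy of the quantization error back into the signal to improve this rate-distortion-robustness trade-off.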

    Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks

    Supervised by Gregory W. Wornell. Also issued as Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 159-165).

    We also explore the interactions between source coding and queue management in problems of buffering and distributing distortion-tolerant data. We formulate a general queuing model relevant to numerous communication scenarios, and develop a bound on the performance of any algorithm. We design an adaptive buffer-control algorithm for use in dynamic environments and under finite memory limitations; its performance closely approximates the bound. Our design uses multiresolution source codes that exploit the data's distortion-tolerance in minimizing end-to-end distortion. Compared to traditional approaches, the performance gains of the adaptive algorithm are significant, improving distortion, delay, and overall system robustness.

    by Stark Christiaan Draper
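
    The abstract does not spell out the buffer-control algorithm. Purely as an illustration of the underlying idea that multiresolution codes let a full buffer shed fine refinement layers before coarse ones, here is a toy sketch; every class name and parameter is invented for this example and is not the thesis's algorithm.

        import heapq

        class LayeredBuffer:
            # Toy buffer for multiresolution-coded packets: when capacity is
            # exceeded, evict the finest refinement layer currently stored,
            # so quality degrades gracefully instead of losing whole items.
            def __init__(self, capacity):
                self.capacity = capacity
                self.heap = []                    # max-heap on layer via negation

            def push(self, item_id, layer, payload):
                heapq.heappush(self.heap, (-layer, item_id, payload))
                while len(self.heap) > self.capacity:
                    heapq.heappop(self.heap)      # drop least significant layer

        buf = LayeredBuffer(capacity=2)
        buf.push("frame1", 0, b"base layer")      # coarse description survives
        buf.push("frame1", 1, b"refinement 1")
        buf.push("frame1", 2, b"refinement 2")    # overflow evicts this layer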

    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206).

    This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed, and a secondary rate-constrained digital channel; the source is processed prior to transmission through the digital channel. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of the analog and digital channel outputs.

    The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme that devotes a fraction of the available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR.

    A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding. Focusing on the case involving a Gaussian source, a Gaussian channel, and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms, such as linear block transforms, lapped transforms, or subband decomposition, for all of which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, whose performance is very near that of the optimal systematic scalar quantizer. Analogous to entropy coding for conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding.

    by Richard J. Barron. Ph.D.
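
    As a concrete toy illustration of nested (modulo) scalar quantization with decoder side information, the sketch below sends only the fine quantizer index modulo N, and the decoder uses its analog observation to resolve the coset ambiguity. The step size q, coset size N, and noise level are illustrative assumptions, not values from the thesis.

        import numpy as np

        def encode(x, q, N):
            # send the fine quantization index modulo N: log2(N) bits/sample
            return np.round(x / q).astype(int) % N

        def decode(idx, y, q, N):
            # within the received coset, pick the reconstruction level
            # closest to the analog side information y
            k = np.round((y / q - idx) / N)
            return (idx + k * N) * q

        rng = np.random.default_rng(1)
        x = rng.normal(0.0, 1.0, 10000)            # source
        y = x + rng.normal(0.0, 0.03, 10000)       # noisy analog channel output
        q, N = 0.02, 16                            # fine step, coset size
        xhat = decode(encode(x, q, N), y, q, N)
        assert np.mean((xhat - x) ** 2) < q ** 2   # ~q*q/12 when no coset errors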

    Algorithms and architectures for multi-user, multi-terminal, multi-layer information theoretic security

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 161-164).

    As modern infrastructure systems become increasingly more complex, we are faced with many new challenges in the area of information security. In this thesis we examine some approaches to security based on ideas from information theory. The protocols considered in this thesis build upon the "wiretap channel," a model for physical layer security proposed by A. Wyner in 1975. At a higher level, the protocols considered here can strengthen existing mechanisms for security by providing a new location-based approach at the physical layer.

    In the first part of this thesis, we extend the wiretap channel model to the case when there are multiple receivers, each experiencing a time-varying fading channel. Both the scenario when each legitimate receiver wants a common message and the scenario when they all want separate messages are studied, and capacity results are established in several special cases. When each receiver wants a separate independent message, an opportunistic scheme that transmits to the strongest user at each time and uses Gaussian codebooks is shown to achieve the sum secrecy capacity in the limit of many users. When each receiver wants a common message, a lower bound to the capacity is provided, independent of the number of receivers.

    In the second part of the thesis, the role of multiple antennas for secure communication is studied. We establish the secrecy capacity of the multi-antenna wiretap channel (MIMOME channel) when the channel matrices of the legitimate receiver and eavesdropper are fixed and known to all the terminals. To establish the capacity, a new computable upper bound on the secrecy capacity of the wiretap channel is developed, which may be of independent interest. It is shown that Gaussian codebooks suffice to attain the capacity for this problem. For the case when the legitimate receiver has a single antenna (MISOME channel), a rank-one transmission scheme is shown to attain the capacity. In the high signal-to-noise ratio (SNR) regime, it is shown that a capacity-achieving scheme involves simultaneous diagonalization of the channel matrices using the generalized singular value decomposition and independently coding across the resulting parallel channels. Furthermore, a semi-blind masked beamforming scheme is studied, which transmits the signal of interest in the subspace of the legitimate receiver's channel and synthetic noise in the orthogonal subspace. It is shown that this scheme is nearly optimal in the high SNR regime for the MISOME case, and the performance penalty for the MIMOME channel is evaluated in terms of the generalized singular values. The behavior of the secrecy capacity in the limit of many antennas is also studied. When the channel matrices have i.i.d. CN(0, 1) entries, we show that (1) the secrecy capacity for the MISOME channel converges (almost surely) to zero if and only if the eavesdropper increases its antennas at a rate twice as fast as the sender, and (2) when a total of T >> 1 antennas have to be allocated between the sender and the receiver, the optimal allocation, which maximizes the number of eavesdropping antennas required for zero secrecy capacity, is 2 : 1.

    In the final part of the thesis, we consider a variation of the wiretap channel where the sender and legitimate receiver also have access to correlated source sequences. They use both the sources and the structure of the underlying channel to extract secret keys. We provide general upper and lower bounds on the secret key rate and establish the capacity for the reversely degraded case.

    by Ashish Khisti. Ph.D.
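
    For the MISOME case, the capacity characterization referenced above is the largest generalized eigenvalue of a matrix pencil, and the corresponding eigenvector is the capacity-achieving rank-one beam. The sketch below evaluates it for random toy channels; the antenna counts, power, and choice of log base are illustrative assumptions.

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(2)
        nt, ne, P = 4, 2, 1.0                     # tx / eavesdropper antennas, power
        h = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
        G = (rng.normal(size=(ne, nt)) + 1j * rng.normal(size=(ne, nt))) / np.sqrt(2)

        # MISOME secrecy capacity: log of the largest generalized eigenvalue
        # of the pencil (I + P h h', I + P G' G); the top eigenvector is the
        # optimal rank-one beamforming direction.
        A = np.eye(nt) + P * np.outer(h, h.conj())
        B = np.eye(nt) + P * (G.conj().T @ G)
        lam, V = eigh(A, B)                       # eigenvalues in ascending order
        C = max(np.log2(lam[-1]), 0.0)            # secrecy capacity, here in bits
        w = V[:, -1]                              # capacity-achieving beam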

    D11.2 Consolidated results on the performance limits of wireless communications

    Deliverable D11.2 of the European project NEWCOM#. The report presents the intermediate results of the NEWCOM# Joint Research Activities (JRAs) on the performance limits of wireless communications and highlights the fundamental issues that have been investigated by WP1.1. The report illustrates the JRAs already identified during the first year of the project, which are currently ongoing. For each activity there is a description, an illustration of its adherence and relevance to the identified fundamental open issues, a short presentation of the preliminary results, and a roadmap for the joint research work in the next year. Appendices give technical details on the scientific activity in each JRA. Peer reviewed. Preprint.