
    Digital watermarking, information embedding, and data hiding systems

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 139-142).

    Digital watermarking, information embedding, and data hiding systems embed information, sometimes called a digital watermark, inside a host signal, which is typically an image, audio signal, or video signal. The host signal is not degraded unacceptably in the process, and the watermark can be recovered even if the composite host-and-watermark signal undergoes a variety of corruptions and attacks, as long as these do not unacceptably degrade the host signal. These systems play an important role in meeting at least three major challenges that result from the widespread use of digital communication networks to disseminate multimedia content: (1) the ease with which one can generate perfect copies of digital signals creates a need for copyright protection mechanisms, (2) the ease with which one can alter digital signals creates a need for authentication and tamper-detection methods, and (3) the sheer volume of transmitted data creates a demand for bandwidth-efficient methods to backwards-compatibly increase the capacities of existing legacy networks or to deploy new networks that remain backwards-compatible with legacy networks. We introduce a framework within which to design and analyze digital watermarking and information embedding systems. In this framework performance is characterized by achievable rate-distortion-robustness trade-offs, and the framework leads quite naturally to a new class of embedding methods called quantization index modulation (QIM).
    These QIM methods, especially when combined with postprocessing called distortion compensation, achieve provably better rate-distortion-robustness performance than previously proposed classes of methods, such as spread spectrum methods and generalized low-bit modulation, in a number of scenarios that include both intentional and unintentional attacks. Indeed, we show that distortion-compensated QIM methods can achieve capacity, the information-theoretically best possible rate-distortion-robustness performance, against both additive Gaussian noise attacks and arbitrary squared-error-distortion-constrained attacks. These results also have implications for the problem of communicating over broadcast channels. We also present practical implementations of QIM methods, called dither modulation, and demonstrate their performance both analytically and through empirical simulations.

    by Brian Chen. Ph.D.
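The simplest QIM instance described above, binary dither modulation, can be sketched in a few lines: each message bit selects one of two interleaved uniform scalar quantizers, and the host sample is replaced by the nearest point of the selected quantizer. This is a minimal illustration; the step size `DELTA` is an arbitrary choice for the sketch, not a value from the thesis.

```python
DELTA = 4.0  # quantizer step size (illustrative)

def qim_embed(x, bit, delta=DELTA):
    # The bit selects one of two interleaved quantizers (dither 0 or
    # delta/2); the host sample moves to the nearest point of that lattice.
    d = (delta / 2) * bit
    return delta * round((x - d) / delta) + d

def qim_decode(y, delta=DELTA):
    # Decode by finding which of the two lattices is closer to y.
    dists = []
    for bit in (0, 1):
        d = (delta / 2) * bit
        q = delta * round((y - d) / delta) + d
        dists.append(abs(y - q))
    return 0 if dists[0] <= dists[1] else 1
```

Decoding stays correct as long as the perturbation of a watermarked sample stays below `DELTA / 4`, which illustrates the distortion-robustness trade-off that the step size controls.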

    Watermarking for multimedia security using complex wavelets

    This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing for better adaptation of watermark distortions to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised that combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm against commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
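The spread transform embedding mentioned above can be sketched as spread-transform dither modulation: the host block is projected onto a spreading vector, the scalar projection is quantized with a bit-indexed dithered quantizer, and the block is corrected along the spreading direction so its projection lands on the chosen point. This is a scalar sketch with illustrative parameters, not the paper's complex-wavelet-domain implementation.

```python
def qdither(p, bit, delta):
    # nearest point of the bit-indexed dithered quantizer to p
    d = (delta / 2) * bit
    return delta * round((p - d) / delta) + d

def st_dm_embed(block, bit, u, delta=8.0):
    # Project the host block onto the unit-norm spreading vector u,
    # quantize that projection with the quantizer selected by the bit,
    # then move the block along u so its projection hits the chosen point.
    p = sum(x * w for x, w in zip(block, u))
    pq = qdither(p, bit, delta)
    return [x + (pq - p) * w for x, w in zip(block, u)]

def st_dm_decode(block, u, delta=8.0):
    # Decide which quantizer lattice the projection is closer to.
    p = sum(x * w for x, w in zip(block, u))
    return min((0, 1), key=lambda b: abs(p - qdither(p, b, delta)))
```

Spreading the embedding over a whole block means an attacker must perturb the projection, not any single sample, which is the robustness benefit the abstract attributes to the spread spectrum side of the combination.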

    Quantization Watermarking for Joint Compression and Data Hiding Schemes

    Enrichment and protection of JPEG2000 images is an important issue, and data hiding techniques are a good solution to these problems. In this context, we consider a joint approach that introduces data hiding into the JPEG2000 coding pipeline. Data hiding consists of imperceptibly altering multimedia content to convey some information; the process is done in such a way that the hidden data is not perceptible to an observer. Digital watermarking is one type of data hiding. In addition to the imperceptibility and payload constraints, the watermark should be robust against a variety of manipulations or attacks. We focus on trellis coded quantization (TCQ) data hiding techniques and propose two joint JPEG2000 compression and data hiding schemes. The properties of TCQ quantization, defined in JPEG2000 Part 2, are used to perform quantization and information embedding at the same time. The first scheme is designed for content description and management applications, with the objective of achieving high payloads; the compression rate/imperceptibility/payload trade-off is our main concern. The second joint scheme has been developed for robust watermarking and consequently has many possible applications. It achieves a better imperceptibility/robustness trade-off in the context of JPEG2000 compression. We provide experimental results on the implementation of these two schemes.
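As an illustration of how TCQ can quantize and embed in one pass, the toy sketch below uses a 4-state trellis in which each hidden bit selects the branch, and hence the coset used to quantize one sample; the decoder retraces the path by checking which reachable coset each quantized sample lies in. The trellis, cosets, and step size here are illustrative, not the JPEG2000 Part 2 construction used in the paper.

```python
# 4-state trellis: from each state, the hidden bit selects
# (next_state, coset_index). Cosets are D_j = {(4k + j) * Q}, so the two
# cosets reachable from any state are offset by 2*Q and thus separable.
TRELLIS = {
    0: {0: (0, 0), 1: (1, 2)},
    1: {0: (2, 1), 1: (3, 3)},
    2: {0: (0, 2), 1: (1, 0)},
    3: {0: (2, 3), 1: (3, 1)},
}
Q = 1.0  # base step (illustrative); coset spacing is 4 * Q

def nearest_in_coset(x, j, q=Q):
    """Nearest point of coset D_j = {(4k + j) * q} to x."""
    k = round((x - j * q) / (4 * q))
    return (4 * k + j) * q

def tcq_embed(samples, bits):
    """Quantize samples while embedding one bit per sample."""
    state, out = 0, []
    for x, b in zip(samples, bits):
        state, coset = TRELLIS[state][b]
        out.append(nearest_in_coset(x, coset))
    return out

def tcq_extract(quantized):
    """Recover bits by testing which reachable coset each sample lies in."""
    state, bits = 0, []
    for y in quantized:
        d = {b: abs(y - nearest_in_coset(y, TRELLIS[state][b][1]))
             for b in (0, 1)}
        b = min(d, key=d.get)
        bits.append(b)
        state = TRELLIS[state][b][0]
    return bits
```

Because quantization and embedding share the same operation, the embedding cost is folded into the compression distortion, which is the point of the joint schemes described above.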

    A constructive and unifying framework for zero-bit watermarking

    In the watermark detection scenario, also known as zero-bit watermarking, a watermark carrying no hidden message is inserted into content. The watermark detector checks for the presence of this particular weak signal in the content. The article looks at this problem from a classical detection theory point of view, but with side information available at the embedding side: the watermark signal is a function of the host content. Our study is twofold. The first step is to design the best embedding function for a given detection function, and the best detection function for a given embedding function. This yields two conditions, which are mixed into one `fundamental' partial differential equation. It appears that many famous watermarking schemes are indeed solutions to this `fundamental' equation. The study thus gives birth to a constructive framework unifying solutions so far perceived as very different.

    Comment: submitted to IEEE Trans. on Information Forensics and Security.
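The baseline pair that the article generalizes can be sketched as additive embedding followed by a normalized-correlation detector, which outputs only "present" or "absent" rather than decoding message bits. The strength `alpha` and `threshold` below are illustrative values, not figures from the article.

```python
import math

def embed(host, wmark, alpha=2.0):
    # Simple additive embedding y = x + alpha * w. The article's point is
    # that better, host-dependent embedding functions solve the same
    # detection-theoretic equations; this additive rule is the baseline.
    return [x + alpha * w for x, w in zip(host, wmark)]

def detect(content, wmark, threshold=0.1):
    # Zero-bit detector: normalized correlation between the received
    # content and the watermark pattern, compared against a threshold.
    num = sum(c * w for c, w in zip(content, wmark))
    den = (math.sqrt(sum(c * c for c in content))
           * math.sqrt(sum(w * w for w in wmark)))
    return den > 0 and num / den > threshold
```

The threshold sets the false-positive rate on unmarked content, which is exactly the hypothesis-testing framing the abstract describes.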

    Sensor Data Integrity Verification for Real-time and Resource Constrained Systems

    Sensors are used in many applications that touch our lives and have become an integral part of modern life. They are used to build intelligent control systems in industries such as healthcare, transportation, consumer electronics, and the military. Many mission-critical applications require sensor data to be secure and authentic. Sensor data security can be achieved with traditional solutions like cryptography and digital signatures, but these techniques are computationally intensive and cannot easily be applied to resource-constrained systems. Low-complexity data hiding techniques, by contrast, are easy to implement and do not need substantial processing power or memory. In this applied research, we use and configure established low-complexity data hiding techniques from the multimedia forensics domain to secure sensor data transmissions in resource-constrained, real-time environments such as an autonomous vehicle. We identify the areas in an autonomous vehicle that require sensor data integrity, propose suitable watermarking techniques to verify the integrity of the data, and evaluate the performance of the proposed method against different attack vectors. In our proposed method, sensor data is embedded with application-specific metadata, and this process introduces some distortion. We analyze this embedding-induced distortion and its impact on overall sensor data quality, and conclude that watermarking techniques, when properly configured, can solve sensor data integrity verification problems in an autonomous vehicle.

    Ph.D. College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167387/3/Raghavendar Changalvala Final Dissertation.pdf
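As a toy illustration of low-complexity integrity watermarking (not the dissertation's exact scheme), the sketch below hides an 8-bit additive checksum of a frame of integer sensor samples in the samples' least-significant bits; later modification of the frame then breaks the checksum. The frame size and `checksum8` function are illustrative assumptions.

```python
def checksum8(values):
    # toy 8-bit additive checksum over the LSB-cleared samples
    return sum(values) & 0xFF

def watermark_frame(samples):
    # Needs at least 8 integer samples: clear every LSB, then hide the
    # checksum's 8 bits in the LSBs of the first 8 samples.
    carrier = [s & ~1 for s in samples]
    c = checksum8(carrier)
    bits = [(c >> i) & 1 for i in range(8)]
    return [s | b for s, b in zip(carrier[:8], bits)] + carrier[8:]

def verify_frame(samples):
    # Recompute the checksum over the LSB-cleared frame and compare it
    # with the checksum read back from the embedded LSBs.
    carrier = [s & ~1 for s in samples]
    stored = sum((samples[i] & 1) << i for i in range(8))
    return checksum8(carrier) == stored
```

The embedding distortion is at most one least-significant bit per sample, which mirrors the low-distortion, low-complexity requirement the abstract emphasizes for real-time systems.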

    A Joint Coding and Embedding Framework for Multimedia Fingerprinting

    Technology advancement has made multimedia content widely available and easy to process. These benefits also aid unauthorized users, who can duplicate and manipulate multimedia content and redistribute it to a large audience. Unauthorized distribution of information has posed serious threats to government and commercial operations. Digital fingerprinting is an emerging technology that protects multimedia content from such illicit redistribution by uniquely marking every copy of the content distributed to each user. One of the most powerful adversarial attacks is the collusion attack, in which several differently fingerprinted copies of the same content are combined to attenuate or even remove the fingerprints. An ideal fingerprinting system should resist such collusion attacks while keeping embedding and detection computational complexity low and requiring low transmission bandwidth. To achieve these requirements, this thesis presents a joint coding and embedding framework that employs a code layer for efficient fingerprint construction and leverages the embedding layer to achieve high collusion resistance. Based on this framework, we propose two new joint-coding-embedding techniques, namely permuted subsegment embedding and group-based joint-coding-embedding fingerprinting. We show that the proposed fingerprinting framework provides an excellent balance between collusion resistance, efficient construction, and efficient detection. The proposed joint coding and embedding techniques allow us to model both coded and non-coded fingerprinting under the same theoretical model, which can be used to provide guidelines for choosing parameters.
    Based on the proposed joint coding and embedding techniques, we then consider real-world applications, such as DVD movie mass distribution and cable TV, and develop practical algorithms to fingerprint video in challenging practical settings, accommodating more than ten million users and resisting collusion by hundreds of users. Our studies show the high potential of joint coding and embedding to meet the needs of real-world, large-scale fingerprinting applications. The popularity of subscription-based content services such as cable TV inspires us to study content protection in scenarios where users have access to multiple pieces of content, so colluders may pirate multiple movie signals. To address this issue, we exploit the temporal dimension and propose a dynamic fingerprinting scheme that adjusts the fingerprint design based on the detection results for previously pirated signals. We demonstrate the advantages of the proposed dynamic fingerprinting over conventional static fingerprinting. Other issues related to multimedia fingerprinting, such as fingerprinting via QIM embedding, are also discussed in this thesis.
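The averaging collusion attack described above, together with a simple correlation-based accusation rule, can be sketched as follows. The random ±1 fingerprints, embedding strength, and accusation threshold are illustrative assumptions, not the thesis's joint-coding-embedding construction.

```python
import random

def fingerprint_copies(host, n_users, strength=2.0, seed=1):
    # Each user gets a copy marked with an independent random +/-1
    # fingerprint sequence scaled by the embedding strength.
    rng = random.Random(seed)
    fps = [[rng.choice((-1, 1)) for _ in host] for _ in range(n_users)]
    copies = [[h + strength * f for h, f in zip(host, fp)] for fp in fps]
    return fps, copies

def average_collusion(copies):
    # K colluders average their copies, attenuating each fingerprint by 1/K.
    k = len(copies)
    return [sum(vals) / k for vals in zip(*copies)]

def accused(pirated, host, fps, threshold=0.5):
    # Subtract the host, then correlate the residual against every
    # fingerprint; users whose statistic exceeds the threshold are accused.
    resid = [p - h for p, h in zip(pirated, host)]
    out = []
    for i, fp in enumerate(fps):
        stat = sum(r * f for r, f in zip(resid, fp)) / len(fp)
        if stat > threshold:
            out.append(i)
    return out
```

With K colluders, each guilty statistic shrinks roughly as 1/K while the innocent statistics stay near zero, which is why resisting hundreds of colluders requires the coded constructions the thesis develops.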

    Recent Advances in Signal Processing

    Signal processing is a critical issue in the majority of new technological inventions and challenges across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.