
    Watermarking for multimedia security using complex wavelets

    This paper investigates the application of complex wavelet transforms to the field of digital data hiding. Complex wavelets offer improved directional selectivity and shift invariance over their discretely sampled counterparts, allowing watermark distortions to be better adapted to the host media. Two methods of deriving visual models for the watermarking system are adapted to the complex wavelet transforms and their performances are compared. To improve capacity, a spread transform embedding algorithm is devised that combines the robustness of spread spectrum methods with the high capacity of quantization-based methods. Using established information-theoretic methods, limits on watermark capacity are derived that demonstrate the superiority of complex wavelets over discretely sampled wavelets. Finally, results for the algorithm under commonly used attacks demonstrate its robustness and the improved performance offered by complex wavelet transforms.
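    A minimal sketch of the spread transform idea, independent of the complex wavelet machinery: the host is projected onto a spreading direction and only that projection is quantized. The Python/NumPy names below (the host vector x standing in for wavelet coefficients, the spreading direction p, the step size delta) are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def st_dm_embed(x, bit, p, delta):
    """Spread-transform dither modulation (generic sketch)."""
    p = p / np.linalg.norm(p)
    proj = x @ p                              # scalar projection of host onto p
    dither = 0.0 if bit == 0 else delta / 2.0
    q = delta * np.round((proj - dither) / delta) + dither
    return x + (q - proj) * p                 # quantize only the projected component

def st_dm_detect(y, p, delta):
    """Decode the bit whose dithered quantizer is nearest to the projection."""
    p = p / np.linalg.norm(p)
    proj = y @ p
    errs = []
    for dither in (0.0, delta / 2.0):
        q = delta * np.round((proj - dither) / delta) + dither
        errs.append(abs(proj - q))
    return int(np.argmin(errs))

rng = np.random.default_rng(0)
x = rng.normal(size=64)                       # stand-in for host coefficients
p = rng.normal(size=64)
y = st_dm_embed(x, bit=1, p=p, delta=1.0)
assert st_dm_detect(y, p, delta=1.0) == 1
```

    Because only the component along p is quantized, the embedding distortion is confined to one direction, which is how spread transform combines spread-spectrum robustness with quantization-based capacity.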

    Informed stego-systems in active warden context: statistical undetectability and capacity

    Several authors have studied stego-systems based on the Costa scheme, but only a few have given both theoretical and experimental justifications of these schemes' performance in an active warden context. In this paper we provide a steganographic and comparative study of three informed stego-systems in the active warden context: the scalar Costa scheme, trellis-coded quantization, and the spread transform scalar Costa scheme. Relying on analytical formulations and experimental evaluations, we show the advantages and limits of each scheme in terms of statistical undetectability and capacity in the active warden case. The undetectability is given by the distance between the stego-signal distribution and the cover distribution, measured by the Kullback-Leibler distance. Comment: 6 pages, 8 figures
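    As a rough illustration of the first scheme, the following Python/NumPy sketch implements scalar Costa scheme (SCS) embedding together with a crude histogram-based Kullback-Leibler estimate between cover and stego samples. The step size delta, the Costa parameter alpha, and the Gaussian cover model are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scs_embed(x, bits, delta, alpha):
    """Scalar Costa scheme (sketch): move a fraction alpha toward the
    dithered quantizer selected by each bit."""
    d = bits * (delta / 2.0)                       # per-sample dither from bits
    q = delta * np.round((x - d) / delta) + d      # nearest point of the bit's coset
    return x + alpha * (q - x)

def kl_histogram(a, b, bins=100):
    """Crude empirical KL divergence between two sample sets
    (assumption: shared support; small constant avoids log(0))."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pa, _ = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    pb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    pa, pb = pa + 1e-12, pb + 1e-12
    pa, pb = pa / pa.sum(), pb / pb.sum()
    return float(np.sum(pa * np.log(pa / pb)))

rng = np.random.default_rng(1)
cover = rng.normal(scale=10.0, size=100_000)
bits = rng.integers(0, 2, size=cover.size)
stego = scs_embed(cover, bits, delta=4.0, alpha=0.6)
print("empirical KL(stego || cover):", kl_histogram(stego, cover))
```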

    Wide spread spectrum watermarking with side information and interference cancellation

    Nowadays, a popular method for additive watermarking is wide spread spectrum. It consists of adding a spread signal to the host document. This signal is obtained as the sum of a set of carrier vectors, which are modulated by the bits to be embedded. To extract the embedded bits, weighted correlations between the watermarked document and the carriers are computed. Unfortunately, even without any attack, the extracted bits can be corrupted by interference with the host signal (host interference) and by interference with the other carriers (inter-symbol interference, ISI, due to the non-orthogonality of the carriers). Some recent watermarking algorithms deal with host interference using side-informed methods, but the inter-symbol interference problem remains open. In this paper, we address interference cancellation methods and propose to treat ISI as side information and to integrate it into the host signal. This leads to a great improvement in extraction performance in terms of signal-to-noise ratio and/or watermark robustness. Comment: 12 pages, 8 figures
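    A minimal sketch of the baseline wide spread spectrum scheme (before any cancellation) makes the two interference terms concrete; the Gaussian host, the random non-orthogonal carriers, and the dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_bits = 512, 8
x = rng.normal(scale=5.0, size=n)            # host signal
U = rng.normal(size=(n_bits, n))             # random, hence non-orthogonal, carriers
U /= np.linalg.norm(U, axis=1, keepdims=True)
bits = rng.integers(0, 2, size=n_bits)
sym = 2.0 * bits - 1.0                       # BPSK modulation of the bits

w = sym @ U                                  # watermark: sum of modulated carriers
y = x + w                                    # watermarked document, no attack

corr = U @ y                                 # correlation with each carrier
decoded = (corr > 0).astype(int)
print("bit errors without attack:", int(np.sum(decoded != bits)))
```

    Here corr equals U @ x + (U @ U.T) @ sym: the first term is the host interference that side-informed methods remove, and the off-diagonal part of U @ U.T produces the ISI that the paper proposes to treat as additional side information.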

    Oblivious data hiding: a practical approach

    This dissertation presents an in-depth study of oblivious data hiding with an emphasis on quantization-based schemes. Three main issues are specifically addressed: 1. Theoretical and practical aspects of embedder-detector design. 2. Performance evaluation, and analysis of performance vs. complexity tradeoffs. 3. Some application-specific implementations. A communications framework based on channel-adaptive encoding and channel-independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization-based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate performance merits, assuming AWGN attack scenarios and using the mean squared error distortion measure. The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth and limitation of low bandwidth) are examined and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of noninvertible attacks are considered in this dissertation work.
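    For context on the quantization-based schemes the dissertation analyzes, a minimal binary dither modulation (QIM-style) embedder-detector under an AWGN attack might look as follows; the step size, noise level, and Gaussian host are illustrative assumptions.

```python
import numpy as np

def dm_embed(x, bits, delta):
    """Binary dither modulation: quantize each sample onto its bit's coset."""
    d = bits * (delta / 2.0)
    return delta * np.round((x - d) / delta) + d

def dm_detect(y, delta):
    """Minimum-distance decoding: pick the coset nearest to each sample."""
    e0 = np.abs(y - delta * np.round(y / delta))
    d1 = delta / 2.0
    e1 = np.abs(y - (delta * np.round((y - d1) / delta) + d1))
    return (e1 < e0).astype(int)

rng = np.random.default_rng(3)
x = rng.normal(scale=10.0, size=100_000)
bits = rng.integers(0, 2, size=x.size)
delta = 2.0
y = dm_embed(x, bits, delta)
y_attacked = y + rng.normal(scale=0.3, size=y.size)   # AWGN attack
p_err = float(np.mean(dm_detect(y_attacked, delta) != bits))
print(f"empirical probability of error: {p_err:.4f}")
```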

    Nested turbo codes for the Costa problem

    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimizing this tradeoff leads to a code design that outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits gaps of 0.94, 1.42, and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
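    The nested-lattice binning that the TCQ/TTCM construction approximates can be illustrated with toy one-dimensional lattices: a fine lattice Z nested in a coarse lattice 4Z, so each message indexes one of four cosets (bins). The scaling parameter alpha and the noiseless channel in this Python sketch are illustrative simplifications, not the paper's design.

```python
import numpy as np

FINE, COARSE = 1.0, 4.0   # toy nested scalar lattices: Z inside 4Z

def bin_encode(x, msg, alpha):
    """Costa-style binning sketch: quantize alpha*x onto the msg's coset
    of the coarse lattice and transmit only the quantization error."""
    shift = msg * FINE                                 # coset representative
    q = COARSE * np.round((alpha * x - shift) / COARSE) + shift
    return q - alpha * x                               # the signal actually added

def bin_decode(y, alpha):
    """Map the scaled channel output to the nearest fine-lattice point
    and read off its coset (bin) index."""
    nearest = FINE * np.round(alpha * y / FINE)
    return int(np.mod(np.round(nearest / FINE), COARSE / FINE))

rng = np.random.default_rng(4)
x = float(rng.normal(scale=20.0))    # strong interference known to the encoder
msg = 2                              # one of four bin indices
alpha = 0.9                          # in theory chosen near P/(P+N); arbitrary here
w = bin_encode(x, msg, alpha)
y = x + w                            # noiseless channel keeps the sketch short
print("decoded bin:", bin_decode(y, alpha))   # -> 2
```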

    Authentication with Distortion Criteria

    In a variety of applications, there is a need to authenticate content that has experienced legitimate editing in addition to potential tampering attacks. We develop one formulation of this problem based on a strict notion of security, and characterize and interpret the associated information-theoretic performance limits. The results can be viewed as a natural generalization of classical approaches to traditional authentication. Additional insights into the structure of such systems and their behavior are obtained by further specializing the results to the Bernoulli and Gaussian cases. The associated systems are shown to be substantially better in terms of performance and/or security than commonly advocated approaches based on data hiding and digital watermarking. Finally, the formulation is extended to obtain efficient layered authentication system constructions. Comment: 22 pages, 10 figures