26 research outputs found

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state-of-the-art only a few years ago. The advantages offered by digital technology are major, but the same technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but because of the analogue environment the subsequent copies suffered an inherent loss in quality. This was a natural way of limiting the multiple copying of video material. With digital technology this barrier disappears: it is possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system for video broadcast monitoring, compliant with the recommendations drawn up by the EBU. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Because watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark while ensuring its invisibility. The combination of these methods led to a major improvement, yet the system was still not robust to several important geometrical attacks. To achieve this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. Using the reference watermark and techniques specific to image registration, the system is able to estimate the parameters of the attack and revert it. Once the attack has been reverted, the main watermark is recovered. The final result is a high-capacity, blind, DWT-based video watermarking system robust to a wide range of attacks.
    BBC Research & Development
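    As a rough illustration of the spread-spectrum idea described above, the sketch below (Python, assuming NumPy and PyWavelets; the function names, the Haar wavelet, the integer key seed and the strength value are illustrative choices, not the thesis system) embeds one bit into a DWT detail band with a key-seeded pseudo-random sequence and detects it blindly by correlation.

    import numpy as np
    import pywt  # PyWavelets

    def embed_ss_watermark(frame, bit, key, strength=2.0):
        # Additive spread-spectrum embedding of one bit in the horizontal
        # detail band of a single-level 2-D DWT (illustrative only).
        cA, (cH, cV, cD) = pywt.dwt2(frame.astype(float), 'haar')
        rng = np.random.default_rng(key)
        pn = rng.choice([-1.0, 1.0], size=cH.shape)   # pseudo-random chip sequence
        cH_marked = cH + strength * (1.0 if bit else -1.0) * pn
        return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')

    def detect_ss_watermark(frame, key):
        # Blind correlation detector: the sign of the correlation decides the bit.
        _, (cH, _, _) = pywt.dwt2(frame.astype(float), 'haar')
        rng = np.random.default_rng(key)
        pn = rng.choice([-1.0, 1.0], size=cH.shape)
        return float(np.sum(cH * pn)) > 0.0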

    Physical Layer Defenses Against Primary User Emulation Attacks

    Cognitive Radio (CR) is a promising technology that works by detecting unused parts of the spectrum and automatically reconfiguring the communication system's parameters in order to operate in the available channels while minimizing interference. CR enables efficient use of the Radio Frequency (RF) spectrum by generating waveforms that can coexist with existing users in licensed spectrum bands. Spectrum sensing is one of the most important components of a CR system because it provides awareness of the operating environment and detects the presence of primary (licensed) users of the spectrum.
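    A common baseline for the spectrum sensing step mentioned above is energy detection. The sketch below (Python with NumPy/SciPy; the function names and the large-sample Gaussian threshold approximation are assumptions made here for illustration, not the dissertation's PUE defenses) flags a busy channel when the measured energy exceeds a threshold set for a target false-alarm probability.

    import numpy as np
    from scipy.special import erfcinv

    def q_inv(p):
        # Inverse of the Gaussian Q-function via the complementary error function.
        return np.sqrt(2.0) * erfcinv(2.0 * p)

    def energy_detector(samples, noise_power, p_fa=0.01):
        # Decide "primary user present" when the received energy exceeds a
        # threshold chosen, under a Gaussian approximation of the noise-only
        # statistic, to give false-alarm probability p_fa.
        n = len(samples)
        energy = np.sum(np.abs(samples) ** 2)
        threshold = noise_power * (n + np.sqrt(2.0 * n) * q_inv(p_fa))
        return energy > threshold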

    Simplification Resilient LDPC-Coded Sparse-QIM Watermarking for 3D-Meshes

    We propose a blind watermarking scheme for 3-D meshes which combines sparse quantization index modulation (QIM) with deletion correction codes. The QIM operates on the vertices in rough concave regions of the surface, thus ensuring imperceptibility, while the deletion correction code recovers the data hidden in vertices that are removed by mesh optimization and/or simplification. The proposed scheme offers two orders of magnitude better performance, in terms of recovered watermark bit error rate, compared to existing schemes of similar payload and fidelity constraints.
    Comment: Submitted, revised, and copyright transferred to IEEE Transactions on Multimedia, October 9th, 201
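    To make the QIM component concrete, the toy sketch below (Python/NumPy; scalar embedding with an arbitrary step size, not the paper's sparse vertex-domain construction) hides one bit in a scalar feature by choosing between two interleaved quantizers and extracts it by minimum-distance decoding.

    import numpy as np

    def qim_embed(x, bit, delta=0.1):
        # Quantization index modulation on a scalar feature: each bit selects
        # one of two interleaved uniform quantizers offset by delta / 2.
        dither = 0.0 if bit == 0 else delta / 2.0
        return delta * np.round((x - dither) / delta) + dither

    def qim_extract(y, delta=0.1):
        # Minimum-distance decoding: pick the quantizer whose lattice is closest.
        d0 = np.abs(y - qim_embed(y, 0, delta))
        d1 = np.abs(y - qim_embed(y, 1, delta))
        return 0 if d0 <= d1 else 1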

    ERROR CORRECTION CODE-BASED EMBEDDING IN ADAPTIVE RATE WIRELESS COMMUNICATION SYSTEMS

    In this dissertation, we investigated methods for developing embedded channels within the error correction mechanisms used to support adaptive-rate communication systems. We developed an error correction code-based embedding scheme suitable for application in modern wireless data communication standards. We implemented the scheme for both low-density parity check block codes and binary convolutional codes. While error correction code-based information hiding has previously been presented in the literature, we sought to take advantage of the fact that these wireless systems can change their modulation and coding rates in response to changing channel conditions. We used this functionality to incorporate knowledge of the channel state into the scheme, which led to an increase in embedding capacity. We conducted extensive simulations to establish the performance of our embedding methodologies. Results from these simulations enabled the development of models to characterize the behavior of the embedded channels and identify sources of distortion in the underlying communication system. Finally, we developed expressions to define limitations on the capacity of these channels subject to a variety of constraints, including the selected modulation type and coding rate of the communication system, the current channel state, and the specific embedding implementation.
    Commander, United States Navy
    Approved for public release; distribution is unlimited.
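    As a toy illustration of hiding data inside an error correction mechanism, the sketch below (Python/NumPy) uses the classic (7,4) Hamming code: flipping at most one codeword bit encodes a 3-bit symbol in the syndrome, which the overt FEC decoder silently corrects. This stands in for the general idea only; the dissertation's LDPC and convolutional-code schemes and their channel-state adaptation are more elaborate.

    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; column j is j written in binary.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=int)

    def embed_symbol(codeword, symbol):
        # Hide a 3-bit symbol (0..7) in a valid 7-bit Hamming codeword by
        # flipping at most one bit; the overt decoder corrects it transparently.
        w = codeword.copy()
        if symbol != 0:
            w[symbol - 1] ^= 1   # flip the bit whose position equals the syndrome
        return w

    def extract_symbol(received):
        # Covert receiver: the syndrome, read as a binary number, is the symbol.
        s = H.dot(received) % 2
        return int(s[0] * 4 + s[1] * 2 + s[2])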

    Asymptotic Performance Analysis of a K-Hop Amplify-and-Forward Relay MIMO Channel

    The present paper studies the asymptotic performance of multi-hop amplify-and-forward relay multiple-antenna communication channels. Each multi-antenna terminal in the network amplifies the received signal, sent by a source, and retransmits it towards the destination. Achievable ergodic rates under both jointly optimal detection and decoding and practical separate decoding schemes for arbitrary signaling schemes, along with the average bit error rate for various receiver structures, are derived in the regime where the number of antennas at each terminal grows large without bound. To overcome the difficulty of averaging over channel realizations, we apply a large-system analysis based on the replica method from statistical physics. The validity of the large-system analysis is further verified through Monte Carlo simulations of realistic finite-sized systems.
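    The kind of finite-size Monte Carlo check used to validate such large-system results can be sketched as follows (Python/NumPy; two hops, i.i.d. Rayleigh fading, and a simple fixed relay gain normalisation chosen here for illustration, so the model details do not necessarily match the paper's).

    import numpy as np

    def af_two_hop_rate(n=4, snr_db=10.0, trials=200):
        # Monte Carlo estimate of the per-antenna ergodic rate of a two-hop
        # amplify-and-forward MIMO relay channel with joint decoding.
        snr = 10.0 ** (snr_db / 10.0)
        g = np.sqrt((snr / n) / (snr + 1.0))   # fixed relay gain (power normalisation)
        rates = []
        for _ in range(trials):
            H1 = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2.0)
            H2 = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2.0)
            Heff = g * (H2 @ H1)                               # end-to-end channel
            Kn = (g ** 2) * (H2 @ H2.conj().T) + np.eye(n)     # forwarded + local noise
            M = np.eye(n) + (snr / n) * Heff @ Heff.conj().T @ np.linalg.inv(Kn)
            _, logdet = np.linalg.slogdet(M)
            rates.append(logdet / np.log(2.0) / n)
        return float(np.mean(rates))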

    Multiplexing, scheduling, and multicasting strategies for antenna arrays in wireless networks

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 167-174). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
    A transmitter antenna array has the ability to direct data simultaneously to multiple receivers within a wireless network, creating potential for a more integrated view of algorithmic system components. In this thesis, such a perspective informs the design of two system tasks: the scheduling of packets from a number of data streams into groups; and the subsequent spatial multiplexing and encoding of these groups using array processing. We demonstrate how good system designs can help these two tasks reinforce one another, or alternatively enable tradeoffs in complexity between the two. Moreover, scheduling and array processing each benefit from a further awareness of both the fading channel state and certain properties of the data, providing information about key flexibilities, constraints and goals. Our development focuses on techniques that lead to high performance even with very low-complexity receivers. We first consider spatial precoding under simple scheduling and propose several extensions for implementation, such as a unified time-domain precoder that compensates for both cross-channel and intersymbol interference. We then show how more sophisticated, channel-aware scheduling can reduce the complexity requirements of the array processing. The scheduling algorithms presented are based on the receivers' fading channel realizations and the delay tolerances of the data streams. Finally, we address the multicasting of common data streams in terms of opportunities for reduced redundancy as well as the conflicting objectives inherent in sending to multiple receivers. Our channel-aware extensions of space-time codes for multicasting gain several dB over traditional versions that do not incorporate channel knowledge.
    by Michael J. Lopez. Ph.D.
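    A minimal example of the spatial multiplexing idea, under the simplifying assumptions of single-antenna receivers and perfect channel knowledge at the transmitter (Python/NumPy; not the thesis's unified time-domain precoder), is a zero-forcing precoder that pre-cancels cross-channel interference so each receiver sees only its own stream.

    import numpy as np

    def zero_forcing_precoder(H, total_power=1.0):
        # H is (num_users x num_tx_antennas), one row per single-antenna receiver.
        # The pseudo-inverse makes H @ W (approximately) diagonal, so cross-channel
        # interference is removed and the receivers can stay very simple.
        W = np.linalg.pinv(H)
        W *= np.sqrt(total_power) / np.linalg.norm(W, 'fro')   # transmit power budget
        return W

    # Usage: transmit x = W @ s, where s holds one symbol per scheduled user.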

    Proceedings of the 35th WIC Symposium on Information Theory in the Benelux and the 4th joint WIC/IEEE Symposium on Information Theory and Signal Processing in the Benelux, Eindhoven, the Netherlands May 12-13, 2014

    Compressive sensing (CS) as an approach to data acquisition has recently received much attention. In CS, signal recovery from the observed data requires solving for a sparse vector in an underdetermined system of equations. The underlying sparse signal recovery problem is quite general, with many applications, and is the focus of this talk. The main emphasis will be on Bayesian approaches to sparse signal recovery. We will examine sparse priors such as the super-Gaussian and Student-t priors and appropriate MAP estimation methods. In particular, the re-weighted l2 and re-weighted l1 methods developed to solve the optimization problem will be discussed. The talk will also examine a hierarchical Bayesian framework and then study in detail an empirical Bayesian method, the Sparse Bayesian Learning (SBL) method. If time permits, we will also discuss Bayesian methods for sparse recovery problems with structure: intra-vector correlation in the context of the block-sparse model, and inter-vector correlation in the context of the multiple measurement vector problem.
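    The re-weighted l2 idea mentioned above can be sketched in a few lines (Python/NumPy; a generic FOCUSS-style iteration for y = Ax with x sparse, not the speaker's exact algorithm or its Bayesian derivation).

    import numpy as np

    def reweighted_l2(A, y, iters=30, eps=1e-8):
        # Iteratively re-weighted least squares for sparse recovery from an
        # underdetermined system y = A x: each pass solves a weighted
        # minimum-norm problem whose weights favour the currently large entries.
        x = np.linalg.pinv(A) @ y                 # start from the minimum-norm solution
        for _ in range(iters):
            D = np.diag(np.abs(x) + eps)          # small entries are penalised next round
            x = D @ A.T @ np.linalg.solve(A @ D @ A.T, y)
        return x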

    Multiplexing, Scheduling, and Multicasting Strategies for Antenna Arrays in Wireless Networks

    Grant number: CCR-9979363
    A transmitter antenna array has the ability to direct data simultaneously to multiple receivers within a wireless network, creating potential for a more integrated view of algorithmic system components. In this thesis, such a perspective informs the design of two system tasks: the scheduling of packets from a number of data streams into groups; and the subsequent spatial multiplexing and encoding of these groups using array processing. We demonstrate how good system designs can help these two tasks reinforce one another, or alternatively enable tradeoffs in complexity between the two. Moreover, scheduling and array processing each benefit from a further awareness of both the fading channel state and certain properties of the data, providing information about key flexibilities, constraints and goals. Our development focuses on techniques that lead to high performance even with very low-complexity receivers. We first consider spatial precoding under simple scheduling and propose several extensions for implementation, such as a unified time-domain precoder that compensates for both cross-channel and intersymbol interference. We then show how more sophisticated, channel-aware scheduling can reduce the complexity requirements of the array processing. The scheduling algorithms presented are based on the receivers' fading channel realizations and the delay tolerances of the data streams. Finally, we address the multicasting of common data streams in terms of opportunities for reduced redundancy as well as the conflicting objectives inherent in sending to multiple receivers. Our channel-aware extensions of space-time codes for multicasting gain several dB over traditional versions that do not incorporate channel knowledge.
    NSF, HP/MIT Alliance

    Systematic hybrid analog/digital signal coding

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 201-206).
    This thesis develops low-latency, low-complexity signal processing solutions for systematic source coding, or source coding with side information at the decoder. We consider an analog source signal transmitted through a hybrid channel that is the composition of two channels: a noisy analog channel through which the source is sent unprocessed, and a secondary rate-constrained digital channel; the source is processed prior to transmission through the digital channel. The challenge is to design a digital encoder and decoder that provide a minimum-distortion reconstruction of the source at the decoder, which has observations of analog and digital channel outputs. The methods described in this thesis have importance to a wide array of applications. For example, in the case of in-band on-channel (IBOC) digital audio broadcast (DAB), an existing noisy analog communications infrastructure may be augmented by a low-bandwidth digital side channel for improved fidelity, while compatibility with existing analog receivers is preserved. Another application is a source coding scheme which devotes a fraction of available bandwidth to the analog source and the rest of the bandwidth to a digital representation. This scheme is applicable in a wireless communications environment (or any environment with unknown SNR), where analog transmission has the advantage of a gentle roll-off of fidelity with SNR. A very general paradigm for low-latency, low-complexity source coding is composed of three basic cascaded elements: 1) a space rotation, or transformation, 2) quantization, and 3) lossless bitstream coding. The paradigm has been applied with great success to conventional source coding, and it applies equally well to systematic source coding. Focusing on the case involving a Gaussian source, Gaussian channel and mean-squared distortion, we determine optimal or near-optimal components for each of the three elements, each of which has analogous components in conventional source coding. The space rotation can take many forms such as linear block transforms, lapped transforms, or subband decomposition, all for which we derive conditions of optimality. For a very general case we develop algorithms for the design of locally optimal quantizers. For the Gaussian case, we describe a low-complexity scalar quantizer, the nested lattice scalar quantizer, that has performance very near that of the optimal systematic scalar quantizer. Analogous to entropy coding for conventional source coding, Slepian-Wolf coding is shown to be an effective lossless bitstream coding stage for systematic source coding.
    by Richard J. Barron. Ph.D.
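    The nested scalar quantizer concept can be illustrated with a short sketch (Python/NumPy; the step size and the number of cosets are arbitrary values chosen here, whereas the real design is matched to the source and channel statistics): only the coset index of a fine quantizer cell is sent digitally, and the decoder resolves the remaining ambiguity with the noisy analog observation acting as side information.

    import numpy as np

    def nested_sq_encode(x, fine_step=0.25, num_cosets=4):
        # Quantize with a fine uniform quantizer but transmit only the coset
        # index within the coarse lattice: log2(num_cosets) bits per sample.
        q = int(np.round(x / fine_step))
        return q % num_cosets

    def nested_sq_decode(coset, side_info, fine_step=0.25, num_cosets=4):
        # Pick, within the signalled coset, the fine cell closest to the noisy
        # analog observation (the side information).
        coarse_step = fine_step * num_cosets
        k = np.round((side_info - coset * fine_step) / coarse_step)
        return (k * num_cosets + coset) * fine_step

    # Usage: label = nested_sq_encode(x); x_hat = nested_sq_decode(label, y),
    # where y is the analog-channel observation of x available at the decoder.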

    Privacy-preserving information hiding and its applications

    The phenomenal advances in cloud computing technology have raised concerns about data privacy. Aided by modern cryptographic techniques such as homomorphic encryption, it has become possible to carry out computations in the encrypted domain and process data without compromising information privacy. In this thesis, we study various classes of privacy-preserving information hiding schemes and their real-world applications to cyber security, cloud computing, the Internet of things, etc. Data breach is recognised as one of the most dreadful cyber security threats, in which private data is copied, transmitted, viewed, stolen or used by unauthorised parties. Although encryption can obfuscate private information against unauthorised viewing, it may not stop data from illegitimate exportation. Privacy-preserving information hiding can serve as a potential solution to this issue, in such a manner that a permission code is embedded into the encrypted data and can be detected when transmissions occur. Digital watermarking is a technique that has been used for a wide range of intriguing applications such as data authentication and ownership identification. However, some of the algorithms are proprietary intellectual property and thus their availability to the general public is rather limited. A possible solution is to outsource the task of watermarking to an authorised cloud service provider that has the legitimate right to execute the algorithms as well as high computational capacity. Privacy-preserving information hiding is well suited to this scenario since it operates in the encrypted domain and hence prevents private data from being collected by the cloud. The Internet of things is a promising technology for the healthcare industry. A common framework consists of wearable equipment for monitoring the health status of an individual, a local gateway device for aggregating the data, and a cloud server for storing and analysing the data. However, there are risks that an adversary may attempt to eavesdrop on the wireless communication, attack the gateway device or even gain access to the cloud server. Hence, it is desirable to produce and encrypt the data simultaneously and to incorporate secret sharing schemes to realise access control. Privacy-preserving secret sharing is a novel line of research for fulfilling this function. In summary, this thesis presents novel schemes and algorithms, including:
    • two privacy-preserving reversible information hiding schemes based upon symmetric cryptography, using arithmetic of quadratic residues and lexicographic permutations, respectively;
    • two privacy-preserving reversible information hiding schemes based upon asymmetric cryptography, using multiplicative and additive privacy homomorphisms, respectively;
    • four predictive models for assisting the removal of distortions inflicted by information hiding, based respectively upon the projection theorem, image gradient, total variation denoising, and Bayesian inference;
    • three privacy-preserving secret sharing algorithms with different levels of generality.
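    As a point of reference for the secret sharing component, the sketch below shows the textbook Shamir (t, n) threshold construction over a prime field (plain Python; the prime and the interface are illustrative choices, and the thesis's three privacy-preserving algorithms generalise well beyond this).

    import random

    PRIME = 2 ** 61 - 1   # a Mersenne prime, large enough for toy secrets

    def make_shares(secret, threshold, num_shares):
        # Shamir secret sharing: any `threshold` shares reconstruct the secret,
        # while fewer shares reveal nothing about it.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, num_shares + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret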