11 research outputs found

Nested turbo codes for the Costa problem

    Get PDF
    Driven by applications in data hiding, MIMO broadcast channel coding, precoding for interference cancellation, and transmitter cooperation in wireless networks, Costa coding has lately become a very active research area. In this paper, we first offer code design guidelines in terms of source-channel coding for algebraic binning. We then address practical code design based on nested lattice codes and propose nested turbo codes using turbo-like trellis-coded quantization (TCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. Compared to TCQ, turbo-like TCQ offers structural similarity between the source and channel coding components, leading to more efficient nesting with TTCM and better source coding performance. Due to the difference in effective dimensionality between turbo-like TCQ and TTCM, there is a performance tradeoff between these two components when they are nested together: the performance of turbo-like TCQ worsens as the TTCM code becomes stronger, and vice versa. Optimization of this performance tradeoff leads to our code design, which outperforms existing TCQ/TCM and TCQ/TTCM constructions and exhibits a gap of 0.94, 1.42 and 2.65 dB to the Costa capacity at 2.0, 1.0, and 0.5 bits/sample, respectively.
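    The algebraic binning idea behind Costa coding can be illustrated, in a much simpler setting than the nested TCQ/TTCM construction above, by a one-dimensional quantization-based scheme in the spirit of the scalar Costa scheme. The sketch below is a hedged illustration only, not the paper's code design: the step size delta and the distortion-compensation factor alpha are arbitrary illustrative values, and decoding is a minimum-distance coset decision.

```python
import numpy as np

def scs_embed(host, bits, delta=4.0, alpha=0.7):
    """Embed one bit per host sample with a scalar (1-D lattice) binning code.

    Each bit selects a coset ("bin") of the lattice delta*Z: offset 0 for bit 0,
    offset delta/2 for bit 1.  The host sample is quantized to the nearest coset
    point and the embedder moves only a fraction alpha of the way there
    (distortion compensation), so the known interference is largely pre-cancelled
    while the embedding distortion stays bounded.
    """
    host = np.asarray(host, dtype=float)
    offsets = np.asarray(bits, dtype=float) * (delta / 2.0)
    q = np.round((host - offsets) / delta) * delta + offsets   # nearest coset point
    return host + alpha * (q - host)                           # move partway toward it

def scs_decode(received, delta=4.0):
    """Decide each bit by which coset of delta*Z the received sample is closer to."""
    received = np.asarray(received, dtype=float)
    d0 = np.abs(received - np.round(received / delta) * delta)
    d1 = np.abs(received - (np.round((received - delta / 2) / delta) * delta + delta / 2))
    return (d1 < d0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    host = rng.normal(0, 10, size=1000)              # interference known at the encoder
    bits = rng.integers(0, 2, size=1000)
    marked = scs_embed(host, bits)
    noisy = marked + rng.normal(0, 0.3, size=1000)   # channel noise unknown to both sides
    print("bit error rate:", np.mean(scs_decode(noisy) != bits))
```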

    Information Forensics and Security: A quarter-century-long journey

    Get PDF
    Information forensics and security (IFS) is an active R&D area whose goal is to ensure that people use devices, data, and intellectual property for authorized purposes and to facilitate the gathering of solid evidence to hold perpetrators accountable. For over a quarter century, since the 1990s, the IFS research area has grown tremendously to address the societal needs of the digital information era. The IEEE Signal Processing Society (SPS) has emerged as an important hub and leader in this area, and this article celebrates some landmark technical contributions. In particular, we highlight the major technological advances made by the research community in selected focus areas of the field during the past 25 years and present future trends.

    Contribution to the construction of fingerprinting and watermarking schemes to protect mobile agents and multimedia content

    Get PDF
    The main characteristic of fingerprinting codes is the need for high error-correction capacity, because they are designed to withstand collusion attacks that damage many symbols of the codewords. Moreover, the use of a fingerprinting scheme depends on the watermarking system that embeds the codeword into the content and on how well that system honors the marking assumption. In this sense, even though fingerprinting codes were mainly used to protect multimedia content, using them in software protection systems is an option worth considering. This thesis studies how to use codes with iterative-decoding algorithms, mainly turbo codes, to solve the fingerprinting problem. Initially, it studies the effectiveness of current approaches based on concatenating traditional fingerprinting schemes with convolutional codes and turbo codes. It is shown that these kinds of constructions end up generating a high number of false positives. Although this thesis contains some proposals to improve these schemes, the direct use of turbo codes, without concatenation with a fingerprinting code as inner code, has also been considered. It is shown that the performance of turbo codes with appropriate constituent codes makes them a valid alternative for environments with hundreds of users and 2 or 3 traitors. As constituent codes, we have chosen low-rate convolutional codes with maximum free distance. Regarding how to use fingerprinting codes with watermarking schemes, we have studied the option of using watermarking systems based on informed coding and informed embedding. It has been found that, due to the different encodings available for the same symbol, their applicability to embedding fingerprints is very limited. In this sense, some modifications to these systems have been proposed in order to adapt them properly to fingerprinting applications. Moreover, the behavior and impact of YouTube's processing on a video produced by a collusion of 2 users has been studied. We have also studied the optimal parameters for viable tracing of users who have colluded and used YouTube to redistribute the copy generated by their collusion attack. Finally, we have studied how to apply fingerprinting schemes and software watermarking to address the problem of malicious hosts on mobile agent platforms. In this regard, four different alternatives have been proposed to protect the agent, depending on whether one only wants to detect the attack or to prevent it in real time. Two of these proposals focus on the protection of intrusion detection systems based on mobile agents. Each of these solutions has different implications in terms of infrastructure and complexity.
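    The marking assumption and collusion attack mentioned above can be made concrete with a toy simulation: colluders can only alter codeword positions where their copies differ, and a simple correlation decoder accuses the user whose fingerprint agrees most with the pirated copy. This is a generic illustration under assumed random binary fingerprints, not the turbo-code construction studied in the thesis; the user count and code length below are arbitrary.

```python
import numpy as np

def collude(codewords, rng):
    """Marking-assumption collusion: positions where all colluders agree are kept;
    where they differ, the pirated copy takes one of the colluders' values
    (chosen uniformly at random here)."""
    codewords = np.asarray(codewords)
    c, n = codewords.shape
    pick = rng.integers(0, c, size=n)              # which colluder supplies each symbol
    return codewords[pick, np.arange(n)]

def accuse(pirate, all_codewords):
    """Score every user by agreement with the pirated codeword and accuse the
    highest-scoring one (a plain correlation decoder, not a turbo decoder)."""
    scores = (np.asarray(all_codewords) == pirate).sum(axis=1)
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_users, code_len = 200, 2048
    codes = rng.integers(0, 2, size=(n_users, code_len))   # random binary fingerprints
    traitors = [3, 57, 121]
    pirate = collude(codes[traitors], rng)
    accused, _ = accuse(pirate, codes)
    print("accused user:", accused, "is a traitor:", accused in traitors)
```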

    Digital watermark technology in security applications

    Get PDF
    With the rising emphasis on security and the number of fraud-related crimes around the world, authorities are looking for new technologies to tighten the security of identity documents. Among many modern electronic technologies, digital watermarking has unique advantages for enhancing document authenticity. At the current stage of development, digital watermarking technologies are not as mature as other competing technologies for supporting identity authentication systems. This work presents improvements in the performance of two classes of digital watermarking techniques and investigates the issue of watermark synchronisation. Optimal performance can be obtained if the spreading sequences are designed to be orthogonal to the cover vector. In this thesis, two classes of orthogonalisation methods that generate binary sequences quasi-orthogonal to the cover vector are presented. One method, namely "Sorting and Cancelling", generates sequences that have a high level of orthogonality to the cover vector. The Hadamard-matrix-based orthogonalisation method, namely "Hadamard Matrix Search", is able to realise overlapped embedding, so the watermarking capacity and image fidelity can be improved compared to using short watermark sequences. The results are compared with traditional pseudo-randomly generated binary sequences. The advantages of both classes of orthogonalisation methods are significant. Another watermarking method introduced in the thesis is based on the writing-on-dirty-paper theory. The method is presented with biorthogonal codes, which give the best robustness. The advantages and trade-offs of using biorthogonal codes with this watermark coding method are analysed comprehensively. Comparisons between orthogonal and non-orthogonal codes used in this watermarking method are also made. It is found that fidelity and robustness are contradictory requirements and it is not possible to optimise them simultaneously. Comparisons are also made between all proposed methods, focused on three major performance criteria: fidelity, capacity and robustness. From two different viewpoints, the conclusions are not the same. From the fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal codes has a very strong advantage in preserving image fidelity, and its capacity advantage is also significant. From the power-ratio point of view, however, the orthogonalisation methods demonstrate a significant advantage in capacity and robustness. The conclusions are contradictory, but together they summarise the performance obtained from different design considerations. Watermark synchronisation is first provided by high-contrast frames around the watermarked image. Edge detection filters are used to detect the high-contrast borders of the captured image. By scanning the pixels from the border to the centre, the locations of detected edges are stored. An optimal linear regression algorithm is used to estimate the watermarked image frame: the slope of the regression line gives the rotation angle of the frame. Scaling is corrected by re-sampling the upright image to its original size. A theoretically studied method able to synchronise the captured image to sub-pixel accuracy is also presented. Using invariant transforms and the "symmetric phase only matched filter", the captured image can be corrected accurately to its original geometric size. The method uses repeating watermarks to form an array in the spatial domain of the watermarked image; the locations of the array elements reveal rotation, translation and scaling through two filtering processes.
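    The benefit of making the spreading sequence quasi-orthogonal to the cover can be sketched with a generic greedy sign-flipping construction followed by standard additive spread-spectrum embedding and correlation detection. This is a hedged illustration only; it is not the thesis's "Sorting and Cancelling" or "Hadamard Matrix Search" methods, and the cover model and embedding strength are arbitrary.

```python
import numpy as np

def quasi_orthogonal_sequence(cover, rng):
    """Build a +/-1 sequence with very low correlation to the cover vector.

    Generic greedy construction: start from a random sequence and flip signs,
    largest cover magnitudes first, whenever a flip reduces |<s, cover>|.
    """
    cover = np.asarray(cover, dtype=float)
    s = rng.choice([-1.0, 1.0], size=cover.size)
    corr = float(s @ cover)
    for i in np.argsort(-np.abs(cover)):
        flipped = corr - 2.0 * s[i] * cover[i]
        if abs(flipped) < abs(corr):
            s[i], corr = -s[i], flipped
    return s

def embed(cover, sequence, bit, strength=0.5):
    """Additive spread-spectrum embedding: add or subtract the whole sequence."""
    return cover + strength * (1.0 if bit else -1.0) * sequence

def detect(received, sequence):
    """Blind correlation detector: the sign of the correlation gives the bit."""
    return int(np.dot(received, sequence) > 0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cover = rng.normal(0, 20, size=4096)              # e.g. a block of pixel values
    seq = quasi_orthogonal_sequence(cover, rng)
    print("residual cover correlation:", float(seq @ cover))
    marked = embed(cover, seq, bit=1)
    print("detected bit:", detect(marked, seq))       # expected: 1
```

    Because the cover component barely contributes to the correlation, the detector's decision statistic is dominated by the watermark itself, which is the motivation for the orthogonalisation methods described in the abstract.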

    Source-channel coding for robust image transmission and for dirty-paper coding

    Get PDF
    In this dissertation, we studied two seemingly unrelated, but conceptually related, problems in terms of source-channel coding: 1) wireless image transmission and 2) Costa ("dirty-paper") code design. In the first part of the dissertation, we consider progressive image transmission over a wireless system employing space-time coded OFDM. The space-time coded OFDM system, based on a newly built broadband MIMO fading model, is theoretically evaluated by assuming perfect channel state information (CSI) at the receiver for coherent detection. Then an adaptive modulation scheme is proposed to pick the constellation size that offers the best reconstructed image quality for each average signal-to-noise ratio (SNR). A more practical scenario is also considered without the assumption of perfect CSI. We employ low-complexity decision-feedback decoding for differentially space-time coded OFDM systems to exploit transmitter diversity. For JSCC, we adopt a product channel code structure that is proven to provide powerful error protection and bursty error correction. To further improve the system performance, we also apply powerful iterative (turbo) coding techniques and propose iterative decoding of differentially space-time coded multiple descriptions of images. The second part of the dissertation deals with practical dirty-paper code designs. We first invoke an information-theoretical interpretation of algebraic binning and motivate the code design guidelines in terms of source-channel coding. Then two dirty-paper code designs are proposed. The first is a nested turbo construction based on soft-output trellis-coded quantization (SOTCQ) for source coding and turbo trellis-coded modulation (TTCM) for channel coding. A novel procedure is devised to balance the dimensionalities of the equivalent lattice codes corresponding to SOTCQ and TTCM. The second dirty-paper code design employs TCQ and IRA codes for near-capacity performance. This is done by synergistically combining TCQ with IRA codes so that they work together as well as they do individually. Our TCQ/IRA design approaches the dirty-paper capacity limit in the low-rate regime (e.g., < 1.0 bit/sample), while our nested SOTCQ/TTCM scheme provides the best performance so far at medium-to-high rates (e.g., >= 1.0 bit/sample). Thus the two proposed practical code designs are complementary to each other.
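    The adaptive modulation step described above (choosing a constellation per average SNR) can be sketched with a simplified selection rule. The sketch below is an assumption-laden stand-in, not the dissertation's system: it maximizes the expected error-free throughput of a block using the standard M-QAM symbol-error-rate approximation over AWGN, rather than the reconstructed image quality criterion, and the function names, candidate constellations and block length are illustrative.

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ser(m, es_n0):
    """Standard approximation of the symbol error rate of square M-QAM on AWGN."""
    return 4.0 * (1.0 - 1.0 / math.sqrt(m)) * qfunc(math.sqrt(3.0 * es_n0 / (m - 1.0)))

def pick_constellation(snr_db, block_symbols=1000, candidates=(4, 16, 64, 256)):
    """Pick the constellation maximizing expected error-free throughput:
    bits per symbol times the probability the whole block is received correctly."""
    es_n0 = 10.0 ** (snr_db / 10.0)
    def goodput(m):
        return math.log2(m) * (1.0 - mqam_ser(m, es_n0)) ** block_symbols
    return max(candidates, key=goodput)

if __name__ == "__main__":
    for snr in (5, 10, 15, 20, 25, 30):
        print(f"{snr:2d} dB -> {pick_constellation(snr)}-QAM")
```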

    An Asymmetric Watermarking Method

    Get PDF
    Special Issue on Signal Processing for Data Hiding in Digital Media and Secure Content Delivery. This article presents an asymmetric watermarking method as an alternative to classical Direct Sequence Spread Spectrum and Watermarking Costa Scheme techniques. This new method provides a higher security level against malicious attacks threatening watermarking techniques used for copy protection purposes. This application, which is quite different from the classical copyright enforcement issue, is extremely challenging, as no public algorithm is so far known to be secure enough and some proposed proprietary techniques have already been hacked. Our method is thus a step towards showing that the Kerckhoffs principle can be upheld in the copy protection framework.

    Sensor Data Integrity Verification for Real-time and Resource Constrained Systems

    Full text link
    Sensors are used in multiple applications that touch our lives and have become an integral part of modern life. They are used in building intelligent control systems in various industries like healthcare, transportation, consumer electronics, military, etc. Many mission-critical applications require sensor data to be secure and authentic. Sensor data security can be achieved using traditional solutions like cryptography and digital signatures, but these techniques are computationally intensive and cannot be easily applied to resource constrained systems. Low-complexity data hiding techniques, on the contrary, are easy to implement and do not need substantial processing power or memory. In this applied research, we use and configure established low-complexity data hiding techniques from the multimedia forensics domain. These techniques are used to secure sensor data transmissions in resource constrained and real-time environments such as an autonomous vehicle. We identify the areas in an autonomous vehicle that require sensor data integrity, propose suitable watermarking techniques to verify the integrity of the data, and evaluate the performance of the proposed method against different attack vectors. In our proposed method, sensor data is embedded with application-specific metadata, and this process introduces some distortion. We analyze this embedding-induced distortion and its impact on the overall sensor data quality, and conclude that watermarking techniques, when properly configured, can solve sensor data integrity verification problems in an autonomous vehicle. Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan-Dearborn. http://deepblue.lib.umich.edu/bitstream/2027.42/167387/3/Raghavendar Changalvala Final Dissertation.pdf
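    A minimal sketch of the kind of low-complexity data hiding described above: application metadata (here a hypothetical 16-bit frame counter plus a CRC-32 of the payload) is embedded in the least significant bits of the first few sensor samples and checked at the receiver. The field layout, 16-bit sample width and LSB channel are illustrative assumptions, not the dissertation's configuration.

```python
import numpy as np
import zlib

def embed_metadata(samples, frame_id):
    """Hide a 16-bit frame counter and a CRC-32 of the payload in the least
    significant bit of the first 48 samples (illustrative field layout)."""
    marked = np.asarray(samples, dtype=np.uint16).copy()
    payload_crc = zlib.crc32(marked[48:].tobytes())
    bits = [(frame_id >> i) & 1 for i in range(16)] + \
           [(payload_crc >> i) & 1 for i in range(32)]
    marked[:48] = (marked[:48] & np.uint16(0xFFFE)) | np.array(bits, dtype=np.uint16)
    return marked

def verify_metadata(samples, expected_frame_id):
    """Re-derive the hidden fields and check both the counter and the CRC."""
    samples = np.asarray(samples, dtype=np.uint16)
    bits = samples[:48] & 1
    frame_id = sum(int(b) << i for i, b in enumerate(bits[:16]))
    crc = sum(int(b) << i for i, b in enumerate(bits[16:]))
    ok = (frame_id == expected_frame_id) and (crc == zlib.crc32(samples[48:].tobytes()))
    return ok, frame_id

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 4096, size=1024, dtype=np.uint16)  # e.g. lidar intensities
    marked = embed_metadata(frame, frame_id=7)
    print(verify_metadata(marked, expected_frame_id=7))        # (True, 7)
    tampered = marked.copy(); tampered[500] ^= 4               # modify one payload sample
    print(verify_metadata(tampered, expected_frame_id=7))      # (False, 7)
```

    The embedding only perturbs the least significant bit of a handful of samples, which is the sense in which such schemes trade a small, analysable distortion for an integrity check that needs no heavyweight cryptography.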

    Robust digital image watermarking algorithms for copyright protection

    Get PDF
    Digital watermarking has been proposed as a solution to the problem of resolving copyright ownership of multimedia data (image, audio, video). The work presented in this thesis is concerned with the design of robust digital image watermarking algorithms for copyright protection. Firstly, an overview of the watermarking system and applications of watermarks, as well as a survey of current watermarking algorithms and attacks, are given. Further, the use of feature point detectors in the field of watermarking is introduced. A new class of scale-invariant feature point detectors is investigated, and it is shown that they have the excellent performance required for watermarking. Robustness of the watermark to geometrical distortions is a very important issue in watermarking. In order to determine the parameters of the affine transformation an image has undergone, we propose an image registration technique based on the scale-invariant feature point detector. Another proposed technique for watermark synchronization is also based on the scale-invariant feature point detector. This technique does not use the original image to determine the parameters of the affine transformation, which include rotation and scaling. It is experimentally confirmed that this technique gives excellent results under the tested geometrical distortions. In the thesis, two different watermarking algorithms are proposed in the wavelet domain. The first algorithm belongs to the class of additive watermarking algorithms, which require the presence of the original image for watermark detection. Using this algorithm, the influence of different error correction codes on watermark robustness is investigated. The second algorithm does not require the original image for watermark detection. The robustness of this algorithm is tested against various filtering and compression attacks. This algorithm is successfully combined with the aforementioned synchronization technique in order to achieve robustness to geometrical attacks. The last watermarking algorithm presented in the thesis is developed in the complex wavelet domain. The complex wavelet transform is described and its advantages over the conventional discrete wavelet transform are highlighted. The robustness of the proposed algorithm was tested against different classes of attacks. Finally, conclusions are drawn and the main future research directions are suggested.
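    The registration step described above, estimating the affine (rotation and scaling) parameters from feature points, can be sketched with a least-squares fit over matched point pairs. The sketch assumes the feature detection and matching have already been done (the scale-invariant detector is abstracted away), so it corresponds to the registration variant that has the original image's points available; all numbers in the demo are synthetic and the function names are illustrative.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares estimate of an affine map dst ~ A @ src + t from matched
    feature-point pairs (feature detection/matching assumed done beforehand)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    X = np.hstack([src, np.ones((src.shape[0], 1))])   # n x 3 design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:2].T, params[2]                      # A (2x2), t (2,)

def rotation_and_scale(A):
    """Recover rotation angle (degrees) and isotropic scale from a similarity-like A."""
    scale = np.sqrt(np.abs(np.linalg.det(A)))
    angle = np.degrees(np.arctan2(A[1, 0], A[0, 0]))
    return angle, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 512, size=(30, 2))                      # original feature points
    theta, s = np.radians(12.0), 1.3                             # simulated attack
    R = s * np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
    dst = src @ R.T + np.array([5.0, -3.0]) + rng.normal(0, 0.5, size=src.shape)
    A, t = estimate_affine(src, dst)
    print("estimated angle, scale:", rotation_and_scale(A))      # close to (12.0, 1.3)
```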

    Contribution des filtres LPTV et des techniques d'interpolation au tatouage numérique

    Get PDF
    Periodic Clock Changes (PCC) and Linear Periodically Time-Varying (LPTV) filters have previously been applied to multi-user telecommunications in the Signal and Communications group of the IRIT laboratory. In this thesis, we show that in any digital watermarking scheme involving spread spectrum, they can be substituted for modulation by a pseudo-random sequence. The additional steps of optimal decoding, resynchronization, pre-cancellation of interference and quantization of a spread transform also apply to PCCs and LPTV filters. For white Gaussian stationary signals, these techniques offer performance similar to classical Direct Sequence (DS) spreading. However, we show that in the case of locally correlated signals, such as image luminance, the periodicity of PCCs and LPTV filters associated with a Peano-Hilbert scan leads to better performance. Moreover, LPTV filters are a more powerful tool than simple DS modulation. We use LPTV filters to perform spectrum masking simultaneously with spreading, as well as image-interference cancellation in the spectral domain. The latter technique offers good decoding performance.
The second axis of this thesis is the study of the links between interpolation and digital watermarking. We stress the role of interpolation in attacks on the robustness of the watermark. We then propose watermarking techniques that benefit from the perceptual properties of interpolation. The first technique constructs perceptual masks proportional to an interpolation error. In the second technique, an informed watermarking scheme is built around interpolation. This scheme exhibits good perceptual properties, host-interference rejection and robustness to various attacks such as valumetric transforms. Its security level is assessed with practical attack algorithms.
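    The Peano-Hilbert scan mentioned above is what turns the locally correlated 2-D luminance into a 1-D signal whose neighbouring samples remain spatial neighbours, which the periodic structure of PCCs and LPTV filters can then exploit. Below is a minimal sketch of a standard Hilbert-curve scan (a generic construction, not the thesis's implementation); the spreading transform itself is left out.

```python
import numpy as np

def hilbert_d2xy(n, d):
    """Map index d along an n x n Hilbert curve (n a power of two) to (x, y)
    pixel coordinates, using the standard iterative construction."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(image):
    """Read a square 2^k x 2^k image along the Hilbert curve, producing a 1-D
    signal in which neighbouring samples are also spatial neighbours."""
    n = image.shape[0]
    order = [hilbert_d2xy(n, d) for d in range(n * n)]
    return np.array([image[y, x] for x, y in order]), order

if __name__ == "__main__":
    img = np.arange(64, dtype=float).reshape(8, 8)    # toy "luminance" block
    signal, order = hilbert_scan(img)
    print(signal[:16])                                # locally correlated 1-D signal
    # a spreading transform (DS, PCC or LPTV filtering) would now operate on `signal`
```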

    Audio Informed Watermarking by means of Dirty Trellis Codes

    No full text
    We present a frequency-domain audio watermarking scheme based on dirty convolutional codes. In the scenario addressed by the paper, a masking threshold is properly defined to characterise the inaudibility of the inserted data. In particular, the masking threshold defines the maximum modification that can be applied to each frequency sample. This represents a major deviation from classical distortion models, in which inaudibility is defined in terms of Mean Square Error (MSE), and it makes the direct application of the dirty-coding paradigm, derived from a theoretical perspective, problematic. To get around this problem, we first define an informed watermarking scheme based on trellis codes, in which the same information is represented by several paths of the trellis. Then, we determine both the specific structure of the codes and the algorithm for information embedding. The proposed scheme is shown to be robust to D/A and A/D conversion, multipath, scaling, noise, and time misalignment.
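    The core idea above, representing the same information by several alternatives and letting the embedder pick the one closest to the host, can be sketched in a reduced form where the trellis is replaced by a small per-bit codebook of carrier patterns. The sketch is a hedged illustration of informed coding and embedding under arbitrary parameters (pattern count, block length, strength), not the paper's dirty convolutional code or its psychoacoustic masking threshold.

```python
import numpy as np

def make_codebook(n_patterns, block_len, rng):
    """For each message bit value, a small set of +/-1 patterns; any of them
    represents that bit (a miniature stand-in for the many trellis paths that
    encode the same information in a dirty trellis code)."""
    return {b: rng.choice([-1.0, 1.0], size=(n_patterns, block_len)) for b in (0, 1)}

def informed_embed(host_block, bit, codebook, strength):
    """Informed coding + embedding: pick the pattern for `bit` that is already
    best aligned with the host block, then add it with a small strength."""
    patterns = codebook[bit]
    best = patterns[np.argmax(patterns @ host_block)]
    return host_block + strength * best

def decode(block, codebook):
    """Decide the bit whose best-correlating pattern matches the block most."""
    score = {b: np.max(codebook[b] @ block) for b in (0, 1)}
    return 0 if score[0] > score[1] else 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    codebook = make_codebook(n_patterns=16, block_len=256, rng=rng)
    bits = rng.integers(0, 2, size=50)
    host = rng.normal(0, 1, size=(50, 256))           # e.g. blocks of frequency samples
    marked = np.array([informed_embed(h, b, codebook, strength=0.4)
                       for h, b in zip(host, bits)])
    noisy = marked + rng.normal(0, 0.2, size=marked.shape)
    decoded = [decode(blk, codebook) for blk in noisy]
    print("bit error rate:", np.mean(np.array(decoded) != bits))
```

    Choosing the best-aligned representative is what gives informed schemes their host-interference rejection: the host's contribution to the detector already points toward the selected pattern, so less embedding energy is needed for the same robustness.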