Advances in Syndrome Coding based on Stochastic and Deterministic Matrices for Steganography
Steganography is the art of confidential communication. Unlike in cryptography, where the exchange of confidential data is obvious to third parties, in a steganographic system the confidential data are embedded into other, inconspicuous cover data (e.g. images) and transmitted to the recipient in that form.
The goal of a steganographic algorithm is to change the cover data only slightly, so as to preserve their statistical properties, and to embed preferably in inconspicuous parts of the cover. To achieve this goal, several approaches to so-called minimum-embedding-impact steganography based on syndrome coding are presented. A distinction is made between approaches based on stochastic matrices and those based on deterministic matrices. Finally, the algorithms are evaluated in order to highlight the advantages of applying syndrome coding.
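As a concrete illustration (ours, not from the abstract), the core idea of syndrome coding can be sketched with the classic [7,4] Hamming code: three message bits are hidden in seven cover bits by changing at most one bit, which is exactly the minimum-embedding-impact principle with a deterministic matrix.

```python
# Sketch of syndrome coding (matrix embedding) with the [7,4] Hamming code.
# All names below are ours, chosen for illustration.

# Parity-check matrix H: column i (1-based) is the binary expansion of i.
H = [[(i >> k) & 1 for i in range(1, 8)] for k in range(3)]

def syndrome(bits):
    """Compute H * bits over GF(2) as a 3-bit tuple."""
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

def embed(cover, message):
    """Change at most one of the 7 cover bits so the syndrome equals message."""
    stego = list(cover)
    diff = tuple(a ^ b for a, b in zip(syndrome(stego), message))
    if any(diff):
        # The H-column equal to diff sits at position diff[0]+2*diff[1]+4*diff[2].
        idx = diff[0] + 2 * diff[1] + 4 * diff[2]
        stego[idx - 1] ^= 1
    return stego

def extract(stego):
    """The receiver recovers the message as the syndrome of the stego bits."""
    return syndrome(stego)

cover = [1, 0, 1, 1, 0, 0, 1]
msg = (1, 0, 1)
stego = embed(cover, msg)
assert extract(stego) == msg
assert sum(c != s for c, s in zip(cover, stego)) <= 1
```

The receiver never needs the cover: the message is just the syndrome of the stego bits, which is what makes syndrome coding attractive for steganography.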
Adaptive 3D Mesh Steganography Based on Feature-Preserving Distortion
3D mesh steganographic algorithms based on geometric modification are vulnerable to 3D steganalyzers. In this paper, we propose a highly adaptive 3D mesh steganographic algorithm based on feature-preserving distortion (FPD), which guarantees high embedding capacity while effectively resisting 3D steganalysis. Specifically, we first transform vertex coordinates into integers and derive bitplanes from them to construct the embedding domain. To better measure the mesh distortion caused by message embedding, we propose FPD based on the most effective sub-features of the state-of-the-art steganalytic feature set. By improving and minimizing FPD, we can efficiently calculate the optimal vertex-changing distribution and simultaneously preserve mesh features, such as steganalytic and geometric features, to a certain extent. By virtue of the optimal distribution, we adopt Q-layered syndrome trellis coding (STC) for practical message embedding. However, when Q varies, calculating the bit modification probability (BMP) in each layer of the Q-layered STC becomes cumbersome. Hence, we design a universal and automatic BMP calculation approach. Extensive experimental results demonstrate that the proposed algorithm outperforms most state-of-the-art 3D mesh steganographic algorithms in terms of resisting 3D steganalysis.
Comment: IEEE TVCG, major revision
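The embedding-domain construction mentioned above (coordinates to integers, then bitplanes) can be sketched as follows. This is our own minimal illustration under an assumed quantization precision, not the paper's code; a real implementation would also handle the sign of negative coordinates explicitly.

```python
# Sketch: quantize vertex coordinates to integers at a chosen precision and
# split them into bitplanes; the lowest planes form the modifiable domain.

def to_integers(coords, precision=1e-4):
    """Quantize floating-point coordinates to integers (precision is assumed)."""
    return [round(c / precision) for c in coords]

def bitplane(ints, k):
    """Extract the k-th least-significant bitplane of a list of integers."""
    return [(v >> k) & 1 for v in ints]

verts = [0.1234, 0.0567, 0.9000]
ints = to_integers(verts)                       # [1234, 567, 9000]
planes = [bitplane(ints, k) for k in range(3)]  # three lowest bitplanes
```

Message bits would then be embedded into these low bitplanes, with the vertex-changing distribution chosen by minimizing the distortion function.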
Steganographer Identification
Conventional steganalysis detects the presence of steganography within single objects. In the real world, we may face a more complex scenario in which one or several of multiple users, called actors, are guilty of using steganography; this is typically defined as the Steganographer Identification Problem (SIP). One might use conventional steganalysis algorithms to separate stego objects from cover objects and then identify the guilty actors. However, the guilty actors may be missed due to a number of false alarms. To deal with the SIP, most state-of-the-art methods use unsupervised learning based approaches. In these solutions, each actor holds multiple digital objects, from which a set of feature vectors can be extracted. Well-defined distances between these feature sets are computed to measure the similarity between the corresponding actors. By applying clustering or outlier detection, the most suspicious actor(s) are judged to be the steganographer(s). Though the SIP needs further study, existing works are well able to identify the steganographer(s) when non-adaptive steganographic embedding was applied. In this chapter, we present foundational concepts and review advanced methodologies in SIP. This chapter is self-contained and intended as a tutorial introducing the SIP in the context of media steganography.
Comment: a tutorial with 30 pages
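The unsupervised pipeline described above (feature sets per actor, pairwise distances, outlier detection) can be sketched minimally as follows. This is our own toy illustration: each actor is reduced to the mean of its feature vectors and the actor farthest from the others is flagged; real systems use richer set distances (e.g. MMD) and clustering.

```python
# Minimal SIP sketch: represent each actor by a centroid of feature vectors,
# score actors by mean distance to the others, flag the largest score.

import math

def actor_centroid(features):
    """Mean feature vector of one actor's objects."""
    n = len(features)
    return [sum(col) / n for col in zip(*features)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_suspicious(actors):
    """Index of the actor with the largest mean distance to all other actors."""
    cents = [actor_centroid(f) for f in actors]
    scores = [
        sum(euclidean(c, d) for j, d in enumerate(cents) if j != i) / (len(cents) - 1)
        for i, c in enumerate(cents)
    ]
    return max(range(len(scores)), key=scores.__getitem__)

# Three actors with similar feature statistics and one deviating actor (index 2).
actors = [
    [[0.1, 0.2], [0.0, 0.1]],
    [[0.2, 0.1], [0.1, 0.0]],
    [[0.9, 1.1], [1.0, 0.9]],
    [[0.0, 0.2], [0.1, 0.1]],
]
assert most_suspicious(actors) == 2
```

The key design point, as in the chapter, is that no labeled training data are needed: guilt is inferred purely from an actor's statistical deviation from the population.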
Fast Computation of Cyclic Convolutions and Their Applications in Code-Based Asymmetric Encryption Schemes
The development of fast algorithms for key generation, encryption, and decryption does more than increase the efficiency of the related operations. Such fast algorithms, for example for asymmetric cryptosystems based on quasi-cyclic codes, make it possible to study experimentally how the decoding failure rate depends on the code parameters at small security levels and to extrapolate these results to large security levels. In this article, we explore efficient cyclic convolution algorithms designed, among other things, for use in the encoding and decoding algorithms of quasi-cyclic LDPC and MDPC codes. The corresponding convolutions operate on binary vectors, which can be either sparse or dense. The proposed algorithms achieve high speed by storing sparse vectors compactly, using hardware-supported XOR instructions, and replacing modulo operations with specialized loop transformations. These fast algorithms have potential applications not only in cryptography but also in other areas where convolutions are used.
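The sparse-by-dense case described above can be sketched as follows. This is our own illustration, not the paper's implementation: the sparse operand is stored as a list of its nonzero positions, the dense operand as a bit-packed integer, and the product over GF(2) is accumulated with XORs of cyclic shifts, with no per-coefficient modulo reduction.

```python
# Cyclic convolution over GF(2), sparse operand times dense operand,
# modulo x^n - 1. Vectors are packed into Python integers (bit i = coeff of x^i).

def cyclic_shift(v, s, n):
    """Rotate an n-bit vector v (packed into an int) left by s positions."""
    s %= n
    mask = (1 << n) - 1
    return ((v << s) | (v >> (n - s))) & mask

def sparse_dense_cyclic_conv(support, dense, n):
    """(sum of x^i for i in support) * dense  mod (x^n - 1) over GF(2)."""
    acc = 0
    for i in support:
        acc ^= cyclic_shift(dense, i, n)  # one XOR per nonzero coefficient
    return acc

# n = 8; sparse polynomial x^1 + x^3, dense vector with bits 0 and 2 set.
n = 8
result = sparse_dense_cyclic_conv([1, 3], 0b0000_0101, n)
# shift by 1 sets bits {1, 3}; shift by 3 sets bits {3, 5}; XOR leaves {1, 5}
assert result == 0b0010_0010
```

The cost is one word-wide rotate-and-XOR per nonzero coefficient of the sparse vector, which is exactly why compact storage of the support pays off for the sparse private keys of QC-LDPC/MDPC schemes.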
Machine learning based digital image forensics and steganalysis
The security and trustworthiness of digital images have become crucial issues due to the simplicity of malicious processing. Therefore, the research on image steganalysis (determining if a given image has secret information hidden inside) and image forensics (determining the origin and authenticity of a given image and revealing the processing history the image has gone through) has become crucial to the digital society.
In this dissertation, the steganalysis and forensics of digital images are treated as pattern classification problems so as to make advanced machine learning (ML) methods applicable. Three topics are covered: (1) architectural design of convolutional neural networks (CNNs) for steganalysis, (2) statistical feature extraction for camera model classification, and (3) real-world tampering detection and localization.
For covert communications, steganography is used to embed secret messages into images by altering pixel values slightly. Since advanced steganography alters the pixel values in image regions that are hard to detect, traditional ML-based steganalytic methods, which rely heavily on sophisticated manual feature design, have been pushed to their limits. To overcome this difficulty, in-depth studies are conducted and reported in this dissertation in order to carry the success achieved by CNNs in computer vision over to steganalysis. The outcomes reported in this dissertation are: (1) a proposed CNN architecture incorporating the domain knowledge of steganography and steganalysis, and (2) ensemble methods of CNNs for steganalysis. The proposed CNN is currently one of the best classifiers against steganography.
Camera model classification aims at assigning a given image to the camera model that captured it, based on the statistics of image pixel values. For this, two types of statistical features are designed to capture the traces left by in-camera image processing algorithms. The first is Markov transition probabilities modeling block-DCT coefficients of JPEG images; the second is based on histograms of local binary patterns obtained in both the spatial and wavelet domains. The designed features serve as the input to train support vector machines, which achieved the best classification performance at the time the features were proposed.
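The second feature type mentioned above can be sketched minimally as follows. This is our own illustration of a plain 8-neighbour local binary pattern histogram on a grayscale image given as a 2-D list; the dissertation's features also cover the wavelet domain and further refinements.

```python
# 256-bin histogram of 8-neighbour local binary patterns over interior pixels.

def lbp_histogram(img):
    h, w = len(img), len(img[0])
    # 8 neighbours, clockwise from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                # Set the bit when the neighbour is at least as bright as the centre.
                if img[y + dy][x + dx] >= c:
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = [
    [10, 20, 10],
    [20, 15, 20],
    [10, 20, 10],
]
hist = lbp_histogram(img)
assert sum(hist) == 1       # a 3x3 image has a single interior pixel
assert hist[170] == 1       # the 4 edge-neighbours (bits 1, 3, 5, 7) are >= 15
```

Such histograms summarize local texture statistics, which are perturbed in characteristic ways by in-camera processing like demosaicing and sharpening.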
The last part of this dissertation documents the solutions delivered by the author's team to the First Image Forensics Challenge, organized by the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. In the competition, all the fake images involved were doctored with popular image-editing software to simulate the real-world scenario of tampering detection (determining whether a given image has been tampered with) and localization (determining which pixels have been tampered with). In Phase 1 of the Challenge, advanced steganalysis features were successfully migrated to tampering detection. In Phase 2, an efficient copy-move detector equipped with PatchMatch as a fast approximate nearest-neighbor search method was developed to identify duplicated regions within images. With these tools, the author's team won the runner-up prizes in both phases of the Challenge.
A contribution to the theory of convolutional codes from a systems theory point of view
Information is one of the most valuable goods of our time. Its transmission has always been subject to accuracy problems: obstacles exist between the transmitter and the receiver, disruptions can occur anywhere in between, and the physical media and channels involved in the exchange are never perfect, being subject to errors that might result in the loss of important data. Error-correcting codes are a key element in the transmission and storage of digital information.
In this thesis we study the possibility of redefining and improving properties of convolutional codes in terms of encoding and decoding, with the help of systems and control theory.
To that end, in Chapter 1 we recall notions of coding theory, more specifically of linear codes, both block and convolutional, redefining convolutional codes as submodules of F_q^n, which is our main workspace. We also review the prerequisites involved in the processes of encoding and decoding, for both block and convolutional codes.
In order to approach them with the tools of systems theory, in Chapter 2 we give the equivalence of the generator matrix in the form of a realization (A, B, C, D) of an input-output system. We then study concatenation, since it has been proved to improve transmission. In this work, we consider two big families of concatenation, serial and parallel, as well as two further models called systematic serial concatenation and parallel interleaver concatenation.
In Chapter 3, we study control properties for each case, focusing on the property of output-observability and on conditions to obtain it; in particular, an easy iterative test is presented to decide whether a code is output-observable. This test consists in calculating certain ranks of block matrices constructed from the matrices A, B, C, D. The output-observability property is very beneficial for decoding, as discussed in the next chapter.
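An iterative rank test of this general kind can be sketched as follows. This is our own illustration, not the thesis's algorithm: we stack the Markov parameters D, CB, CAB, ... into growing block-Toeplitz matrices T_k and report their ranks, which is the raw material such a test compares (the precise acceptance condition in the thesis may differ). Ranks are computed exactly over the rationals.

```python
# Build block-Toeplitz matrices from the Markov parameters of (A, B, C, D)
# and compute their ranks by exact Gaussian elimination.

from fractions import Fraction

def mat_mul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def rank(M):
    """Matrix rank via Gaussian elimination over exact rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def toeplitz_rank_sequence(A, B, C, D, kmax):
    """Ranks of T_1..T_kmax, where T_k stacks D, CB, CAB, ... Toeplitz-wise."""
    markov = [D]
    CAk = C
    for _ in range(kmax - 1):
        markov.append(mat_mul(CAk, B))
        CAk = mat_mul(CAk, A)
    p, m = len(D), len(D[0])
    ranks = []
    for k in range(1, kmax + 1):
        # Block row i, block column j holds markov[i - j]; zero block if i < j.
        T = [
            [markov[i - j][r][c] if i >= j else 0
             for j in range(k) for c in range(m)]
            for i in range(k) for r in range(p)
        ]
        ranks.append(rank(T))
    return ranks

# Toy single-input single-output system.
A, B, C, D = [[0, 1], [0, 0]], [[0], [1]], [[1, 0]], [[0]]
ranks = toeplitz_rank_sequence(A, B, C, D, 3)  # -> [0, 0, 1]
```

Comparing how these ranks grow with k is what lets such a test decide a structural property of the realization from finitely many matrix computations.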
Moreover, in Chapter 4, we assess two methods for complete decoding operating in an iterative fashion, and then suggest conditions for step-by-step decoding in the case of concatenation, in order to recover exactly each original sequence after the operation of every code involved. Following this concept, we study convolutional decoding in general, and in particular that of the concatenated models: serial, parallel, systematic serial, and finally parallel interleaver.
In Chapter 5, we suggest an application to steganography, in which we implement a steganographic scheme inspired by the linear-system representation of convolutional codes. With the output-observability matrix as the backbone of our decoding algorithms, coupled with the syndrome method, we build embedding/retrieval algorithms inspired by that construction. These methods protect communication in time-related transfers of information, with interesting possibilities and results.
Finally, a closing chapter summarizes all our achievements and gives a short list of possible future lines of work on aspects that we would like to continue studying in order to achieve new related goals.
Secure digital documents using Steganography and QR Code
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
With the increasing use of the Internet, several problems have arisen regarding the processing of electronic documents, including content filtering and content retrieval/search. Moreover, document security has taken centre stage, including copyright protection, broadcast monitoring, etc. There is an acute need for an effective tool which can establish the identity, location, and time of creation of a document, so that it can be determined whether or not its contents were tampered with after creation. Owing to the sensitivity of the large amounts of data processed on a daily basis, verifying the authenticity and integrity of a document is more important now than it ever was. Unsurprisingly, document authenticity verification has become a centre of attention in the world of research. Consequently, this research is concerned with creating a tool which deals with the above problem. It proposes the use of a Quick Response Code (QR code) as a message carrier for Text Key-print. Text Key-print is a novel method which employs the basic elements of the language (i.e. the characters of the alphabet) to achieve authenticity of electronic documents through the transformation of their physical structure into a logically structured relationship. The resulting dimensional matrix is then converted into a binary stream and encapsulated with a serial number or URL inside a QR code to form a digital fingerprint mark. For hiding the QR code, two image steganography techniques were developed, based on the spatial and transform domains. In the spatial domain, three methods were proposed and implemented based on the least-significant-bit insertion technique and the use of a pseudorandom number generator to scatter the message over a set of arbitrary pixels.
These methods utilise the three colour channels of RGB images to embed one, two, or three bits per eight-bit channel, resulting in three different hiding capacities. The second technique is an adaptive approach in the transform domain, where a threshold value is calculated at a predefined location to determine the embedding strength. The quality of the generated stego images was evaluated using both objective (PSNR) and subjective (DSCQS) methods to ensure the reliability of the proposed methods. The experimental results revealed that PSNR is not a strong indicator of the perceived stego-image quality, though not a bad predictor of it either. Since the visual difference between the cover and the stego image must be imperceptible to the human visual system, it was natural to ask human observers with different qualifications and experience in the field of image processing to evaluate the perceived quality of the cover and stego images. The subjective responses were then analysed using statistical measures describing the distribution of the scores given by the assessors. The proposed scheme thus presents an alternative approach to protecting digital documents beyond the traditional techniques of digital signature and watermarking.
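The first spatial technique described above can be sketched minimally as follows. This is our own single-channel toy illustration under simplified assumptions (one bit per pixel, pixels as a flat list): the target pixels are chosen by a pseudorandom number generator seeded with a shared key, and PSNR serves as the objective quality measure.

```python
# LSB embedding into pseudorandomly scattered pixels, plus PSNR.

import math
import random

def embed_lsb(pixels, bits, key):
    """Hide bits in the LSBs of PRNG-chosen pixels (8-bit, single channel)."""
    stego = list(pixels)
    rng = random.Random(key)                         # key acts as the shared seed
    positions = rng.sample(range(len(stego)), len(bits))
    for pos, bit in zip(positions, bits):
        stego[pos] = (stego[pos] & ~1) | bit         # overwrite the LSB
    return stego

def extract_lsb(stego, nbits, key):
    """Regenerate the same positions from the key and read the LSBs back."""
    rng = random.Random(key)
    positions = rng.sample(range(len(stego)), nbits)
    return [stego[pos] & 1 for pos in positions]

def psnr(cover, stego, peak=255):
    """Peak signal-to-noise ratio in dB between cover and stego signals."""
    mse = sum((a - b) ** 2 for a, b in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

cover = [52, 55, 61, 66, 70, 61, 64, 73]
bits = [1, 0, 1]
stego = embed_lsb(cover, bits, key=42)
assert extract_lsb(stego, 3, key=42) == bits
assert psnr(cover, stego) > 40   # each pixel changes by at most 1
```

Because both sides derive the same pixel positions from the shared key, the message can be recovered without transmitting the cover image, and the scattering avoids the sequential-embedding artifacts that simple steganalysis exploits.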