
    Towards Improved Steganalysis: When Cover Selection is Used in Steganography

    This paper proposes an improved steganalytic method for the case where cover selection is used in steganography. We observed that the covers selected by existing cover selection methods typically have characteristics that differ from those of ordinary covers, and we propose a steganalytic method to capture such differences. As a result, the detection accuracy of steganalysis is increased. In our method, we consider a number of images collected from one or more target (suspected but not known) users, and use an unsupervised learning algorithm such as k-means to adapt the performance of a pre-trained classifier towards the cover selection operation of the target user(s). The adaptation is done via pseudo-labels derived from the suspected images themselves, thus making the re-trained classifier better aligned with the cover selection operation of the target user(s). We give experimental results to show that our method can indeed help increase the detection accuracy, especially when the percentage of stego images is between 0.3 and 0.7.
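    The adaptation step lends itself to a short illustration. The following is a minimal sketch of the pseudo-label idea described above, written in Python with a scikit-learn-style workflow; the feature matrix, the pre-trained cover/stego classifier, and all function and variable names are hypothetical placeholders rather than the paper's actual code.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LogisticRegression

        def adapt_classifier(pretrained, suspected_features, n_clusters=2):
            """Re-train a cover/stego classifier with pseudo-labels obtained by
            clustering features of images collected from the target user(s)."""
            # Cluster the suspected images into two groups (cover-like vs stego-like).
            clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(suspected_features)

            # Let the pre-trained classifier decide which cluster is the "stego" one
            # (assumed convention: class 1 = stego): the cluster whose members
            # receive the higher average stego probability.
            scores = pretrained.predict_proba(suspected_features)[:, 1]
            stego_cluster = int(np.argmax(
                [scores[clusters == c].mean() for c in range(n_clusters)]))
            pseudo_labels = (clusters == stego_cluster).astype(int)

            # Re-train on the pseudo-labelled images so the classifier adapts to the
            # target user's cover-selection behaviour.
            return LogisticRegression(max_iter=1000).fit(suspected_features, pseudo_labels)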

    On Foundations of Protecting Computations

    Information technology systems have become indispensable to uphold our way of living, our economy and our safety. Failure of these systems can have devastating effects. Consequently, securing these systems against malicious intentions deserves our utmost attention. Cryptography provides the necessary foundations for that purpose. In particular, it provides a set of building blocks which allow larger information systems to be secured. Furthermore, cryptography develops concepts and techniques towards realizing these building blocks. The protection of computations is one invaluable concept for cryptography which paves the way towards realizing a multitude of cryptographic tools. In this thesis, we contribute to this concept of protecting computations in several ways.

    Protecting computations of probabilistic programs. An indistinguishability obfuscator (IO) compiles (deterministic) code such that it becomes provably unintelligible. This can be viewed as the ultimate way to protect (deterministic) computations. Due to very recent research, such obfuscators enjoy plausible candidate constructions. In certain settings, however, it is necessary to protect probabilistic computations. The only known construction of an obfuscator for probabilistic programs is due to Canetti, Lin, Tessaro, and Vaikuntanathan (TCC, 2015) and requires an indistinguishability obfuscator which satisfies extreme security guarantees. We improve this construction and thereby reduce the requirements on the security of the underlying indistinguishability obfuscator. (Agrikola, Couteau, and Hofheinz, PKC, 2020)

    Protecting computations in cryptographic groups. To facilitate the analysis of building blocks which are based on cryptographic groups, these groups are often overidealized such that computations in the group are protected from the outside. Using such overidealizations makes it possible to prove building blocks secure which are sometimes beyond the reach of standard model techniques. However, these overidealizations are subject to certain impossibility results. Recently, Fuchsbauer, Kiltz, and Loss (CRYPTO, 2018) introduced the algebraic group model (AGM) as a relaxation which is closer to the standard model but in several aspects preserves the power of said overidealizations. However, their model still suffers from implausibilities. We develop a framework which allows several security proofs to be transported from the AGM into the standard model, thereby evading the above implausibility results, and instantiate this framework using an indistinguishability obfuscator. (Agrikola, Hofheinz, and Kastner, EUROCRYPT, 2020)

    Protecting computations using compression. Perfect compression algorithms admit the property that the compressed distribution is truly random, leaving no room for any further compression. This property is invaluable for several cryptographic applications such as "honey encryption" or password-authenticated key exchange. However, perfect compression algorithms only exist for a very small number of distributions. We relax the notion of compression and rigorously study the resulting notion, which we call "pseudorandom encodings". As a result, we identify various surprising connections between seemingly unrelated areas of cryptography. In particular, we derive novel results for adaptively secure multi-party computation, which allows for protecting computations in distributed settings. Furthermore, we instantiate the weakest version of pseudorandom encodings which suffices for adaptively secure multi-party computation using an indistinguishability obfuscator. (Agrikola, Couteau, Ishai, Jarecki, and Sahai, TCC, 2020)
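    As background for the role indistinguishability obfuscation plays in all three contributions, the standard textbook security requirement on an obfuscator iO can be written as follows; this is the generic definition, not a formula taken from the thesis itself:

        \[
          |C_0| = |C_1| \;\wedge\; \forall x:\, C_0(x) = C_1(x)
          \quad\Longrightarrow\quad
          i\mathcal{O}(C_0) \;\approx_c\; i\mathcal{O}(C_1),
        \]

    i.e. obfuscations of equal-size, functionally equivalent circuits are computationally indistinguishable, while iO(C) still computes the same function as C.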

    Image watermarking schemes, watermarking schemes joint with compression, and data-hiding schemes

    In this manuscript we address data-hiding in images and videos. Specifically, we address robust watermarking for images, robust watermarking jointly with compression, and finally non-robust data-hiding.

    The first part of the manuscript deals with high-rate robust watermarking. After briefly recalling the concept of informed watermarking, we study the two major watermarking families: trellis-based watermarking and quantization-based watermarking. We propose, firstly, to reduce the computational complexity of trellis-based watermarking with a rotation-based embedding, and secondly, to introduce a trellis-based quantization into a watermarking system based on quantization.

    The second part of the manuscript addresses the problem of watermarking jointly with a JPEG2000 or an H.264 compression step. The quantization step and the watermarking step are performed simultaneously, so that these two steps do not work against each other. Watermarking in JPEG2000 is achieved by using the trellis quantization from Part 2 of the standard. Watermarking in H.264 is performed on the fly, after the quantization stage, by choosing the best prediction through the rate-distortion optimization process. We also propose to integrate a Tardos code to build a traitor-tracing application.

    The last part of the manuscript describes different mechanisms for hiding color information in a grayscale image. We propose two approaches based on hiding a color palette in its index image. The first approach relies on the optimization of an energy function to obtain a decomposition of the color image that allows an easy embedding. The second approach consists in quickly obtaining a color palette of larger size and then embedding it in a reversible way.

    Watermarking of the MPEG-4 AVC compressed stream

    The present thesis addresses MPEG-4 AVC stream watermarking and considers two theoretical and applicative challenges, namely ownership protection and content integrity verification. From the theoretical point of view, the main challenge is to develop a unitary watermarking framework (insertion/detection) able to serve the two above-mentioned applications in the compressed domain. From the methodological point of view, the challenge is to instantiate this theoretical framework for serving the targeted applications. The first main contribution consists in building the theoretical framework for multi-symbol watermarking based on quantization index modulation (m-QIM). The insertion rule is analytically designed by extending the binary QIM rule. The detection rule is optimized so as to ensure minimal probability of error under additive white Gaussian noise attacks. It is thus demonstrated that the data payload can be increased by a factor of log2(m), for prescribed transparency and additive Gaussian noise power. A data payload of 150 bits per minute, i.e. about 20 times larger than the limit imposed by the DCI standard, is obtained. The second main theoretical contribution consists in specifying a preprocessing MPEG-4 AVC shaping operation which can eliminate the intra-frame drift effect, i.e. the distortion spread in the compressed stream related to the MPEG encoding paradigm. In this respect, the drift distortion propagation problem in MPEG-4 AVC is algebraically expressed and the corresponding equation system is solved under drift-free constraints. The drift-free shaping results in a transparency gain of 2 dB in PSNR.
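    The m-QIM idea (one dithered quantizer per symbol, with detection by minimum re-quantization error) can be illustrated with a generic scalar sketch in Python; this is an illustrative textbook-style QIM, not the exact m-QIM rule derived in the thesis, and the step size delta and alphabet size M are placeholder parameters.

        import numpy as np

        def qim_embed(x, symbol, M=4, delta=1.0):
            # Each of the M symbols selects its own dither, i.e. its own shifted lattice.
            d = delta * symbol / M
            # Quantize the host sample onto that shifted lattice.
            return delta * np.round((x - d) / delta) + d

        def qim_detect(y, M=4, delta=1.0):
            # Pick the symbol whose shifted lattice lies closest to the received sample.
            errors = [abs(y - qim_embed(y, m, M, delta)) for m in range(M)]
            return int(np.argmin(errors))

    With M symbols, each embedded sample carries log2(M) bits instead of 1, which is the payload gain the abstract refers to; robustness is governed by the ratio between the noise power and the quantization step delta.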

    Proceedings of the 10th Japanese-Hungarian Symposium on Discrete Mathematics and Its Applications


    Proceedings of the XIII Spanish Meeting on Cryptology and Information Security (RECSI XIII): Alicante, 2-5 September 2014

    If we had to choose a set of keywords to define today's society, the term information would undoubtedly be one of the most representative. We live in a world characterized by a continuous flow of information, in which Information and Communication Technologies (ICT) and social networks play a prominent role. The Information Society generates a great variety of data in digital format, and protecting these data against unauthorized access and use is the main objective of what we know as Information Security. While Cryptology is a basic technological tool, devoted to the development and analysis of systems and protocols that guarantee data security, the spectrum of technologies involved in protecting information is broad and spans several disciplines. One characteristic of this science is its rapid and constant evolution, driven in part by the continuous advances in computing, especially over the last decades. Systems, protocols and tools considered secure today will cease to be so in the more or less near future, which makes it essential to develop new tools that efficiently guarantee the required levels of security. The Spanish Meeting on Cryptology and Information Security (RECSI) is the reference Spanish scientific conference in the field of Cryptology and ICT Security, where the leading Spanish researchers, together with researchers of other nationalities, periodically gather to share their most recent research results. The thirteenth edition will be held from 2 to 5 September 2014 in the city of Alicante, organized by the Cryptology and Computational Security group of the Universidad de Alicante. Previous editions took place in Palma de Mallorca (1991), Madrid (1992), Barcelona (1994), Valladolid (1996), Torremolinos (1998), Santa Cruz de Tenerife (2000), Oviedo (2002), Leganés (2004), Barcelona (2006), Salamanca (2008), Tarragona (2010) and San Sebastián (2012).

    WICC 2016: XVIII Workshop of Researchers in Computer Science

    Proceedings of the XVIII Workshop of Researchers in Computer Science (WICC 2016), held at the Universidad Nacional de Entre Ríos on 14 and 15 April 2016. Red de Universidades con Carreras en Informática (RedUNCI).

    XXI Workshop of Researchers in Computer Science (WICC 2019): book of proceedings

    Papers presented at the XXI Workshop of Researchers in Computer Science (WICC), held in the province of San Juan on 25 and 26 April 2019 and organized by the Red de Universidades con Carreras en Informática (RedUNCI) and the Facultad de Ciencias Exactas, Físicas y Naturales of the Universidad Nacional de San Juan.