Image steganography based on color palette transformation in color space
In this paper, we present a novel image steganography method based on color palette transformation in color space. Most existing image steganography methods modify individual pixels, so random noise appears in the image. By instead changing the image's color palette (all pixels of one color are mapped to the same new color), the method avoids this noise and achieves better perceptual quality. A comparison of stego-image quality metrics with other image steganography methods shows that the new method ranks among the best by Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) values. Its capacity is average overall, but it is the largest among the methods with better SSIM and PSNR values. Color and pixel capacity can be increased by using standard or adaptive color palette images with smoothing, although this also increases the likelihood that the embedding will be detected.
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The research had no specific funding and was implemented as a master's thesis at Šiauliai University with a supervisor from Vilnius Gediminas Technical University
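As a point of reference for the metric comparison above, PSNR can be computed directly from the mean squared error between the cover and stego images. A minimal sketch in Python with NumPy (the function name, the 8-bit peak value, and the toy recoloring example are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less visible distortion."""
    err = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(err ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A palette-style change alters every pixel of one color identically:
cover = np.full((4, 4), 200, dtype=np.uint8)
stego = cover.copy()
stego[cover == 200] = 202            # recolor one palette entry everywhere
print(round(psnr(cover, stego), 2))  # ≈ 42.11 dB (MSE = 4)
```

Because a palette transformation shifts whole color classes uniformly rather than sprinkling per-pixel noise, the error it introduces is structured, which is what the SSIM comparison in the paper is sensitive to.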
New watermarking methods for digital images.
The phenomenal spread of the Internet places an enormous demand on content ownership validation. In this thesis, four new image watermarking methods are presented. One method is based on the discrete wavelet transform (DWT) only, while the rest combine DWT with singular value decomposition (SVD). The main goal of this thesis is a new blind watermarking method, and Method IV provides one using QR codes. The use of QR codes in watermarking is novel; the choice is motivated by the fact that QR codes have an error self-correction capability of 5% or higher, which suits the nature of digital image processing. Results show that the proposed methods introduce minimal distortion to the watermarked images compared with other methods and are robust against JPEG compression, resizing, and other attacks. Moreover, watermarking method II provides a solution to the false-watermark detection problem reported in the literature. Finally, Method IV presents a new QR-code-guided watermarking approach that can also be used for steganography. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b183575
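DWT-plus-SVD methods of the kind described above generally embed watermark information in the singular values of a wavelet subband. A rough sketch of that idea, using a one-level Haar transform and additive embedding in the singular values of the LL band (the function names, the LL-band choice, and the strength parameter `alpha` are assumptions for illustration, not the thesis's actual algorithms):

```python
import numpy as np

def haar_dwt2(img):
    # one-level 2-D Haar transform: average/difference over column pairs,
    # then over row pairs (expects even, square dimensions)
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # exact inverse of haar_dwt2
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2, :], a[1::2, :] = ll + lh, ll - lh
    d[0::2, :], d[1::2, :] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out

def embed(cover, mark, alpha=0.05):
    # add a scaled watermark vector to the singular values of the LL band
    ll, lh, hl, hh = haar_dwt2(cover.astype(np.float64))
    u, s, vt = np.linalg.svd(ll)
    ll_marked = (u * (s + alpha * mark)) @ vt
    return haar_idwt2(ll_marked, lh, hl, hh)
```

With `alpha = 0` the round trip is the identity, which makes the transform easy to sanity-check; singular values change little under JPEG-style distortions, which is the usual motivation for embedding there.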
Data Hiding and Its Applications
Data hiding techniques have been widely used to provide copyright protection, data integrity, covert communication, non-repudiation, and authentication, among other applications. In the context of the increased dissemination and distribution of multimedia content over the internet, data hiding methods, such as digital watermarking and steganography, are becoming increasingly relevant in providing multimedia security. The goal of this book is to focus on the improvement of data hiding algorithms and their different applications (both traditional and emerging), bringing together researchers and practitioners from different research fields, including data hiding, signal processing, cryptography, and information theory, among others.
Recent Advances in Signal Processing
Signal processing is a critical task in most new technological inventions and challenges, across a variety of applications in both science and engineering. Classical signal processing techniques have largely worked with mathematical models that are linear, local, stationary, and Gaussian, and have always favored closed-form tractability over real-world accuracy; these constraints were imposed by the lack of powerful computing tools. During the last few decades, signal processing theories, developments, and applications have matured rapidly and now include tools from many areas of mathematics, computer science, physics, and engineering. This book is targeted primarily toward students and researchers who want to be exposed to a wide variety of signal processing techniques and algorithms. It includes 27 chapters that can be grouped into five areas depending on the application at hand: image processing, speech processing, communication systems, time-series analysis, and educational packages, in that order. The book has the advantage of providing a collection of applications that are completely independent and self-contained; thus, the interested reader can choose any chapter and skip to another without losing continuity.
Image watermarking schemes, watermarking jointly with compression, and data hiding schemes
In this manuscript we address data hiding in images and videos: robust watermarking for images, robust watermarking jointly with compression, and finally non-robust data hiding.
The first part of the manuscript deals with high-rate robust watermarking. After briefly recalling the concept of informed watermarking, we study the two major watermarking families: trellis-based watermarking and quantization-based watermarking. We propose, first, to reduce the computational complexity of trellis-based watermarking with a rotation-based embedding, and second, to introduce a trellis-based quantization into a quantization-based watermarking system.
The second part of the manuscript addresses watermarking jointly with a JPEG2000 or H.264 compression step. The quantization step and the watermarking step are performed simultaneously, so that the two steps do not work against each other. Watermarking in JPEG2000 is achieved using the trellis quantization from Part 2 of the standard. Watermarking in H.264 is performed on the fly, after the quantization stage, by choosing the best prediction through the rate-distortion optimization process. We also propose to integrate a Tardos code to build a traitor-tracing application.
The last part of the manuscript describes mechanisms for hiding color information in a grayscale image. We propose two approaches based on hiding a color palette in its index image. The first relies on optimizing an energy function to obtain a decomposition of the color image that allows easy embedding. The second consists in quickly obtaining a color palette of larger size and then embedding it in a reversible way.
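The palette-in-index idea in the last part can be made concrete with a much cruder stand-in: serializing a palette and hiding its bytes in the least-significant bits of the grayscale index image. This LSB round trip is only a generic illustration, not the thesis's reversible scheme (the function names and the one-bit-per-pixel layout are assumptions):

```python
import numpy as np

def embed_lsb(gray, payload):
    # write payload bits into the LSB plane, one bit per pixel
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = gray.flatten()                 # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(gray.shape)

def extract_lsb(stego, n_bytes):
    # read the LSB plane back into bytes
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# hide a tiny two-color "palette" (RGB triples) in a grayscale index image
palette = bytes([255, 0, 0, 0, 0, 255])
gray = np.full((8, 8), 128, dtype=np.uint8)
stego = embed_lsb(gray, palette)
print(extract_lsb(stego, len(palette)) == palette)  # True
```

The thesis's actual contribution is doing this reversibly and with a larger palette; plain LSB replacement, as here, destroys the original LSB plane.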
ID Photograph hashing: a global approach
This thesis addresses the authenticity of identity photographs, which are part of the documents required for controlled access. Since sophisticated means of reproduction are publicly available, new methods and techniques should prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan. The study also examines the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the subspace projection is obtained by applying ICA and then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained after projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of identity photographs and shows that the proposed method is robust to the print-and-scan attack. The approach, focusing on robust hashing of a restricted class of images (identity photographs), differs from classical approaches that address arbitrary images.
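The extraction stage described above — project onto a learned subspace, then quantize and binarize — can be sketched generically. Here a fixed random matrix stands in for the ICA-learned basis and a median threshold stands in for the thesis's quantizer (both are assumptions for illustration only):

```python
import numpy as np

def binary_hash(image_vec, projection):
    # project onto the (learned) subspace, then binarize the coefficients
    coeffs = projection @ image_vec
    return (coeffs > np.median(coeffs)).astype(np.uint8)

def hamming(h1, h2):
    # robust-hash comparison: a small distance suggests the same photograph
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(0)
projection = rng.normal(size=(32, 64))       # stand-in for the ICA basis
photo = rng.normal(size=64)                  # stand-in for an ID image vector
noisy = photo + 0.01 * rng.normal(size=64)   # mild print-and-scan-like noise
h1, h2 = binary_hash(photo, projection), binary_hash(noisy, projection)
print(hamming(h1, h1), hamming(h1, h2))
```

The design intent is that small acquisition noise flips few hash bits, while a different photograph yields a distance near half the hash length.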
Framing digital image credibility: image manipulation problems, perceptions and solutions
Image manipulation is subverting the credibility of photographs
as a whole. Currently there is no practical solution for
asserting the authenticity of a photograph. People express their
concern about this when asked but continue to operate in a
“business as usual” fashion.
While a range of digital forensic technologies has been developed
to address falsification of digital photographs, such
technologies begin with “sourceless” images and conclude with
results in equivocal terms of probability, while not addressing
the meaning and content contained within the image.
It is interesting that there is extensive research into
computer-based image forgery detection, but very little research
into how we as humans perceive, or fail to perceive, these
forgeries when we view them. The survey, eye-gaze tracking
experiments and neural network analysis undertaken in this
research contribute to this limited pool of knowledge.
The research described in this thesis investigates human
perceptions of images that are manipulated and, by comparison,
images that are not manipulated. The data collected, and their
analyses, demonstrate that humans are poor at identifying that an
image has been manipulated. I consider some of the implications
of digital image manipulation, explore current approaches to
image credibility, and present a potential digital image
authentication framework that uses technology and tools that
exploit social factors such as reputation and trust to create a
framework for technologically packaging/wrapping images with
social assertions of authenticity, and surfaced metadata
information.
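The .msci format is described here only at a high level. Purely to make the idea of technologically packaging an image with assertions of authenticity concrete, the sketch below bundles an image digest with metadata and authenticates the bundle with an HMAC; every name and field in it is hypothetical and does not reflect the actual .msci design:

```python
import hashlib
import hmac
import json

def wrap_image(image_bytes, metadata, key):
    # hypothetical wrapper: bind a digest of the image to its metadata
    digest = hashlib.sha256(image_bytes).hexdigest()
    envelope = {"image_sha256": digest, "metadata": metadata}
    body = json.dumps(envelope, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"envelope": envelope, "hmac": tag}

def verify(wrapped, image_bytes, key):
    # check both the envelope's integrity and the image's digest
    body = json.dumps(wrapped["envelope"], sort_keys=True).encode()
    ok_tag = hmac.compare_digest(
        hmac.new(key, body, hashlib.sha256).hexdigest(), wrapped["hmac"])
    ok_img = (wrapped["envelope"]["image_sha256"]
              == hashlib.sha256(image_bytes).hexdigest())
    return ok_tag and ok_img

key = b"demo-shared-secret"            # hypothetical key management
image_bytes = b"...image file bytes..."
wrapped = wrap_image(image_bytes, {"claimed_author": "hypothetical"}, key)
print(verify(wrapped, image_bytes, key))  # True
```

A social layer of reputation and trust, as proposed in this thesis, would sit on top of such a mechanical binding: the cryptography only proves the bundle is intact, not that its assertions are truthful.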
The thesis is organised into 6 chapters.
Chapter 1: Introduction
I briefly introduce the history of photography, highlighting its
importance as reportage, and discuss how it has changed from its
introduction in the early 19th century to today. I discuss photo
manipulation and consider how it has changed along with
photography. I describe the relevant literature on the subject of
image authentication and the use of eye gaze tracking and neural
nets in identifying the role of human vision in image
manipulation detection, and I describe my area of research within
this context.
Chapter 2: Literature review
I describe the various types of image manipulation, giving
examples, and then canvas the literature to describe the
landscape of image manipulation problems and extant solutions,
namely:
• the nature of image manipulation,
• investigations of human perceptions of image manipulation,
• eye gaze tracking and manipulated images,
• known efforts to create solutions to the problem of
preserving unadulterated photographic representations and the
meanings they hold.
Finally, I position my research activities within the context of
the literature.
Chapter 3: The research
I describe the survey and experiments I undertook to investigate
attitudes toward image manipulation, research human perceptions
of manipulated and unmanipulated images, and to trial elements of
a new wrapper-style file format that I call .msci (mobile
self-contained image), designed to address image authenticity
issues.
Methods, results and discussion for each element are presented in
both explanatory text and by presentation of papers resulting
from the experiments.
Chapter 4: Analysis of eye gaze data using classification neural
networks
I describe pattern classifying neural network analysis applied to
selected data obtained from the experiments and the insights this
analysis provided into the opaque realm of cognitive perception
as seen through the lens of eye gaze.
Chapter 5: Discussion
I synthesise and discuss the outcomes of the survey and
experiments.
I discuss the outcomes of this research, and consider the need
for a distinction between photographs and photo art. I offer a
theoretical formula within which the overall authenticity of an
image can be assessed. In addition I present a potential image
authentication framework built around the .msci file format,
designed in consideration of my investigation of the requirements
of the image manipulation problem space and the experimental work
undertaken in this research.
Chapter 6: Conclusions and future work
This thesis concludes with a summary of the outcomes of my
research, and I consider the need for future experimentation to
expand on the insights gained to date. I also note some ways
forward to develop an image authentication framework to address
the ongoing problem of image authenticity.