A Two-Codebook Combination and Three-Phase Block Matching Based Image Hiding Scheme with High Embedding-Capacity
Image hiding is a technique that embeds important images into a cover image such that they are imperceptible and can be securely transmitted to the receiver. The common goals in this research are to enlarge the embedding capacity as much as possible while degrading the visual quality of the cover image only slightly, and to preserve high visual quality of the important images when they are extracted from the stego image. In this paper, we propose an image-hiding method based on a two-codebook combination, a three-phase block matching procedure, and modulus substitution. The proposed method achieves three benefits: (1) multiple, relatively large important images can be embedded into a relatively small cover image; (2) the quality of the stego image is not distorted significantly by embedding the secret data; (3) the important images have acceptable visual quality after extraction. The experimental results also show that the proposed method is more flexible than previous methods.
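The modulus substitution mentioned above can be sketched for a single pixel as follows. This is a minimal illustration of the general mod-m idea, not the paper's full scheme (which also involves codebooks and block matching); all names and the choice of m are illustrative.

```python
# Minimal sketch of modulus ("mod-m") substitution on one pixel.
# The secret digit replaces the pixel's residue modulo m, so the
# pixel changes by less than m and distortion stays small.

def embed_digit(pixel: int, digit: int, m: int = 8) -> int:
    """Replace pixel value mod m with a secret digit in [0, m)."""
    base = pixel - (pixel % m)
    stego = base + digit
    # keep the result a valid 8-bit intensity
    return min(max(stego, 0), 255)

def extract_digit(stego_pixel: int, m: int = 8) -> int:
    """The receiver reads the digit back as the residue mod m."""
    return stego_pixel % m

stego = embed_digit(157, 3)      # base 152 + digit 3 = 155
assert extract_digit(stego) == 3
assert abs(stego - 157) < 8      # distortion bounded by m
```

Larger m raises capacity per pixel but also the worst-case distortion, which is the capacity/quality trade-off the abstract refers to.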
Digital watermark technology in security applications
With the rising emphasis on security and the growing number of fraud-related crimes
around the world, authorities are looking for new technologies to tighten
identity security. Among modern electronic technologies, digital
watermarking has unique advantages for enhancing document authenticity.
At the current stage of development, however, digital watermarking technologies
are not as mature as other competing technologies that support identity authentication
systems. This work presents performance improvements for
two classes of digital watermarking techniques and investigates the issue of
watermark synchronisation.
Optimal performance can be obtained if the spreading sequences are designed
to be orthogonal to the cover vector. In this thesis, two classes of
orthogonalisation methods that generate binary sequences quasi-orthogonal
to the cover vector are presented. One method, "Sorting and Cancelling",
generates sequences with a high level of orthogonality to the
cover vector. The Hadamard-matrix-based orthogonalisation method,
"Hadamard Matrix Search", is able to realise overlapped embedding, so
watermarking capacity and image fidelity can be improved compared to using
short watermark sequences. The results are compared with traditional
pseudo-randomly generated binary sequences. The advantages of both classes
of orthogonalisation methods are significant.
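Why quasi-orthogonality to the cover vector matters can be shown with a toy additive spread-spectrum embedder. The greedy flipping below is a simple stand-in for the thesis's "Sorting and Cancelling" method, not its actual algorithm; all values are illustrative.

```python
def quasi_orthogonalise(seq, cover):
    """Greedily flip +/-1 elements to drive <seq, cover> toward zero
    (a simple stand-in for "Sorting and Cancelling")."""
    seq = list(seq)
    corr = sum(s * c for s, c in zip(seq, cover))
    # flipping seq[i] changes the correlation by -2 * seq[i] * cover[i]
    for i in sorted(range(len(seq)), key=lambda i: -abs(cover[i])):
        delta = -2 * seq[i] * cover[i]
        if abs(corr + delta) < abs(corr):
            seq[i], corr = -seq[i], corr + delta
    return seq

def embed(cover, seq, bit, alpha=4.0):
    """Additive spread-spectrum embedding of one bit."""
    return [c + alpha * bit * s for c, s in zip(cover, seq)]

def detect(stego, seq):
    """Blind detection: sign of the correlation with the sequence."""
    return 1 if sum(x * s for x, s in zip(stego, seq)) >= 0 else -1

cover = [50, 40, 30, 20, 10, 5, 3, 1]   # toy cover vector
raw = [1] * 8                            # raw, non-orthogonal sequence
oseq = quasi_orthogonalise(raw, cover)   # <oseq, cover>: 159 -> -21
assert detect(embed(cover, oseq, -1), oseq) == -1  # bit recovered blindly
assert detect(embed(cover, raw, -1), raw) == 1     # raw sequence: cover interference wins
```

With the raw sequence, the cover's own correlation (159) swamps the watermark term (alpha * N = 32) and the blind detector fails; after orthogonalisation the residual correlation (-21) is small enough for correct detection of either bit.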
Another watermarking method introduced in the thesis is based
on writing-on-dirty-paper theory. The method is presented with biorthogonal
codes, which offer the best robustness. The advantages and trade-offs of
using biorthogonal codes with this watermark coding method are analysed
comprehensively. Comparisons between orthogonal and non-orthogonal
codes used in this watermarking method are also made. It is found
that fidelity and robustness are contradictory and cannot be optimised
simultaneously.
Comparisons are also made between all proposed methods, focusing
on three major performance criteria: fidelity, capacity, and
robustness. From two different viewpoints, the conclusions are not the same. From
the fidelity-centric viewpoint, the dirty-paper coding method using biorthogonal
codes has a very strong advantage in preserving image fidelity, and its
capacity advantage is also significant. However, from the power-ratio
point of view, the orthogonalisation methods demonstrate a significant
advantage in capacity and robustness. The conclusions are contradictory,
but together they summarise the performance produced by different design
considerations.
Synchronisation of the watermark is first provided by high-contrast
frames around the watermarked image. Edge detection filters are used
to detect the high-contrast borders of the captured image. By scanning
the pixels from the border to the centre, the locations of detected edges
are stored. An optimal linear regression algorithm is used to estimate the
watermarked image frames; the slope of the estimated regression line gives the
rotation angle of the rotated frames. Scaling is corrected by
re-sampling the upright image to the original size. A theoretically studied
method that can synchronise the captured image to sub-pixel accuracy
is also presented. Using invariant transforms and the "symmetric
phase-only matched filter", the captured image can be corrected accurately
to its original geometric size. The method uses repeating watermarks to form an
array in the spatial domain of the watermarked image; the locations of the
array's elements reveal information about rotation, translation
and scaling through two filtering processes.
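The frame-based step above (fit a line to detected border edges, read the rotation angle off the slope) can be sketched in a few lines. Edge detection itself is assumed to have already produced the (x, y) samples; names are illustrative.

```python
# Sketch: estimate rotation from border-edge points via least squares.
# The slope of the fitted line is tan(theta), so theta = atan(slope).
import math

def estimate_rotation(points):
    """Least-squares slope of edge points -> rotation angle in degrees."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points)
    sxx = sum((x - mx) ** 2 for x, _ in points)
    # slope = sxy / sxx; atan2 avoids dividing when sxx is tiny
    return math.degrees(math.atan2(sxy, sxx))

# a synthetic top frame edge rotated by 5 degrees
theta = math.radians(5.0)
pts = [(x, math.tan(theta) * x) for x in range(0, 100, 5)]
assert abs(estimate_rotation(pts) - 5.0) < 1e-6
```

Once the angle is known, the image can be rotated upright and, as the abstract notes, re-sampled to the original size to undo scaling.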
Oblivious data hiding: a practical approach
This dissertation presents an in-depth study of oblivious data hiding with the emphasis on quantization based schemes. Three main issues are specifically addressed:
1. Theoretical and practical aspects of embedder-detector design.
2. Performance evaluation, and analysis of performance vs. complexity tradeoffs.
3. Some application specific implementations.
A communications framework based on channel-adaptive encoding and channel-independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization-based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding-rate performance merits, assuming AWGN attack scenarios and using a mean squared error distortion measure.
The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth versus the limitation of low bandwidth) are examined, and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of non-invertible attacks are considered in this dissertation work.
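The quantization-based embedders this dissertation studies can be illustrated with standard binary dithered quantization index modulation (QIM): each bit selects a shifted scalar lattice, and the detector picks the lattice closest to the received value. Delta and the test values below are illustrative choices, not the dissertation's parameters.

```python
# Sketch: binary dithered scalar QIM embedding and minimum-distance detection.

DELTA = 8.0  # quantization step; larger Delta = more robustness, more distortion

def qim_embed(x: float, bit: int) -> float:
    """Quantize x onto the lattice shifted by the bit's dither."""
    d = bit * DELTA / 2.0          # dither: 0 for bit 0, Delta/2 for bit 1
    return DELTA * round((x - d) / DELTA) + d

def qim_detect(y: float) -> int:
    """Pick the bit whose shifted lattice is closest to y."""
    return min((0, 1), key=lambda b: abs(y - qim_embed(y, b)))

y0 = qim_embed(23.7, 0)            # lands on the bit-0 lattice (24.0)
y1 = qim_embed(23.7, 1)            # lands on the bit-1 lattice (20.0)
assert qim_detect(y0) == 0 and qim_detect(y1) == 1
# noise smaller than Delta/4 cannot flip the decision
assert qim_detect(y1 + 1.5) == 1
```

The Delta/4 decision margin is what makes the scheme oblivious: the detector needs only the lattice definition, never the original cover value.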
Application of Stochastic Diffusion for Hiding High Fidelity Encrypted Images
Cryptography coupled with information hiding has received increased attention in recent years and has become a major research theme because of the importance of protecting encrypted information in any Electronic Data Interchange system in a way that is both discrete and covert. One of the essential limitations of any cryptographic system is that the encrypted data gives an indication of its own importance, which arouses suspicion and makes it vulnerable to attack. Information hiding, or Steganography, provides a potential solution to this issue by making the data imperceptible; the security of the hidden information is threatened only if its existence is detected through Steganalysis. This paper focuses on a study of methods for hiding encrypted information, specifically, methods that encrypt data before embedding it in host data, where the ‘data’ is a full colour digital image. Such methods provide a greater level of data security, especially when the information is to be transmitted over the Internet, since a potential attacker must first detect, then extract, and then decrypt the embedded data in order to recover the original information.
After providing an extensive survey of the current methods available, we present a new method of encrypting and then hiding full colour images in three full colour host images without loss of fidelity following data extraction and decryption. The applications of this technique, which is based on ‘Stochastic Diffusion’, are wide-ranging and include covert image information interchange, digital image authentication, video authentication, copyright protection, and digital rights management of image data in general.
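The encrypt-then-embed pipeline the paper surveys can be sketched end to end. Here a keyed XOR stream cipher stands in for the actual Stochastic Diffusion encryption and 1-bit LSB substitution stands in for the paper's embedding operator; both are illustrative simplifications, and all names are hypothetical.

```python
# Sketch: encrypt a secret, embed the ciphertext bits in host LSBs,
# then extract and decrypt at the receiver.
import random

def keystream(key: int, n: int) -> bytes:
    """Toy keyed stream (stand-in for a real cipher)."""
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def xor_bytes(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def embed_lsb(host, bits):
    """Overwrite each host value's least significant bit."""
    return [(p & ~1) | b for p, b in zip(host, bits)]

def extract_lsb(stego, n):
    return [p & 1 for p in stego[:n]]

secret = b"hi"
cipher = xor_bytes(secret, keystream(42, len(secret)))        # encrypt first
bits = [(byte >> i) & 1 for byte in cipher for i in range(8)]  # LSB-first bits
host = list(range(50, 50 + len(bits)))                         # stand-in pixels
stego = embed_lsb(host, bits)

# receiver: extract bits, regroup into bytes, decrypt
out_bits = extract_lsb(stego, len(bits))
out = bytes(sum(b << i for i, b in enumerate(out_bits[j*8:(j+1)*8]))
            for j in range(len(secret)))
assert xor_bytes(out, keystream(42, len(out))) == secret
```

The ordering is the point the abstract makes: an attacker must detect the embedding, extract the payload, and only then face the decryption problem.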
Authentication with Distortion Criteria
In a variety of applications, there is a need to authenticate content that
has experienced legitimate editing in addition to potential tampering attacks.
We develop one formulation of this problem based on a strict notion of
security, and characterize and interpret the associated information-theoretic
performance limits. The results can be viewed as a natural generalization of
classical approaches to traditional authentication. Additional insights into
the structure of such systems and their behavior are obtained by further
specializing the results to Bernoulli and Gaussian cases. The associated
systems are shown to be substantially better in terms of performance and/or
security than commonly advocated approaches based on data hiding and digital
watermarking. Finally, the formulation is extended to obtain efficient layered
authentication system constructions.
Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods
This Special Issue collects peer-reviewed papers on advanced technologies related to applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book are data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it worthwhile to read.
Content-Aware Quantization Index Modulation: Leveraging Data Statistics for Enhanced Image Watermarking
Image watermarking techniques have continuously evolved to address new
challenges and incorporate advanced features. The advent of data-driven
approaches has enabled the processing and analysis of large volumes of data,
extracting valuable insights and patterns. In this paper, we propose two
content-aware quantization index modulation (QIM) algorithms: Content-Aware QIM
(CA-QIM) and Content-Aware Minimum Distortion QIM (CAMD-QIM). These algorithms
aim to improve the embedding distortion of QIM-based watermarking schemes by
considering the statistics of the cover signal vectors and messages. CA-QIM
introduces a canonical labeling approach, where the closest coset to each cover
vector is determined during the embedding process. An adjacency matrix is
constructed to capture the relationships between the cover vectors and
messages. CAMD-QIM extends the concept of minimum distortion (MD) principle to
content-aware QIM. Instead of quantizing the carriers to lattice points,
CAMD-QIM quantizes them to close points in the correct decoding region.
Canonical labeling is also employed in CAMD-QIM to enhance its performance.
Simulation results demonstrate the effectiveness of CA-QIM and CAMD-QIM in
reducing embedding distortion compared to traditional QIM. The combination of
canonical labeling and the minimum distortion principle proves to be powerful,
minimizing the need for changes to most cover vectors/carriers. These
content-aware QIM algorithms provide improved performance and robustness for
watermarking applications.
The Improvement of Automatic Skin Cancer Detection Algorithm Based on CVQ technique
Nowadays, with the increasing number of deaths related to skin cancer, this disease has become an important issue in human life. The key, however, is early detection of skin cancer in order to save lives. Given the close similarity between cancerous moles and normal ones, artificial systems capable of distinguishing between these kinds of moles can undoubtedly be very important. The accuracy of such a system must be considered carefully in order to obtain better results, especially in cases related to human life. In this paper, motivated by the fact that one kind of skin cancer, melanoma, is on the rise, we employ neural networks to improve the performance of an approach based on a compressed-image technique, namely the Classified Vector Quantization (CVQ) technique. The suggested method has been tested on several images, and the results show that it is a suitable approach for automatic skin cancer detection.
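The vector-quantization step underlying CVQ maps each image block to its nearest codeword in a class-specific codebook. The sketch below shows only that nearest-codeword search; the paper's classifier, neural network, and actual codebooks are not reproduced, and all values are illustrative.

```python
# Sketch: nearest-codeword search, the core of (classified) vector
# quantization. A block is represented by the index of the closest
# codeword under squared Euclidean distance.

def nearest_codeword(block, codebook):
    """Return the index of the codeword with minimal squared distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(block, codebook[i]))

# toy 2x2 blocks flattened to 4-vectors; codewords play the role of
# "classes" (smooth dark, smooth bright, vertical edge)
codebook = [
    [10, 10, 10, 10],
    [200, 200, 200, 200],
    [10, 200, 10, 200],
]
block = [12, 198, 9, 205]              # noisy vertical edge
assert nearest_codeword(block, codebook) == 2
```

In CVQ the chosen indices form a compact representation of the image, and it is on this compressed representation that the classification pipeline described above operates.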