33 research outputs found

    Taming Reversible Halftoning via Predictive Luminance

    Full text link
    Traditional halftoning usually drops color information when dithering images with binary dots, which makes it difficult to recover the original colors. We propose a novel halftoning technique that converts a color image into a binary halftone with full restorability to its original version. Our base halftoning technique consists of two convolutional neural networks (CNNs) that produce the reversible halftone patterns, and a noise incentive block (NIB) that mitigates the flatness degradation issue of CNNs. Furthermore, to resolve the conflict between blue-noise quality and restoration accuracy in the base method, we propose a predictor-embedded approach that offloads predictable information from the network; in our case, this is the luminance information, which resembles the halftone pattern. This approach gives the network more flexibility to produce halftones with better blue-noise quality without compromising restoration quality. We conducted detailed studies of the multiple-stage training method and the loss weightings, and compared the predictor-embedded method with the base method in terms of halftone spectrum analysis, halftone accuracy, restoration accuracy, and data embedding. An entropy evaluation shows that our halftones carry less encoded information than those of the base method. The experiments show that the predictor-embedded method gains flexibility to improve the blue-noise quality of halftones while maintaining comparable restoration quality with a higher tolerance for disturbances. Comment: to be published in IEEE Transactions on Visualization and Computer Graphics
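    The classical, non-reversible dithering that the abstract contrasts with can be sketched in a few lines. This is standard Floyd-Steinberg error diffusion on a grayscale image, not the authors' CNN-based method: the output is binary, and the original intensities cannot be recovered from it alone.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Classical error-diffusion halftoning: turn a grayscale image
    (values in [0, 1]) into a binary dot pattern by thresholding each
    pixel and diffusing the quantization error to unvisited neighbors."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            # Standard Floyd-Steinberg error-diffusion weights (/16).
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # horizontal ramp
halftone = floyd_steinberg_halftone(gray)
```

    Error diffusion preserves the mean intensity locally, which is why the halftone looks right from a distance, but the per-pixel information loss is exactly what the reversible method above is designed to avoid.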

    Improved methods and system for watermarking halftone images

    Get PDF
    Watermarking is becoming increasingly important for content control and authentication. Watermarking seamlessly embeds data in media to provide additional information about that media. Unfortunately, watermarking schemes developed for continuous-tone images cannot be directly applied to halftone images. Many existing watermarking methods require characteristics that are implicit in continuous-tone images but absent from halftone images. With this in mind, it seems reasonable to develop watermarking techniques specific to halftones that are equipped to work in the binary image domain. In this thesis, existing techniques for halftone watermarking are reviewed, and improvements are developed to increase performance and overcome their limitations. Post-halftone watermarking methods work on existing halftones. Data Hiding Cell Parity (DHCP) embeds data in the parity domain instead of in individual pixels. Data Hiding Mask Toggling (DHMT) works by encoding two bits in the 2x2 neighborhood of a pseudorandom location. Dispersed Pseudorandom Generator (DPRG), on the other hand, is a preprocessing step that takes place before image halftoning. DPRG disperses the watermark embedding locations to achieve better visual results. Using the Modified Peak Signal-to-Noise Ratio (MPSNR) metric, the proposed techniques outperform existing methods by up to 5-20%, depending on the image type and method considered. Field programmable gate arrays (FPGAs) are ideal for solutions that require the flexibility of software while retaining the performance of hardware. Using VHDL, an FPGA-based halftone watermarking engine was designed and implemented for the Xilinx Virtex XCV300. This system was designed for watermarking pre-existing halftones and halftones obtained from grayscale images. The design utilizes 99% of the available FPGA resources and runs at 33 MHz. Such a design could be applied to a scanner or printer at the hardware level without adversely affecting performance.
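    A scheme in the spirit of DHMT, encoding two bits per pseudorandom 2x2 block, can be sketched as follows. This is a hypothetical illustration, not the thesis's implementation: here each row of the block carries one bit as the XOR parity of its two pixels, and at most one binary pixel per row is toggled to make the parity match. Both function names and the parity convention are assumptions for the sketch.

```python
import numpy as np

def _block_locations(shape, n_blocks, seed):
    """Shared pseudorandom choice of disjoint 2x2 blocks; sender and
    receiver derive identical locations from the same seed."""
    h, w = shape
    grid = [(2 * by, 2 * bx) for by in range(h // 2) for bx in range(w // 2)]
    idx = np.random.default_rng(seed).permutation(len(grid))[:n_blocks]
    return [grid[j] for j in idx]

def embed_dhmt_like(halftone, bits, seed=42):
    """Embed two bits per 2x2 block: one parity bit per row, fixing
    each wrong parity by toggling a single binary pixel."""
    img = halftone.copy()
    locs = _block_locations(img.shape, (len(bits) + 1) // 2, seed)
    for i, (y, x) in enumerate(locs):
        for r in range(2):
            b = 2 * i + r
            if b < len(bits) and int(img[y + r, x] ^ img[y + r, x + 1]) != bits[b]:
                img[y + r, x] ^= 1  # toggle one pixel to fix the parity
    return img

def extract_dhmt_like(img, n_bits, seed=42):
    """Receiver side: re-derive the block locations, read the parities."""
    bits = []
    for (y, x) in _block_locations(img.shape, (n_bits + 1) // 2, seed):
        for r in range(2):
            if len(bits) < n_bits:
                bits.append(int(img[y + r, x] ^ img[y + r, x + 1]))
    return bits
```

    Encoding in parities rather than absolute pixel values is what lets the embedder choose which pixel to flip, which is the degree of freedom DPRG-style dispersion exploits for better visual quality.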

    ID Photograph hashing : a global approach

    No full text
    This thesis addresses the question of the authenticity of identity photographs, which are part of the documents required for access control. Since sophisticated means of reproduction are publicly available, new methods and techniques are needed to prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan. The study also focuses on the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the subspace projection is obtained by applying ICA and then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained by projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of identity photographs and shows that the proposed method is robust to the print-and-scan attack. By focusing on robust hashing of a restricted class of images (identity photographs), the approach differs from classical approaches that address arbitrary images.
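    The extraction stage described above (project onto a learned subspace, then quantize and binarize the coefficients) can be sketched as follows. In the thesis the basis comes from ICA followed by an entropic selection; here a random orthonormal basis stands in purely for illustration, and the zero threshold is an assumed quantizer.

```python
import numpy as np

def binary_hash(image_vec, basis, threshold=0.0):
    """Sketch of the extraction stage: project the flattened photo
    onto the subspace, then binarize each coefficient by sign.
    The basis would be learned (ICA + entropic selection) in practice."""
    coeffs = basis @ image_vec               # subspace projection
    return (coeffs > threshold).astype(np.uint8)

rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
basis = q[:16]                               # keep 16 components: a 16-bit hash
photo = rng.standard_normal(64)              # stand-in for a flattened ID photo
h_clean = binary_hash(photo, basis)
# Mild additive noise as a crude stand-in for print-and-scan distortion;
# only coefficients near the threshold risk flipping their bit.
h_noisy = binary_hash(photo + 0.05 * rng.standard_normal(64), basis)
```

    Sign-based binarization is what gives the hash its robustness: small scanning noise perturbs each coefficient slightly, but the bit changes only if the coefficient crosses the threshold.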

    Application and Theory of Multimedia Signal Processing Using Machine Learning or Advanced Methods

    Get PDF
    This Special Issue is a book that collects peer-reviewed research on advanced technologies related to the applications and theories of signal processing for multimedia systems using machine learning or other advanced methods. Multimedia signals include image, video, audio, character recognition, and the optimization of communication channels for networks. The specific topics covered in this book include data hiding, encryption, object detection, image classification, and character recognition. Academics and colleagues interested in these topics will find it worthwhile to read.

    Dynamic hashing technique for bandwidth reduction in image transmission

    Get PDF
    Hash functions are widely used in secure communication systems to generate message digests for detecting unauthorized changes in files. An encrypted hashed message, or digital signature, is used in many applications, such as authentication, to ensure data integrity. It is almost impossible to ensure message authenticity when sending over large bandwidth in a highly accessible network, especially on insecure channels. Two issues that need to be addressed are the large size of the hashed message and the high bandwidth. A collaborative approach between an encoded hash message and steganography provides highly secure hidden data. The aim of this research is to propose a new method for producing a dynamic, smaller encoded hash message with reduced bandwidth. The encoded hash message is embedded into an image as a stego-image to avoid an additional file, and consequently the bandwidth is reduced. The receiver extracts the encoded hash and the dynamic hashed message from the received file at the same time. If the hash decoded with the public key and the hashed message computed from the original file match the received file, the file is considered authentic. To enhance the robustness of the hashed message, we compressed it, encoded it, or performed both operations before embedding the hashed data into the image. The proposed algorithm achieved the lowest dynamic size (1 KB), with no fixed length of the original file, compared to the MD5, SHA-1, and SHA-2 hash algorithms. The robustness of the hashed message was tested against substitution, replacement, and collision attacks to check whether the same message appears in the output. The results show that the probability of the same hashed message appearing in the output is close to 0%, compared to the MD5 and SHA algorithms. Among the benefits of this proposed algorithm is computational efficiency; for messages smaller than 1600 bytes, the hashed file reduced the original file by up to 8.51%.
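    The embed-then-verify flow described above can be sketched with standard components. SHA-256 and least-significant-bit substitution stand in for the thesis's dynamic hash and embedding scheme; the function names, the seed, and the example message are assumptions for the sketch.

```python
import hashlib
import numpy as np

def embed_digest(cover, message, seed=7):
    """Hash the message (SHA-256 here, standing in for the dynamic
    hash), then hide the 256 digest bits in the least-significant
    bits of pseudorandomly chosen cover pixels."""
    digest = hashlib.sha256(message).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    stego = cover.copy()
    idx = np.random.default_rng(seed).permutation(cover.size)[:bits.size]
    flat = stego.reshape(-1)
    flat[idx] = (flat[idx] & 0xFE) | bits    # overwrite the LSBs
    return stego

def verify(stego, message, seed=7):
    """Receiver side: re-derive the embedding locations from the shared
    seed, read back the 256 LSBs, and compare digests."""
    idx = np.random.default_rng(seed).permutation(stego.size)[:256]
    extracted = np.packbits(stego.reshape(-1)[idx] & 1).tobytes()
    return extracted == hashlib.sha256(message).digest()

cover = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_digest(cover, b"invoice #1234")
```

    Because the digest travels inside the image itself, no separate signature file is transmitted, which is the bandwidth saving the abstract describes; at most 256 pixel LSBs change, so the visual impact on the cover is negligible.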

    Connected Attribute Filtering Based on Contour Smoothness

    Get PDF