83 research outputs found

    Improved methods and system for watermarking halftone images

    Watermarking is becoming increasingly important for content control and authentication. Watermarking seamlessly embeds data in media to provide additional information about that media. Unfortunately, watermarking schemes developed for continuous-tone images cannot be directly applied to halftone images: many existing watermarking methods require characteristics that are implicit in continuous-tone images but absent from halftone images. With this in mind, it is reasonable to develop watermarking techniques specific to halftones that are equipped to work in the binary image domain. In this thesis, existing techniques for halftone watermarking are reviewed, and improvements are developed to increase performance and overcome their limitations. Post-halftone watermarking methods work on existing halftones. Data Hiding Cell Parity (DHCP) embeds data in the parity domain instead of in individual pixels. Data Hiding Mask Toggling (DHMT) encodes two bits in the 2x2 neighborhood of a pseudorandom location. The Dispersed Pseudorandom Generator (DPRG), on the other hand, is a preprocessing step that takes place before image halftoning; it disperses the watermark embedding locations to achieve better visual results. Measured by the Modified Peak Signal-to-Noise Ratio (MPSNR) metric, the proposed techniques outperform existing methods by 5-20%, depending on the image type and method considered. Field-programmable gate arrays (FPGAs) are ideal for solutions that require the flexibility of software while retaining the performance of hardware. Using VHDL, an FPGA-based halftone watermarking engine was designed and implemented for the Xilinx Virtex XCV300. The system watermarks both pre-existing halftones and halftones obtained from grayscale images; it utilizes 99% of the available FPGA resources and runs at 33 MHz. Such a design could be incorporated into a scanner or printer at the hardware level without adversely affecting performance.
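
    To make the parity idea concrete, the following Python sketch embeds and recovers bits by toggling pixels in pseudorandomly chosen 2x2 cells. It assumes a binary 0/1 integer numpy image; the cell selection, seeding, and flip rule are illustrative assumptions, not the thesis's DHCP/DHMT algorithms.

```python
import numpy as np

def embed_parity_bits(halftone, bits, seed=7):
    """Embed one bit per pseudorandomly chosen 2x2 cell by toggling a
    pixel so the cell parity (sum of its four pixels mod 2) matches the
    payload bit. Cell choice and flip rule are placeholders."""
    img = halftone.copy()
    h, w = img.shape
    cells = [(r, c) for r in range(0, h - 1, 2) for c in range(0, w - 1, 2)]
    order = np.random.default_rng(seed).permutation(len(cells))
    for bit, k in zip(bits, order):
        r, c = cells[k]
        if int(img[r:r+2, c:c+2].sum()) % 2 != bit:
            img[r, c] ^= 1  # a real scheme would pick the least visible flip
    return img

def extract_parity_bits(marked, n_bits, seed=7):
    """Recover the payload by re-deriving the same pseudorandom cells."""
    h, w = marked.shape
    cells = [(r, c) for r in range(0, h - 1, 2) for c in range(0, w - 1, 2)]
    order = np.random.default_rng(seed).permutation(len(cells))
    return [int(marked[cells[k][0]:cells[k][0]+2,
                       cells[k][1]:cells[k][1]+2].sum()) % 2
            for k in order[:n_bits]]
```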

    A New framework for an electrophotographic printer model

    Digital halftoning is a printing technology that creates the illusion of continuous-tone images for printing devices, such as electrophotographic printers, that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution, which blurs the printed dots of the halftone image, creating the gray sensation of a continuous-tone image. Because the printing process is imperfect, it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new types of halftone algorithms that are designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer specific, they reproduce unrealistic dot structures, and they are unable to adapt their responses to new data. The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine. The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions. The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed, and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and for the model's operational parameters and limits were established. A method for selecting training sets based on the morphological properties of the halftone patterns was also developed. The model is fast, can continue to learn with additional training, and can be easily implemented because it requires only a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions of the tone and the spatial characteristics when modeling halftone patterns individually, and close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research on this new printer model framework.
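
    As a rough illustration of the bank-of-specialists idea, the sketch below wires up one small feed-forward network per texture-feature group. The group names, feature sizes, and network shapes are invented for illustration, and no training loop is shown; the dissertation derives its features from a multiresolution texture decomposition.

```python
import numpy as np

class TinyMLP:
    """One-hidden-layer feed-forward network (forward pass only)."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict(self, x):
        return np.tanh(x @ self.W1 + self.b1) @ self.W2 + self.b2

rng = np.random.default_rng(0)
# One specialist network per texture-feature group (names and sizes
# are illustrative, not the dissertation's actual feature set).
bank = {
    "marginal_statistics": TinyMLP(8, 16, 8, rng),
    "autocorrelation":     TinyMLP(12, 16, 12, rng),
    "cross_scale":         TinyMLP(10, 16, 10, rng),
}

def predict_printed_texture(features):
    """Map texture features of the input halftone to predicted features
    of the printed page, one specialist network per group."""
    return {group: bank[group].predict(vec) for group, vec in features.items()}
```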

    Degree of quantization and spatial addressability tradeoffs in perceived quality of color images

    The objective of this thesis research was to investigate the tradeoffs between the number of quantization levels and the spatial addressability of printed color images. Image quantization was performed with the error-diffusion algorithm, with the error diffused in CMYK color space. The resulting images were printed on a color output device simulating different spatial addressabilities. To evaluate perceived image quality, a psychophysical experiment was conducted, followed by a statistical analysis of the experimental data, from which conclusions on the tradeoffs between the number of quantization levels and spatial addressability were drawn. The tradeoffs were found to be scene dependent: photographic scenes could sustain a greater reduction in addressability than graphics without a perceived decrease in image quality. The experiment showed that 5 bits per pixel per color at 100 dots per inch sufficed for photographic scenes, and 3 bits per pixel per color at 300 dots per inch for graphics. If a single bits-per-color / dots-per-inch combination is to be named the optimum, equivalent to the best possible image for the given system (8 bpc / 300 dpi), it would be 3 bpc / 300 dpi: this combination was found to be equivalent in quality to the best possible image at the normal viewing distance for all scenes in the experiment.
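
    A minimal sketch of the kind of multilevel quantization involved: error diffusion reducing one colorant channel to 2^bits evenly spaced levels (e.g. bits=3 for the 3 bpc condition). The thesis diffuses error jointly in CMYK and its exact kernel is not stated, so a single channel and Floyd-Steinberg weights are shown here as assumptions.

```python
import numpy as np

def multilevel_error_diffusion(channel, bits):
    """Error diffusion quantizing one colorant channel (floats in
    [0, 1]) to 2**bits evenly spaced output levels, with the residual
    pushed to unprocessed neighbors using Floyd-Steinberg weights."""
    levels = 2 ** bits
    buf = channel.astype(float).copy()
    out = np.zeros_like(buf)
    h, w = buf.shape
    for y in range(h):
        for x in range(w):
            # snap to the nearest of the 2**bits levels
            out[y, x] = np.round(buf[y, x] * (levels - 1)) / (levels - 1)
            err = buf[y, x] - out[y, x]
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out
```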

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.

    Grayscale Digital Halftoning Using Optimization Techniques

    This paper gives a complete outline of advanced digital halftoning techniques, from the definition of halftoning and its applications to improved techniques that yield halftone images with a high signal-to-noise ratio (SNR). Apart from SNR, a second metric that measures the similarity between two images, the structural similarity index (SSIM), is also reported, together with the drawbacks of each method and a comparison of the SNR and SSIM of all methods. Error diffusion using the Floyd-Steinberg (FS), Stucki, and Jarvis-Judice-Ninke (JJN) filters is an efficient approach to halftoning; its main drawback is that it suffers from linear distortion. The paper fully describes the error diffusion method and the improvements made to it to obtain well-defined, visually pleasing halftone images. In addition, two evolutionary algorithms, particle swarm optimization (PSO) and the genetic algorithm (GA), are used to create filters for the image block by block, comparing each candidate with the corresponding image block and finally reconstructing the whole image. In the PSO and GA methods, the cost function is formulated from the SSIM and the average minority-pixel distance, and the string with the best cost value is selected by the evolutionary algorithm. Because the human eye acts as a spatial low-pass filter, the image to be halftoned is first filtered through a visual model, such as an HVS model, before undergoing the evolutionary process.
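
    The evolutionary search can be sketched as follows: a toy genetic algorithm evolves one binary block toward a gray block under an HVS-style low-pass filter (scipy's gaussian_filter). The cost here is plain MSE between filtered images as a stand-in for the paper's SSIM-plus-minority-pixel-distance cost, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hvs(img, sigma=1.5):
    """Gaussian low-pass filter as a crude stand-in for an HVS model."""
    return gaussian_filter(img, sigma)

def evolve_block(gray_block, pop=30, gens=100, seed=0):
    """Toy genetic search for one halftone block: rank candidates by MSE
    between HVS-filtered halftone and HVS-filtered gray block, then
    rebuild the worst half by crossover and mutation."""
    rng = np.random.default_rng(seed)
    h, w = gray_block.shape
    target = hvs(gray_block)
    # seed the population near the right tone level
    P = (rng.random((pop, h, w)) < gray_block).astype(float)
    half = pop // 2
    for _ in range(gens):
        cost = np.array([np.mean((hvs(p) - target) ** 2) for p in P])
        P = P[np.argsort(cost)]          # best candidates first
        for i in range(half, pop):       # replace the worst half
            a, b = P[rng.integers(half)], P[rng.integers(half)]
            child = np.where(rng.random((h, w)) < 0.5, a, b)  # crossover
            flip = rng.random((h, w)) < 0.02                  # ~2% mutation
            P[i] = np.where(flip, 1 - child, child)
    return P[0]
```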

    An investigation of the color reproduction accuracy of two halftoning algorithms for dot matrix systems

    The recent popularity of dot matrix printing technologies has renewed interest in developing color halftoning techniques for these systems. A color reproduction scheme based on colorimetric principles would provide accurate color rendition and can be configured to different hardware implementations. Additionally, where there are demands for multiple copies, color reproduction accuracy is assured to the nth generation. A binary dot matrix halftoning algorithm previously used for black-and-white reproduction (error diffusion) and a new algorithm described here (the EZ method) were investigated in terms of their color reproduction capabilities, with the objective of achieving colorimetric color reproduction. The error diffusion technique made poor system color selections when used in XYZ tristimulus space; as a result, large hue, saturation, and ΔE*ab errors were observed. The EZ Color Algorithm provided better color accuracy, with an average color difference of less than three for a 4x4 cell size. A uniform color space, such as CIELAB, is considered a minimum requirement for the error diffusion algorithm to provide colorimetric color reproduction; hue, saturation, and ΔE*ab errors were minimized when this color space was used. The EZ Color Algorithm provides several important features, including the incorporation of the black colorant explicitly in the color formulation, selection of system colors prior to quantizing, and quantization of system color areas instead of reflectance values.
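
    For reference, the ΔE*ab figure quoted above is the Euclidean distance in CIE L*a*b*; a minimal computation from XYZ tristimulus values looks like this (D65 white point assumed here; the study's illuminant is not stated).

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """CIE 1976 L*a*b* from XYZ tristimulus values (D65 white shown)."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_ab(xyz1, xyz2):
    """Euclidean distance in L*a*b*: the Delta-E*ab used to score color error."""
    return float(np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2)))

# An average Delta-E*ab below 3, as reported for the EZ method, is
# commonly treated as a barely noticeable color difference.
```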

    Digital halftoning using Fibonacci-like sequence perturbation and using vision models in different color spaces

    A disadvantage of error diffusion is that it creates objectionable texture patterns at certain gray levels. One approach, threshold perturbation by Fibonacci-like sequences, was studied; this process is simpler than applying a vision model and also decreases the visible patterns in error diffusion. Vector error diffusion guarantees minimum color distance in binarization provided that a uniform color space is used. Four color spaces were studied in this research. It was found that vector error diffusion in two linear color spaces produced no reduction in halftone quality compared with the CIE L*a*b* or CIE L*u*v* color spaces. A luminance vision MTF and a chroma vision MTF were used in model-based error diffusion to further improve halftone image quality.
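
    A sketch of the perturbation idea in Python: the 0.5 threshold of Floyd-Steinberg error diffusion is offset per pixel by a Fibonacci-like sequence reduced to a small centered range. The sequence parameters, modulus, and amplitude below are assumptions; the thesis's exact construction differs.

```python
import numpy as np

def fib_like_offsets(n, a=1, b=2, mod=16):
    """Fibonacci-like sequence reduced mod a small range and centered,
    giving deterministic threshold offsets in [-0.5, 0.5)."""
    seq = [a % mod, b % mod]
    while len(seq) < n:
        seq.append((seq[-1] + seq[-2]) % mod)
    return (np.array(seq[:n]) - mod / 2) / mod

def perturbed_error_diffusion(img, amplitude=0.3):
    """Floyd-Steinberg error diffusion whose threshold is perturbed per
    pixel by a Fibonacci-like sequence, breaking up the periodic worm
    textures that plain error diffusion produces at certain gray levels."""
    buf = img.astype(float).copy()
    h, w = buf.shape
    out = np.zeros((h, w))
    offsets = fib_like_offsets(h * w) * amplitude
    k = 0
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if buf[y, x] >= 0.5 + offsets[k] else 0.0
            k += 1
            err = buf[y, x] - out[y, x]
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out
```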

    Near-Lossless Bitonal Image Compression System

    The main purpose of this thesis is to develop an efficient near-lossless bitonal compression algorithm and to implement that algorithm on a hardware platform. The current methods for compression of bitonal images include the JBIG and JBIG2 algorithms; however, both have disadvantages. Both algorithms are covered by patents filed by IBM, making them costly to implement commercially. Also, JBIG provides only lossless compression, while JBIG2 provides lossy methods only for document-type images. For these reasons, a new method for introducing loss, and for controlling this loss to sustain quality, is developed. The lossless bitonal image compression algorithm used in this thesis is the Block Arithmetic Coder for Image Compression (BACIC), which can efficiently compress bitonal images. In this thesis, loss is introduced for cases where better compression efficiency is needed. However, introducing loss in bitonal images is especially difficult because pixels undergo a drastic change, either from white to black or from black to white; such pixel flipping introduces salt-and-pepper noise, which can be very distracting when viewing an image. Two methods are used in combination to control the visual distortion introduced into the image. The first is to keep track of the error created by the flipping of pixels and to use this error to decide whether flipping another pixel would cause the visual distortion to exceed a predefined threshold. The second is region-of-interest consideration: little or no loss is introduced into the important parts of an image, and higher loss into the less important parts. This allows for a good-quality image while increasing the compression efficiency. Also, the ability of BACIC to compress grayscale images is studied, and BACICm, a multiplanar BACIC algorithm, is created. A hardware implementation of the BACIC lossless bitonal image compression algorithm is also designed. The hardware implementation is done using VHDL targeting a Xilinx FPGA, which is very useful because of its flexibility. The programmed FPGA could be included in a facsimile or printing product to handle compression or decompression internal to the unit, giving it an advantage in the marketplace.
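
    The two control mechanisms can be sketched together: flip only isolated pixels (the ones that break context-model prediction), charge each flip against a running budget, and make flips inside a region-of-interest mask cost more so loss concentrates in unimportant areas. This is an illustration of the control ideas under assumed parameters, not the BACIC coder itself.

```python
import numpy as np

def smooth_for_compression(img, roi, budget_frac=0.005):
    """Flip isolated pixels in a 0/1 image while a running error budget
    allows it. Pixels inside the region-of-interest mask cost 4x as
    much, so little loss lands in the important regions."""
    out = img.copy().astype(np.uint8)
    h, w = img.shape
    budget = budget_frac * img.size
    spent = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            p = int(out[y, x])
            s = int(out[y - 1:y + 2, x - 1:x + 2].sum()) - p
            # a pixel disagreeing with all 8 neighbors is a cheap flip
            isolated = (p == 1 and s == 0) or (p == 0 and s == 8)
            if not isolated:
                continue
            cost = 4.0 if roi[y, x] else 1.0
            if spent + cost <= budget:
                out[y, x] = 1 - p
                spent += cost
    return out
```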

    Superscalar high-speed 2μ N-well MOSIS CMOS digital halftoning processor

    Digital halftoning is the algorithmic process of converting electronic images into bitonal images while preserving the perception of a continuous-tone image. Various digital halftoning algorithms were considered in the development of this processor on the basis of image quality and amenability to VLSI implementation. An error-diffusion algorithm with the options of noise encoding, printer-model adjustment, and edge enhancement was chosen for implementation. Since the algorithm allows multiple independent parallel processors to operate on the same image, the system is capable of superscalar processing. The processor is intended for an 8-bit input (256 gray levels) and was designed using a 2μ N-well MOSIS CMOS process; the expected processor speed for that process is about 21 million pixels/sec. The processing speed was enhanced by using a Double Pass-Transistor Logic implementation for all logic components in the processor.
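
    Two of the listed options are easy to sketch in software: pseudorandom noise added to the threshold (noise encoding) and input-dependent threshold modulation, a standard way of obtaining edge enhancement in error diffusion. The input is assumed to be a float image in [0, 1], and the parameter values and the modulation rule are illustrative, not the processor's exact logic.

```python
import numpy as np

def enhanced_error_diffusion(img, k_edge=0.5, noise_amp=0.05, seed=1):
    """Floyd-Steinberg error diffusion with a noisy, input-modulated
    threshold: lowering the threshold where the original input is bright
    pushes the output toward the input, sharpening edges."""
    rng = np.random.default_rng(seed)
    buf = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            # threshold follows the *original* input plus encoded noise
            t = 0.5 - k_edge * (img[y, x] - 0.5) \
                + noise_amp * (rng.random() - 0.5)
            out[y, x] = 1.0 if buf[y, x] >= t else 0.0
            err = buf[y, x] - out[y, x]
            if x + 1 < w:
                buf[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1, x - 1] += err * 3 / 16
                buf[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1, x + 1] += err * 1 / 16
    return out
```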

    FPGA BASED PARALLEL IMPLEMENTATION OF STACKED ERROR DIFFUSION ALGORITHM

    Digital halftoning is a crucial technique used in digital printers to convert a continuous-tone image into a pattern of black and white dots; it is needed because printers have a limited set of inks and cannot reproduce all the intensities of a continuous-tone image. Error diffusion is a halftoning algorithm that iteratively quantizes pixels in a neighborhood-dependent fashion. This thesis focuses on the development and design of a parallel, scalable hardware architecture for high-performance implementation of the high-quality Stacked Error Diffusion algorithm. The algorithm is described in ‘C’ and requires significant processing time when implemented on a conventional CPU. Thus, a new hardware processor architecture was developed to implement the algorithm and was implemented and tested on a Xilinx Virtex 5 FPGA chip. The run time of the algorithm decreases dramatically when it runs on the proposed parallel architecture in FPGA technology, compared to execution on a single CPU. The new parallel architecture is described in the Verilog Hardware Description Language. Post-synthesis and post-implementation performance-based HDL simulation validation of the new parallel architecture is achieved using the ModelSim CAD simulation tool.
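
    The parallelism such an architecture exploits can be illustrated with a wavefront schedule: with error-diffusion dependencies on (y, x-1) and (y-1, x-1..x+1), scheduling pixel (y, x) at step x + 2y lets up to one pixel per row proceed concurrently. This is a generic illustration of error-diffusion parallelization; the Stacked Error Diffusion architecture in the thesis differs in its details.

```python
import numpy as np

def wavefront_schedule(h, w):
    """Step at which each pixel can be processed when one processing
    element handles each row. t = x + 2*y satisfies every Floyd-
    Steinberg-style dependency, since (y, x-1), (y-1, x-1), (y-1, x),
    and (y-1, x+1) all get strictly smaller step numbers."""
    ys, xs = np.mgrid[0:h, 0:w]
    return xs + 2 * ys

# Pixels sharing a step number run in parallel: the whole image takes
# w + 2*(h - 1) steps instead of h*w steps for a serial raster scan.
print(wavefront_schedule(4, 8))
```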