
    New methods for digital halftoning and inverse halftoning

    Halftoning is the rendition of continuous-tone pictures on bi-level displays. Here we first review some of the halftoning algorithms which have a direct bearing on our paper and then describe some of the more recent advances in the field. Dot diffusion halftoning has the advantage of pixel-level parallelism, unlike the popular error diffusion halftoning method. We first review the dot diffusion algorithm and describe a recent method to improve its image quality by taking advantage of the Human Visual System function. Then we discuss the inverse halftoning problem: the reconstruction of a continuous-tone image from its halftone. We briefly review the methods for inverse halftoning and discuss the advantages of a recent algorithm, namely the Look-Up Table (LUT) method. This method is extremely fast, achieves image quality comparable to that of the best known methods, and can be applied to any halftoning scheme. We then introduce LUT-based halftoning and tree-structured LUT (TLUT) halftoning. We demonstrate how halftone image quality between that of error diffusion and Direct Binary Search (DBS) can be achieved, depending on the size of the tree structure in the TLUT algorithm, while keeping the complexity of the algorithm much lower than that of DBS.
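The error diffusion method that the abstract contrasts with dot diffusion can be illustrated with the classic Floyd-Steinberg variant. This is a generic sketch of that well-known algorithm, not the authors' code; the raster scan makes the serial data dependence (and hence the lack of pixel-level parallelism) visible:

```python
import numpy as np

def floyd_steinberg(img):
    """Halftone a grayscale image (values in [0, 1]) by error diffusion.

    Classic Floyd-Steinberg weights. The raster scan is inherently
    serial: each quantization error lands on pixels not yet visited,
    so there is no pixel-level parallelism.
    """
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros_like(f)
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.3)   # flat 30% tone
ht = floyd_steinberg(gray)      # binary pattern whose mean tone is ~0.3
```

Because each error is pushed only to not-yet-visited neighbors, pixels cannot be processed independently, which is exactly the limitation dot diffusion avoids.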

    Apparent Quality of Alternative Halftone Screening When Compared to Conventional Screening in Commercial Offset Lithography

    ABSTRACT Printers are still concerned with craftsmanship and are always looking for means to produce print jobs faster and with improved quality. The invention of new halftone screening techniques is one of the methods imaging companies have used in an attempt to improve the quality of the printed piece. These techniques can possibly improve the aesthetic qualities and fidelity of printed reproductions; therefore, printers and students of printing need to study these techniques to ensure that the benefits outweigh the costs of implementation. This experimental study was conducted to measure the quality of printed halftones that were screened with three different dot structures: conventional, alternative (XM) at 240 lpi, and alternative (XM) at 340 lpi. The printing of the halftones and tone scales was completed using accepted printing practices. The analysis was focused on two questions: is there a difference in the tone scales created with the use of the alternative screening when measured with print industry equipment, and is there an improvement in the apparent quality of the halftones when evaluated by members of the print community and laypersons? With the use of a densitometer and a spectrophotometer, the tint patches and tone scales were measured to determine differences in color, density, print contrast, and dot area. Through statistical analysis, it was determined that a significant difference was created with the use of different screenings. The Delta E values were also calculated from the collected CIELAB measures. Delta E is the measure of the color difference between two colors; if the value calculated is above roughly two and a half or three, the difference should be perceptible to the human eye. Overwhelmingly, the Delta E values show no humanly perceptible difference.
When evaluating the apparent quality of the halftones, many people reported that they saw no difference: on average, thirty-two percent of printers and forty-four percent of non-printers. The participants who did perceive higher quality in one sample versus another were fairly evenly spread across the three screening methods and quality factors. Therefore, the only conclusion that can be drawn from this research is that there is a measurable difference between the screening methods, but the difference is humanly imperceptible and is not commercially significant for commercial offset lithography.
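The Delta E figure discussed above is a Euclidean distance in CIELAB; the simplest variant (CIE76) can be computed directly from two measured (L*, a*, b*) triples. The patch values below are made-up illustrations, not measurements from the study:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two CIELAB colors."""
    return math.dist(lab1, lab2)

# two hypothetical tint-patch readings (L*, a*, b*), not data from the study
patch_a = (54.0, 12.5, -8.0)
patch_b = (55.2, 11.9, -7.1)
de = delta_e_cie76(patch_a, patch_b)
# de is about 1.6 -- below the ~2.5-3 visibility threshold cited above
```

Later Delta E formulas (CIE94, CIEDE2000) weight the axes differently, but the perceptibility threshold quoted in the abstract refers to this simple Euclidean form.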

    A New Framework for an Electrophotographic Printer Model

    Digital halftoning is a printing technology that creates the illusion of continuous-tone images for printing devices, such as electrophotographic printers, that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution, which blurs the printed dots of the halftone image, creating the gray sensation of a continuous-tone image. Because the printing process is imperfect, it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new types of halftone algorithms that are designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer specific, they reproduce unrealistic dot structures, or they are unable to adapt their responses to new data. The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine.
The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions. The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed, and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and for the model's operational parameters and operational limits were established. A method for the selection of training sets based on the morphological properties of the halftone patterns was also developed. The model is fast and has the capability to continue learning with additional training. The model can be easily implemented because it only requires a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions of the tone and the spatial characteristics when modeling halftone patterns individually, and it provides close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research on this new printer model framework.
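As a loose illustration of the "bank of feed-forward neural networks" idea, the sketch below runs one single-hidden-layer network mapping a vector of texture features of the input halftone to predicted features of the printed output. All dimensions and weights are placeholders (random, untrained); the dissertation's actual feature sets and trained networks are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    """One network from the bank: halftone texture features in,
    predicted printed-texture features out (placeholder sizes)."""
    hidden = np.tanh(x @ W1 + b1)     # single hidden layer
    return hidden @ W2 + b2

n_in, n_hidden, n_out = 8, 16, 8      # assumed feature dimensions
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

features = rng.random(n_in)           # texture parameters of one halftone patch
printed = forward(features, W1, b1, W2, b2)   # predicted printed features
```

In the framework described above, each network in the bank would be trained (via a calibrated scanner) on a different group of such textural features.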

    High Capacity Analog Channels for Smart Documents

    Widely used, valuable hardcopy documents such as passports, visas, driving licenses, educational certificates, and entrance passes for entertainment events are conventionally protected against counterfeiting and data-tampering attacks by applying analog security technologies (e.g. KINEGRAMS®, holograms, micro-printing, UV/IR inks, etc.). However, easy access to high-quality, low-price modern desktop publishing technology has left most of these technologies ineffective, giving rise to high-quality false documents. The higher price and restricted usage are other drawbacks of the analog document protection techniques. Digital watermarking and high-capacity storage media such as IC chips, optical data stripes, etc. are the modern technologies being used in new machine-readable identity verification documents to ensure content integrity; however, these technologies are either expensive or do not satisfy the application needs, creating demand for more efficient document protection technologies. In this research, three different high-capacity analog channels are investigated for hidden communication, along with their applications in smart documents: the high-density data stripe (HD-DataStripe), data hiding in printed halftone images (watermarking), and the superposed constant background grayscale image (CBGI). On the way to developing high-capacity analog channels, the noise encountered in the printing and scanning (PS) process is investigated, with the objective of recovering the digital information encoded at nearly maximum channel utilization. By characterizing the noise behaviour, countermeasures against the noise are taken accordingly in the data recovery process. The HD-DataStripe is a printed binary image similar to conventional 2-D barcodes (e.g. PDF417), but it offers much higher data storage capacity and is intended for machine-readable identity verification documents.
The capacity offered by the HD-DataStripe is sufficient to store high-quality biometric characteristics rather than extracted templates, in addition to the conventional bearer-related data contained in a smart ID card. It also eliminates the need for a central database system (except for backup records) and other expensive storage media currently being used. While developing the novel data-reading technique for the HD-DataStripe, to account for the unavoidable geometrical distortions, the registration-marks pattern is chosen in such a way that it results in accurate sampling points (a necessary condition for reliable data recovery at a higher data encoding rate). For the more sophisticated distortions caused by physical dot-gain effects (intersymbol interference), countermeasures such as application of the sampling theorem, adaptive binarization, and post-processing of the data are given, each one providing only a necessary condition for reliable data recovery. Finally, combining the various filters corresponding to these countermeasures, a novel data-reading technique for the HD-DataStripe is given. The novel data-reading technique results in superior performance to the existing techniques intended for data recovery from printed media. In another scenario, a small HD-DataStripe with maximum entropy is used as a copy detection pattern, by utilizing the information loss encountered at nearly maximum channel capacity. While considering the application of the HD-DataStripe in hardcopy documents (contracts, official letters, etc.), unlike existing work [Zha04], it allows one-to-one contents matching and does not depend on hash functions and OCR technology, constraints mainly imposed by the low data storage capacity offered by the existing analog media.
For printed halftone images carrying hidden information, the higher capacity is mainly attributed to the data-reading technique for the HD-DataStripe, which allows data recovery at higher printing resolution, a key requirement for a high-quality watermarking technique in the spatial domain. Digital halftoning and data encoding techniques are the other factors that contribute to the data hiding technique given in this research. Regarding security aspects, the new technique allows content integrity and authenticity verification in the present scenario, in which a certain amount of error is unavoidable, restricting the usage of existing techniques given for digital content. Finally, a superposed constant background grayscale image, obtained by the repeated application of a specially designed small binary pattern, is used as a channel for hidden communication; it allows up to 33 pages of A4-size foreground text to be encoded in one CBGI. The higher capacity is contributed by the data encoding symbols and the data-reading technique.
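One of the countermeasures named above, adaptive binarization, can be sketched as local-mean thresholding: each scanned pixel is compared to the mean of its neighborhood rather than to one fixed global threshold, which tolerates smooth illumination and gain variations introduced by the print-and-scan process. The block size and the simulated scan below are illustrative choices, not the thesis's parameters:

```python
import numpy as np

def adaptive_binarize(scan, block=8):
    """Threshold each pixel against its local block mean -- a simple
    stand-in for the adaptive binarization countermeasure named above."""
    h, w = scan.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = scan[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = tile > tile.mean()
    return out

# ideal printed bits corrupted by a smooth illumination gradient,
# as a scanned data stripe might be
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(32, 32))
gradient = np.linspace(0.0, 0.4, 32)[None, :]    # uneven lighting
scan = 0.5 * bits + gradient
recovered = adaptive_binarize(scan)
# recovered should match bits despite the gradient, where a single
# global threshold would misclassify one side of the image
```

The actual technique combines this step with registration marks, the sampling theorem, and post-processing, as described above.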

    ID Photograph Hashing: A Global Approach

    This thesis addresses the question of the authenticity of identity photographs, part of the documents required for controlled access. Since sophisticated means of reproduction are publicly available, new methods and techniques are needed to prevent tampering and unauthorized reproduction of the photograph. This thesis proposes a hashing method for the authentication of identity photographs that is robust to print-and-scan. The study also examines the effects of digitization at the hash level. The developed algorithm performs a dimension reduction based on independent component analysis (ICA). In the learning stage, the subspace projection is obtained by applying ICA and then reduced according to an original entropic selection strategy. In the extraction stage, the coefficients obtained after projecting the identity image onto the subspace are quantized and binarized to obtain the hash value. The study reveals the effects of scanning noise on the hash values of the identity photographs and shows that the proposed method is robust to the print-and-scan attack. This approach, focusing on robust hashing of a restricted class of images (identity photographs), differs from classical approaches that address arbitrary images.
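The extraction stage described above (project onto a learned subspace, then quantize and binarize the coefficients) can be sketched as follows. For illustration the projection matrix is random; in the actual method it would come from the ICA learning stage and the entropic selection strategy:

```python
import numpy as np

def photo_hash(img_vec, proj):
    """Hash = binarized coefficients of the image on a learned subspace.

    `proj` stands in for the projection matrix that the learning stage
    derives via ICA and entropic selection; here it is random, for
    illustration only.
    """
    coeffs = proj @ img_vec                               # projection step
    return (coeffs > np.median(coeffs)).astype(np.uint8)  # quantize/binarize

rng = np.random.default_rng(1)
proj = rng.normal(size=(64, 1024))    # 64-bit hash from a 32x32 photo
photo = rng.random(1024)
scanned = photo + rng.normal(scale=0.01, size=1024)   # mild print-scan noise

h1 = photo_hash(photo, proj)
h2 = photo_hash(scanned, proj)
# a small Hamming distance between h1 and h2 indicates robustness
# to the simulated perturbation
```

Robustness here means the Hamming distance between the hash of the original and the hash of its print-and-scan version stays small, while different photographs yield distant hashes.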

    Signal processing based solutions for holographic displays that use binary spatial light modulators

    Get PDF
    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Science of Bilkent University, 2012. Thesis (Ph.D.) -- Bilkent University, 2012. Includes bibliographical references, leaves 141-156.
    Holography is a promising method to realize satisfactory-quality three-dimensional (3D) video displays. Spatial light modulators (SLMs) are used in holographic video displays. Usually SLMs with higher dynamic ranges are preferred, but currently existing multilevel SLMs have important drawbacks. Some of the associated problems can be avoided by using binary SLMs, if their low dynamic range is compensated for by appropriate signal processing techniques. In the first solution, the complex-valued gray-level SLM patterns that synthesize light fields specified in the non-far-field range are halftoned into binary SLM patterns by solving two decoupled real-valued constrained halftoning problems. As the synthesis region, a sufficiently small sub-region of the central diffraction order region of the SLM is chosen such that the halftoning error is acceptable. The light fields are synthesized merely after free-space propagation from the SLM plane, and no other complicated optical setups are needed. In this respect, the theory of halftoning for ordinary real-valued gray-scale images is extended to complex-valued holograms. Simulation results indicate that light fields given either on a plane or within a volume can be successfully synthesized by our approach. In the second solution, a new full complex-valued combined SLM is effectively created by forming a properly weighted superposition of a number of binary SLMs, where the superposition weights can be complex-valued. The method is a generalization of the well-known concepts of bit-plane decomposition and representation for ordinary images, and it involves a trade-off between dynamic range and pixel count.
The coverage of the complex plane by the complex values that can be generated is much more satisfactory than that achieved by the methods available in the literature. The design is also easy to customize for any operating wavelength. As a result, we show that binary SLMs, with their robust nature, can be used for holographic video display designs.
    Ulusoy, Erdem. Ph.D.
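The ordinary bit-plane decomposition that the combined-SLM scheme generalizes is easy to make concrete: an 8-bit image splits into 8 binary planes and is recovered as their weighted superposition with weights 2^k. In the generalized scheme each plane would be a physical binary SLM and the weights could be complex-valued; the sketch below shows only the real-weighted special case:

```python
import numpy as np

def bit_planes(img8):
    """Split an 8-bit image into its 8 binary bit planes (LSB first)."""
    return [(img8 >> k) & 1 for k in range(8)]

def superpose(planes, weights):
    """Weighted superposition of binary planes. With weights 2^k this is
    ordinary bit-plane reconstruction -- the special case that the
    combined-SLM scheme generalizes to complex-valued weights."""
    return sum(w * p for w, p in zip(weights, planes))

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
planes = bit_planes(img)
recon = superpose(planes, [2 ** k for k in range(8)])
# recon equals img exactly
```

The trade-off mentioned in the abstract is visible here: eight binary planes (pixels) are spent to realize one higher-dynamic-range value per location.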

    A Comparison Study of Input Scanning Resolution Requirements for AM and FM Screening

    The advent of computers and their impact on the graphic arts and printing industry has changed, and will continue to change, the methodology of work and workflow in prepress operations. The conversion of analog materials (prints, artwork, transparencies, studio work) into a digital format requires the use of scanners or digital cameras, coupled with knowledge of output requirements as related to client expectations. The chosen input sampling ratio (sampling rate in relation to halftone screening) impacts output quality, as well as many aspects of prepress workflow efficiency. The ability to predict printed results begins with the correct conversion of originals into digital information and then an appropriate conversion into the output materials for the intended press condition. This conversion of originals into digital information can be broken down into four general components. First, the image must be scanned to the size of the final output. Second, the input sampling ratio must be determined in relation to the screening requirements of the job; this ratio should be appropriate to the needs of the printing condition for the final press sheet. Third, the highlight, highlight-to-midtone, and shadow placement points must be determined in order to achieve correct tone reproduction. Fourth, decisions must be made as to the image correction system to be employed in order to obtain consistent digital files from the scanner and prepress workflow. Factors relating to image correction and enhancement include such details as gray balance, color cast correction, dot gain, ink trapping, hue error, and unsharp masking, all areas that impact quality. These are generally applied from within software packages that work with the scanner, or from within image manipulation software after the digital conversion is complete. The necessary input sampling ratio for traditional AM screening has generally been based on the Nyquist sampling theorem.
The basis for determining input sampling ratio requirements for frequency-modulated (FM) screening is less clear. The Nyquist theorem (originally from electrical engineering and communications research) has been applied to the graphic arts, leading to the general acceptance of a standard 2:1 ratio for most prepress scanning work: the sampling rate should be twice the screen frequency. This thesis set out to determine whether there are differences in input sampling ratio scanning requirements, based on the screen frequency selection (100 lpi AM, 175 lpi AM, and 21 µm FM were used in this study), when generating films and/or plates for printing, that might question this interpretation of the Nyquist sampling theorem as it relates to the graphic arts. Five images were tonally balanced over three different screening frequencies and six different sampling ratios. A reference image was generated for each condition using the Nyquist sampling ratio of 2:1. Observers were then asked to rate the images in terms of quality against the standard. Statistical analysis was then applied to the data in order to observe interactions, similarities, and differences. A pilot study was first run in order to determine the amount of unsharp masking to use on the images that would be manipulated in the main study. Seven images were presented, from which four were selected for the final study. Thirty observers were asked for their preference on the amount of sharpening to use. It was found that for this condition (7 images) observers preferred the same amount of sharpening for the 175 lpi AM and 21 µm FM screens, but slightly more sharpening for the 100 lpi AM screen. This information was then applied to the main study images. An additional, previously published image was added after the pilot study, as it contained elements not found in the other images; the unsharp masking applied to this image was the same as at the time of publication.
The main study focused on the interaction of image type, screen frequency, and variations of input scanner sampling ratios as they relate to output. The results indicated that image type, sampling ratio, and the sampling ratio-frequency interaction were factors, but frequency alone was not. However, viewing the interaction chart of frequency and sampling ratio for the 175 lpi AM and 21 µm FM screens alone, an insignificant difference was indicated (at a 95% confidence level). The conclusion can therefore be drawn that, at the higher screen frequencies tested in this study, viewer observations showed that the input sampling ratios should be the same for the 175 lpi AM and 21 µm FM screens: continuous-tone originals should be scanned at a sampling ratio of 1.75:1. This addressed the question of whether FM screening technology can withstand a reduced input sampling ratio and maintain quality, which this study finds it cannot. At the lower screen ruling of 100 lpi, the input scanner sampling ratio requirement, based on viewer preferences for the five images presented, can be reduced to 1.5:1.
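The sampling-ratio rule discussed throughout reduces to a one-line calculation: required input scanning resolution (ppi) equals screen ruling (lpi) times the sampling ratio times the enlargement factor. The helper below simply encodes that standard prepress rule of thumb with the ratios from the study:

```python
def scan_ppi(screen_lpi, sampling_ratio, scale=1.0):
    """Required input scanning resolution in pixels per inch:
    screen ruling (lpi) x sampling ratio x enlargement factor."""
    return screen_lpi * sampling_ratio * scale

# traditional Nyquist-based 2:1 rule at 150 lpi, same-size reproduction:
print(scan_ppi(150, 2.0))      # 300.0 ppi
# the study's findings, same-size reproduction:
print(scan_ppi(175, 1.75))     # 306.25 ppi for 175 lpi AM (and 21 um FM)
print(scan_ppi(100, 1.5))      # 150.0 ppi for 100 lpi AM
```

Scanning for enlargement multiplies the requirement accordingly (e.g. a 2x enlargement doubles the ppi).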