
    A new framework for an electrophotographic printer model

    Digital halftoning is a printing technology that creates the illusion of continuous-tone images for printing devices, such as electrophotographic printers, that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution, which blurs the printed dots of the halftone image, creating the gray sensation of a continuous-tone image. Because the printing process is imperfect, it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new types of halftone algorithms that are designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer specific, they reproduce unrealistic dot structures, and they are unable to adapt responses to new data. The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine.
The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions. The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed, and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and for the model's operational parameters and limits were established. A method for selecting training sets based on the morphological properties of the halftone patterns was also developed. The model is fast and can continue to learn with additional training. It can be easily implemented because it requires only a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions of the tone and spatial characteristics when modeling halftone patterns individually, and close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research into this new printer-model framework.
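    The bank-of-specialists idea can be sketched in miniature: one feed-forward network in the bank maps a texture-feature vector to a predicted parameter of the printed texture. This is a minimal illustration, not the dissertation's model; the weights, feature vector, and network size below are made-up placeholders.

```python
# One specialist network from the proposed bank, sketched as a
# one-hidden-layer feed-forward net. All weights and features are
# illustrative placeholders, not values from the dissertation.

def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    # y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def predict_texture_param(features, W1, b1, W2, b2):
    hidden = relu(dense(features, W1, b1))
    return dense(hidden, W2, b2)[0]

# Toy example: 3 texture features -> 2 hidden units -> 1 output
W1 = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.05]
y = predict_texture_param([0.8, 0.2, 0.5], W1, b1, W2, b2)
```

In the dissertation's framework each such network would be trained on scanned prints, so it can keep learning as new scanner data arrives.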

    Simulation of an electrophotographic halftone reproduction

    The robustness of three digital halftoning techniques is simulated for a hypothetical electrophotographic laser printer subjected to dynamic environmental conditions over a copy run of one thousand images. Mathematical electrophotographic models have primarily concentrated on solid-area reproductions under time-invariant conditions. The models used in this study predict the behavior of complex image distributions at various stages in the electrophotographic process. The system model is divided into seven subsystems: Halftoning, Laser Exposure, Photoconductor Discharge, Toner Development, Transfer, Fusing, and Image Display. Spread functions associated with laser spot intensity, charge migration, and toner transfer and fusing are used to predict the electrophotographic system response for continuous and halftone reproduction. Many digital halftoning techniques have been developed for converting from continuous-tone to binary (halftone) images. The general objective of halftoning is to approximate the intermediate gray levels of continuous-tone images with a binary (black-and-white) imaging system. Three major halftoning techniques currently in use are Ordered Dither, Clustered-Dot, and Error Diffusion. These halftoning algorithms are included in the simulation model. Simulation in electrophotography can be used to better understand the relationship between electrophotographic parameters and image quality, and to observe the effects of time-variant degradation on electrophotographic parameters and materials. Simulation programs, written in FORTRAN and SLAM (Simulation Language Alternative Modeling), have been developed to investigate the effects of system degradation on halftone image quality. The programs have been designed for continuous simulation to characterize the behavior or condition of the electrophotographic system.
The simulation language provides the necessary algorithms for obtaining values for the variables described by the time-variant equations, maintaining a history of values during the simulation run, and reporting statistical information on time-dependent variables. Electrophotographic variables associated with laser intensity, initial photoconductor surface voltage, and residual voltage are degraded over a simulated run of one thousand copies. These results are employed to predict the degraded electrophotographic system response and to investigate the behavior of the various halftone techniques under dynamic system conditions. Two techniques have been applied to characterize halftone image quality. Tone Reproduction Curves are used to characterize and record the tone reproduction capability of an electrophotographic system over a simulated copy run; density measurements are collected and statistical inferences drawn using SLAM. Typically, the sharpness of an image is characterized by a system modulation transfer function (MTF), but the mathematical models used to describe the subsystem transforms of an electrophotographic system involve non-linear functions. One means of predicting this non-linear system response is to use a Chirp function as the input to the model and then to compare the reproduced modulation to that of the original. Since the imaging system is non-linear, the system response cannot be described by an MTF, but rather by an Input Response Function. This function was used to characterize the robustness of halftone patterns at various frequencies. Simulated images were also generated throughout the simulation run and used to evaluate image sharpness and resolution. The data generated from each of the electrophotographic simulation models clearly indicate that image stability and image sharpness are influenced not by dot orientation but by the type of halftoning operation used.
Error Diffusion is significantly more variable than Clustered-Dot and Dispersed-Dot at low to mid densities. However, Error Diffusion is significantly less variable than the ordered-dither patterns at high densities. Also, images generated with Error Diffusion are sharper than those generated using Clustered-Dot and Dispersed-Dot techniques, but the resolution capability of each technique remained the same and degraded equally over each simulation run.
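    Of the three halftoning families compared, Error Diffusion is the easiest to sketch compactly. Below is a minimal Floyd-Steinberg error-diffusion pass with the standard weights (7/16, 3/16, 5/16, 1/16); the study's own coefficients and scan order are not given in the abstract, so these are illustrative.

```python
# Floyd-Steinberg error diffusion: binarize each pixel, then push the
# quantization error onto unvisited neighbors with fixed weights.

def error_diffusion(img, threshold=0.5):
    """Binarize a 2-D grayscale image with values in [0, 1]."""
    h, w = len(img), len(img[0])
    buf = [row[:] for row in img]      # working copy accumulates error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1 if old >= threshold else 0
            out[y][x] = new
            err = old - new
            # Standard Floyd-Steinberg error weights
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return out

# A flat mid-gray patch: the output should be binary with roughly
# half the pixels on, preserving the average tone.
halftone = error_diffusion([[0.5] * 4 for _ in range(4)])
```

Because the diffused error preserves average tone, this is the kind of algorithm whose density variability the simulation tracks over the degraded copy run.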

    Characterization and identification of printed objects

    A study of the physical appearance of pre-photographic, photomechanical, photographic, and digital positive reflective prints was conducted, relating the observed images to the history, materials, and technology used to create them. The studied samples are from the Image Permanence Institute (IPI) study collection. The digital images were obtained using a digital SLR on a copystand and a compound light microscope, with different lighting angles (0°, 45°, and 90°) and magnifications ranging from overall views on the copystand down to a 20x objective lens on the microscope. Most of these images were originally created by IPI for www.digitalsamplebook.org, a web tool for teaching print identification, and will be used on the www.graphicsatlas.org website, along with textual information on the identification, technology, and history of these reproduction processes.

    Estimating toner usage with laser electrophotographic printers, and object map generation from raster input image

    Accurate estimation of toner usage is an area of ongoing importance for laser, electrophotographic (EP) printers. In Part 1, we propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression to toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods that are based on a simple pixel-counting strategy in terms of both accuracy and robustness of the predictions. In Part 2, we describe a raster-input-based object map generation algorithm (OMGA) for laser, electrophotographic (EP) printers. The object map is utilized in the object-oriented halftoning approach, where different halftone screens and color maps are applied to different types of objects on the page in order to improve the overall printing quality. The OMGA generates the object map from the raster input directly. It addresses cases in which the object map obtained from the page description language (PDL) is incorrect, or in which an initial object map is unavailable from the processing pipeline. A new imaging pipeline for the laser EP printer incorporating both the OMGA and the object-oriented halftoning approach is proposed. The OMGA is a segmentation-based classification approach. It first detects objects according to the edge information, and then classifies the objects by analyzing the feature values extracted from the contour and the interior of each object. The OMGA is designed to be hardware-friendly, and can be implemented within two passes through the input document.
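    The second stage of the toner-usage predictor is a weighted sum of per-page absorptance statistics, with weights fit by least-squares regression against measured toner usage on test pages. A minimal sketch with made-up features and measurements, solving the two-parameter normal equations directly:

```python
# Fit weights w for predicted_usage = w[0]*f1 + w[1]*f2 by least
# squares. Features and toner measurements below are toy values,
# not data from the paper.

def fit_least_squares(X, y):
    """Solve the 2-parameter normal equations (X^T X) w = X^T y."""
    s11 = sum(x[0] * x[0] for x in X)
    s12 = sum(x[0] * x[1] for x in X)
    s22 = sum(x[1] * x[1] for x in X)
    t1 = sum(x[0] * yi for x, yi in zip(X, y))
    t2 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s11 * s22 - s12 * s12
    return [(s22 * t1 - s12 * t2) / det,
            (s11 * t2 - s12 * t1) / det]

# Each row: [summed pixel absorptance, dark-pixel count] for one
# test page; y is the measured toner usage for that page (grams).
X = [[100.0, 50.0], [200.0, 80.0], [400.0, 90.0]]
y = [0.45, 0.88, 1.69]
w = fit_least_squares(X, y)
```

Once fit, the same weighted sum is applied to any new page's pixel statistics to estimate its toner draw, which is what lets the method outperform plain pixel counting.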

    Currency security and forensics: a survey

    By definition, the word currency refers to an agreed medium of exchange; a nation's currency is the formal medium enforced by the elected governing entity. Throughout history, issuers have faced one common threat: counterfeiting. Despite technological advancements, eliminating counterfeit production remains a distant goal. Scientific determination of authenticity requires a deep understanding of the raw materials and manufacturing processes involved. This survey synthesizes the current literature to explain the technology and mechanics involved in currency manufacture and security, whilst identifying gaps in that literature. Ultimately, a robust currency is desired.

    Automated Algorithm for the Identification of Artifacts in Mottled and Noisy Images

    We describe a method for automatically classifying image-quality defects on printed documents. The proposed approach accepts a scanned image where the defect has been localized a priori and performs several appropriate image processing steps to reveal the region of interest. A mask is then created from the exposed region to identify bright outliers. Morphological reconstruction techniques are then applied to emphasize relevant local attributes. The classification of the defects is accomplished via a customized tree classifier that utilizes size or shape attributes at corresponding nodes to yield appropriate binary decisions. Applications of this process include automated or assisted diagnosis and repair of printers and copiers in the field in a timely fashion. The proposed technique was tested on a database of 276 images of synthetic and real-life defects, achieving 94.95% accuracy.
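    The bright-outlier mask step can be illustrated with a simple statistical threshold: flag pixels that exceed the region mean by a multiple of the standard deviation. The threshold factor and the test region below are illustrative, not the paper's calibrated values.

```python
# Build a binary mask of "bright outliers" in a region of interest:
# a pixel is flagged when it exceeds mean + k * stddev. The factor
# k = 2.0 is an illustrative choice, not from the paper.

def bright_outlier_mask(region, k=2.0):
    pixels = [p for row in region for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    thresh = mean + k * var ** 0.5
    return [[1 if p > thresh else 0 for p in row] for row in region]

# Toy region with one bright defect pixel in the center
region = [[10, 12, 11],
          [11, 90, 10],
          [12, 11, 10]]
mask = bright_outlier_mask(region)
```

In the full pipeline such a mask would then feed the morphological reconstruction and tree-classifier stages.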

    Attacking and Defending Printer Source Attribution Classifiers in the Physical Domain

    The security of machine learning classifiers has received increasing attention in recent years. In forensic applications, guaranteeing the security of the tools investigators rely on is crucial, since the gathered evidence may be used to decide about the innocence or the guilt of a suspect. Several adversarial attacks were proposed to assess such security, with a few works focusing on transferring such attacks from the digital to the physical domain. In this work, we focus on physical-domain attacks against source attribution of printed documents. We first show how a simple reprinting attack may be sufficient to fool a model trained on images that were printed and scanned only once. Then, we propose a hardened version of the classifier trained on the reprinted attacked images. Finally, we attack the hardened classifier with several attacks, including a new attack based on the Expectation Over Transformation approach, which finds the adversarial perturbations by simulating the physical transformations occurring when the image attacked in the digital domain is printed again. Our results demonstrate a good capability of the hardened classifier to resist attacks carried out in the physical domain.

    Automatic image registration and defect identification of a class of structural artifacts in printed documents

    The work in this thesis proposes a defect analysis system that automatically aligns a digitized copy of a printed output to a reference electronic original and highlights image defects. We focus on a class of image defects, or artifacts, caused by shortfalls in the mechanical or electrophotographic processes, including spots, deletions, and debris. The algorithm begins with image registration performed using a log-polar transformation and mutual information techniques. A confidence map is then calculated by comparing the contrast and entropy in the neighborhood of each pixel in both the printed document and the corresponding electronic original. This results in a qualitative difference map of the two images highlighting the detected defects. The algorithm was demonstrated successfully on a collection of 99 printed images, based on 11 original electronic images and test patterns, printed on 9 different faulty printers provided by Xerox Corporation. The proposed algorithm is effective in aligning digitized printed output irrespective of translation, rotation, and scale variations, and in identifying defects in color-inconsistent hardcopies.
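    The mutual-information score used during registration can be sketched in isolation (the log-polar search over rotation and scale is omitted): MI is computed from the joint intensity histogram of the two images, and a well-aligned pair scores higher than a misaligned one. The bin count and images below are toy values.

```python
# Mutual information between two equally sized grayscale images,
# computed from a coarse joint intensity histogram. Bin count and
# test images are illustrative.

import math

def mutual_information(a, b, bins=4, max_val=256):
    joint, n = {}, 0
    for ra, rb in zip(a, b):
        for pa, pb in zip(ra, rb):
            key = (pa * bins // max_val, pb * bins // max_val)
            joint[key] = joint.get(key, 0) + 1
            n += 1
    pa_m, pb_m = {}, {}
    for (i, j), c in joint.items():
        pa_m[i] = pa_m.get(i, 0) + c
        pb_m[j] = pb_m.get(j, 0) + c
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij / ((pa_m[i] / n) * (pb_m[j] / n)))
    return mi

img = [[0, 0, 128, 128]] * 4          # simple two-level pattern
shifted = [[128, 0, 0, 128]] * 4      # same pattern, shifted by one
mi_self = mutual_information(img, img)        # aligned: high MI
mi_shift = mutual_information(img, shifted)   # misaligned: lower MI
```

A registration search maximizes this score over candidate translations, rotations, and scales, which is why it tolerates the transformations listed in the abstract.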

    Deep learning for printed document source identification

    Owing to the rapid development of information technology and the widespread use of the internet, information is easily obtained in digital form. Digital content can be freely printed as documents because of the ease and accessibility of printers. On the other hand, printed documents can be manipulated illegally for criminal purposes such as forged documents, counterfeit currency, copyright infringement, and so on. Therefore, developing efficient and accurate security testing tools to identify the source of a printed document is currently an important task. To date, forensic systems using statistical methods and support vector machine technology have been able to identify the source printer of text documents and images. Such approaches fall into the category of shallow machine learning, with human interaction during the feature extraction, feature selection, and data pre-processing stages. In this paper, a deep learning system for solving complex image classification problems is developed using Convolutional Neural Networks (CNNs), which can learn features automatically. Systematic experiments were conducted for both systems. For microscopic documents, the feature-based SVM system outperformed the deep learning system by a limited margin. For scanned documents, both systems achieved equally good results with high accuracy. Both systems should continue to be evaluated and compared to determine which is best suited for universal use.
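    The convolution at the core of the CNN approach can be illustrated in isolation; the paper's actual network architecture is not specified here, and the kernel below is a toy horizontal-edge filter rather than a learned one.

```python
# Valid 2-D convolution (no padding), the basic operation a CNN
# stacks and whose kernels it learns automatically. The edge-filter
# kernel here is a hand-picked toy, not a trained filter.

def conv2d_valid(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[i][j] * img[y + i][x + j]
                 for i in range(kh) for j in range(kw))
             for x in range(ow)] for y in range(oh)]

# Horizontal edge detector applied to an image with one edge
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [1, 1, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
edges = conv2d_valid(img, [[-1, -1, -1], [0, 0, 0], [1, 1, 1]])
```

The response is strongest near the edge and zero in uniform regions; a trained CNN learns many such kernels, replacing the hand-crafted feature extraction of the SVM pipeline.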