
    Laser scanner jitter characterization, page content analysis for optimal rendering, and understanding image graininess

    In Chapter 1, we consider the electrophotographic (EP) process, which is widely used in imaging systems such as laser printers and office copiers. In the EP process, laser scanner jitter is a common artifact that appears mainly along the scan direction, due to the condition of the polygon facets. Prior studies have not focused on modeling and analyzing the periodic character of laser scanner jitter. This chapter incorporates that periodicity into a mathematical model. In the Fourier domain, we derive an analytic expression for laser scanner jitter in general, and then specialize the expression by assuming a sinusoidal displacement. This leads to a simple closed-form expression in terms of Bessel functions of the first kind. We further examine the relationship between the continuous-space halftone image and the periodic laser scanner jitter. Simulation results show that our proposed mathematical model effectively predicts the laser scanner jitter phenomenon when compared to a characterization using a test pattern consisting of a flat field with 25% dot coverage. However, there are some mismatches between the analytical spectrum and the spectrum of the processed scanned test target. We improve the experimental results by directly estimating the displacement rather than assuming a sinusoidal one, which gives a better prediction of the laser scanner jitter phenomenon.
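    A closed form in Bessel functions of the first kind is exactly what the standard Jacobi–Anger expansion produces; as a hedged sketch of the underlying identity (the chapter's exact derivation and notation are not reproduced here), a sinusoidal scan displacement of amplitude a and angular frequency ω₀ phase-modulates the signal, and

$$ e^{\,j a \sin(\omega_0 t)} \;=\; \sum_{n=-\infty}^{\infty} J_n(a)\, e^{\,j n \omega_0 t}, $$

    where J_n is the Bessel function of the first kind of order n. Each term contributes a sideband at the n-th harmonic of ω₀, which is why periodic jitter concentrates spectral energy at multiples of the polygon-facet frequency.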
    In Chapter 2, we describe a segmentation-based object map correction algorithm that can be integrated into a new imaging pipeline for laser electrophotographic (EP) printers. This pipeline incorporates object-oriented halftoning, which applies different halftone screens to different regions of the page to improve overall print quality. In particular, smooth areas are halftoned with a low-frequency screen to provide more stable printing, whereas detail areas are halftoned with a high-frequency screen, since this better reproduces the object detail; the detail also serves to mask any print defects that arise from the use of a high-frequency screen. These regions are defined by the initial object map, which is translated from the page description language (PDL). However, the object-type information obtained from the PDL may be incorrect: some smooth areas may be labeled as raster, causing them to be halftoned with a high-frequency screen, rather than as vector, which would result in their being rendered with a low-frequency screen. To correct the misclassification, we propose an object map correction algorithm that combines information from the incorrect object map with information obtained by segmenting the continuous-tone RGB rasterized page image. The rendered image can then be halftoned by the object-oriented halftoning approach, based on the corrected object map. Preliminary experimental results indicate the benefits of our algorithm combined with the new imaging pipeline, in terms of correcting misclassification errors.

    In Chapter 3, we describe a study of image graininess. With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. We want to understand how image graininess relates to the halftoning and marking technologies. The chapter takes three different approaches to understanding image graininess. First, we perform a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures. With high-end digital printing technology, irregular screens can be considered, since they achieve a better approximation to the screen sets used for commercial offset presses; this is because the elements of the periodicity matrix of an irregular screen are rational numbers, rather than integers, as would be the case for a regular screen. The analytical results show that irregular halftone textures generate new frequency components near the spectrum origin, and that these components are low enough in frequency to be visible to the human viewer; regular halftone textures do not have these components. In addition, we provide a metric to measure the nonuniformity of a given halftone texture. The metric indicates that the nonuniformity of irregular halftone textures is higher than that of regular halftone textures. Furthermore, a method to visualize the nonuniformity of given halftone textures is described. The analysis shows that irregular halftone textures are grainier than regular halftone textures.

    Second, we analyze the regular and irregular periodic, clustered-dot halftone textures by calculating three spatial statistics: the disparity between the lattice points generated by the periodicity matrix and the centroids of the dot clusters, the area of the dot clusters, and the compactness of the dot clusters. The disparity between the centroids of irregular dot clusters and the lattice points generated by the irregular screen is larger than the corresponding disparity for the regular screen. Irregular halftone textures also have higher variance in the histogram of dot-cluster area. In addition, the compactness measurement shows that irregular dot clusters are less compact than regular dot clusters, even though a clustered-dot halftoning algorithm should produce dot clusters that are as compact as possible.

    Lastly, we examine current marking technology by printing the same halftone pattern on two different substrates, glossy and polyester media. The experimental results show that current marking technology provides better print quality on glossy media than on polyester media. From these three approaches, we conclude that current halftoning technology introduces image graininess in the spatial domain because of the non-integer elements in the periodicity matrix of the irregular screen and the finite addressability of the marking engine. In addition, the geometric characteristics of irregular dot clusters are more irregular than those of regular dot clusters. Finally, the marking technology yields inconsistent print quality across substrates.
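    As an illustration of the second approach, the following is a minimal sketch, under assumptions of my own (a binary halftone patch in a NumPy array, 8-connected dot clusters, compactness as 4πA/P²; the thesis's exact definitions are not reproduced), of the three cluster statistics:

```python
import numpy as np
from scipy import ndimage

def cluster_statistics(halftone, lattice_points):
    """Centroid disparity, area, and compactness of dot clusters.

    halftone       : 2-D boolean array, True where a dot is printed
    lattice_points : (N, 2) array of (row, col) lattice sites generated
                     by the screen's periodicity matrix (assumed given)
    """
    # Label 8-connected dot clusters.
    labels, n = ndimage.label(halftone, structure=np.ones((3, 3)))
    idx = np.arange(1, n + 1)
    centroids = np.array(ndimage.center_of_mass(halftone, labels, idx))
    areas = np.asarray(ndimage.sum(halftone, labels, idx))

    # Disparity: distance from each centroid to its nearest lattice point.
    d = np.linalg.norm(centroids[:, None, :] - lattice_points[None, :, :], axis=2)
    disparity = d.min(axis=1)

    # Compactness: 4*pi*A / P^2 (1.0 for a perfect disc); the perimeter is
    # approximated by counting the boundary pixels of each cluster.
    boundary = halftone & ~ndimage.binary_erosion(halftone)
    perimeter = np.asarray(ndimage.sum(boundary, labels, idx))
    compactness = 4 * np.pi * areas / np.maximum(perimeter, 1) ** 2
    return disparity, areas, compactness
```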

    Estimating toner usage with laser electrophotographic printers, and object map generation from raster input image

    Accurate estimation of toner usage is an area of ongoing importance for laser electrophotographic (EP) printers. In Part 1, we propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression against toner usage measured from a set of printed test pages. Our two-stage predictor significantly outperforms existing methods based on a simple pixel-counting strategy, in terms of both accuracy and robustness of the predictions.

    In Part 2, we describe a raster-input-based object map generation algorithm (OMGA) for laser EP printers. The object map is used in the object-oriented halftoning approach, where different halftone screens and color maps are applied to different types of objects on the page in order to improve overall printing quality. The OMGA generates the object map directly from the raster input. It addresses cases in which the object map obtained from the page description language (PDL) is incorrect, or in which no initial object map is available from the processing pipeline. A new imaging pipeline for the laser EP printer incorporating both the OMGA and the object-oriented halftoning approach is proposed. The OMGA is a segmentation-based classification approach: it first detects objects according to edge information, and then classifies the objects by analyzing feature values extracted from the contour and the interior of each object. The OMGA is designed to be hardware-friendly and can be implemented within two passes through the input document.
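    A minimal sketch of the Part 1 regression stage, under my own assumptions (each page summarized by a histogram of its predicted pixel absorptances; the paper's actual features and fitting details are not reproduced):

```python
import numpy as np

def fit_toner_weights(absorptance_maps, measured_toner, n_bins=16):
    """Choose weights by least-squares regression so a weighted sum of
    predicted pixel absorptances reproduces measured toner usage.

    absorptance_maps : list of 2-D arrays of predicted absorptance in [0, 1]
    measured_toner   : (num_pages,) measured toner usage per test page
    """
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    # Summarize each page by its absorptance histogram (assumed featurization).
    X = np.stack([np.histogram(a, bins=bins)[0] for a in absorptance_maps])
    w, *_ = np.linalg.lstsq(X.astype(float), measured_toner, rcond=None)
    return bins, w

def predict_toner(absorptance_map, bins, w):
    """Predict toner usage for a new page from its absorptance map."""
    counts = np.histogram(absorptance_map, bins=bins)[0]
    return float(counts @ w)
```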

    A User Oriented Image Retrieval System using Halftoning BBTC

    The objective of this paper is to develop a system for content-based image retrieval (CBIR) that exploits the low complexity of Ordered Dither Block Truncation Coding (ODBTC), a halftoning-based technique, for the generation of image content descriptors. In the encoding step, ODBTC compresses an image block into corresponding quantizers and a bitmap image. Two image features are proposed for indexing an image, namely the co-occurrence feature (CCF) and the bit pattern feature (BPF), which are generated from the ODBTC encoded data streams without performing the decoding process. The CCF and BPF of an image are derived from the two quantizers and the bitmap, respectively, with the aid of visual codebooks. The proposed block-truncation-coding-based image retrieval method is not only convenient for image compression but also satisfies the demands of users by offering an effective descriptor for indexing images in a CBIR system.
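    A minimal sketch of the encoding step, assuming one common ODBTC formulation (block minimum and maximum as the two quantizers, bitmap thresholded against a scaled ordered-dither array; the paper's exact variant may differ):

```python
import numpy as np

def odbtc_encode(image, dither, block=8):
    """Ordered dither block truncation coding of a grayscale image.

    dither : (block, block) array of ordered-dither values
    Returns per-block min/max quantizers and the full-resolution bitmap.
    """
    h, w = image.shape
    q_min = np.zeros((h // block, w // block))
    q_max = np.zeros_like(q_min)
    bitmap = np.zeros((h, w), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blk = image[i:i + block, j:j + block].astype(float)
            lo, hi = blk.min(), blk.max()
            q_min[i // block, j // block] = lo
            q_max[i // block, j // block] = hi
            # Pixel-wise threshold: block minimum plus the scaled dither array.
            thresh = lo + (hi - lo) * dither / dither.max()
            bitmap[i:i + block, j:j + block] = blk >= thresh
    return q_min, q_max, bitmap
```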

    Feature-sensitive and Adaptive Image Triangulation: A Super-pixel-based Scheme for Image Segmentation and Mesh Generation

    With the increasing utilization of various imaging techniques (such as CT, MRI and PET) in medical fields, there is often a great need to computationally extract the boundaries of objects of interest, a process commonly known as image segmentation. While numerous approaches to automatic/semi-automatic image segmentation have been proposed in the literature, most are based on image pixels. The number of pixels in an image can be huge, especially for 3D imaging volumes, which renders pixel-based image segmentation inevitably slow. On the other hand, 3D mesh generation from imaging data has become important not only for visualization and quantification but, more critically, for finite element based numerical simulation. Traditionally, image-based mesh generation follows the procedure: (1) image boundary segmentation, (2) surface mesh generation from the segmented boundaries, and (3) volumetric (e.g., tetrahedral) mesh generation from the surface meshes. These three major steps have commonly been treated as separate algorithms, and hence image information, once segmented, is not considered any further during mesh generation.

    In this thesis, we investigate a super-pixel based scheme that integrates image segmentation and mesh generation into a single method, making mesh generation a truly image-incorporated approach. Our method, called image content-aware mesh generation, consists of several main steps. First, we generate a set of feature-sensitive, adaptively distributed points from 2D grayscale images or 3D volumes. A novel image edge enhancement method via randomized shortest paths is introduced as an optional way to generate the feature-boundary map in the mesh node generation step. Second, a Delaunay triangulation generator (2D) or tetrahedral mesh generator (3D) is used to generate a 2D triangulation or 3D tetrahedral mesh. The generated triangulation (or tetrahedralization) provides an adaptive partitioning of the given image (or volume). Each cluster of pixels within a triangle (or of voxels within a tetrahedron) is called a super-pixel; each super-pixel forms a node of a graph, and adjacent super-pixels define the edges of the graph. A graph-cut method is then applied to the graph to define the boundary between two subsets of the graph, resulting in good boundary segmentations with high-quality meshes. Thanks to the significantly reduced number of elements (super-pixels) compared to the number of pixels in an image, the super-pixel based segmentation method tremendously improves segmentation speed, making real-time feature detection feasible. In addition, the incorporation of image segmentation into mesh generation makes the generated mesh well adapted to image features, a desired property known as feature-preserving mesh generation.
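    A minimal sketch of the 2D super-pixel construction, assuming the feature-adaptive mesh nodes are already available (uses scipy.spatial.Delaunay; the graph-cut stage is not shown):

```python
import numpy as np
from scipy.spatial import Delaunay

def superpixels_from_points(image, points):
    """Partition a grayscale image into triangular super-pixels.

    points : (N, 2) array of (x, y) mesh nodes, assumed already placed
             adaptively near image features as the thesis describes.
    Returns the triangulation, a per-pixel triangle-index map, and the mean
    intensity per super-pixel (a natural node weight for a later graph cut).
    """
    tri = Delaunay(points)
    h, w = image.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # find_simplex maps every pixel to its containing triangle (-1 = outside).
    label_map = tri.find_simplex(np.column_stack([xs.ravel(), ys.ravel()]))
    label_map = label_map.reshape(h, w)

    n_tri = len(tri.simplices)
    flat, vals = label_map.ravel(), image.ravel().astype(float)
    inside = flat >= 0
    sums = np.bincount(flat[inside], weights=vals[inside], minlength=n_tri)
    counts = np.bincount(flat[inside], minlength=n_tri)
    means = sums / np.maximum(counts, 1)   # mean intensity per super-pixel
    return tri, label_map, means
```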

    Floating Points: A Method for Computing Stipple Drawings

    Digital Image Segmentation and On-line Print Quality Diagnostics

    During the electrophotographic (EP) process in a modern laser printer, object-oriented halftoning is sometimes used, which renders an input raster page with different halftone screen frequencies according to an object map; this approach can reduce print artifacts in smooth areas while preserving the fine details of a page. The object map can be extracted directly from the page description language (PDL), but most of the time it is not correctly generated. In the first part of this thesis, we introduce a new object map generation algorithm that builds an object map from scratch, purely from a raster image. The algorithm is intended for ASIC implementation. To achieve hardware friendliness and memory efficiency, the algorithm buffers only two strips of the image at a time for processing. A novel two-pass connected component algorithm is designed that runs through all the pixels in raster order, collects features and classifies components on the fly, and recycles unused components to save memory for future strips. The algorithm is implemented as a C program. For 10 test pages, with object maps of similar quality generated, the number of connected components used is reduced by over 97% on average compared to the classic two-pass connected component algorithm, which buffers a whole page of pixels. The connected component algorithm used here for document segmentation can potentially also be applied to a wide variety of other problems.

    The second part of the thesis proposes a new way to diagnose print quality. In contrast to traditional print quality diagnostics, which print a specially designed test page to be examined by an expert or checked against a user manual, our proposed system can automatically diagnose a customer's printer without human intervention. The system relies on scanning printouts from the user's printer. Print defects such as banding and streaking will be reflected in the scanned page and can be captured by comparison with the master image, the digitally generated original from which the page was printed. Once the print quality drops below a specified acceptance level, the system notifies the user of the presence of print quality issues. Among the many possible print defects, color fading, caused by low toner in the cartridge, is the focus of this work. Our image processing pipeline first uses a feature-based image registration algorithm to align the scanned page with the master page spatially, and then calculates the color difference between corresponding color clusters of the scanned and master pages. Finally, it predicts which cartridge is depleted.
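    For reference, a minimal sketch of the classic whole-page two-pass connected component algorithm with union-find that the thesis improves upon (the strip-buffered, component-recycling variant is not reproduced):

```python
import numpy as np

def two_pass_label(binary):
    """Classic two-pass connected component labeling (4-connectivity)."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    parent = [0]                            # union-find; parent[0] unused

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    next_label = 1
    # Pass 1: provisional labels and equivalences, in raster order.
    for i in range(h):
        for j in range(w):
            if not binary[i, j]:
                continue
            up = labels[i - 1, j] if i else 0
            left = labels[i, j - 1] if j else 0
            if up and left:
                ru, rl = find(up), find(left)
                labels[i, j] = min(ru, rl)
                parent[max(ru, rl)] = min(ru, rl)   # merge equivalence classes
            elif up or left:
                labels[i, j] = find(up or left)
            else:
                parent.append(next_label)           # start a new component
                labels[i, j] = next_label
                next_label += 1
    # Pass 2: replace provisional labels by their representatives.
    for i in range(h):
        for j in range(w):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```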

    Rethinking PRL: A Multiscale Progressively Residual Learning Network for Inverse Halftoning

    Image inverse halftoning is a classic image restoration task that aims to recover continuous-tone images from halftone images containing only bilevel pixels. Because halftone images lose much of the original image content, inverse halftoning is a classic ill-posed problem. Although existing inverse halftoning algorithms achieve good performance, their results lose image details and features, so recovering high-quality continuous-tone images remains a challenge. In this paper, we propose an end-to-end multiscale progressively residual learning network (MSPRL), which has a UNet architecture and takes multiscale input images. To make full use of the information in inputs of different scales, we design a shallow feature extraction module to capture similar features between images of different scales. We systematically study the performance of different methods and compare them with our proposed method. In addition, we employ different training strategies to optimize the model, which is important for improving the training process and the final performance. Extensive experiments demonstrate that our MSPRL model obtains considerable performance gains in detail restoration.
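    A minimal PyTorch sketch of the multiscale-input idea with a shared shallow feature extractor (module names and sizes are my own assumptions; this is not the authors' released architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShallowFeatureExtractor(nn.Module):
    """Shared shallow convolutions applied to each scale of the input."""
    def __init__(self, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.conv(x)

def multiscale_inputs(halftone, num_scales=3):
    """Build a pyramid of progressively downsampled halftone inputs."""
    return [F.interpolate(halftone, scale_factor=0.5 ** s, mode='bilinear',
                          align_corners=False) if s else halftone
            for s in range(num_scales)]

# Usage: per-scale shallow features could feed a UNet-style encoder.
x = torch.rand(1, 1, 256, 256)            # toy single-channel input
extractor = ShallowFeatureExtractor()
features = [extractor(xs) for xs in multiscale_inputs(x)]
```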

    A new adaptive edge enhancement algorithm for color laser printers

    This thesis presents a novel algorithm for improving the quality of edges in printed text. The algorithm is designed to add pixels at selected edge locations after halftoning. The extent of the correction is proportional to the “strength” of the edge, as determined by comparing the local differences in a four-pixel neighborhood to a dynamically generated threshold. The process is computationally efficient and requires minimal memory. The performance of our proposed algorithm is clearly demonstrated on several characters and lines. While the algorithm aims to improve the quality of printed text edges, its application can be extended to improving any identifiable edge in an image document.
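    A hedged reconstruction of the edge-strength test described above (the four-pixel neighborhood is taken to be the 4-connected neighbors, and the dynamic threshold is assumed to be a multiple of the local mean absolute difference; the thesis's actual rule is not reproduced):

```python
import numpy as np

def edge_strength(gray, i, j, k=1.5):
    """Edge 'strength' at an interior pixel from its four-pixel neighborhood."""
    center = float(gray[i, j])
    neighbors = np.array([gray[i - 1, j], gray[i + 1, j],
                          gray[i, j - 1], gray[i, j + 1]], dtype=float)
    diffs = np.abs(neighbors - center)
    threshold = k * diffs.mean()        # dynamic, content-dependent threshold
    strength = diffs.max()
    # Correction would be applied only where the edge is strong enough, with
    # extent proportional to the strength, as the abstract describes.
    return strength if strength > threshold else 0.0
```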

    Wholetoning: Synthesizing Abstract Black-and-White Illustrations

    Black-and-white imagery is a popular and interesting depiction technique in the visual arts, in which varying tints and shades of a single colour are used. Within the realm of black-and-white images, there is a set of black-and-white illustrations that depict only salient features, ignoring details, and reduce colour to pure black and white with no intermediate tones. These illustrations hold tremendous potential to enrich decoration, human communication and entertainment. Producing abstract black-and-white illustrations by hand relies on a time-consuming and difficult process that requires both artistic talent and technical expertise. Previous work has not explored this style of illustration in much depth, and simple approaches such as thresholding are insufficient for stylization and artistic control. I use the word wholetoning to refer to illustrations that feature a high degree of shape and tone abstraction.

    In this thesis, I explore computer algorithms for generating wholetoned illustrations. First, I offer a general-purpose framework, “artistic thresholding”, to control the generation of wholetoned illustrations in an intuitive way. The basic artistic thresholding algorithm is an optimization framework based on simulated annealing that produces the final bi-level result. I design an extensible objective function from observations of many wholetoned images; the objective function is a weighted sum over terms that encode features common to wholetoned illustrations.

    Based on the framework, I then explore two specific wholetoned styles: papercutting and representational calligraphy. I define a paper-cut design as a wholetoned image with connectivity constraints that ensure it can be cut out of a single piece of paper. My computer-generated papercutting technique can convert an original wholetoned image into a paper-cut design; it can also synthesize the stylized and geometric patterns often found in traditional designs. Representational calligraphy is defined as a wholetoned image with the constraint that all depiction elements must be letters. The procedure for generating representational calligraphy designs is formalized as a “calligraphic packing” problem. I provide a semi-automatic technique that can warp a sequence of letters to fit a shape while preserving their readability.
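    A minimal sketch of the simulated-annealing core of artistic thresholding, with a user-supplied objective standing in for the thesis's weighted sum of terms:

```python
import numpy as np

def artistic_threshold(gray, objective, iters=20000, t0=1.0):
    """Anneal a bi-level image to minimize a wholetoning objective.

    objective : callable taking a boolean image and returning a scalar score
                (the thesis's actual objective terms are not reproduced).
    """
    rng = np.random.default_rng(0)
    state = gray > gray.mean()              # start from a naive threshold
    energy = objective(state)
    for k in range(iters):
        t = t0 * (1.0 - k / iters)          # linear cooling schedule
        i = rng.integers(gray.shape[0])
        j = rng.integers(gray.shape[1])
        state[i, j] = ~state[i, j]          # propose flipping one pixel
        e_new = objective(state)
        if e_new < energy or rng.random() < np.exp(-(e_new - energy) / max(t, 1e-9)):
            energy = e_new                  # accept the move
        else:
            state[i, j] = ~state[i, j]      # reject: undo the flip
    return state
```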