9 research outputs found

    A novel algorithm for color quantization by 3D diffusion

    Author name used in this publication: Y. H. Chan. Refereed conference paper. 2002-2003 > Academic research: refereed > Refereed conference paper. Version of Record. Published.

    Media processor implementations of image rendering algorithms

    Demands for fast execution of image processing are a driving force for today's computing market. Many image processing applications require intense numeric calculations to be done on large sets of data with minimal overhead time. To meet this challenge, several approaches have been used. Custom-designed hardware devices are very fast implementations used in many systems today. However, these devices are very expensive and inflexible. General purpose computers with enhanced multimedia instructions offer much greater flexibility but process data at a much slower rate than the custom-hardware devices. Digital signal processors (DSPs) and media processors, such as the MAP-CA created by Equator Technologies, Inc., may be an efficient alternative that provides a low-cost combination of speed and flexibility. Today, DSPs and media processors are commonly used in image and video encoding and decoding, including JPEG and MPEG processing techniques. Little work has been done to determine how well these processors can perform other image processing techniques, specifically image rendering for printing. This project explores various image rendering algorithms and the performance achieved by running them on a media processor to determine if this type of processor is a viable competitor in the image rendering domain. Performance measurements obtained when implementing rendering algorithms on the MAP-CA show that a 4.1x speedup can be achieved with neighborhood-type processes, while point-type processes achieve an average speedup of 21.7x as compared to general purpose processor implementations.
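    As a rough illustration of the two algorithm classes compared above, the sketch below (plain Python/NumPy, not the MAP-CA code; the function names, kernel, and LUT are assumptions for illustration) contrasts a point-type process, where each output pixel depends only on the corresponding input pixel, with a neighborhood-type process, where each output pixel depends on a small window of inputs.

```python
import numpy as np

def point_process(img, lut):
    """Point-type process: each output pixel depends only on the
    corresponding input pixel (e.g. tone/gamma correction via a LUT).
    These vectorize well, hence the larger speedups reported above."""
    return lut[img]

def neighborhood_process(img, kernel):
    """Neighborhood-type process: each output pixel depends on a small
    window of input pixels (e.g. a 3x3 smoothing or sharpening filter)."""
    h, w = img.shape
    kh, kw = kernel.shape
    pad = ((kh // 2, kh // 2), (kw // 2, kw // 2))
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255).astype(np.uint8)

# Example usage on an 8-bit grayscale image (hypothetical test data)
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
gamma_lut = (255 * (np.arange(256) / 255.0) ** (1 / 2.2)).astype(np.uint8)
corrected = point_process(img, gamma_lut)
blurred = neighborhood_process(img, np.full((3, 3), 1 / 9.0))
```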

    Contextual algorithm for color quantization

    2003-2004 > Academic research: refereed > Publication in refereed journal. Version of Record. Published.

    A new framework for an electrophotographic printer model

    Digital halftoning is a printing technology that creates the illusion of continuous tone images for printing devices such as electrophotographic printers that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution which blurs the printed dots of the halftone image, creating the gray sensation of a continuous tone image. Because the printing process is imperfect, it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new types of halftone algorithms that are designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations, to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer specific, they reproduce unrealistic dot structures, and they are unable to adapt responses to new data. The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine. The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions. The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and the model operational parameters and operational limits were established. A method for the selection of training sets based on the morphological properties of the halftone patterns was also developed. The model is fast and has the capability to continue to learn with additional training. The model can be easily implemented because it only requires a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions for the tone and the spatial characteristics when modeling halftone patterns individually, and it provides close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research of this new printer model framework.
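    The model is described above as a bank of feed-forward neural networks, each one specialized in a group of textural features. The following is only a structural sketch of that idea in NumPy; the feature grouping, layer sizes, and class names are hypothetical, training is omitted, and it is not the dissertation's implementation.

```python
import numpy as np

class FeedForwardNet:
    """One member of the bank: a small MLP mapping halftone-derived
    features to predicted texture parameters for one feature group."""
    def __init__(self, n_in, n_hidden, n_out, rng):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def predict(self, x):
        h = np.tanh(x @ self.W1 + self.b1)   # hidden layer
        return h @ self.W2 + self.b2          # linear output (training omitted)

class TexturePrinterModel:
    """Bank of networks, one per assumed group of texture parameters."""
    def __init__(self, group_sizes, n_in=32, n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.nets = [FeedForwardNet(n_in, n_hidden, n_out, rng)
                     for n_out in group_sizes]

    def predict(self, halftone_features):
        # Concatenate per-group predictions into one texture description.
        return np.concatenate([net.predict(halftone_features)
                               for net in self.nets])

# Hypothetical usage: 32 input features, three groups of texture parameters
model = TexturePrinterModel(group_sizes=[8, 12, 6])
features = np.random.default_rng(1).normal(size=32)
predicted_texture = model.predict(features)   # shape (26,)
```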

    Optimal Dither and Noise Shaping in Image Processing

    Dithered quantization and noise shaping are well known in the audio community. The image processing community seems to be aware of this same theory only in bits and pieces, and frequently under conflicting terminology. This thesis attempts to show that dithered quantization of images is an extension of dithered quantization of audio signals to higher dimensions. Dithered quantization, or "threshold modulation", is investigated as a means of suppressing undesirable visual artifacts during the digital quantization, or requantization, of an image. Special attention is given to the statistical moments of the resulting error signal. Afterwards, noise shaping, or "error diffusion", methods are considered to try to improve on the dithered quantization technique. We also take time to develop the minimum-phase property for two-dimensional systems. This leads to a natural extension of Jensen's Inequality and the Hilbert transform relationship between the log-magnitude and phase of a two-dimensional system. We then describe how these developments are relevant to image processing.
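    As a concrete illustration of the two techniques the thesis compares, the sketch below (NumPy; the number of output levels, dither amplitude, and test image are assumptions) requantizes an 8-bit image first with TPDF dither added before the quantizer, and then with Floyd-Steinberg-style error diffusion as a simple form of noise shaping.

```python
import numpy as np

def dithered_quantize(img, levels, rng):
    """Dithered quantization: add TPDF dither (sum of two uniform noises,
    one quantizer step peak-to-peak each) before rounding, which
    decorrelates the error from the signal and suppresses contouring."""
    step = 255.0 / (levels - 1)
    dither = (rng.uniform(-0.5, 0.5, img.shape) +
              rng.uniform(-0.5, 0.5, img.shape)) * step
    q = np.round((img + dither) / step) * step
    return np.clip(q, 0, 255)

def error_diffusion(img, levels):
    """Noise shaping / error diffusion: push each pixel's quantization
    error onto not-yet-processed neighbors (Floyd-Steinberg weights)."""
    step = 255.0 / (levels - 1)
    work = img.astype(np.float64).copy()
    h, w = work.shape
    out = np.zeros_like(work)
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = np.clip(np.round(old / step) * step, 0, 255)
            out[y, x] = new
            err = old - new
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out

# Example usage on a smooth ramp, where undithered quantization would contour
rng = np.random.default_rng(0)
gradient = np.tile(np.linspace(0, 255, 256), (64, 1))
dithered = dithered_quantize(gradient, levels=4, rng=rng)
diffused = error_diffusion(gradient, levels=4)
```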

    Scanline calculation of radial influence for image processing

    Efficient methods for the calculation of radial influence are described and applied to two image processing problems, digital halftoning and mixed content image compression. The methods operate recursively on scanlines of image values, spreading intensity from scanline to scanline in proportions approximating a Cauchy distribution. For error diffusion halftoning, experiments show that this recursive scanline spreading provides an ideal pattern of distribution of error. Error diffusion using masks generated to provide this distribution of error alleviates error diffusion "worm" artifacts. The recursive scanline-by-scanline application of a spreading filter and a complementary filter can be used to reconstruct an image from its horizontal and vertical pixel difference values. When combined with the use of a downsampled image, the reconstruction is robust to incomplete and quantized pixel difference data. Such gradient field integration methods are described in detail, proceeding from representation of images by gradient values along contours through to a variety of efficient algorithms. Comparisons show that this form of gradient field integration by convolution provides reduced distortion compared to other high-speed gradient integration methods. The reduced distortion can be attributed to success in approximating a radial pattern of influence. An approach to edge-based image compression is proposed using integration of gradient data along edge contours and regularly sampled low resolution image data. This edge-based image compression model is similar to previous sketch-based image coding methods but allows a simple and efficient calculation of an edge-based approximation image. A low complexity implementation of this approach to compression is described. The implementation extracts and represents gradient data along edge contours as pixel differences and calculates an approximate image by performing integration of pixel difference data by scanline convolution. The implementation was developed as a prototype for compression of mixed content image data in printing systems. Compression results are reported, and strengths and weaknesses of the implementation are identified.
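    The following is only a rough sketch of the recursive scanline-spreading idea described above, written in NumPy with an assumed horizontal filter and decay factor; it is not the paper's algorithm, but it shows how carrying a horizontally smoothed influence row from scanline to scanline lets a single source pixel spread into a widening, approximately radial footprint.

```python
import numpy as np

def scanline_spread(src, horiz=(0.25, 0.5, 0.25), decay=0.9):
    """Rough sketch of recursive scanline spreading: each row's accumulated
    influence is smoothed horizontally and carried (with decay) into the
    next row, so a single source pixel spreads into a widening footprint
    over subsequent scanlines. horiz and decay are assumed parameters,
    not values from the paper."""
    h, w = src.shape
    out = np.zeros_like(src, dtype=np.float64)
    carry = np.zeros(w)
    kernel = np.asarray(horiz)
    for y in range(h):
        carry = carry + src[y]                 # inject this scanline's values
        out[y] = carry
        # horizontal spreading before passing influence to the next scanline
        carry = np.convolve(carry, kernel, mode="same") * decay
    return out

# A single bright pixel spreads into a widening footprint on the rows below it
impulse = np.zeros((32, 33))
impulse[0, 16] = 1.0
footprint = scanline_spread(impulse)
```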

    Evolution of error diffusion
