    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
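
    Purely as an illustration of the vector-space notation referred to above (not an example taken from the survey itself), a color can be treated as a 3-vector and a linear device model as a 3x3 matrix; the sketch below uses the commonly published linear-sRGB-to-XYZ matrix for a D65 white point.

        import numpy as np

        # Commonly published linear-sRGB -> CIE XYZ matrix (D65 white point).
        # Illustration of vector-space color notation only; the survey covers
        # far more general device models.
        M_SRGB_TO_XYZ = np.array([
            [0.4124, 0.3576, 0.1805],
            [0.2126, 0.7152, 0.0722],
            [0.0193, 0.1192, 0.9505],
        ])

        def srgb_linear_to_xyz(rgb):
            """Color as a vector, linear device model as a matrix."""
            return M_SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)

        print(srgb_linear_to_xyz([1.0, 1.0, 1.0]))  # ~ D65 white: [0.9505, 1.0, 1.089]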

    Visual-Based error diffusion for printers

    An approach to halftoning is presented that incorporates a printer model and also explicitly uses a human visual model. Conventional methods, such as clustered-dot or dispersed-dot screening, do not address the gray-level distortion of printers and use the eye only implicitly as a lowpass filter. Error diffusion accounts for quantization errors when processing subsequent pixels so as to minimize the overall mean-square error. A recently developed model-based halftoning technique eliminates the effect of printer luminance distortion, but it does not consider the filtering action of the eye; that is, some artifacts of standard error diffusion remain when the printing resolution and viewing distance change. Another visual error diffusion method incorporates the human visual filter into error diffusion and yields improved noise characteristics and better resolution in structured image regions, but gray levels are still distorted. Experiments show that human viewers judge the quality of a halftone image based mainly on the region exhibiting the worst local error, and that low-frequency distortions introduced by the halftoning process produce more visually annoying artifacts than high-frequency distortion. Consequently, we adjust the correction factors of the feedback filter according to local characteristics and adjust the dot patterns for some gray levels to minimize the visually blurred local error. Based on the human visual model we obtain a visual-based error diffusion algorithm, and we further incorporate the printer model to correct the printing distortion. The artifacts associated with standard error diffusion are expected to be eliminated or reduced, and better print quality should therefore be achieved. In addition to qualitative analysis, we also introduce a subjective evaluation of the algorithms. The tests show that the algorithms developed here improve the performance of error diffusion for printers.
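
    For reference, the sketch below is a minimal baseline Floyd-Steinberg error diffusion loop, the kind of standard algorithm the paper builds on; the visual-based method described above additionally weights the feedback by a human visual filter and corrects dot patterns with a printer model, neither of which is reproduced here.

        import numpy as np

        def floyd_steinberg(img):
            """Baseline error diffusion; img is a 2-D float array in [0, 1]."""
            out = img.astype(float).copy()
            h, w = out.shape
            for y in range(h):
                for x in range(w):
                    old = out[y, x]
                    new = 1.0 if old >= 0.5 else 0.0       # fixed mid-gray threshold
                    out[y, x] = new
                    err = old - new
                    # Diffuse the quantization error to not-yet-processed neighbors.
                    if x + 1 < w:
                        out[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            out[y + 1, x - 1] += err * 3 / 16
                        out[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            out[y + 1, x + 1] += err * 1 / 16
            return out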

    Analysis of random halftone dithering using second order statistics

    An analytical approach is proposed to explain the appearance of unwanted low-frequency artifacts in the random-dithering halftoning process. The solution uses a theorem which relates the correlation of the continuous gray-level input signal to the correlation of the binary halftone output signal. The numerical solution of this relationship suggests that: (1) the introduction of low-frequency artifacts is inevitable; (2) the effect is stronger for mean gray levels farther from mid-gray; and (3) high-frequency information in the input signal is attenuated more than low-frequency information.
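
    The toy experiment below is an empirical check of the setup, not the paper's analytical derivation: a constant gray patch is halftoned by random dithering, and the power spectrum of the binary output is estimated. The flat (white-noise) spectrum means a substantial share of the energy sits at low frequencies, which is where halftone noise is most visible.

        import numpy as np

        rng = np.random.default_rng(0)
        gray = 0.2                                   # constant gray level, far from mid-gray
        patch = np.full((256, 256), gray)
        halftone = (rng.random(patch.shape) < patch).astype(float)   # random dithering

        # Empirical power spectrum of the binary output (mean removed).
        spec = np.abs(np.fft.fftshift(np.fft.fft2(halftone - halftone.mean()))) ** 2
        low = spec[96:160, 96:160].sum()             # energy in the lowest-frequency band
        print("low-frequency energy fraction:", low / spec.sum())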

    Separating a Real-Life Nonlinear Image Mixture

    When acquiring an image of a paper document, the image printed on the back page sometimes shows through. The mixture of the front- and back-page images thus obtained is markedly nonlinear and therefore constitutes a good real-life test case for nonlinear blind source separation. This paper addresses a difficult version of this problem, corresponding to the use of "onion skin" paper, which results in a relatively strong nonlinearity of the mixture that becomes close to singular in the lighter regions of the images. The separation is achieved through the MISEP technique, an extension of the well-known INFOMAX method. The separation results are assessed with objective quality measures; they show an improvement over the results obtained with linear separation, but leave room for further gains.
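
    The toy generator below is an assumed multiplicative show-through model, useful only for producing synthetic test mixtures; it is not the paper's physical model and says nothing about the MISEP separation itself.

        import numpy as np

        def toy_show_through(front, back, strength=0.4):
            """Toy nonlinear 'show-through' mixture (assumed for illustration only).
            Images are in [0, 1] with 1 = white paper; the back page is mirrored
            horizontally, as it appears when seen through the sheet."""
            back_seen = back[:, ::-1]
            mixed = front * (1.0 - strength * (1.0 - back_seen))
            return np.clip(mixed, 0.0, 1.0)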

    Perceptual error optimization for Monte Carlo rendering

    Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
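
    The sketch below shows the kind of perceptually weighted objective involved, with a Gaussian used as an assumed stand-in for the HVS point spread function (the paper's exact kernels, spectra, and optimizers are not reproduced): the error image is low-pass filtered by the visual PSF before its norm is taken, which is the quantity a blue-noise error distribution keeps small.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def perceptual_error(estimate, reference, sigma=1.5):
            """Norm of the per-pixel error after filtering by a Gaussian stand-in
            for the HVS point spread function (sigma is an assumed, illustrative value)."""
            err = estimate.astype(float) - reference.astype(float)
            return np.sqrt(np.mean(gaussian_filter(err, sigma) ** 2))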

    FPGA-Based Parallel Implementation of Stacked Error Diffusion Algorithm

    Digital halftoning is a crucial technique used in digital printers to convert a continuous-tone image into a pattern of black and white dots. Halftoning is used because printers have a limited set of inks and cannot reproduce all the intensities of a continuous-tone image. Error diffusion is a halftoning algorithm that iteratively quantizes pixels in a neighborhood-dependent fashion. This thesis focuses on the development and design of a parallel, scalable hardware architecture for high-performance implementation of a high-quality Stacked Error Diffusion algorithm. The algorithm is described in ‘C’ and requires significant processing time when implemented on a conventional CPU. A new hardware processor architecture is therefore developed to implement the algorithm and is implemented on and tested with a Xilinx Virtex 5 FPGA chip. There is an extraordinary decrease in the run time of the algorithm when run on the newly proposed parallel architecture in FPGA technology compared to execution on a single CPU. The new parallel architecture is described using the Verilog Hardware Description Language. Post-synthesis and post-implementation performance-based Hardware Description Language (HDL) simulation validation of the new parallel architecture is achieved using the ModelSim CAD simulation tool.
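
    As a software-level illustration of why error diffusion admits a parallel implementation at all (this is not the thesis's Stacked Error Diffusion algorithm or its Verilog architecture), the sketch below groups pixels into anti-diagonal wavefronts: with Floyd-Steinberg-style weights, pixel (y, x) only needs results from (y, x-1) and from row y-1 up to column x+1, so all pixels sharing the index x + 2*y can be quantized concurrently.

        def wavefront_schedule(height, width):
            """Group pixels into wavefronts of mutually independent pixels (illustration only)."""
            steps = {}
            for y in range(height):
                for x in range(width):
                    steps.setdefault(x + 2 * y, []).append((y, x))
            return [steps[k] for k in sorted(steps)]

        # A 4 x 8 image needs 14 sequential wavefronts instead of 32 pixel-by-pixel steps.
        print(len(wavefront_schedule(4, 8)))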

    Threshold modulation in 1-D error diffusion

    Error diffusion (ED) is widely used in digital imaging as a binarization process that preserves fine detail and produces pleasing images. The process resembles the human visual system in that it exhibits an intrinsic edge enhancement. An additional extrinsic edge enhancement can be controlled by varying the threshold. None of these characteristics has yet been fully explained, due to the lack of a suitable mathematical model of the algorithm. The extrinsic sharpening introduced in 1-D error diffusion is the subject of this thesis. An available pulse density modulation (PDM) model derived from frequency modulation is used to gain a better understanding of the variables in ED. As a result, threshold variation fits the model as an additional phase modulation. Equivalent images are obtained either by applying ED with threshold modulation or by preprocessing the image with an appropriate convolution mask and then running standard ED.
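
    A minimal 1-D sketch of the process under study is given below: single-tap error diffusion in which the decision threshold may be modulated by the input. The specific modulation law (a gain k acting on the deviation from mid-gray) is an assumption made for illustration; the thesis's PDM/phase-modulation analysis is not reproduced.

        import numpy as np

        def error_diffusion_1d(signal, k=0.0):
            """1-D error diffusion with all quantization error passed to the next sample.
            k = 0 gives standard ED; k > 0 applies an assumed input-dependent threshold
            modulation that illustrates the extrinsic sharpening discussed above."""
            out = np.zeros(len(signal))
            err = 0.0
            for n, u in enumerate(signal):
                v = u + err                          # input plus diffused error
                threshold = 0.5 + k * (0.5 - u)      # modulated decision threshold
                out[n] = 1.0 if v >= threshold else 0.0
                err = v - out[n]                     # carry the error to the next sample
            return out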