120 research outputs found

    Digital Color Imaging

    Full text link
    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
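
    As a minimal illustration of the vector-space view of color used in this survey (a sketch, not code from the paper itself), a linear device model treats each color as a 3-vector and maps it through a 3×3 matrix; the standard linear-sRGB-to-CIE-XYZ matrix serves as an example:

        import numpy as np

        # Standard matrix mapping linear sRGB (D65 white point) to CIE XYZ.
        M_RGB_TO_XYZ = np.array([
            [0.4124, 0.3576, 0.1805],
            [0.2126, 0.7152, 0.0722],
            [0.0193, 0.1192, 0.9505],
        ])

        def rgb_to_xyz(rgb_linear):
            """Treat a color as a vector in R^3 and apply the device matrix."""
            return M_RGB_TO_XYZ @ np.asarray(rgb_linear, dtype=float)

        print(rgb_to_xyz([1.0, 1.0, 1.0]))  # white maps to roughly [0.9505, 1.0, 1.0890]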

    Simulation of an electrophotographic halftone reproduction

    Get PDF
    The robustness of three digital halftoning techniques is simulated for a hypothetical electrophotographic laser printer subjected to dynamic environmental conditions over a copy run of one thousand images. Mathematical electrophotographic models have primarily concentrated on solid-area reproductions under time-invariant conditions. The models used in this study predict the behavior of complex image distributions at various stages in the electrophotographic process. The system model is divided into seven subsystems: Halftoning, Laser Exposure, Photoconductor Discharge, Toner Development, Transfer, Fusing, and Image Display. Spread functions associated with laser spot intensity, charge migration, and toner transfer and fusing are used to predict the electrophotographic system response for continuous-tone and halftone reproduction.
    Many digital halftoning techniques have been developed for converting continuous-tone images to binary (halftone) images. The general objective of halftoning is to approximate the intermediate gray levels of a continuous-tone image with a binary (black-and-white) imaging system. Three major halftoning techniques currently used are Ordered-Dither, Clustered-Dot, and Error Diffusion. These halftoning algorithms are included in the simulation model.
    Simulation in electrophotography can be used to better understand the relationship between electrophotographic parameters and image quality, and to observe the effects of time-variant degradation on electrophotographic parameters and materials. Simulation programs, written in FORTRAN and SLAM (Simulation Language Alternative Modeling), have been developed to investigate the effects of system degradation on halftone image quality. The programs are designed for continuous simulation to characterize the behavior or condition of the electrophotographic system. The simulation language provides the algorithms needed to obtain values for the variables described by the time-variant equations, to maintain a history of those values during the simulation run, and to report statistical information on time-dependent variables. Electrophotographic variables associated with laser intensity, initial photoconductor surface voltage, and residual voltage are degraded over a simulated run of one thousand copies. These results are used to predict the degraded electrophotographic system response and to investigate the behavior of the various halftone techniques under dynamic system conditions.
    Two techniques have been applied to characterize halftone image quality. Tone Reproduction Curves are used to characterize and record the tone reproduction capability of the electrophotographic system over a simulated copy run; density measurements are collected and statistical inferences drawn using SLAM. The sharpness of an image is typically characterized by a system modulation transfer function (MTF), but the mathematical models used to describe the subsystem transforms of an electrophotographic system involve non-linear functions. One means of predicting this non-linear system response is to use a Chirp function as the input to the model and to compare the reproduced modulation to that of the original. Since the imaging system is non-linear, its response cannot be described by an MTF; it is instead described by an Input Response Function, which was used to characterize the robustness of halftone patterns at various frequencies.
    Simulated images were also generated throughout the simulation run and used to evaluate image sharpness and resolution. The data generated from each of the electrophotographic simulation models clearly indicate that image stability and image sharpness are influenced not by dot orientation but by the type of halftoning operation used. Error Diffusion is significantly more variable than Clustered-Dot and Dispersed-Dot at low to mid densities, but significantly less variable than the ordered dither patterns at high densities. Images generated with Error Diffusion are also sharper than those generated with the Clustered-Dot and Dispersed-Dot techniques, while the resolution capability of the three techniques remained the same and degraded equally over each simulation run.
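
    Of the three halftoning techniques fed into the simulation, error diffusion is the simplest to sketch in code. The following minimal Python illustration uses the common Floyd-Steinberg weights (the abstract does not specify a particular error-diffusion kernel) and is not the FORTRAN/SLAM simulation code described above:

        import numpy as np

        def error_diffusion(gray):
            """Binarize a continuous-tone image with values in [0, 1] by error
            diffusion, using the Floyd-Steinberg weights (one common choice)."""
            img = np.asarray(gray, dtype=float).copy()
            h, w = img.shape
            out = np.zeros((h, w))
            for y in range(h):
                for x in range(w):
                    new = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - new
                    out[y, x] = new
                    # Push the quantization error onto not-yet-processed neighbors.
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            img[y + 1, x - 1] += err * 3 / 16
                        img[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            img[y + 1, x + 1] += err * 1 / 16
            return out

    Clustered-dot and dispersed-dot halftones, by contrast, threshold each pixel against a fixed, tiled dither matrix, which is why their spatial structure differs from the unstructured pattern produced by diffusing the error as above.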

    Perceptual error optimization for Monte Carlo rendering

    Full text link
    Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from the halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over the prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
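
    A rough sketch of the kind of perceptual metric the abstract describes, with a Gaussian standing in for the HVS point spread function (the paper takes its PSF models from the halftoning literature; the filter and sigma below are illustrative assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def perceived_error(estimate, reference, sigma_px=1.5):
            """RMS of the error image after filtering with an HVS-like PSF.

            A Gaussian is used here only as a placeholder PSF; sigma_px is an
            illustrative, viewing-distance-dependent parameter."""
            err = np.asarray(estimate, float) - np.asarray(reference, float)
            return float(np.sqrt(np.mean(gaussian_filter(err, sigma=sigma_px) ** 2)))

    Under such a metric, rearranging the same per-pixel errors into a high-frequency (blue-noise) layout lowers the perceived error, because the low-pass PSF attenuates high-frequency content; this is the intuition behind distributing the Monte Carlo error as blue noise in image space.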

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Get PDF
    Over-complete bases offer the flexibility to represent a much wider range of signals with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has recently become a trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation, and linear unmixing. This dissertation studies dictionary learning for single and coupled feature spaces and its application to image restoration tasks. A Bayesian strategy using a beta process prior is applied to solve both problems. First, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled-feature-space dictionary learning algorithms, our algorithm not only provides dictionaries that are customized to each feature space, but also yields a more consistent and accurate mapping between the two spaces. This is due to a unique property of the beta process model: the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms, with the same sparsity but different values, in the coupled feature spaces, thus providing a consistent and accurate mapping between them. Two applications, single-image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in the corresponding domains.
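
    A toy sketch (random data, not the Bayesian inference itself) of the decomposition the abstract exploits: the beta process factors each sparse code into binary atom indicators, which can be shared across the coupled feature spaces, and real-valued weights, which can differ between them. The dimensions and the low-/high-resolution interpretation below are illustrative assumptions:

        import numpy as np

        rng = np.random.default_rng(0)
        d_lo, d_hi, K = 25, 100, 64               # feature dimensions and number of dictionary atoms

        D_lo = rng.standard_normal((d_lo, K))     # dictionary for the first feature space
        D_hi = rng.standard_normal((d_hi, K))     # dictionary for the coupled feature space

        z = rng.binomial(1, 0.1, size=K)          # binary atom indicators, shared by both spaces
        s_lo = rng.standard_normal(K)             # weights in the first space
        s_hi = rng.standard_normal(K)             # weights in the coupled space

        x_lo = D_lo @ (z * s_lo)                  # same active atoms and sparsity ...
        x_hi = D_hi @ (z * s_hi)                  # ... but different values in each space

    Because the two codes activate exactly the same atoms, a patch represented in one space maps consistently to its counterpart in the other, which is the property the method relies on for super-resolution and inverse halftoning.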

    Black-box printer models and their applications

    Get PDF
    In the electrophotographic printing process, the deposition of toner within the area of a given printer-addressable pixel is strongly influenced by the values of its neighboring pixels. The interaction between neighboring pixels, commonly referred to as dot gain, is complicated. Printer models developed from a pre-designed test page can either be embedded in the halftoning algorithm or used to predict the printed halftone image at the input to an algorithm for assessing print quality. In our research, we examine the potential influence of a larger neighborhood (45×45) of the digital halftone image on the measured value of a printed pixel at the center of that neighborhood, and we introduce a feasible strategy for capturing this contribution. We developed a series of six models of differing accuracy and computational complexity to account for local neighborhood effects and the influence of a 45×45 neighborhood of pixels on tone development at the central printer-addressable pixel. All of these models are referred to as Black Box Models (BBMs), since they are based solely on measuring what is on the printed page and do not incorporate any information about the marking process itself. We developed two different types of printer models, a Standard Definition (SD) BBM and a High Definition (HD) BBM, using an Epson Expression 10000XL (Epson America, Inc., Long Beach, CA, USA) flatbed scanner operated at 2400 dpi as the capture device, under different analysis resolutions. The experimental results show that the larger-neighborhood models yield a significant improvement in the accuracy of the prediction of the pixel values of the printed halftone image. The sample function generation black box model (SFG-BBM) is an extension of the SD-BBM that adds printing variation to the mean prediction, improving the prediction by more accurately matching the characteristics of the actual printed image. We also followed a structure similar to that used to develop our series of BBMs to develop a two-stage toner usage predictor for electrophotographic printers. We first obtain, on a pixel-by-pixel basis, the predicted absorptance of the printed and scanned page from the digital input using a BBM. We then form a weighted sum of these predicted pixel values to predict the overall toner usage on the printed page. Our two-stage predictor significantly outperforms the existing method, which is based on a simple pixel-counting strategy, in terms of both the accuracy and the robustness of the prediction.
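
    A minimal sketch of the two-stage structure of the toner-usage predictor (the printer model and weights below are placeholders; the actual BBMs are fitted from scanned test pages and 45×45 neighborhoods):

        import numpy as np

        def predict_toner_usage(halftone, printer_model, weights=None):
            """Stage 1: a black-box printer model predicts per-pixel absorptance
            from the digital halftone.  Stage 2: a weighted sum of those
            predictions gives the page-level toner-usage estimate."""
            absorptance = printer_model(halftone)            # stage 1
            if weights is None:
                weights = np.ones_like(absorptance)
            return float(np.sum(weights * absorptance))      # stage 2

        def pixel_count_estimate(halftone):
            """The simple pixel-counting baseline the predictor is compared against."""
            return float(np.count_nonzero(halftone))

        # Example with a toy stand-in model that adds 20% uniform dot gain.
        halftone = (np.random.default_rng(1).random((64, 64)) < 0.3).astype(float)
        toy_model = lambda h: np.clip(h * 1.2, 0.0, 1.0)
        print(predict_toner_usage(halftone, toy_model), pixel_count_estimate(halftone))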

    The development of the toner density sensor for closed-loop feedback laser printer calibration

    Get PDF
    A new infrared (IR) sensor was developed for application in closed-loop feedback printer calibration as it relates to monochrome (black toner only) laser printers. The toner density IR sensor (TDS) was introduced in the early 1980s; however, due to cost and the limitations of the technologies of the time, implementation was not accomplished until within the past decade. Existing IR sensor designs do not discuss or address:
    • EMI (electromagnetic interference) effects on the sensor due to EP (electrophotography) components
    • Design considerations for environmental conditions
    • Sensor response time as it affects printer process speed
    The toner density sensor (TDS) implemented in the Lexmark E series printer reduces these problems and eliminates the current, traditional “open-loop” calibration process (in which the feedback consists of parameters that do not directly reflect print darkness, such as page count, toner level, etc.), where print darkness is adjusted using previously calculated and stored EP process parameters. The historical process cannot capture the cartridge component variation and environmental changes that affect print darkness variation. The TDS captures real-time data that are used to calculate EP process parameters for adjusting print darkness, greatly reducing the variations left uncontrolled by the historical printer calibration. Specifically, the first and primary purpose of this research is to reduce print darkness variation using the TDS. The second goal is to mitigate the TDS EMI implementation issue for reliable data accuracy.
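
    A minimal sketch of how a closed-loop darkness calibration might use the TDS reading (the parameter being adjusted, the gain, and the units are illustrative assumptions; the thesis does not publish the actual control law):

        def calibrate_darkness(read_tds, set_ep_parameter, target_density,
                               setting=400.0, gain=50.0, tolerance=0.02, max_iters=10):
            """Closed-loop calibration: print a test patch, read its toner density
            with the IR sensor, and nudge an EP process parameter until the
            measured darkness is within tolerance of the target."""
            for _ in range(max_iters):
                set_ep_parameter(setting)            # apply the current EP setting
                error = target_density - read_tds()  # compare sensor reading to target
                if abs(error) <= tolerance:
                    break
                setting += gain * error              # simple proportional correction
            return setting

    The historical open-loop approach, by contrast, picks EP settings from stored tables indexed by indirect quantities such as page count and toner level, so it cannot respond to the cartridge and environmental variation that the sensor measures directly.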