232 research outputs found

    An importance driven genetic algorithm for the halftoning process

    Most evolutionary approaches to halftoning have been concerned with its paramount goal: achieving an accurate reproduction of local grayscale intensities while avoiding the introduction of artifacts. A secondary concern in halftoning has been the preservation of edges in the halftoned image. In this paper, we introduce a new evolutionary approach based on an importance function. This approach has two main characteristics. First, it can produce results similar to many other halftoning techniques. Second, by changing the chosen importance function, areas of the image with high variance can be highlighted.
    III Workshop de Computación Gráfica, Imágenes y Visualización (WCGIV). Red de Universidades con Carreras en Informática (RedUNCI).
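The abstract's second characteristic, steering the halftoner toward high-variance areas, can be illustrated with a toy sketch (not the paper's genetic algorithm): a local-variance map, normalized to [0, 1], that an importance-driven halftoner could use to spend more dots on detailed regions. The function name and window size below are illustrative choices.

```python
import numpy as np

def variance_importance(img, k=3):
    """Local-variance importance map: large where detail and edges are.

    Computes a sliding k-by-k window variance and normalizes it to
    [0, 1]; an importance-driven halftoner could concentrate drawing
    primitives where this map is large.
    """
    img = img.astype(float)
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    # Var = E[x^2] - E[x]^2 over each k-by-k window.
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    mean = win.mean(axis=(-1, -2))
    var = (win ** 2).mean(axis=(-1, -2)) - mean ** 2
    return var / var.max() if var.max() > 0 else var

step = np.zeros((8, 8)); step[:, 4:] = 255.0   # a vertical step edge
imp = variance_importance(step)                 # importance peaks at the edge
```

A flat region produces zero importance everywhere, so all detail budget flows to edges and texture.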

    Colour importance driven half-toning

    Importance-driven half-toning (IDH) is a method for reduced-resource rendering of grayscale images. It uses an image pyramid to distribute drawing primitives across the image according to an importance function. The original paper showed how a wide variety of effects can be achieved by combining importance functions and drawing primitives. In this paper we show that, by storing multiple pyramid representations of the image, we can use different geometric primitives in different areas of the image. We then explore the application of IDH to colour, showing that each of the red, green, and blue bands of the image can serve as a basis for IDH, and present images obtained in this manner. Finally, we introduce a drawing mode based on a teaching drawing style (esgrafiado with black ink), in which a black layer is scratched off a coloured background. The resulting system provides the artist with a large number of parameters with which to render stylized images. We conclude the paper with examples of our technique applied to standard images.
    Eje: Imágenes. Red de Universidades con Carreras en Informática (RedUNCI).

    Interactive Geometry Remeshing

    We present a novel technique, both flexible and efficient, for interactive remeshing of irregular geometry. First, the original (arbitrary genus) mesh is substituted by a series of 2D maps in parameter space. Using these maps, our algorithm is then able to take advantage of established signal processing and halftoning tools that offer real-time interaction and intricate control. The user can easily combine these maps to create a control map, a map which controls the sampling density over the surface patch. This map is then sampled at interactive rates, allowing the user to easily design a tailored resampling. Once this sampling is complete, a Delaunay triangulation and fast optimization are performed to perfect the final mesh. As a result, our remeshing technique is extremely versatile and general, being able to produce arbitrarily complex meshes with a variety of properties including uniformity, regularity, semiregularity, curvature-sensitive resampling, and feature preservation. We provide a high level of control over the sampling distribution, allowing the user to interactively custom-design the mesh based on their requirements, thereby increasing their productivity in creating a wide variety of meshes.

    A User Oriented Image Retrieval System using Halftoning BBTC

    The objective of this paper is to develop a content-based image retrieval (CBIR) system that exploits the low complexity of Ordered Dither Block Truncation Coding (ODBTC), a halftoning-based technique, for the generation of image content descriptors. In the encoding step, ODBTC compresses an image block into corresponding quantizers and a bitmap image. Two image features are proposed to index an image, namely co-occurrence features (CCF) and bit pattern features (BPF), which are generated from the ODBTC-encoded data streams without performing the decoding process. The CCF and BPF of an image are derived from the two quantizers and the bitmap, respectively, with the aid of visual codebooks. The proposed block-truncation-coding-based image retrieval method is not only convenient for image compression but also satisfies the demands of users by offering an effective descriptor for indexing images in a CBIR system.
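The per-block data stream the abstract describes, two quantizers plus a bitmap, comes from classic Block Truncation Coding. A minimal sketch of plain BTC (ODBTC additionally thresholds against a dither array rather than the block mean) might look like:

```python
import numpy as np

def btc_encode(block):
    """Encode one grayscale block with classic Block Truncation Coding.

    Returns a binary bitmap plus two quantizer levels (low, high) --
    the kind of per-block data stream that ODBTC-style descriptors
    are built from without ever decoding the image.
    """
    mean = block.mean()
    bitmap = block >= mean                          # 1 = pixel at/above block mean
    hi = block[bitmap].mean() if bitmap.any() else mean
    lo = block[~bitmap].mean() if (~bitmap).any() else mean
    return bitmap, lo, hi

def btc_decode(bitmap, lo, hi):
    """Reconstruct the block from its bitmap and two quantizers."""
    return np.where(bitmap, hi, lo)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 215],
                  [ 9, 14, 198, 220],
                  [12, 10, 202, 208]], dtype=float)
bitmap, lo, hi = btc_encode(block)
recon = btc_decode(bitmap, lo, hi)
```

In a CBIR setting the bitmaps would feed a bit-pattern descriptor and the (lo, hi) pairs a co-occurrence descriptor, as the abstract outlines.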

    Perceptual error optimization for Monte Carlo rendering

    Realistic image synthesis involves computing high-dimensional light transport integrals which in practice are numerically estimated using Monte Carlo integration. The error of this estimation manifests itself in the image as visually displeasing aliasing or noise. To ameliorate this, we develop a theoretical framework for optimizing screen-space error distribution. Our model is flexible and works for arbitrary target error power spectra. We focus on perceptual error optimization by leveraging models of the human visual system's (HVS) point spread function (PSF) from the halftoning literature. This results in a specific optimization problem whose solution distributes the error as visually pleasing blue noise in image space. We develop a set of algorithms that provide a trade-off between quality and speed, showing substantial improvements over the prior state of the art. We perform evaluations using both quantitative and perceptual error metrics to support our analysis, and provide extensive supplemental material to help evaluate the perceptual improvements achieved by our methods.
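The core idea, redistributing a fixed set of per-pixel errors so that a lowpass HVS model passes as little of them as possible, can be caricatured with a greedy pixel-swap loop. This is a toy stand-in for the paper's algorithms: the Gaussian here merely stands in for the HVS point spread function, and the swap schedule is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceptual_energy(err, sigma=1.0):
    """Energy of the error after a Gaussian 'HVS point-spread' filter.

    A lowpass PSF penalizes low-frequency (clumpy) error; pushing the
    energy into high frequencies yields a blue-noise error layout.
    """
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 0, err)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, blurred)
    return float((blurred ** 2).sum())

def greedy_swap(err, iters=2000):
    """Randomly swap pixel errors, keeping swaps that lower the filtered energy."""
    err = err.copy()
    e = perceptual_energy(err)
    h, w = err.shape
    for _ in range(iters):
        (y0, x0), (y1, x1) = rng.integers(0, [h, w], size=(2, 2))
        err[y0, x0], err[y1, x1] = err[y1, x1], err[y0, x0]
        e_new = perceptual_energy(err)
        if e_new < e:
            e = e_new                                   # keep the improving swap
        else:
            err[y0, x0], err[y1, x1] = err[y1, x1], err[y0, x0]  # undo
    return err, e

white = rng.standard_normal((16, 16))   # white-noise error field
blue, e_opt = greedy_swap(white)        # same values, blue-noise arrangement
```

Note the swaps only permute the error values: the total error is unchanged, only its spatial arrangement (and hence its power spectrum) improves.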


    Simulation of an electrophotographic halftone reproduction

    The robustness of three digital halftoning techniques is simulated for a hypothetical electrophotographic laser printer subjected to dynamic environmental conditions over a copy run of one thousand images. Mathematical electrophotographic models have primarily concentrated on solid area reproductions under time-invariant conditions. The models used in this study predict the behavior of complex image distributions at various stages in the electrophotographic process. The system model is divided into seven subsystems: Halftoning, Laser Exposure, Photoconductor Discharge, Toner Development, Transfer, Fusing, and Image Display. Spread functions associated with laser spot intensity, charge migration, and toner transfer and fusing are used to predict the electrophotographic system response for continuous and halftone reproduction. Many digital halftoning techniques have been developed for converting from continuous-tone to binary (halftone) images. The general objective of halftoning is to approximate the intermediate gray levels of continuous-tone images with a binary (black-and-white) imaging system. Three major halftoning techniques currently used are Ordered-Dither, Cluster-Dot, and Error Diffusion. These halftoning algorithms are included in the simulation model. Simulation in electrophotography can be used to better understand the relationship between electrophotographic parameters and image quality, and to observe the effects of time-variant degradation on electrophotographic parameters and materials. Simulation programs, written in FORTRAN and SLAM (Simulation Language Alternative Modeling), have been developed to investigate the effects of system degradation on halftone image quality. The programs have been designed for continuous simulation to characterize the behavior or condition of the electrophotographic system.
The simulation language provides the necessary algorithms for obtaining values for the variables described by the time-variant equations, maintaining a history of values during the simulation run, and reporting statistical information on time-dependent variables. Electrophotographic variables associated with laser intensity, initial photoconductor surface voltage, and residual voltage are degraded over a simulated run of one thousand copies. These results are employed to predict the degraded electrophotographic system response and to investigate the behavior of the various halftone techniques under dynamic system conditions. Two techniques have been applied to characterize halftone image quality: Tone Reproduction Curves are used to characterize and record the tone reproduction capability of an electrophotographic system over a simulated copy run. Density measurements are collected and statistical inferences drawn using SLAM. Typically, the sharpness of an image is characterized by a system modulation transfer function (MTF). The mathematical models used to describe the subsystem transforms of an electrophotographic system involve non-linear functions. One means for predicting this non-linear system response is to use a Chirp function as the input to the model and then to compare the reproduced modulation to that of the original. Since the imaging system is non-linear, the system response cannot be described by an MTF, but rather by an Input Response Function. This function was used to characterize the robustness of halftone patterns at various frequencies. Simulated images were also generated throughout the simulation run and used to evaluate image sharpness and resolution. The data, generated from each of the electrophotographic simulation models, clearly indicate that image stability and image sharpness are not influenced by dot orientation, but rather by the type of halftoning operation used.
Error-Diffusion is significantly more variable than Clustered-Dot and Dispersed-Dot at low to mid densities. However, Error-Diffusion is significantly less variable than the ordered dither patterns at high densities. Also, images generated with Error-Diffusion are sharper than those generated using Clustered-Dot and Dispersed-Dot techniques, but the resolution capability of each of the techniques remained the same and degraded equally over each simulation run.
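Of the halftoning families compared above, Error Diffusion is the only sequential one. A minimal Floyd-Steinberg sketch (a standard error-diffusion kernel, not the dissertation's FORTRAN/SLAM code) shows how each pixel's quantization error propagates to as-yet-unvisited neighbours:

```python
import numpy as np

def floyd_steinberg(img):
    """Floyd-Steinberg error diffusion on an 8-bit grayscale image.

    Each pixel is thresholded to 0/255 and the quantization error is
    pushed onto unvisited neighbours with the classic 7/16, 3/16,
    5/16, 1/16 weights.
    """
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16       # right
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16  # below-left
                out[y + 1, x] += err * 5 / 16          # below
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16  # below-right
            # error falling outside the image boundary is simply lost
    return out.astype(np.uint8)

gray = np.full((32, 32), 128, dtype=np.uint8)   # mid-gray patch
ht = floyd_steinberg(gray)                      # binary, ~50% coverage
```

Because the error is carried forward rather than discarded, the local average of the binary output tracks the input gray level, which is exactly the tone-reproduction property the simulation measures.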

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Over-complete bases offer the flexibility to represent a much wider range of signals, using more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has become a recent trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation, and linear unmixing. This dissertation studies dictionary learning for single and coupled feature spaces and its application to image restoration tasks. A Bayesian strategy using a beta process prior is applied to both problems. First, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled-feature-space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to a unique property of the beta process model: the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm learns sparse representations that use the same dictionary atoms with the same sparsity pattern but different values in the coupled feature spaces, thus providing a consistent and accurate mapping between them. Two applications, single-image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach.
In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in the respective domains.
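The decomposition the abstract relies on, a sparse code split into binary atom indicators and real-valued weights, can be shown in a few lines. This is an illustrative toy, not the BP-JDL inference; the dictionary size and the choice of active atoms are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy over-complete dictionary: 8-dimensional signals, 20 atoms.
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

# Beta-process-style factorization of a sparse code:
# z (binary atom indicators) elementwise-times s (real weights).
# Coupled spaces can share z (same atoms, same sparsity) while
# keeping space-specific weights s.
z = np.zeros(20)
z[[3, 11, 17]] = 1                      # which atoms are active
s = rng.standard_normal(20)             # per-atom weights
x = D @ (z * s)                         # the synthesized signal

# x lies exactly in the span of the three active atoms:
active = D[:, z.astype(bool)]
coef, *_ = np.linalg.lstsq(active, x, rcond=None)
residual = np.linalg.norm(active @ coef - x)
```

Sharing the indicator vector `z` across two feature spaces while letting the weights differ is precisely what lets the learned mapping between the spaces stay consistent.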
