152 research outputs found

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
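
    As a minimal illustration of the vector-space view of color used throughout the survey, the sketch below treats each color stimulus as a 3-vector and models an idealized additive device as a linear map from linear RGB to CIE XYZ. The matrix shown assumes sRGB primaries with a D65 white point; it is a standard published transform, not one taken from the paper.

```python
import numpy as np

# Linear-RGB -> CIE XYZ matrix for sRGB primaries and a D65 white point
# (standard published values; used here only for illustration).
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """Map linear-RGB color vectors of shape (..., 3) to CIE XYZ tristimulus values."""
    return np.asarray(rgb_linear, dtype=float) @ RGB_TO_XYZ.T

# A mid-gray patch expressed as a vector in device space and in XYZ.
print(rgb_to_xyz([0.5, 0.5, 0.5]))
```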

    Restoration of halftoned color-quantized images using linear estimator

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering. Refereed conference paper (2006-2007, academic research: refereed). Version of Record, published.

    Descreening of Color Halftone Images in the Frequency Domain

    Scanning a halftone image introduces halftone artifacts, known as Moiré patterns, which significantly degrade the image quality. Printers that use amplitude modulation (AM) screening for halftone printing position dots in a periodic pattern. Therefore, frequencies related to halftoning are easily identifiable in the frequency domain. This paper proposes a method for descreening scanned color halftone images using a custom band-reject filter designed to isolate and remove only the frequencies related to halftoning while leaving image edges sharp, without image segmentation or edge detection. To enable hardware acceleration, the image is processed in small overlapped windows. The windows are filtered individually in the frequency domain, then pieced back together in a method that does not show blocking artifacts.
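
    A minimal sketch of this style of frequency-domain descreening is given below: a single channel is split into overlapped windows, each window is filtered with a band-reject mask around an assumed screen frequency, and the results are blended with a Hann taper so no blocking artifacts appear. The window size, hop, and screen frequency are illustrative assumptions, not the paper's custom filter design.

```python
import numpy as np

def descreen_window(block, screen_freq, bw=0.06):
    """Band-reject filter one window in the frequency domain (illustrative)."""
    h, w = block.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)
    # Zero out an annulus around the assumed halftone screen frequency.
    reject = np.abs(radius - screen_freq) < bw
    spectrum = np.fft.fft2(block)
    spectrum[reject] = 0.0
    return np.real(np.fft.ifft2(spectrum))

def descreen(channel, win=64, hop=32, screen_freq=0.25):
    """Filter overlapped windows of one image channel and blend with a Hann taper."""
    out = np.zeros_like(channel, dtype=float)
    weight = np.zeros_like(channel, dtype=float)
    taper = np.outer(np.hanning(win), np.hanning(win))
    for y in range(0, channel.shape[0] - win + 1, hop):
        for x in range(0, channel.shape[1] - win + 1, hop):
            block = channel[y:y+win, x:x+win].astype(float)
            out[y:y+win, x:x+win] += descreen_window(block, screen_freq) * taper
            weight[y:y+win, x:x+win] += taper
    return out / np.maximum(weight, 1e-8)
```

    For a color scan this would be applied per channel (or in a luminance-chrominance space); border handling is omitted to keep the sketch short.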

    Media processor implementations of image rendering algorithms

    Demands for fast execution of image processing are a driving force for today's computing market. Many image processing applications require intense numeric calculations to be done on large sets of data with minimal overhead time. To meet this challenge, several approaches have been used. Custom-designed hardware devices are very fast implementations used in many systems today. However, these devices are very expensive and inflexible. General purpose computers with enhanced multimedia instructions offer much greater flexibility but process data at a much slower rate than the custom-hardware devices. Digital signal processors (DSPs) and media processors, such as the MAP-CA created by Equator Technologies, Inc., may be an efficient alternative that provides a low-cost combination of speed and flexibility. Today, DSPs and media processors are commonly used in image and video encoding and decoding, including JPEG and MPEG processing techniques. Little work has been done to determine how well these processors can perform other image processing techniques, specifically image rendering for printing. This project explores various image rendering algorithms and the performance achieved by running them on a media processor to determine if this type of processor is a viable competitor in the image rendering domain. Performance measurements obtained when implementing rendering algorithms on the MAP-CA show that a speedup of 4.1 can be achieved with neighborhood-type processes, while point-type processes achieve an average speedup of 21.7 as compared to general purpose processor implementations.
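
    The distinction between point-type and neighborhood-type processes is what drives the different speedups, so the hypothetical sketch below contrasts the two: a point operation (a tone-curve lookup) touches each pixel independently and parallelizes trivially, while a neighborhood operation (a small convolution) must gather a window around every output pixel. The specific LUT and kernel are illustrative, not the project's rendering algorithms.

```python
import numpy as np

def point_process(img_u8, lut):
    """Point-type step: each output pixel depends only on its own input pixel."""
    return lut[img_u8]  # img_u8 must be an integer (e.g. uint8) array

def neighborhood_process(img, kernel):
    """Neighborhood-type step: each output pixel depends on a window of inputs."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Example inputs: an 8-bit gamma LUT (point) and a 3x3 averaging kernel (neighborhood).
gamma_lut = (255 * (np.arange(256) / 255.0) ** (1 / 2.2)).astype(np.uint8)
smooth_kernel = np.full((3, 3), 1 / 9.0)
```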

    A POCS-based restoration algorithm for restoring halftoned color-quantized images

    Centre for Multimedia Signal Processing, Department of Electronic and Information Engineering. Publication in refereed journal (2006-2007, academic research: refereed). Version of Record, published.

    New methods for digital halftoning and inverse halftoning

    Halftoning is the rendition of continuous-tone pictures on bi-level displays. Here we first review some of the halftoning algorithms which have a direct bearing on our paper and then describe some of the more recent advances in the field. Dot diffusion halftoning has the advantage of pixel-level parallelism, unlike the popular error diffusion halftoning method. We first review the dot diffusion algorithm and describe a recent method to improve its image quality by taking advantage of the Human Visual System function. Then we discuss the inverse halftoning problem: the reconstruction of a continuous-tone image from its halftone. We briefly review the methods for inverse halftoning, and discuss the advantages of a recent algorithm, namely, the Look-Up Table (LUT) method. This method is extremely fast and achieves image quality comparable to that of the best known methods. It can be applied to any halftoning scheme. We then introduce LUT-based halftoning and tree-structured LUT (TLUT) halftoning. We demonstrate how halftone image quality between that of error diffusion and Direct Binary Search (DBS) can be achieved depending on the size of the tree structure in the TLUT algorithm, while keeping the complexity of the algorithm much lower than that of DBS.
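
    The LUT idea can be sketched in a few lines: each halftone pixel's binary neighborhood is packed into an integer index, and the table stores the average contone value observed for that pattern over a training set of halftone/contone pairs. The 3x3 neighborhood and function names below are illustrative assumptions; the published method uses carefully chosen template shapes and handles patterns that never occur in the training data.

```python
import numpy as np

# Assumed neighborhood: the 3x3 window around each pixel (9 bits -> 512 LUT entries).
OFFSETS = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

def neighborhood_keys(halftone01, offsets=OFFSETS):
    """Pack each pixel's binary (0/1) neighborhood into an integer LUT index."""
    h, w = halftone01.shape
    pad = max(max(abs(dy), abs(dx)) for dy, dx in offsets)
    padded = np.pad(halftone01, pad, mode="constant")
    keys = np.zeros((h, w), dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        keys |= padded[pad+dy:pad+dy+h, pad+dx:pad+dx+w].astype(np.int64) << bit
    return keys

def train_lut(halftones, contones, offsets=OFFSETS):
    """Average the contone value observed for each binary neighborhood pattern."""
    sums = np.zeros(1 << len(offsets))
    counts = np.zeros(1 << len(offsets))
    for ht, ct in zip(halftones, contones):
        k = neighborhood_keys(ht, offsets).ravel()
        np.add.at(sums, k, ct.ravel().astype(float))
        np.add.at(counts, k, 1)
    return sums / np.maximum(counts, 1)

def inverse_halftone(halftone01, lut, offsets=OFFSETS):
    """Reconstruct a contone estimate by table lookup on each neighborhood pattern."""
    return lut[neighborhood_keys(halftone01, offsets)]
```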

    Novel methods in image halftoning

    Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Science, Bilkent University, 1998. Thesis (Master's), Bilkent University, 1998. Includes bibliographical references (leaves 97-101). Halftoning refers to the problem of rendering continuous-tone (contone) images on display and printing devices which are capable of reproducing only a limited number of colors. A new adaptive halftoning method using the adaptive QR-RLS algorithm is developed for error diffusion, which is one of the halftoning techniques. Also, a diagonal scanning strategy to exploit the human visual system properties in processing the image is proposed. Simulation results on color images demonstrate the superior quality of the new method compared to the existing methods. Another problem studied in this thesis is inverse halftoning, which is the problem of recovering a contone image from a given halftoned image. A novel inverse halftoning method is developed for restoring a contone image from the halftoned image. A set-theoretic formulation is used where sets are defined using the prior information about the problem. A new space domain projection is introduced assuming the halftoning is performed with error diffusion and the error diffusion filter kernel is known. The space domain, frequency domain, and space-scale domain projections are used alternately to obtain a feasible solution for the inverse halftoning problem, which does not have a unique solution. Simulation results for both grayscale and color images give good results, and demonstrate the effectiveness of the proposed inverse halftoning method. Bozkurt, Gözde. M.S.
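
    For context, the sketch below implements plain Floyd-Steinberg error diffusion with a fixed kernel; the thesis replaces this fixed kernel with filter weights adapted per pixel by the QR-RLS algorithm and adds a diagonal scan order, neither of which is shown here.

```python
import numpy as np

# Floyd-Steinberg error-diffusion weights: (row offset, column offset) -> weight.
FS_KERNEL = [((0, 1), 7/16), ((1, -1), 3/16), ((1, 0), 5/16), ((1, 1), 1/16)]

def error_diffuse(gray):
    """Binarize a grayscale image in [0, 255] with fixed-kernel error diffusion."""
    work = gray.astype(float).copy()
    out = np.zeros_like(work)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = new
            err = old - new
            # Push the quantization error onto not-yet-visited neighbors.
            for (dy, dx), wgt in FS_KERNEL:
                yy, xx = y + dy, x + dx
                if 0 <= yy < h and 0 <= xx < w:
                    work[yy, xx] += err * wgt
    return out.astype(np.uint8)
```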

    Taming Reversible Halftoning via Predictive Luminance

    Traditional halftoning usually drops colors when dithering images with binary dots, which makes it difficult to recover the original color information. We propose a novel halftoning technique that converts a color image into a binary halftone with full restorability to its original version. Our novel base halftoning technique consists of two convolutional neural networks (CNNs) to produce the reversible halftone patterns, and a noise incentive block (NIB) to mitigate the flatness degradation issue of CNNs. Furthermore, to tackle the conflict between blue-noise quality and restoration accuracy in our novel base method, we propose a predictor-embedded approach to offload predictable information from the network, which in our case is the luminance information resembling the halftone pattern. Such an approach allows the network to gain more flexibility to produce halftones with better blue-noise quality without compromising the restoration quality. Detailed studies on the multiple-stage training method and loss weightings have been conducted. We have compared our predictor-embedded method and our novel base method regarding spectrum analysis of the halftone, halftone accuracy, restoration accuracy, and data embedding studies. Our entropy evaluation shows that our halftone contains less encoding information than our novel base method. The experiments show that our predictor-embedded method gains more flexibility to improve the blue-noise quality of halftones and maintains a comparable restoration quality with a higher tolerance for disturbances. Comment: to be published in IEEE Transactions on Visualization and Computer Graphics.
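
    As a rough structural sketch (not the paper's architecture), the PyTorch snippet below pairs a small CNN that maps a color image plus a noise map to a single-channel halftone with a second CNN that restores the color image from that halftone. Layer counts, channel widths, and the final sigmoids are assumptions; the real method additionally binarizes the halftone and trains in stages with blue-noise and restoration losses.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class HalftoneEncoder(nn.Module):
    """Color image + noise map -> continuous 1-channel halftone (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))
    def forward(self, rgb, noise):
        return torch.sigmoid(self.net(torch.cat([rgb, noise], dim=1)))

class ColorDecoder(nn.Module):
    """1-channel halftone -> restored 3-channel color image (illustrative)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, halftone):
        return torch.sigmoid(self.net(halftone))

# Round-trip on random data just to show the shapes involved.
rgb = torch.rand(1, 3, 64, 64)
noise = torch.rand(1, 1, 64, 64)
halftone = HalftoneEncoder()(rgb, noise)
restored = ColorDecoder()(halftone)
```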

    Estimating toner usage with laser electrophotographic printers, and object map generation from raster input image

    Accurate estimation of toner usage is an area of ongoing importance for laser, electrophotographic (EP) printers. In Part 1, we propose a new two-stage approach in which we first predict, on a pixel-by-pixel basis, the absorptance from printed and scanned pages. We then form a weighted sum of these pixel values to predict overall toner usage on the printed page. The weights are chosen by least-squares regression to toner usage measured with a set of printed test pages. Our two-stage predictor significantly outperforms existing methods that are based on a simple pixel counting strategy in terms of both accuracy and robustness of the predictions. In Part 2, we describe a raster-input-based object map generation algorithm (OMGA) for laser, electrophotographic (EP) printers. The object map is utilized in the object-oriented halftoning approach, where different halftone screens and color maps are applied to different types of objects on the page in order to improve the overall printing quality. The OMGA generates the object map from the raster input directly. It addresses problems such as the object map obtained from the page description language (PDL) being incorrect, or an initial object map being unavailable from the processing pipeline. A new imaging pipeline for the laser EP printer incorporating both the OMGA and the object-oriented halftoning approach is proposed. The OMGA is a segmentation-based classification approach. It first detects objects according to the edge information, and then classifies the objects by analyzing the feature values extracted from the contour and the interior of each object. The OMGA is designed to be hardware-friendly, and can be implemented within two passes through the input document.
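
    The second stage of Part 1 reduces to an ordinary least-squares fit, sketched below with hypothetical per-page features (a binned histogram of predicted pixel absorptance) standing in for whatever features the authors actually use; the synthetic data exists only to make the snippet runnable.

```python
import numpy as np

def fit_toner_weights(page_features, measured_toner):
    """Least-squares fit of weights so that page_features @ w approximates measured usage.

    page_features: (num_pages, num_features) summed/binned pixel absorptance values
    measured_toner: (num_pages,) toner usage measured for each printed test page
    """
    w, *_ = np.linalg.lstsq(page_features, measured_toner, rcond=None)
    return w

def predict_toner(page_features, w):
    return page_features @ w

# Illustrative synthetic training set: 20 test pages, 8 absorptance-histogram bins.
rng = np.random.default_rng(0)
X = rng.random((20, 8))
true_w = np.linspace(0.1, 0.8, 8)
y = X @ true_w + 0.01 * rng.standard_normal(20)
w = fit_toner_weights(X, y)
print(predict_toner(X[:3], w))
```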