
    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
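    As a rough illustration of the vector-space view of color measurement mentioned above, the sketch below computes tristimulus values as inner products of a sampled spectrum with sensor sensitivity curves; the curves used here are random placeholders, not real CIE color matching functions.

```python
# Minimal sketch of the vector-space view of color measurement: a tristimulus
# value is an inner product of a spectral power distribution with a sensor's
# spectral sensitivity. The sensitivity curves below are random placeholders,
# not real CIE color matching functions.
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.arange(400, 701, 10)          # nm, 31 samples
spd = rng.random(wavelengths.size)             # placeholder spectral power distribution
cmf = rng.random((3, wavelengths.size))        # placeholder sensitivity curves (3 rows)

tristimulus = cmf @ spd                        # 3-vector: each entry is <curve, spd>
print(tristimulus)
```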

    Media processor implementations of image rendering algorithms

    Demands for fast execution of image processing are a driving force for today's computing market. Many image processing applications require intense numeric calculations to be performed on large sets of data with minimal overhead time. To meet this challenge, several approaches have been used. Custom-designed hardware devices are very fast implementations used in many systems today; however, these devices are very expensive and inflexible. General-purpose computers with enhanced multimedia instructions offer much greater flexibility but process data at a much slower rate than custom hardware. Digital signal processors (DSPs) and media processors, such as the MAP-CA created by Equator Technologies, Inc., may be an efficient alternative that provides a low-cost combination of speed and flexibility. Today, DSPs and media processors are commonly used in image and video encoding and decoding, including JPEG and MPEG processing. Little work has been done to determine how well these processors can perform other image processing techniques, specifically image rendering for printing. This project explores various image rendering algorithms and the performance achieved by running them on a media processor to determine whether this type of processor is a viable competitor in the image rendering domain. Performance measurements obtained when implementing rendering algorithms on the MAP-CA show that a 4.1× speedup can be achieved for neighborhood-type processes, while point-type processes achieve an average speedup of 21.7× compared to general-purpose processor implementations.
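    The distinction behind the two reported speedups can be illustrated with a small, hypothetical example (not the MAP-CA code itself): a point-type process touches each pixel independently, whereas a neighborhood-type process needs surrounding pixels, which constrains how freely it can be parallelized.

```python
# Illustrative contrast between the two classes of rendering operations the
# speedups refer to (hypothetical example, not the MAP-CA implementation):
# a point process touches each pixel independently; a neighborhood process
# depends on surrounding pixels, limiting parallelization.
import numpy as np

img = np.random.default_rng(1).integers(0, 256, (256, 256)).astype(np.float32)

# Point-type process: per-pixel threshold, trivially data-parallel.
binary = (img >= 128).astype(np.uint8)

# Neighborhood-type process: 3x3 box filter, each output depends on 9 inputs.
padded = np.pad(img, 1, mode="edge")
smoothed = sum(padded[dy:dy + 256, dx:dx + 256]
               for dy in range(3) for dx in range(3)) / 9.0
```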

    Rethinking PRL: A Multiscale Progressively Residual Learning Network for Inverse Halftoning

    Image inverse halftoning is a classic image restoration task that aims to recover continuous-tone images from halftone images containing only bilevel pixels. Because halftone images lose much of the original image content, inverse halftoning is a classic ill-posed problem. Although existing inverse halftoning algorithms achieve good performance, their results lose image details and features. Recovering high-quality continuous-tone images therefore remains a challenge. In this paper, we propose an end-to-end multiscale progressively residual learning network (MSPRL), which has a U-Net architecture and takes multiscale input images. To make full use of the information in the different inputs, we design a shallow feature extraction module to capture similar features between images of different scales. We systematically study the performance of different methods and compare them with our proposed method. In addition, we employ different training strategies to optimize the model, which is important for optimizing the training process and improving performance. Extensive experiments demonstrate that our MSPRL model obtains considerable performance gains in detail restoration.
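    The following toy sketch (not the authors' MSPRL network; layer sizes are arbitrary placeholders) illustrates the two ideas the abstract highlights: feeding multiscale versions of the halftone input through a shared shallow feature extractor, and learning a residual that is added back to the input.

```python
# Toy illustration (not the authors' MSPRL architecture) of multiscale inputs
# plus residual learning for inverse halftoning. All layer sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiscaleResidualNet(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Shallow feature extraction applied to each scale of the input.
        self.shallow = nn.Conv2d(1, channels, 3, padding=1)
        self.body = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Full-resolution and half-resolution views of the halftone input.
        feat_full = self.shallow(x)
        half = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        feat_half = F.interpolate(self.shallow(half), size=x.shape[-2:],
                                  mode="bilinear", align_corners=False)
        residual = self.body(torch.cat([feat_full, feat_half], dim=1))
        return x + residual  # residual learning: refine the input, not replace it

halftone = torch.rand(1, 1, 64, 64)
restored = TinyMultiscaleResidualNet()(halftone)
```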

    FPGA-Based Parallel Implementation of Stacked Error Diffusion Algorithm

    Digital halftoning is a crucial technique used in digital printers to convert a continuous-tone image into a pattern of black and white dots. Halftoning is needed because printers have a limited set of inks and cannot reproduce all the intensity levels of a continuous-tone image. Error diffusion is a halftoning algorithm that iteratively quantizes pixels in a neighborhood-dependent fashion. This thesis focuses on the development and design of a parallel, scalable hardware architecture for high-performance implementation of the high-quality Stacked Error Diffusion algorithm. The algorithm is described in ‘C’ and requires significant processing time when implemented on a conventional CPU. Thus, a new hardware processor architecture is developed to implement the algorithm and is implemented and tested on a Xilinx Virtex 5 FPGA chip. There is an extraordinary decrease in the run time of the algorithm when run on the newly proposed parallel architecture implemented in FPGA technology compared to execution on a single CPU. The new parallel architecture is described using the Verilog Hardware Description Language (HDL). Post-synthesis and post-implementation performance-oriented HDL simulations of the new parallel architecture are validated using the ModelSim CAD simulation tool.
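    For reference, the sketch below shows classic Floyd-Steinberg error diffusion (not the Stacked Error Diffusion variant developed in the thesis); it makes visible the serial, neighborhood-dependent error propagation that the parallel FPGA architecture has to restructure.

```python
# Classic Floyd-Steinberg error diffusion (not the Stacked variant from the
# thesis), shown to illustrate the serial, neighborhood-dependent quantization
# that motivates a dedicated parallel hardware architecture.
import numpy as np

def floyd_steinberg(gray):
    """Binarize a grayscale image in [0, 255], pushing quantization error forward."""
    img = gray.astype(np.float64).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128.0 else 0.0
            out[y, x] = int(new)
            err = old - new
            if x + 1 < w:                img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:      img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:                img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w:  img[y + 1, x + 1] += err * 1 / 16
    return out
```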

    Studies on Imaging System and Machine Learning: 3D Halftoning and Human Facial Landmark Localization

    In this dissertation, studies on digital halftoning and human facial landmark localization are discussed. 3D printing is becoming increasingly popular around the world today. By utilizing 3D printing technology, customized products can be manufactured much more quickly and efficiently at much lower cost. However, 3D printing still suffers from low-quality surface reproduction compared with 2D printing. One approach to improving it is to develop an advanced halftoning algorithm for 3D printing. In the first part of this dissertation, we describe a novel 3D halftoning method that can cooperate with 3D printing technology to generate high-quality surface reproduction. We then propose a new method, direct element swap, for creating a threshold matrix for halftoning. This method directly swaps the elements in a threshold matrix to find the best element arrangement by minimizing a designated perceived error metric. Experimental results show that the new method yields halftone quality competitive with the conventional level-by-level matrix design method. Moreover, by using the direct element swap method, a threshold matrix can for the first time be designed by training on real images. In the second part of the dissertation, a novel facial landmark detection system is presented. Facial landmark detection plays a critical role in many face analysis tasks, yet it remains a very challenging problem. The challenges come from the large variations of face appearance caused by different illuminations, facial expressions, yaw, pitch and roll angles of the head, and image qualities. To tackle this problem, a novel coarse-to-fine cascaded convolutional neural network system for robust facial landmark detection of faces in the wild is presented. Experimental results show that our method outperforms other state-of-the-art methods on public test datasets. In addition, a frontal and profile landmark localization system is proposed and designed: using a frontal/profile face classifier, either the frontal or the profile landmark configuration is employed for facial landmark prediction based on the input face yaw angle.
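    The direct element swap idea can be sketched as follows; the perceived-error metric below is a crude stand-in (Gaussian-smoothed error on a single flat midtone patch), not the metric designated in the dissertation.

```python
# Sketch of the direct element swap idea: swap two threshold-matrix entries
# and keep the swap only if a perceived-error metric improves. The metric used
# here (smoothed error on one flat midtone patch) is a placeholder, not the
# dissertation's metric.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
N = 16
thresholds = rng.permutation(N * N).reshape(N, N) / (N * N)  # initial threshold matrix

def perceived_error(t, gray=0.5):
    halftone = (gray > t).astype(float)          # tile-sized halftone of a flat gray
    return np.sum((gaussian_filter(halftone, sigma=1.5) - gray) ** 2)

best = perceived_error(thresholds)
for _ in range(2000):
    (i1, j1), (i2, j2) = rng.integers(0, N, (2, 2))
    thresholds[i1, j1], thresholds[i2, j2] = thresholds[i2, j2], thresholds[i1, j1]
    err = perceived_error(thresholds)
    if err < best:
        best = err                                # keep the improving swap
    else:                                         # otherwise undo the swap
        thresholds[i1, j1], thresholds[i2, j2] = thresholds[i2, j2], thresholds[i1, j1]
```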

    Bayesian Dictionary Learning for Single and Coupled Feature Spaces

    Over-complete bases offer the flexibility to represent a much wider range of signals, with more elementary basis atoms than the signal dimension. The use of over-complete dictionaries for sparse representation has become a recent trend and is increasingly recognized as providing high performance for applications such as denoising, image super-resolution, inpainting, compression, blind source separation and linear unmixing. This dissertation studies dictionary learning for single or coupled feature spaces and its application in image restoration tasks. A Bayesian strategy using a beta process prior is applied to solve both problems. First, we illustrate how to generalize the existing beta process dictionary learning method (BP) to learn a dictionary for a single feature space. The advantage of this approach is that the number of dictionary atoms and their relative importance may be inferred non-parametrically. Next, we propose a new beta process joint dictionary learning method (BP-JDL) for coupled feature spaces, where the learned dictionaries also reflect the relationship between the two spaces. Compared to previous coupled feature space dictionary learning algorithms, our algorithm not only provides dictionaries customized to each feature space, but also yields a more consistent and accurate mapping between the two feature spaces. This is due to the unique property of the beta process model that the sparse representation can be decomposed into values and dictionary atom indicators. The proposed algorithm is able to learn sparse representations that correspond to the same dictionary atoms, with the same sparsity but different values, in coupled feature spaces, thus providing a consistent and accurate mapping between the coupled feature spaces. Two applications, single image super-resolution and inverse halftoning, are chosen to evaluate the performance of the proposed Bayesian approach. In both cases, the Bayesian approach, whether for a single feature space or coupled feature spaces, outperforms state-of-the-art methods in the domains compared.
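    The decomposition this argument relies on can be shown with synthetic data (this is not the BP-JDL inference itself): the sparse code splits into binary dictionary-atom indicators, which can be shared across coupled feature spaces, and space-specific values.

```python
# Minimal sketch (synthetic data, not BP-JDL inference) of the decomposition
# described above: a sparse code is split into binary atom indicators z,
# shared across coupled feature spaces, and space-specific weights w.
import numpy as np

rng = np.random.default_rng(3)
K, d1, d2 = 32, 64, 16                     # atoms, dims of the two feature spaces
D1 = rng.standard_normal((d1, K))          # dictionary for feature space 1
D2 = rng.standard_normal((d2, K))          # dictionary for feature space 2

z = rng.random(K) < 0.1                    # binary indicators: which atoms are active
w1 = rng.standard_normal(K)                # values for space 1
w2 = rng.standard_normal(K)                # values for space 2

x1 = D1 @ (z * w1)                         # signal in space 1
x2 = D2 @ (z * w2)                         # coupled signal in space 2: same support z
```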

    Dithering by Differences of Convex Functions

    Motivated by a recent halftoning method based on electrostatic principles, we analyse a halftoning framework in which one minimizes a functional consisting of the difference of two convex functions (DC). One of them describes attracting forces caused by the image gray values; the other enforces repulsion between points. In one dimension, the minimizers of our functional can be computed analytically and have the following desired properties: the points are pairwise distinct, lie within the image frame and can be placed at grid points. In the two-dimensional setting, we prove some useful properties of our functional, such as its coercivity, and suggest computing a minimizer by a forward-backward splitting algorithm. We show that the sequence produced by such an algorithm converges to a critical point of our functional. Furthermore, we suggest computing the special sums occurring in each iteration step by a fast summation technique based on the fast Fourier transform at non-equispaced knots, which requires only O(m log m) arithmetic operations for m points. Finally, we present numerical results showing the excellent performance of our DC dithering method.
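    As a generic illustration of the forward-backward splitting algorithm mentioned above (applied here to a toy LASSO-type objective, not to the dithering functional of the paper), each iteration takes a gradient step on the smooth term followed by a proximal step on the non-smooth term.

```python
# Generic forward-backward splitting sketch on a toy problem (a LASSO-type
# objective, not the dithering functional): gradient step on the smooth term,
# then proximal (soft-thresholding) step on the non-smooth term.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((30, 50))
b = rng.standard_normal(30)
lam = 0.1
gamma = 1.0 / np.linalg.norm(A, 2) ** 2              # step size 1/L with L = ||A||_2^2

x = np.zeros(50)
for _ in range(200):
    grad = A.T @ (A @ x - b)                         # forward step on 0.5*||Ax - b||^2
    y = x - gamma * grad
    x = np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)   # backward (prox) step
```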

    The development of the toner density sensor for closed-loop feedback laser printer calibration

    A new infrared (IR) sensor was developed for application in closed-loop feedback printer calibration as it relates to monochrome (black toner only) laser printers. The toner density IR sensor (TDS) was introduced in the early 1980s; however, due to cost and the limitations of the technology at the time, implementation was not accomplished until within the past decade. Existing IR sensor designs do not discuss or address:
    • EMI (electromagnetic interference) effects on the sensor due to EP (electrophotography) components
    • design considerations for environmental conditions
    • sensor response time as it affects printer process speed
    The toner density sensor (TDS) implemented in the Lexmark E series printer reduces these problems and eliminates the current, traditional “open-loop” calibration process (in which the feedback consists of parameters that do not directly affect print darkness, such as page count, toner level, etc.), where print darkness is adjusted using previously calculated and stored EP process parameters. The historical process cannot capture the cartridge component variation and environmental changes that affect print darkness variation. The TDS captures real-time data that is used to calculate EP process parameters for the adjustment of print darkness, thereby greatly reducing variations left uncontrolled by historical printer calibration. Specifically, the first and primary purpose of this research is to reduce print darkness variation using the TDS. The second goal is to mitigate the TDS EMI implementation issue to ensure reliable data accuracy.
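    A purely hypothetical sketch of the closed-loop idea follows (all parameter names and constants are invented for illustration and are not Lexmark's actual calibration): an EP process parameter is adjusted from a real-time density reading instead of from stored open-loop values.

```python
# Hypothetical illustration of closed-loop darkness calibration. Every name
# and constant here is made up for illustration; this is not the printer's
# actual calibration algorithm.
TARGET_DENSITY = 1.2          # desired optical density of a printed test patch
GAIN = 50.0                   # proportional gain, in bias volts per density unit
BIAS_MIN, BIAS_MAX = 300.0, 700.0

def calibrate_step(measured_density, developer_bias):
    """One proportional feedback step on a (hypothetical) developer bias voltage."""
    error = TARGET_DENSITY - measured_density
    new_bias = developer_bias + GAIN * error
    return min(max(new_bias, BIAS_MIN), BIAS_MAX)   # clamp to a safe operating range

bias = 500.0
for reading in (0.9, 1.05, 1.18, 1.21):              # simulated TDS readings
    bias = calibrate_step(reading, bias)
```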

    Parallel Algorithm for Hardware Implementation of Inverse Halftoning

    A parallel algorithm for the inverse halftone operation and its hardware implementation are proposed in this paper. The algorithm is based on lookup tables, from which the inverse halftone value of a pixel is directly determined using a pattern of pixels. A method has been developed that allows more than one value to be accessed from the lookup table at any time: the lookup table is divided into smaller lookup tables such that each pattern selected at any time goes to a separate smaller lookup table. The 15-pixel parallel version of the algorithm was tested on sample images, and a simple and effective method was used to overcome the quality degradation due to pixel loss in the proposed algorithm. The approach provides at least a 4× decrease in lookup table size compared with the serial lookup table method replicated for the same number of pixels.
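    Conceptually, the lookup-table approach can be sketched as below (using a 3x3, 9-pixel pattern rather than the paper's 15-pixel template, and a synthetic halftone for training): the table maps each binary neighborhood pattern to the mean continuous-tone value observed under it, and the parallel version splits the table so several pixels can be looked up per cycle.

```python
# Conceptual sketch of LUT-based inverse halftoning with a 3x3 (9-pixel)
# pattern and synthetic training data; the paper's method uses a 15-pixel
# template and a partitioned table for parallel access.
import numpy as np

rng = np.random.default_rng(5)
gray = rng.random((128, 128))
halftone = (gray > rng.random((128, 128))).astype(np.uint8)   # crude stand-in halftone

def pattern_key(h, y, x):
    """Pack the 3x3 binary neighborhood around (y, x) into a 9-bit integer."""
    bits = h[y - 1:y + 2, x - 1:x + 2].ravel()
    return int(np.dot(bits, 1 << np.arange(9)))

sums = np.zeros(512)
counts = np.zeros(512)
for y in range(1, 127):
    for x in range(1, 127):
        k = pattern_key(halftone, y, x)
        sums[k] += gray[y, x]
        counts[k] += 1
lut = np.divide(sums, counts, out=np.full(512, 0.5), where=counts > 0)

# Inverse halftoning is then a single table lookup per pixel; a parallel
# version would split `lut` into smaller tables so that several pixels can be
# looked up in the same cycle.
recovered = lut[pattern_key(halftone, 64, 64)]
```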