Digital Color Imaging
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided
VERIFICATION AND DEBUG TECHNIQUES FOR INTEGRATED CIRCUIT DESIGNS
Verification and debug of integrated circuits for embedded applications have grown in importance as functional complexity has increased dramatically over time. Various modeling and debugging techniques have been developed to overcome this challenge. This thesis addresses verification and debug methods by presenting a C model that is accurate at the bit and algorithm level, coupled with a Hardware Description Language (HDL) implementation. Key concepts such as common signal and variable naming conventions are incorporated, as well as a stepping function within the implemented HDL. Additionally, a common interface between low-level drivers and C models is presented for early firmware development and system debug. Finally, self-checking verification is discussed for delivering multiple test cases along with testbench portability.
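The self-checking idea described above can be illustrated with a small sketch: a bit-accurate golden model is driven alongside a stand-in for the HDL implementation across directed corner cases and random vectors, and any mismatches are collected for debug. The saturating-adder example and all names here are hypothetical, not taken from the thesis.

```python
import random

def golden_saturating_add(a, b, bits=8):
    """Bit-accurate reference model: unsigned saturating add (hypothetical DUT function)."""
    hi = (1 << bits) - 1
    return min(a + b, hi)

def dut_saturating_add(a, b, bits=8):
    """Stand-in for the HDL implementation under test (here deliberately equivalent)."""
    hi = (1 << bits) - 1
    s = (a + b) & ((1 << (bits + 1)) - 1)  # model the adder's carry-out width
    return hi if s > hi else s

def run_selfchecking(num_random=1000, seed=0):
    """Drive directed corner cases plus random vectors; return any mismatching inputs."""
    rng = random.Random(seed)
    directed = [(0, 0), (255, 255), (255, 1), (128, 127)]
    vectors = directed + [(rng.randrange(256), rng.randrange(256))
                          for _ in range(num_random)]
    return [(a, b) for a, b in vectors
            if golden_saturating_add(a, b) != dut_saturating_add(a, b)]
```

An empty mismatch list means the implementation matched the golden model on every vector; in a real flow the same comparison would run against waveform dumps or a co-simulation interface rather than a second Python function.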
Novel methods in image halftoning
Ankara: Department of Electrical and Electronics Engineering and Institute of Engineering and Science, Bilkent Univ., 1998. Thesis (Master's) -- Bilkent University, 1998. Includes bibliographical references (leaves 97-101). Halftoning refers to the problem of rendering continuous-tone (contone) images on display and printing devices that can reproduce only a limited number of colors. A new adaptive halftoning method using the adaptive QR-RLS algorithm is developed for error diffusion, one of the principal halftoning techniques. A diagonal scanning strategy is also proposed to exploit properties of the human visual system while processing the image. Simulation results on color images demonstrate the superior quality of the new method compared to existing methods. Another problem studied in this thesis is inverse halftoning: recovering a contone image from a given halftoned image. A novel inverse halftoning method is developed for restoring a contone image from its halftone. A set-theoretic formulation is used in which sets are defined from prior information about the problem. A new space-domain projection is introduced, assuming the halftoning is performed with error diffusion and the error diffusion filter kernel is known. The space-domain, frequency-domain, and space-scale-domain projections are used alternately to obtain a feasible solution for the inverse halftoning problem, which does not have a unique solution. Simulation results for both grayscale and color images are good, and demonstrate the effectiveness of the proposed inverse halftoning method. Bozkurt, Gözde. M.S.
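The thesis's adaptive QR-RLS filter is not reproduced here, but the fixed-kernel baseline it builds on, classical Floyd-Steinberg error diffusion in raster order, can be sketched as follows (a minimal illustration; the thesis additionally adapts the filter and proposes a diagonal scan):

```python
import numpy as np

def floyd_steinberg(image):
    """Classical Floyd-Steinberg error diffusion on a grayscale image in [0, 1].

    Each pixel is thresholded to 0 or 1, and the quantization error is
    pushed to the four unprocessed neighbours with the fixed weights
    7/16, 3/16, 5/16, 1/16.
    """
    img = np.asarray(image, dtype=float).copy()
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the diffused error sums to the original error, the average tone of the binary output closely tracks the input tone, which is the property an adaptive filter then refines for visual quality.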
Perceptual error optimization for Monte Carlo rendering
Realistic image synthesis involves computing high-dimensional light transport
integrals which in practice are numerically estimated using Monte Carlo
integration. The error of this estimation manifests itself in the image as
visually displeasing aliasing or noise. To ameliorate this, we develop a
theoretical framework for optimizing screen-space error distribution. Our model
is flexible and works for arbitrary target error power spectra. We focus on
perceptual error optimization by leveraging models of the human visual system's
(HVS) point spread function (PSF) from halftoning literature. This results in a
specific optimization problem whose solution distributes the error as visually
pleasing blue noise in image space. We develop a set of algorithms that provide
a trade-off between quality and speed, showing substantial improvements over
prior state of the art. We perform evaluations using both quantitative and
perceptual error metrics to support our analysis, and provide extensive
supplemental material to help evaluate the perceptual improvements achieved by
our methods.
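The paper's algorithms are not reproduced here, but the core idea, shaping quantization error so that it vanishes after filtering by an HVS model, can be illustrated with a toy halftoning-style search: greedily toggle pixels of a binary pattern and keep only the flips that reduce the Gaussian-filtered (PSF-weighted) error. The kernel parameters and greedy schedule are this sketch's assumptions, not the paper's method.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def gaussian_kernel(size=7, sigma=1.5):
    """Separable Gaussian used as a crude stand-in for the HVS point spread function."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def perceived_error(binary, target, psf):
    """Energy of the PSF-filtered difference between the binary pattern and target tone."""
    h, w = target.shape
    pad = np.zeros((h, w))
    pad[:psf.shape[0], :psf.shape[1]] = psf          # circular convolution via FFT
    filtered = np.real(ifft2(fft2(binary - target) * fft2(pad)))
    return float(np.sum(filtered**2))

def toggle_optimize(target, iters=300, seed=0):
    """Greedy pixel toggling: accept a flip only if it lowers the perceived error."""
    rng = np.random.default_rng(seed)
    binary = (rng.random(target.shape) < target).astype(float)
    psf = gaussian_kernel()
    best = perceived_error(binary, target, psf)
    for _ in range(iters):
        y, x = rng.integers(target.shape[0]), rng.integers(target.shape[1])
        binary[y, x] = 1 - binary[y, x]
        e = perceived_error(binary, target, psf)
        if e < best:
            best = e
        else:
            binary[y, x] = 1 - binary[y, x]  # revert the flip
    return binary, best
```

Accepted flips push residual error into frequencies the low-pass PSF attenuates, which is exactly why the optimized patterns take on a blue-noise character.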
A New framework for an electrophotographic printer model
Digital halftoning is a printing technology that creates the illusion of continuous tone images for printing devices such as electrophotographic printers that can only produce a limited number of tone levels. Digital halftoning works because the human visual system has limited spatial resolution which blurs the printed dots of the halftone image, creating the gray sensation of a continuous tone image. Because the printing process is imperfect it introduces distortions to the halftone image. The quality of the printed image depends, among other factors, on the complex interactions between the halftone image, the printer characteristics, the colorant, and the printing substrate. Printer models are used to assist in the development of new types of halftone algorithms that are designed to withstand the effects of printer distortions. For example, model-based halftone algorithms optimize the halftone image through an iterative process that integrates a printer model within the algorithm. The two main goals of a printer model are to provide accurate estimates of the tone and of the spatial characteristics of the printed halftone pattern. Various classes of printer models, from simple tone calibrations, to complex mechanistic models, have been reported in the literature. Existing models have one or more of the following limiting factors: they only predict tone reproduction, they depend on the halftone pattern, they require complex calibrations or complex calculations, they are printer specific, they reproduce unrealistic dot structures, and they are unable to adapt responses to new data. The two research objectives of this dissertation are (1) to introduce a new framework for printer modeling and (2) to demonstrate the feasibility of such a framework in building an electrophotographic printer model. The proposed framework introduces the concept of modeling a printer as a texture transformation machine. 
The basic premise is that modeling the texture differences between the output printed images and the input images encompasses all printing distortions. The feasibility of the framework was tested with a case study modeling a monotone electrophotographic printer. The printer model was implemented as a bank of feed-forward neural networks, each one specialized in modeling a group of textural features of the printed halftone pattern. The textural features were obtained using a parametric representation of texture developed from a multiresolution decomposition proposed by other researchers. The textural properties of halftone patterns were analyzed and the key texture parameters to be modeled by the bank were identified. Guidelines for the multiresolution texture decomposition and the model operational parameters and operational limits were established. A method for the selection of training sets based on the morphological properties of the halftone patterns was also developed. The model is fast and has the capability to continue to learn with additional training. The model can be easily implemented because it only requires a calibrated scanner. The model was tested with halftone patterns representing a range of spatial characteristics found in halftoning. Results show that the model provides accurate predictions for the tone and the spatial characteristics when modeling halftone patterns individually and it provides close approximations when modeling multiple halftone patterns simultaneously. The success of the model justifies continued research of this new printer model framework
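The neural-network texture model itself is not reproduced here; as a point of reference, the hard-circular-dot style of model mentioned among the classical mechanistic printer models can be sketched: each halftone "1" prints an oversized circular ink dot, and the printed tone is the resulting ink coverage. The radius and rasterization scale below are illustrative assumptions.

```python
import numpy as np

def render_dots(halftone, scale=8, dot_radius=0.7):
    """Rasterize a binary halftone as overlapping circular ink dots.

    Each '1' cell prints a disc of radius `dot_radius` (in cell units;
    a radius > 0.5 makes the dot spill into neighbouring cells, which is
    a crude model of dot gain).
    """
    h, w = halftone.shape
    yy, xx = np.mgrid[0:h * scale, 0:w * scale]
    page = np.zeros((h * scale, w * scale), dtype=bool)
    for cy, cx in zip(*np.nonzero(halftone)):
        dy = (yy - (cy + 0.5) * scale) / scale
        dx = (xx - (cx + 0.5) * scale) / scale
        page |= dy**2 + dx**2 <= dot_radius**2
    return page

def printed_tone(halftone, **kw):
    """Fraction of the page covered by ink (1.0 = solid black)."""
    return float(render_dots(halftone, **kw).mean())
```

With a radius above 0.5, the printed tone of a pattern exceeds its nominal dot coverage, which is the tone distortion that model-based halftoning algorithms compensate for.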
Radial Basis Functions: Biomedical Applications and Parallelization
A radial basis function (RBF) is a real-valued function whose values depend only on the distances between an interpolation point and a set of user-specified points called centers. RBF interpolation is one of the primary methods for reconstructing functions from multi-dimensional scattered data. Its ability to generalize to arbitrary space dimensions and to provide spectral accuracy has made it particularly popular in many application areas, including but not limited to: numerical solution of partial differential equations (PDEs), image processing, computer vision and graphics, and deep learning and neural networks.
The present thesis discusses three applications of RBF interpolation in biomedical engineering: (1) calcium dynamics modeling, in which we numerically solve a set of PDEs using meshless numerical methods and RBF-based interpolation techniques; (2) image restoration and transformation, where an image is restored from its triangular mesh representation or transformed from its original form under translation, rotation, scaling, etc.; (3) porous structure design, in which RBF interpolation is used to reconstruct a 3D volume containing porous structures from a set of regularly or randomly placed points inside a user-provided surface shape. All three applications have been investigated, and their effectiveness is supported with numerous experimental results. In particular, we innovatively utilize anisotropic distance metrics to define the distance in RBF interpolation and apply them to the second and third applications, showing significant improvement over the isotropic distance-based RBF method in preserving image features and capturing connected porous structures.
Besides the algorithm designs and their applications in biomedical areas, we also explore several common parallelization techniques (including OpenMP and CUDA-based GPU programming) to accelerate the present algorithms. In particular, we analyze how parallel programming can speed up the meshless PDE solver as well as the image processing methods. While RBF has been widely used in various science and engineering fields, this thesis is expected to spark further interest from computational scientists and students in this fast-growing area, and specifically in applying these techniques to biomedical problems such as the ones investigated in the present work.
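The basic RBF machinery described above can be sketched in a few lines: build the interpolation matrix from pairwise center distances, solve for the weights, and evaluate the interpolant at the query points. The Gaussian kernel and shape parameter are assumptions of this sketch; the anisotropic variant the thesis explores would replace the Euclidean norm with a scaled metric such as ||M(x - c)||.

```python
import numpy as np

def rbf_interpolate(centers, values, query, epsilon=1.0):
    """Gaussian RBF interpolation of scattered data.

    Solves A w = f with A_ij = phi(||c_i - c_j||), then evaluates
    s(x) = sum_j w_j * phi(||x - c_j||) at the query points.
    """
    centers = np.atleast_2d(np.asarray(centers, dtype=float))
    query = np.atleast_2d(np.asarray(query, dtype=float))
    phi = lambda r: np.exp(-(epsilon * r) ** 2)
    # Pairwise distances between centers, then between queries and centers.
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    w = np.linalg.solve(phi(d), np.asarray(values, dtype=float))
    dq = np.linalg.norm(query[:, None, :] - centers[None, :, :], axis=-1)
    return phi(dq) @ w
```

Because the Gaussian kernel matrix is positive definite for distinct centers, the interpolant reproduces the data exactly at the centers; production code (e.g. SciPy's `RBFInterpolator`) adds regularization and polynomial terms for robustness.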
AN APPRAISAL OF THE DEVELOPMENTS IN THE REPRODUCTION OF COLOUR IN COMPUTER PUBLISHING SYSTEMS
The plethora of coloured images reproduced in printed media is made possible by a variety of related processes that collectively constitute traditional printing techniques. The aim of this research is to appraise recent developments within the colour prepress process. The colour prepress process involves the preparation of colour-separated halftone films that are used in the production of offset lithographic printing plates. Over recent years, the application of desktop publishing technology to perform many of the functions associated with the colour prepress process has raised a number of significant issues and debates.
The reproduction of coloured images in the printed medium demands that certain fundamental criteria are adhered to in order to maintain professional standards of colour fidelity. Such criteria include: successful digital halftone production, the elimination of moiré patterns, and maintaining colour fidelity between the coloured original and the coloured reproduction. This research thesis shall therefore establish the principles and techniques involved in the reproduction of colour in a printed medium. It will also assess whether desktop publishing systems are able to facilitate successful professional colour reproduction by examining current debates that challenge the viability of desktop publishing solutions. Current debates concerning desktop publishing solutions are primarily concerned with assessing the value of Adobe PostScript Level 2 solutions, computer interchange spaces for colour matching purposes, and rational supercell techniques that attempt to eliminate moiré patterns. The research also attempts to establish the validity of current debate findings by comparing them with statistics derived from a questionnaire (undertaken as part of the research program) that seeks the opinions of system users on the effectiveness of their individual systems at processing and delivering acceptable colour separations.
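The moiré issue raised above has a simple first-order model: each halftone screen contributes a frequency vector, and the visible beat between two screens is the magnitude of the difference of those vectors, which is why conventional separations keep 30 degrees between the strong screens. A small sketch of that calculation (the screen rulings and angles are illustrative assumptions):

```python
import math

def screen_vector(lpi, angle_deg):
    """Frequency vector of a halftone screen: `lpi` lines/inch at `angle_deg`."""
    a = math.radians(angle_deg)
    return (lpi * math.cos(a), lpi * math.sin(a))

def moire_frequency(lpi1, angle1, lpi2, angle2):
    """Magnitude of the first-order beat between two screens (cycles/inch).

    A low value means a coarse, highly visible moire pattern; wide angle
    separation between the strong screens keeps the beat frequency high
    and therefore invisible.
    """
    u1, v1 = screen_vector(lpi1, angle1)
    u2, v2 = screen_vector(lpi2, angle2)
    return math.hypot(u1 - u2, v1 - v2)
```

For two 150 lpi screens, 60 degrees apart the beat sits at 150 cycles/inch (invisible), while a 7.5 degree separation drops it below 20 cycles/inch, producing the coarse moiré that rational supercell screening is designed to avoid.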
Analysis of functionally graded material object representation methods
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Ocean Engineering, 2000. Includes bibliographical references (leaves 218-224). Solid Freeform Fabrication (SFF) processes have demonstrated the ability to produce parts with locally controlled composition. To exploit this potential, methods to represent and exchange parts with varying local composition need to be proposed and evaluated. To model such parts efficiently, any such method should provide a concise and accurate description of all of the relevant information about the part with minimal storage cost. To address these issues, several approaches to modeling Functionally Graded Material (FGM) objects are evaluated based on their memory requirements. Through this research, an information pathway for processing FGM objects based on image processing is proposed. This pathway establishes a clear separation between the design of FGM objects, their processing, and their fabrication. Just as an image is represented by a continuous vector-valued function of the intensity of the primary colors over a two-dimensional space, an FGM object is represented by a vector-valued function spanning a Material Space, defined over the three-dimensional Build Space. The Model Space for FGM objects therefore consists of a Build Space and a Material Space, and the task of modeling and designing an FGM object is simply to accurately represent the function m(x), where x ∈ Build Space. Data structures for representing FGM objects are then described and analyzed, including a voxel-based structure, the finite element method, and extensions of the Radial-Edge and Cell-Tuple-Graph data structures to represent spatially varying properties. All of the methods are capable of defining the function m(x), but each does so in a different way.
Along with introducing each data structure, the storage cost for each is derived in terms of the number of instances of each of its fundamental classes required to represent an object. In order to determine the optimal data structure for modeling FGM objects, the storage cost associated with each data structure is calculated for several hypothetical models. Although these models are simple in nature, their curved geometries and regions of both piecewise-constant and non-linearly graded compositions reflect the features expected in real applications. In each case, the generalized cellular methods are found to be optimal, accurately representing the intended design. by Todd Robert Jackson. Ph.D.
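Of the data structures compared above, the voxel-based one is the simplest to sketch: store one material-fraction vector per voxel, evaluate m(x) by nearest-voxel lookup, and count storage as voxels times materials. The class and method names below are this sketch's own, not the thesis's notation.

```python
import numpy as np

class VoxelFGM:
    """Voxel-based FGM representation: one material-fraction vector per voxel.

    m(x) maps a point in Build Space to a composition in Material Space;
    here the map is piecewise constant over a regular grid of unit size.
    """
    def __init__(self, resolution, n_materials, size=1.0):
        self.size = size
        self.grid = np.zeros(tuple(resolution) + (n_materials,))

    def set_gradient(self, axis=0):
        """Example design: blend linearly from material 0 to material 1 along an axis."""
        n = self.grid.shape[axis]
        t = (np.arange(n) + 0.5) / n
        shape = [1, 1, 1]; shape[axis] = n
        t = t.reshape(shape)
        self.grid[..., 0] = 1 - t
        self.grid[..., 1] = t

    def m(self, x):
        """Evaluate the composition at a Build Space point by nearest-voxel lookup."""
        res = self.grid.shape[:3]
        idx = tuple(min(int(xi / self.size * r), r - 1) for xi, r in zip(x, res))
        return self.grid[idx]

    def storage_cost(self):
        """Number of stored scalars: voxels x materials (the cost metric analyzed above)."""
        return self.grid.size
```

The cost grows with the cube of the resolution regardless of how simple the design is, which is why the thesis finds generalized cellular methods, whose cost tracks the design's features rather than a fixed grid, to be optimal.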
A survey of computer uses in music
This thesis covers research into the mathematical basis inherent in music, including a review of projects related to optical character recognition (OCR) of musical symbols. Research was done on fractals, creating new pieces by assigning pitches to numbers. Existing musical pieces can be taken apart and reassembled, creating new ideas for composers. Musical notation understanding is covered, and its necessity for computer recognition of a music sheet for editing and reproduction purposes is explained. The first phase of a musical OCR was created in this thesis with the recognition of staff lines on a good-quality image. Modifications will need to be made to handle noise and tilted images that may result from scanning.
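Staff-line recognition of the kind described above is commonly done with a horizontal projection profile: rows that are almost entirely black are staff-line rows. A minimal sketch for the clean-image case (the threshold is an assumption of this sketch; noise and tilt need the extensions the abstract mentions):

```python
import numpy as np

def find_staff_lines(binary, threshold=0.8):
    """Locate staff lines in a binarized score image (1 = black ink).

    Rows whose fraction of black pixels exceeds `threshold` are taken as
    staff-line rows; consecutive runs of such rows are merged into a
    single line position.
    """
    rows = binary.mean(axis=1)               # horizontal projection profile
    hits = np.nonzero(rows >= threshold)[0]
    lines, run = [], []
    for r in hits:
        if run and r != run[-1] + 1:         # gap ends the current run
            lines.append(int(np.mean(run)))
            run = []
        run.append(r)
    if run:
        lines.append(int(np.mean(run)))
    return lines
```

On a tilted scan the projection peaks smear out and fall below the threshold, which is why a deskewing step or a local, column-wise variant is required before this simple profile works.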