
    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
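    The survey's vector-space view of color can be illustrated with a small sketch: tristimulus values arise as a linear map (a 3×N matrix of color-matching functions) applied to a sampled spectrum. All numbers below are illustrative toy values, not CIE data.

```python
import numpy as np

# Color as vectors: a 3xN matrix A of (toy) color-matching functions maps an
# N-sample stimulus spectrum to XYZ tristimulus values via an inner product.
A = np.array([                      # rows: illustrative x, y, z matching functions
    [0.1, 0.3, 0.4, 0.2],
    [0.0, 0.5, 0.4, 0.1],
    [0.6, 0.3, 0.1, 0.0],
])
illuminant = np.array([1.0, 0.9, 0.8, 0.7])   # toy illuminant power spectrum
reflectance = np.array([0.2, 0.6, 0.7, 0.3])  # toy surface reflectance

stimulus = illuminant * reflectance           # light reaching the observer
XYZ = A @ stimulus                            # 3-vector of tristimulus values
print(XYZ)
```

    The same linear-algebraic framing carries over to device models: a scanner or camera is another matrix of sensor responsivities applied to the stimulus.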

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects of significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as the pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed that accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction; the background is used to estimate the camera pose needed to calibrate the sensor system. Experiments suggest a significant gain in coverage for single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
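    The deflectometry principle underlying this method can be sketched in a few lines: for a mirror surface, the surface normal bisects the ray toward the camera and the ray toward the observed screen point. The geometry below is illustrative only; the paper's actual encoding and calibration are far more involved.

```python
import numpy as np

def mirror_normal(view_dir, screen_dir):
    """Unit surface normal of a mirror point as the halfway vector of the two
    unit rays leaving that point (toward the camera and toward the screen)."""
    h = (view_dir / np.linalg.norm(view_dir)
         + screen_dir / np.linalg.norm(screen_dir))
    return h / np.linalg.norm(h)

# Toy geometry: camera along +z, decoded screen point along +x.
n = mirror_normal(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0]))
print(n)  # halfway between +z and +x
```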

    Filling-in the Forms: Surface and Boundary Interactions in Visual Cortex

    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); Office of Naval Research (N00014-95-1-0657)

    A versatile microfadometer for lightfastness testing and pigment identification

    The design and experimental method for a novel instrument for lightfastness measurements on artwork are presented. The new microfadometer design offers increased durability and portability over the previously published design, broadening the range of locations at which data can be acquired and reducing the need to handle or transport art in order to obtain evidence-based risk assessments for the display of light-sensitive artworks. The instrument focuses a stabilized, high-powered xenon lamp to a 0.25 mm (FWHM) spot while simultaneously monitoring color change, making it possible to identify pigments and determine the lightfastness of materials effectively and non-destructively. With 2.59 mW, or 0.82 lumens (1.7 × 10^7 lux for a 0.25 mm focused spot), the instrument can fade Blue Wool 1 to a measured ΔEab value of 11 (using CIE standard illuminant D65) in 15 minutes. The temperature increase created by the focused radiation was measured at 3 to 4 °C above room temperature. The system was stable to within 0.12 ΔEab over 1 hour and 0.31 ΔEab over 7 hours. A safety evaluation of the technique concludes that some caution should be employed when fading smooth, uniform areas of artworks. The instrument can also incorporate a linear variable filter, enabling the researcher to identify the active wavebands that cause particular degradation reactions and to determine the degree of wavelength dependence of fading. Preliminary results of fading experiments on Prussian blue samples from the paint box of J. M. W. Turner (1775-1851) are presented.
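    The ΔEab figures quoted above are CIE76 color differences: the Euclidean distance between two CIELAB coordinates. A minimal sketch follows; the Lab values are illustrative, not measurements from the paper.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 ΔE*ab: Euclidean distance between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

before = (52.0, 10.0, -30.0)  # hypothetical patch before light exposure
after = (55.0, 8.0, -20.0)    # hypothetical patch after exposure
print(delta_e_ab(before, after))  # ~10.6, on the order of the 11 ΔEab cited
```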

    Color Managing for Papers Containing Optical Brightening Agents

    The role of a color-managed inkjet proof is to predict and simulate the visual appearance of printed color. The proof-to-print visual match works well under different viewing conditions when the papers underlying the input and output ICC profiles' characterization datasets contain no optical brightening agents (OBAs). OBAs influence printed color both when it is measured for characterization and when it is viewed: they absorb UV wavelengths from the illuminant and fluoresce in the blue wavelengths. As more and more OBAs are used in printing-paper production, color proofing becomes more difficult. The difference in the UV content of the measuring and viewing light sources causes a problem: the OBA effect as measured may not match the OBA effect that should be proofed under the viewing illuminant. This research project has two objectives. The first is to show how printed colors, under identical printing conditions on OBA and non-OBA substrates, look different from their proofs produced using current characterization-for-proofing practices. Both M0 (UV-included) and M2 (UV-cut) measurement data are collected from color patches with selected tonal values, and input ICC profiles created from these data are used to proof the brightened reference print. The results show that the UV-cut characterization treatment produces a very poor proof of the reference, while the UV-included proof was ranked a fairly close match. A third, commercially available tool designed to improve upon the UV-included treatment, the X-Rite Optical Brightener Compensation module, was also tested and likewise found to be a good match to the reference. The second objective is to propose ways the characterization data can be adjusted for the OBAs in a reference print on brightened paper, accounting for the influence of UV in both the measurement illuminant and the viewing illuminant.
By means of psychometric analyses, the results show that (1) the proof-to-print match is worst when neither the OBA in the print nor the UV in the measurement illuminant is addressed (UV-cut characterization data from M2); and (2) although not conclusive, the proof-to-print match improves when the OBA in the print, the UV in the measurement illuminant (characterization data from M0), and the UV in the viewing illuminant are all addressed.
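    One way to picture the M0/M2 distinction discussed above: OBA fluorescence adds blue to a UV-included (M0) reading, showing up as a more negative b* than the UV-cut (M2) reading of the same paper white. The Lab values and the activity threshold below are assumptions for illustration, not figures from this study.

```python
# Hypothetical paper-white measurements under the two ISO measurement modes.
paper_white_m0 = {"L": 95.2, "a": 1.1, "b": -4.8}  # M0 (UV-included), made up
paper_white_m2 = {"L": 94.8, "a": 0.6, "b": 1.3}   # M2 (UV-excluded), made up

# The b* shift between modes is driven by OBA fluorescence in the blue region.
delta_b = paper_white_m0["b"] - paper_white_m2["b"]
oba_active = delta_b < -2.0  # assumed rule-of-thumb threshold, not a standard
print(delta_b, oba_active)
```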

    Ridge Regression Approach to Color Constancy

    This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the phenomenon of representing (visualizing) the reflectance properties of a scene independently of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively: color is an important feature in many computer vision applications, so consistent color features are essential for application success. The second motivation is to reduce computational complexity without sacrificing the first goal. This work presents a machine learning approach to color constancy in which an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning methods. The work on support vector machine based color constancy shows superior performance over neural network based color constancy in terms of stability, but the support vector machine is time consuming. An alternative to the support vector machine is a simple, fast, and analytically solvable linear modeling technique known as ridge regression, which learns the dependency between surface reflectance and illumination from a presented training sample. Ridge regression thus answers the twofold motivation behind this work: it is stable and computationally simple. The proposed algorithms, support vector machine and ridge regression, involve a three-step process. First, an input matrix constructed from the preprocessed training data set is trained to obtain a model. Second, test images are presented to the trained model to obtain chromaticity estimates of the illuminants present in the test images. Finally, a linear diagonal transformation is performed to obtain the color-corrected image.
The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison to the methods discussed in the literature review. The thesis concludes with a complete discussion and summary comparing the proposed approaches with other algorithms.
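    The three-step process described above (train, estimate, diagonally correct) can be sketched with synthetic data; the actual image features used in the thesis may differ. Ridge regression has the closed form W = (XᵀX + λI)⁻¹XᵀY, which is what makes it fast relative to an SVM.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_features = 50, 6

# Step 1: train. X holds per-image statistics (synthetic here); Y holds the
# (r, g) chromaticities of synthetic training illuminants.
X = rng.random((n_images, n_features))
illum = rng.random((n_images, 3)) + 0.2
Y = illum[:, :2] / illum.sum(axis=1, keepdims=True)

lam = 0.1  # ridge penalty; the solution is closed form, no iterative training
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Step 2: estimate illuminant chromaticity from a test image's features.
r, g = X[0] @ W
b = 1.0 - r - g

# Step 3: linear diagonal (von Kries-style) transform that scales each channel
# so the estimated illuminant maps to an equal-energy neutral.
image = rng.random((4, 4, 3))
corrected = image * ((1.0 / 3.0) / np.array([r, g, b]))
```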

    A Novel Analysis of Image Forgery Detection Using SVM

    This paper deals with basic information on face recognition and the parameters that affect face structure and face shape. For age estimation, an age function combined with an aging model is used. Face recognition is among the most difficult fields of pattern recognition; although research in this field has nearly reached maturity, new difficulties emerge over time, and the recognition problems caused by aging motivate the automatic aging technique for robust face recognition described briefly here. A vector-generating function, or feature vector of the real image, is then used to create synthesized feature vectors at a target age, with shape and texture vectors representing a facial image by projecting it into an eigenspace of shape or texture. Courtroom evidence images, graphics in newspapers and magazines, and digital images used by doctors are a few instances where pictures must be free of manipulation. Earlier SVM-based approaches failed in many instances of forged-image detection because a single feature extraction algorithm is not capable of capturing all the relevant characteristics of the images. To overcome the drawbacks of the existing algorithm, a meta-fusion of HOG and Sasi features can be used in the classifier, addressing the limitations of the SVM classifier.
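    A minimal sketch of the feature-level fusion idea named at the end: concatenate two descriptor vectors and feed the fused feature to a classifier. The descriptors, weights, and threshold below are synthetic placeholders; a real system would extract HOG and Sasi descriptors from the image and train the classifier (e.g. an SVM) on labeled data.

```python
import numpy as np

def fuse(hog_vec, sasi_vec):
    """Concatenate two descriptor vectors into one fused feature vector."""
    return np.concatenate([hog_vec, sasi_vec])

def decide(features, weights, bias):
    """Linear decision rule: positive score -> 'forged', else 'authentic'."""
    return "forged" if features @ weights + bias > 0 else "authentic"

hog = np.array([0.2, 0.4, 0.1])   # placeholder HOG descriptor
sasi = np.array([0.7, 0.3])       # placeholder Sasi texture descriptor
fused = fuse(hog, sasi)
w = np.array([1.0, -0.5, 0.3, 0.8, -0.2])  # placeholder learned weights
print(decide(fused, w, bias=-0.4))
```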