    Linear color correction for multiple illumination changes and non-overlapping cameras

    Many image processing methods, such as techniques for people re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes in background areas to correct foreground areas. The authors assume a multiple-light-source model in which the light sources can have different colours and can change over time. In training mode, the authors learn per-location relations between foreground and background colour intensities. In correction mode, the authors apply a double linear correction model based on the learned relations; this comprises a dynamic local illumination correction mapping as well as an inter-camera mapping. The authors evaluate their illumination correction by computing the similarity between two images based on the earth mover's distance, and compare the results with a representative auto-exposure algorithm from the recent literature and a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
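    The abstract does not give implementation details, so the following is only a minimal sketch of the general idea: a per-channel linear (gain/offset) correction fitted by least squares from corresponding background pixels, and a per-channel earth mover's (Wasserstein) distance as the image similarity score. The function names and the fitting choice are illustrative assumptions, not the authors' double linear model.

```python
# Illustrative sketch (not the authors' implementation): fit a per-channel linear
# gain/offset from matching background pixels of two frames, apply it to the image,
# and compare results with a per-channel earth mover's distance.
import numpy as np
from scipy.stats import wasserstein_distance

def fit_linear_correction(bg_src, bg_ref):
    """Least-squares gain and offset per colour channel from corresponding
    background pixel samples. bg_src, bg_ref: (N, 3) float arrays."""
    gains, offsets = np.empty(3), np.empty(3)
    for c in range(3):
        A = np.stack([bg_src[:, c], np.ones(len(bg_src))], axis=1)
        sol, *_ = np.linalg.lstsq(A, bg_ref[:, c], rcond=None)
        gains[c], offsets[c] = sol
    return gains, offsets

def apply_linear_correction(img, gains, offsets):
    """Apply the fitted per-channel linear map to an (H, W, 3) image in [0, 1]."""
    return np.clip(img * gains + offsets, 0.0, 1.0)

def emd_similarity(img_a, img_b):
    """Mean per-channel earth mover's distance between the pixel value
    distributions of two images; lower means more similar."""
    return float(np.mean([wasserstein_distance(img_a[..., c].ravel(),
                                               img_b[..., c].ravel())
                          for c in range(3)]))
```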

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no current imaging technology has been able to reproduce its capabilities accurately. This gap has become a crucial shortcoming in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis capabilities. Over the decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partly due to the wide range of computer vision applications that require colour constancy and high-dynamic-range imaging, and to the difficulty of matching the human visual system's colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer-electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this set of image-processing algorithms, when used within an image signal processor, enables digital cameras to mimic the dynamic range and colour constancy capabilities of the human visual system: the ultimate goal of any state-of-the-art technique or commercial imaging device.
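    The thesis abstract does not describe its algorithms in detail. As a hedged illustration of the kind of in-ISP colour constancy correction it discusses, the sketch below applies the classical gray-world assumption to a linear RGB image; this is a standard baseline, not the method proposed in the thesis.

```python
# Gray-world white balance: a standard colour-constancy baseline shown only to
# illustrate the kind of in-ISP correction discussed (not the thesis's algorithm).
import numpy as np

def gray_world_balance(img_linear):
    """img_linear: (H, W, 3) linear RGB in [0, 1]. Scales each channel so the
    channel means become equal, assuming the average scene colour is grey."""
    means = img_linear.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(img_linear * gains, 0.0, 1.0)
```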

    Preferred color correction for mixed taking-illuminant placement and cropping

    The growth of automatic layout capabilities for publications such as photo books and image-sharing websites enables consumers to create personalized presentations without much experience or the use of professional page-design software. Automated color correction of images has been well studied over the years, but the methodology for determining how to correct images has almost exclusively treated images as independent, indivisible objects. In modern documents, such as photo books or web-sharing sites, images are automatically placed on pages in juxtaposition to others, and some images are automatically cropped. Understanding how color correction preferences are affected by such complex arrangements has therefore become important. A small number of photographs taken under a variety of illumination conditions were presented to observers both individually and in combinations; cropped and uncropped versions of the shots were included. Observers set their preferred color balance and chroma for the images within the experiment. Analyses point toward a preference for higher chroma for most cropped images compared with the settings chosen for the full-extent images. It is also shown that observers make different color balance choices when correcting an image in isolation than when correcting the same image in the presence of a second shot taken under a different illuminant. Across 84 responses, approximately 60% showed a tendency to choose image white points farther from the display white point when multiple images from different taking illuminants were presented simultaneously than when the images were adjusted in isolation on the same display. Observers were also shown to preserve the relative white-point bias of the original taking illuminants.

    A testing procedure to characterize color and spatial quality of digital cameras used to image cultural heritage

    A testing procedure for characterizing both the color and spatial image quality of the trichromatic digital cameras used to photograph paintings in cultural heritage institutions is described. The testing procedure is target-based, thus providing objective measures of quality. The majority of the procedure followed current standards from national and international organizations such as ANSI, ISO, and IEC. The procedure was developed in an academic research laboratory and used to benchmark four representative American museums' digital-camera systems and workflows. The quality parameters tested included system spatial uniformity, tone reproduction, color reproduction accuracy, noise, dynamic range, spatial cross-talk, spatial frequency response, color-channel registration, and depth of field. In addition, two paintings were imaged and processed through each museum's normal digital workflow. The results of the four case studies showed many dissimilarities among the digital-camera systems and workflows of American museums, resulting in a significant range in the archival quality of their digital masters.
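    The abstract lists color reproduction accuracy among the tested parameters without naming a metric. The sketch below shows one common, hypothetical way such accuracy is summarised: the mean and maximum CIE ΔE*ab (1976) between the camera-reproduced CIELAB values of a test target's patches and reference measurements. It is not necessarily the metric used in the described procedure.

```python
# Hypothetical colour-reproduction-accuracy summary: mean and maximum CIE ΔE*ab
# (1976) between camera-reproduced and reference CIELAB values of target patches.
import numpy as np

def delta_e_ab(lab_measured, lab_reference):
    """lab_measured, lab_reference: (N, 3) arrays of CIELAB values per patch.
    ΔE*ab (1976) is the Euclidean distance in CIELAB."""
    return np.linalg.norm(np.asarray(lab_measured, float) -
                          np.asarray(lab_reference, float), axis=1)

def summarize_accuracy(lab_measured, lab_reference):
    de = delta_e_ab(lab_measured, lab_reference)
    return de.mean(), de.max()
```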

    Mapping colour in image stitching applications

    Digitally, panoramic pictures can be assembled from several individual, overlapping photographs. While the geometric alignment of these photographs has received a lot of attention from the computer vision community, the mapping of colour, i.e. the correction of colour mismatches, has not been studied extensively. In this article, we analyze the colour rendering of today's digital photographic systems and propose a method to correct for colour differences. The colour correction consists in retrieving linearized, relative scene-referred data from uncalibrated images by estimating the Opto-Electronic Conversion Function (OECF) and correcting for exposure, white-point, and vignetting variations between the individual pictures. Different OECF estimation methods are presented and evaluated in conjunction with motion estimation. The resulting panoramas, shown on examples using slides and digital photographs, yield much-improved visual quality compared with stitching using only motion estimation. Additionally, we show that colour correction can also improve the geometric alignment.
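    As a simplified, assumption-laden sketch of the linearise-and-match idea: invert an assumed gamma-type OECF to obtain relative scene-referred values, then equalise exposure between two overlapping pictures from the ratio of their means in the overlap region. The actual method estimates the OECF from the images themselves and additionally corrects white point and vignetting.

```python
# Simplified sketch: assumed-gamma linearisation plus exposure matching over the
# overlap region. The paper estimates the real OECF rather than assuming a gamma.
import numpy as np

def inverse_oecf(img, gamma=2.2):
    """Approximate linearisation via an assumed gamma curve."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def match_exposure(lin_ref, lin_other, overlap_mask):
    """Scale the second linearised image so its mean over the overlap region
    (a boolean (H, W) mask) matches that of the reference image."""
    gain = lin_ref[overlap_mask].mean() / max(lin_other[overlap_mask].mean(), 1e-6)
    return lin_other * gain
```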

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, fusion of the multispectral images, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and applied to spectral narrow-band face images for comparison with conventional images. Physics-based weighted fusion and illumination-adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, in which a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method is shown to be exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The selected bands outperform the conventional images by up to 15%. The significant performance improvement obtained via distance-based and complexity-guided distance-based band selection shows that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms are shown to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image-processing algorithms open a new avenue for automatic face recognition, with better recognition performance than conventional systems under varying illumination.
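    The fusion rules themselves are not spelled out in the abstract. The sketch below shows a generic normalised weighted fusion of narrow-band images into a single image for recognition; the dissertation's physics-based and illumination-adjustment fusions derive the weights from spectral and illumination data, whereas here the weights are simply supplied by the caller.

```python
# Generic weighted fusion of spectral narrow-band images into one image.
# The weights are given as an argument; the dissertation derives them from
# physics-based illumination and sensor information.
import numpy as np

def fuse_bands(bands, weights):
    """bands: (K, H, W) narrow-band images; weights: (K,) non-negative weights.
    Returns the normalised weighted sum, shape (H, W)."""
    w = np.asarray(weights, float)
    w = w / max(w.sum(), 1e-9)
    return np.tensordot(w, np.asarray(bands, float), axes=1)
```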

    Image Color Correction, Enhancement, and Editing

    This thesis presents methods and approaches to image color correction, color enhancement, and color editing. To begin, we study the color correction problem from the standpoint of the camera's image signal processor (ISP). A camera's ISP is hardware that applies a series of in-camera image processing and color manipulation steps, many of which are nonlinear in nature, to render the initial sensor image to its final photo-finished representation saved in the 8-bit standard RGB (sRGB) color space. As white balance (WB) is one of the major procedures applied by the ISP for color correction, this thesis presents two different methods for ISP white balancing. Afterwards, we discuss another scenario of correcting and editing image colors, presenting a set of methods to correct and edit WB settings for images that have been improperly white-balanced by the ISP. Then, we explore another factor with a significant impact on the quality of camera-rendered colors, exposure, and outline two different methods to correct exposure errors in camera-rendered images. Lastly, we discuss post-capture automatic color editing and manipulation; in particular, we propose auto image recoloring methods to generate different realistic versions of the same camera-rendered image with new colors. Through extensive evaluations, we demonstrate that our methods provide superior solutions compared to existing alternatives targeting color correction, color enhancement, and color editing.
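    The thesis corrects exposure errors with learned methods that the abstract does not detail. As a hedged illustration of the underlying operation only, the sketch below applies a simple exposure-value shift to a camera-rendered sRGB image by decoding to linear RGB, scaling, and re-encoding; it is not the thesis's method.

```python
# Minimal illustration of post-capture exposure adjustment on a camera-rendered
# sRGB image: decode to linear, apply an exposure-value shift, re-encode.
import numpy as np

def srgb_to_linear(srgb):
    srgb = np.clip(srgb, 0.0, 1.0)
    return np.where(srgb <= 0.04045, srgb / 12.92, ((srgb + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(lin):
    lin = np.clip(lin, 0.0, 1.0)
    return np.where(lin <= 0.0031308, lin * 12.92, 1.055 * lin ** (1 / 2.4) - 0.055)

def adjust_exposure(srgb_img, ev_shift):
    """ev_shift: exposure change in stops (e.g. +1.0 doubles linear intensity)."""
    return linear_to_srgb(srgb_to_linear(srgb_img) * (2.0 ** ev_shift))
```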

    Colour constancy in simple and complex scenes

    Colour constancy is defined as the ability to perceive the surface colours of objects within scenes as approximately constant through changes in scene illumination. Colour constancy in real life functions so seamlessly that most people do not realise that the colour of the light emanating from an object can change markedly throughout the day. Constancy measurements made in simple scenes constructed from flat coloured patches do not produce constancy of this high degree. The question that must be asked is: what are the features of everyday scenes that improve constancy? A novel technique for testing colour constancy is presented, together with measurements of constancy in simple and complex scenes. More specifically, matching experiments are performed for patches against uniform and multi-patch backgrounds, the latter of which provide colour contrast. Objects created by the addition of shape and 3-D shading information are also matched against backgrounds consisting of matte reflecting patches. In the final set of experiments, observers match detailed depictions of objects, rich in chromatic contrast, shading, mutual illumination, and other real-life features, within depictions of real-life scenes. The results show similar performance across the conditions that contain chromatic contrast, although some uncertainty remains as to whether the results are indicative of human colour constancy performance or of sensory match capabilities. An interesting division exists between patch matches performed against uniform and multi-patch backgrounds, manifested as a shift in CIE xy space. A simple model of early chromatic processes is proposed and examined in the context of the results.