
    How Multi-Illuminant Scenes Affect Automatic Colour Balancing

    Many illumination-estimation methods are based on the assumption that the imaged scene is lit by a single source of illumination; however, this assumption is often violated in practice. We investigate the effect this has on a suite of illumination-estimation methods by manually sorting the Gehler et al. ColorChecker set of 568 images into the 310 that are approximately single-illuminant and the 258 that are clearly multiple-illuminant, and comparing the performance of the various methods on the two sets. The Grayworld, Spatio-Spectral-Statistics and Thin-Plate-Spline methods are relatively unaffected, but the other methods are all affected to varying degrees.
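The Grayworld method named above rests on a single assumption: the average colour of a scene is achromatic. A minimal sketch of that idea (not the benchmarked implementation; a float RGB NumPy image is assumed) looks like this:

```python
import numpy as np

def grayworld_gains(image):
    """Estimate per-channel gains under the Grayworld assumption:
    the average scene colour should be achromatic (grey)."""
    means = image.reshape(-1, 3).mean(axis=0)  # mean R, G, B
    gray = means.mean()                        # target achromatic level
    return gray / means                        # gain per channel

def apply_gains(image, gains):
    return np.clip(image * gains, 0.0, 1.0)

# A reddish-cast image: red channel uniformly stronger than green and blue.
img = np.ones((4, 4, 3)) * np.array([0.8, 0.5, 0.5])
gains = grayworld_gains(img)
balanced = apply_gains(img, gains)
```

Because the estimate depends only on the global mean, a second illuminant shifts the mean and biases the gains for the whole frame, which is the failure mode the study measures.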

    Determination of form factors of radial energy transmission

    This article examines the basic algorithms for computing form factors and illuminance in computer models of radiating systems. The possibility of accounting for self-shadowing of a complex object's surface is explored using the example of a spiral-shaped radiating source. The form factors of radiation from the source onto planes are determined by means of a suitable illuminance-estimation algorithm.
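The form factor between two small patches follows the standard radiometric formula dF = cos θ₁ cos θ₂ / (π r²) dA₂. A sketch of that generic formula (not the paper's spiral-source or self-shadowing algorithm) can be written as:

```python
import numpy as np

def differential_form_factor(p1, n1, p2, n2, dA2):
    """Form factor from a point on patch 1 to a small patch 2 of area dA2:
    dF = cos(theta1) * cos(theta2) / (pi * r^2) * dA2,
    where theta1, theta2 are the angles between the connecting ray
    and each patch normal (unit normals assumed)."""
    v = p2 - p1
    r2 = np.dot(v, v)
    r = np.sqrt(r2)
    cos1 = np.dot(n1, v) / r
    cos2 = np.dot(n2, -v) / r
    if cos1 <= 0 or cos2 <= 0:   # patches facing away: no energy transfer
        return 0.0
    return cos1 * cos2 / (np.pi * r2) * dA2

# Two parallel patches one unit apart, directly facing each other.
f = differential_form_factor(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                             np.array([0., 0., 1.]), np.array([0., 0., -1.]),
                             dA2=0.01)
```

Summing such differential terms over a discretised source surface, with a visibility test inserted before the cosine product, is the usual route to handling self-shadowing.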

    True colour retrieval from multiple illuminant scene’s image

    This paper presents an algorithm to retrieve the true colour of an image captured under multiple illuminants. The proposed method uses histogram analysis and the K-means++ clustering technique to split the input image into a number of segments. It then determines the normalised average absolute difference (NAAD) of each resulting segment's colour components. If the NAAD of a segment's component is greater than an empirically determined threshold, the segment is assumed not to represent a uniform colour area, and the segment's colour component is selected for use in image colour constancy adjustment. The initial colour balancing factor for each chosen segment's component is calculated using the Minkowski norm, based on the principle that the average values of an image's colour components are achromatic. Finally, colour constancy adjustment factors for each image pixel are calculated by fusing the initial colour constancy factors of the chosen segments, weighted by the normalised Euclidean distances of the pixel from the centroids of the selected segments. Experimental results using benchmark single- and multiple-illuminant image datasets show that the proposed method's images subjectively exhibit the highest colour constancy in the presence of multiple illuminants, and also when the image contains uniform colour areas.

    Colour Constancy For Non‐Uniform Illuminant using Image Textures

    Colour constancy (CC) is the ability to perceive the true colour of a scene in its image regardless of changes in the scene's illuminant. Colour constancy is a significant part of the digital image processing pipeline, particularly where the true colour of an object is needed. Most existing CC algorithms assume a uniform illuminant across the whole scene, which is not always the case; hence, their performance is degraded by the presence of multiple light sources. This paper presents a colour constancy algorithm using image texture for uniformly and non-uniformly lit scene images. The proposed algorithm applies the K-means algorithm to segment the input image based on its colour features. Each segment's texture is then extracted using entropy analysis. The colour information of the texture pixels is then used to calculate an initial colour constancy adjustment factor for each segment. Finally, the colour constancy adjustment factors for each pixel within the image are determined by fusing the colour constancy factors of all segments, regulated by the Euclidean distance of each pixel from the centre of the segments. Experimental results on both single- and multiple-illuminant image datasets show that the proposed algorithm outperforms existing state-of-the-art colour constancy algorithms, particularly when images are lit by multiple light sources.
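The entropy analysis used to separate textured pixels from flat ones is not specified in detail; a minimal sketch of the underlying measure, Shannon entropy of an intensity histogram (the paper may instead use a local windowed entropy filter), is:

```python
import numpy as np

def shannon_entropy(values, bins=32):
    """Shannon entropy (bits) of an intensity histogram over [0, 1].
    Textured regions spread intensity across many bins and score high;
    flat regions concentrate in one bin and score near zero."""
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

flat_segment = np.full(1000, 0.5)                          # untextured
textured_segment = np.random.default_rng(1).uniform(0, 1, 1000)
```

Ranking each K-means segment's pixels by such a score and keeping only the high-entropy ones is one way to realise the "texture pixels" selection the abstract describes.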

    Scene illumination classification based on histogram quartering of CIE-Y component

    Despite rapidly expanding research into various aspects of illumination estimation, there are only a limited number of studies addressing illumination classification. The increasing demand for colour constancy, its wide range of applications, and the strong dependence of colour constancy on illumination estimation make this research topic challenging. An accurate estimate of the illumination in an image provides a better basis for correction and ultimately leads to better colour constancy performance. The main purpose of any illumination-estimation algorithm is to produce an accurate numerical estimate of the illumination, and handling a large range of illumination with small variations within it is critical. Algorithms that perform such estimation typically require a great deal of computation, making them expensive in terms of computing resources, and several technical limitations stand in the way of an accurate estimate. In addition, the use of light temperature in all previous studies leads to complicated and computationally expensive methods. Classification, on the other hand, is appropriate for applications such as photography, where most images are captured under a small set of illuminants. This study aims to develop a framework for an image illumination classifier capable of classifying images under different illumination levels with acceptable accuracy. The method is tested on real scene images for which the illumination level was measured at capture time. It combines physics-based and data-driven (statistical) methods, categorising images using statistical features extracted from the image's illumination histogram, and the categorisation is validated against the measured illumination data of the scene.
    An improved algorithm for characterising histograms (histogram quartering) provides the advantage of high accuracy, and a neural network whose parameters are tuned for this specific application sorts the images into predefined groups. Performance and accuracy are evaluated using misclassification error percentages, Mean Square Error (MSE), regression analysis, and response time. The developed method yields a highly accurate and straightforward classification system for illumination. The results of this study strongly demonstrate that light intensity, with the help of a well-tuned neural network, can be used as the light property on which to establish a scene illumination classification system.
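The study does not spell out the histogram-quartering features; one plausible construction — splitting the CIE-Y histogram into four equal luminance bands and summarising each — can be sketched as follows (band statistics and bin count are illustrative assumptions):

```python
import numpy as np

def histogram_quarter_features(y_channel, bins=64):
    """Split a normalised CIE-Y histogram into four equal quarters and
    summarise each quarter by its pixel share and mean bin mass.
    An illustrative feature vector, not the study's exact features."""
    hist, _ = np.histogram(y_channel, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.sum()
    feats = []
    for quarter in np.split(hist, 4):
        feats.append(quarter.sum())    # fraction of pixels in this band
        feats.append(quarter.mean())   # average bin mass within the band
    return np.array(feats)

dark = np.random.default_rng(2).uniform(0.0, 0.2, 5000)    # low-light scene
bright = np.random.default_rng(3).uniform(0.8, 1.0, 5000)  # bright scene
f_dark = histogram_quarter_features(dark)
f_bright = histogram_quarter_features(bright)
```

Such an 8-dimensional vector is small enough to feed directly into a tuned feed-forward neural network for sorting images into predefined illumination groups.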

    Linear color correction for multiple illumination changes and non-overlapping cameras

    Many image processing methods, such as techniques for person re-identification, assume photometric constancy between different images. This study addresses the correction of photometric variations, using changes in background areas to correct foreground areas. The authors assume a multiple-light-source model in which all light sources can have different colours and change over time. In training mode, they learn per-location relations between foreground and background colour intensities. In correction mode, they apply a double linear correction model based on the learned relations; this includes a dynamic local illumination correction mapping as well as an inter-camera mapping. The illumination correction is evaluated by computing the similarity between two images using the earth mover's distance. The results are compared with a representative auto-exposure algorithm from the recent literature and a colour correction algorithm based on inverse-intensity chromaticity. Especially in complex scenarios, the authors' method outperforms these state-of-the-art algorithms.
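A single stage of such a linear correction — fitting per-channel gain and offset between corresponding background pixels of two views, then applying the map to other pixels — can be sketched as follows (a generic least-squares fit, not the authors' full double-correction pipeline):

```python
import numpy as np

def fit_linear_channel_map(src, dst):
    """Fit per-channel gain and offset (a, b) so that dst ~= a*src + b,
    by least squares over corresponding background pixels (N x 3 arrays)."""
    params = []
    for c in range(3):
        A = np.stack([src[:, c], np.ones(len(src))], axis=1)
        coef, *_ = np.linalg.lstsq(A, dst[:, c], rcond=None)
        params.append(coef)
    return np.array(params)            # shape (3, 2): per-channel (a, b)

def apply_map(pixels, params):
    return pixels * params[:, 0] + params[:, 1]

# Simulate an illumination change between two cameras with known gain/offset.
rng = np.random.default_rng(4)
bg_cam1 = rng.uniform(0.1, 0.9, (500, 3))
true_gain = np.array([1.10, 0.90, 1.05])
true_offset = np.array([0.02, -0.01, 0.0])
bg_cam2 = bg_cam1 * true_gain + true_offset
params = fit_linear_channel_map(bg_cam1, bg_cam2)
```

Chaining two such maps — one tracking local illumination over time, one bridging the two cameras — mirrors the structure of the double linear correction described above.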