133 research outputs found

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as the pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, the background for estimation of the camera pose, which is necessary for calibrating the sensor system. Experiments suggest a significant gain in coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
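    As a rough illustration of the deflectometry principle the abstract builds on (not code from the paper), the sketch below recovers a surface normal from a known viewing direction and the known direction towards the decoded pattern point via the mirror-reflection half-vector; the function name and example vectors are hypothetical.

```python
import numpy as np

def normal_from_deflectometry(view_dir, pattern_dir):
    """Deflectometry basics: a specular surface reflects the camera ray onto a
    known point of the displayed pattern, so the surface normal is the
    normalised half-vector between the direction towards the camera and the
    direction towards the decoded pattern point. Both inputs point away from
    the surface point."""
    v = np.asarray(view_dir, dtype=float)
    p = np.asarray(pattern_dir, dtype=float)
    v = v / np.linalg.norm(v)
    p = p / np.linalg.norm(p)
    n = v + p
    return n / np.linalg.norm(n)

# Hypothetical example: camera straight above, pattern point off to the side.
print(normal_from_deflectometry([0.0, 0.0, 1.0], [0.6, 0.0, 0.8]))
```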

    Appearance estimation and reconstruction of glossy object surfaces based on the dichromatic reflection model

    We propose an approach for estimating and reconstructing the material appearance of objects based on spectral image data acquired in complex illumination environments with multiple light sources. The object appearance can be constructed with various material properties, such as spectral reflectance, glossiness, and matteness, under different geometric and spectral illumination conditions. The objects are assumed to be made of an inhomogeneous dielectric material with gloss or specularity. The color signals from the object surface are described by the standard dichromatic reflection model, which consists of diffuse and specular reflections, where the specular component has the same spectral composition as the illuminant. The overall appearance of objects is determined by a combination of chromatic factors, based on the reflectance and illuminant spectra, and shading terms, which depend on the geometry of the surface and the illumination. Therefore, the appearance of a novel object can be reconstructed by modifying the chromatic factors and shading terms. The method for appearance estimation and reconstruction comprises four steps: (1) illuminant estimation, (2) spectral reflectance estimation, (3) shading term estimation and region segmentation, and (4) appearance reconstruction based on the reflection model. The proposed approach is validated in an experiment in which objects of different materials are illuminated using different light sources. We demonstrate typical reconstruction results with novel object appearances.
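    The dichromatic decomposition referred to above can be written compactly. The following minimal sketch, with hypothetical sampled spectra and illustrative shading factors, evaluates the colour signal as a diffuse term (reflectance times illuminant) plus a specular term sharing the illuminant's spectral composition.

```python
import numpy as np

def dichromatic_signal(reflectance, illuminant, m_diffuse, m_specular):
    """Standard dichromatic reflection model: the diffuse term is the product
    of surface reflectance and illuminant spectrum, the specular term has the
    illuminant's spectral composition; m_diffuse and m_specular are the
    geometry-dependent shading factors mentioned in the abstract."""
    r = np.asarray(reflectance, dtype=float)
    e = np.asarray(illuminant, dtype=float)
    return m_diffuse * r * e + m_specular * e

# Hypothetical 5-band spectra sampled across the visible range.
reflectance = np.array([0.2, 0.4, 0.6, 0.5, 0.3])
illuminant = np.array([0.9, 1.0, 1.0, 0.95, 0.8])
print(dichromatic_signal(reflectance, illuminant, m_diffuse=0.7, m_specular=0.1))
```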

    Recovering light directions and camera poses from a single sphere

    This paper introduces a novel method for recovering both the light directions and camera poses from a single sphere. Traditional methods for estimating light directions using spheres either assume that both the radius and center of the sphere are known precisely, or depend on multiple calibrated views to recover these parameters. It will be shown in this paper that the light directions can be uniquely determined from the specular highlights observed in a single view of a sphere without knowing or recovering the exact radius and center of the sphere. Furthermore, if the sphere is observed by multiple cameras, its images will uniquely define the translation vector of each camera from a common world origin centered at the sphere center. It will be shown that the relative rotations between the cameras can be recovered using two or more light directions estimated from each view. Closed-form solutions for recovering the light directions and camera poses are presented, and experimental results on both synthetic and real data show the practicality of the proposed method. Published in the proceedings of the 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, 12-18 October 2008: Lecture Notes in Computer Science, v. 5302, pt. 1, p. 631-64. © 2008 Springer Berlin Heidelberg.
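    The core geometric relation exploited here, that a specular highlight ties the light direction to the surface normal and viewing direction by mirror reflection, can be sketched as below. This is only the underlying identity, not the paper's closed-form solution; the function name and vectors are hypothetical.

```python
import numpy as np

def light_direction_from_highlight(normal, view_dir):
    """Mirror-reflection relation at a specular highlight: with both vectors
    pointing away from the surface point, the light direction is the view
    direction reflected about the surface normal. The paper additionally
    recovers the highlight's normal from the sphere's image contour."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    v = np.asarray(view_dir, dtype=float)
    v = v / np.linalg.norm(v)
    return 2.0 * np.dot(n, v) * n - v

# Hypothetical example: normal tilted towards the camera axis.
print(light_direction_from_highlight([0.3, 0.0, 1.0], [0.0, 0.0, 1.0]))
```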

    Estimating varying illuminant colours in images

    Colour constancy is the ability to perceive colours independently of varying illumination colour. A human could tell that a white t-shirt was indeed white, even under blue or red illumination, although these illuminant colours would actually make the reflected colour of the t-shirt bluish or reddish. Humans can, to a good extent, see colours constantly. Getting a computer to achieve the same goal with a high level of accuracy has proven problematic, particularly if we want to use colour as a main cue in object recognition: if we trained a system on object colours under one illuminant and then tried to recognise the objects under another illuminant, the system would likely fail. Early colour constancy algorithms assumed that an image contains a single uniform illuminant. They would then attempt to estimate the colour of the illuminant and apply a single correction to the entire image. It is not hard to imagine a scenario where a scene is lit by more than one illuminant. In an outdoor scene on a typical summer's day, we would see objects brightly lit by sunlight and others in shadow. The ambient light in shadow is known to be a different colour from direct sunlight (bluish and yellowish respectively), which means that there are at least two illuminant colours to be recovered in this scene. This thesis focuses on the harder case of recovering the illuminant colours when more than one is present in a scene. Early work on this subject made the empirical observation that illuminant colours are actually very predictable compared to surface colours: real-world illuminants tend not to be greens or purples, but rather blues, yellows and reds. We can think of an illuminant mapping as the function which takes a scene from some unknown illuminant to a known illuminant, and we model this mapping as a simple multiplication of the Red, Green and Blue channels of a pixel. It turns out that the set of realistic mappings approximately lies on a line segment in chromaticity space. We propose an algorithm that uses this knowledge and only requires two pixels of the same surface under two illuminants as input; we can then recover an estimate of the surface reflectance colour, and subsequently the two illuminants. Additionally, we propose a more robust algorithm that can use varying surface reflectance data in a scene. One of the most successful colour constancy algorithms, known as Gamut Mapping, was developed by Forsyth (1990). He argued that the illuminant colour of a scene naturally constrains the surface colours that are possible to perceive; we could not perceive a very chromatic red under a deep blue illuminant. We introduce our multiple-illuminant constraint in a Gamut Mapping context and are able to further improve its performance. The final piece of work proposes a method for detecting shadow edges, so that we can automatically recover estimates of the illuminant colours in and out of shadow. We also formulate our illuminant estimation algorithm in a voting scheme that probabilistically chooses an illuminant estimate on both sides of the shadow edge. We test the performance of all our algorithms experimentally on well-known datasets, as well as on our newly proposed shadow datasets.
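    As a toy illustration of the two-pixel idea described above, under the diagonal (per-channel multiplication) illuminant model the unknown surface reflectance cancels out of the channel-wise ratio of the two observations. The sketch below computes that relative mapping; the names and values are hypothetical, and the thesis's additional line-segment constraint on plausible mappings is not included.

```python
import numpy as np

def relative_illuminant_mapping(pixel_a, pixel_b):
    """Per-channel ratio mapping pixel_a (a surface under illuminant A) onto
    pixel_b (the same surface under illuminant B). Under the diagonal model,
    each observation is reflectance * illuminant per channel, so the ratio
    cancels the unknown reflectance and depends only on the two illuminants."""
    a = np.asarray(pixel_a, dtype=float)
    b = np.asarray(pixel_b, dtype=float)
    return b / a

# Hypothetical values: one whitish surface under a yellowish and a bluish light.
under_yellow = np.array([0.80, 0.75, 0.55])
under_blue = np.array([0.55, 0.70, 0.85])
print(relative_illuminant_mapping(under_yellow, under_blue))
```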

    Estimation of illuminants from color signals of illuminated objects

    Color constancy is the ability of the human visual system to discount the effect of the illumination and to assign approximately constant color descriptions to objects. This ability has long been studied and widely applied to many areas such as color reproduction and machine vision, especially with the development of digital color processing. This thesis makes some improvements in illuminant estimation and computational color constancy based on the study and testing of existing algorithms. During recent years, it has been noticed that illuminant estimation based on gamut comparison is efficient and simple to implement. Although numerous investigations have been done in this field, there are still some deficiencies. A large part of this thesis is work in the area of illuminant estimation through gamut comparison. Noting the importance of color lightness in gamut comparison, and also in order to simplify three-dimensional gamut calculation, a new illuminant estimation method is proposed through gamut comparison at separated lightness levels. Maximum color separation is a color constancy method based on the assumption that colors in a scene attain the largest gamut area under white illumination; the method is further derived and improved in this thesis to make it applicable and efficient. In addition, some intrinsic questions in gamut comparison methods, for example the relationship between the color space and the application of gamuts or probability distributions, were investigated. Color constancy methods based on spectral recovery have the limitation that there is no effective way to confine the range of object spectral reflectance. In this thesis, a new constraint on spectral reflectance, based on the relative ratios of the parameters from a principal component analysis (PCA) decomposition, is proposed. The proposed constraint is applied to illuminant detection methods as a metric on the recovered spectral reflectance. Because of the importance of the sensor sensitivities and their wide variation, the influence of the sensor sensitivities on different kinds of illuminant estimation methods was also studied. The stability of the estimation methods under incorrect sensor information was tested, suggesting possible solutions to illuminant estimation for images with unknown sources. In addition, with the development of multi-channel imaging, some research on illuminant estimation for multi-channel images, covering both correlated color temperature (CCT) estimation and illuminant spectral recovery, was performed. All the improvements and newly proposed methods in this thesis are tested and compared with the best-performing existing methods, on both synthetic data and real images. The comparison verified the high efficiency and implementation simplicity of the proposed methods.
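    One ingredient mentioned above, representing spectral reflectances by a few PCA components so that coefficient ratios can act as a plausibility check, can be sketched as follows. The basis size, data layout, and function name are assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

def pca_reflectance_model(reflectances, n_components=3):
    """Fit a low-dimensional PCA basis to measured spectral reflectances
    (rows = samples, columns = wavelength bands). A candidate reflectance can
    then be expressed as mean + sum_i w_i * basis_i, and the relative ratios
    of the coefficients w_i can serve as a constraint on recovered spectra,
    in the spirit of the approach described in the abstract."""
    R = np.asarray(reflectances, dtype=float)
    mean = R.mean(axis=0)
    # SVD of the centred data yields the principal components as rows of vt.
    _, _, vt = np.linalg.svd(R - mean, full_matrices=False)
    basis = vt[:n_components]
    coeffs = (R - mean) @ basis.T
    return mean, basis, coeffs

# Hypothetical usage with random stand-in spectra (10 samples, 31 bands).
mean, basis, coeffs = pca_reflectance_model(np.random.rand(10, 31))
print(coeffs.shape)  # (10, 3)
```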

    Colour Constancy: Cues, Priors and Development

    Colour is crucial for detecting, recognising, and interacting with objects. However, the reflected wavelength of light ("colour") varies vastly depending on the illumination. Whilst adults can judge colours as relatively invariant under changing illuminations (colour constancy), much remains unknown, which this thesis aims to resolve. Firstly, previous studies have shown adults can use certain cues to estimate surface colour. However, one proposed cue - specular highlights - has been little researched, so it is explored here. Secondly, the existing data on a daylight prior for colour constancy remain inconclusive, so we aimed to investigate this further. Finally, no studies have investigated the development of colour constancy during childhood, so the third aim is to determine at what age colour constancy becomes adult-like. In the introduction, existing research is discussed, including cues to the illuminant, daylight priors, and the development of perceptual constancies. The second chapter contains three experiments conducted to determine whether adults can use a specular highlight cue and/or a daylight prior to aid colour constancy. Results showed adults can use specular highlights when other cues are weakened; evidence for a daylight prior was weak. In the third chapter the development of colour constancy during childhood was investigated using a novel child-friendly task. Children had higher constancy than adults, and evidence for a daylight prior was mixed. The final experimental chapter used the task developed in Chapter 3 to ask whether children can use specular highlights as a cue for colour constancy. Testing was halted early due to the coronavirus pandemic, yet the data obtained suggest that children are negatively impacted by specular highlights. Finally, in the general discussion, the results of the six experiments are brought together to draw conclusions regarding the use of cues and priors and the development of colour constancy. Implications and future directions for research are discussed.

    Deep Reflectance Maps

    Undoing the image formation process and thereby decomposing appearance into its intrinsic properties is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by introducing additional supervision in an indirect scheme that first predicts surface orientation and then predicts the reflectance map by learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images. Project page: http://homes.esat.kuleuven.be/~krematas/DRM
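    Purely to make the end-to-end formulation concrete, here is a minimal encoder-decoder sketch in PyTorch that maps an RGB image crop to a same-resolution RGB output standing in for a reflectance map. The layer sizes and overall design are invented for illustration and are not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ReflectanceMapNet(nn.Module):
    """Illustrative stand-in only: an image-to-image network in the spirit of
    the direct prediction described in the abstract, not the paper's model."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        # Encode the object crop, then decode to an RGB map of the same size.
        return self.decoder(self.encoder(image))

# Example: a batch of 64x64 crops produces 64x64 output maps.
maps = ReflectanceMapNet()(torch.rand(1, 3, 64, 64))
print(maps.shape)  # torch.Size([1, 3, 64, 64])
```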

    Ridge Regression Approach to Color Constancy

    This thesis presents work on color constancy and its application in the field of computer vision. Color constancy is the phenomenon of representing (visualizing) the reflectance properties of a scene independently of the illumination spectrum. The motivation behind this work is twofold. The primary motivation is to seek consistency and stability in color reproduction and algorithm performance, respectively, because color is used as one of the important features in many computer vision applications; consistency of the color features is therefore essential for application success. The second motivation is to reduce computational complexity without sacrificing the primary motivation. This work presents a machine learning approach to color constancy: an empirical model is developed from training data. Neural networks and support vector machines are two prominent nonlinear learning methods. The work on support vector machine based color constancy shows its superior performance over neural network based color constancy in terms of stability, but the support vector machine is a time-consuming method. An alternative to the support vector machine is a simple, fast and analytically solvable linear modeling technique known as ridge regression. It learns the dependency between the surface reflectance and the illumination from a presented training sample of data. Ridge regression answers the twofold motivation behind this work, i.e., it is a stable and computationally simple approach. The proposed algorithms, support vector machine and ridge regression, involve a three-step process. First, an input matrix constructed from the preprocessed training data set is used to train a model. Second, test images are presented to the trained model to obtain chromaticity estimates of the illuminants present in the test images. Finally, a linear diagonal transformation is performed to obtain the color-corrected image. The results show the effectiveness of the proposed algorithms on both calibrated and uncalibrated data sets in comparison to the methods discussed in the literature review. Finally, the thesis concludes with a complete discussion and summary of the comparison between the proposed approaches and other algorithms.
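    A minimal sketch of the three-step pipeline described above, under the assumption (ours, not the thesis's) that each image is summarised by some fixed-length feature vector such as a chromaticity histogram: a closed-form ridge fit, illuminant prediction for a test feature, and a diagonal (von Kries style) correction.

```python
import numpy as np

def fit_ridge(X, Y, lam=1e-3):
    """Closed-form ridge regression: W = (X^T X + lam*I)^(-1) X^T Y.
    Rows of X are per-image training features; rows of Y are the known
    illuminant colours of those training images."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def estimate_illuminant(W, x_test):
    """Step 2: predict the illuminant colour of a test image from its feature."""
    return x_test @ W

def diagonal_correction(image_rgb, illuminant_rgb):
    """Step 3: diagonal transform towards a white (1, 1, 1) illuminant;
    the per-channel gains broadcast over the image's last (channel) axis."""
    gains = 1.0 / np.asarray(illuminant_rgb, dtype=float)
    return np.asarray(image_rgb, dtype=float) * gains

# Hypothetical usage with random stand-in data: 50 images, 8-D features,
# RGB illuminant labels.
X, Y = np.random.rand(50, 8), np.random.rand(50, 3)
W = fit_ridge(X, Y)
est = estimate_illuminant(W, np.random.rand(8))
corrected = diagonal_correction(np.random.rand(4, 4, 3), est)
```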

    Determinants of colour constancy

    Colour constancy describes the ability of our visual system to keep colour percepts stable through illumination changes. This is an outstanding feat given that surface and illuminant properties are conflated in the retinal image. Still, in our everyday lives we are able to attribute stable colour labels to objects, making communication economic and efficient. Past research shows colour constancy to be imperfect, compensating for between 40% and 80% of the illumination change. While different constancy determinants have been suggested, no carefully controlled study has shown perfect constancy. The first study presented here addresses the issue of imperfect constancy by investigating colour constancy in a cue-rich environment, using a task that resembles our everyday experience with colours. Participants were asked to recall the colour of unique personal objects in a natural environment under four chromatic illuminations. This approach yielded perfect colour constancy. The second study investigated the relation between illumination discrimination and chromatic detection. Recent studies using an illumination discrimination paradigm suggest that colour constancy is optimized for bluish daylight illuminations. Because it is not clear whether illumination discrimination is directly related to colour constancy or is instead explained by sensitivity to changes in chromaticity of different hues, thresholds for illumination discrimination and chromatic detection were compared for the same 12 illumination hues. While the reported blue bias could be replicated, thresholds for illumination discrimination and chromatic detection were highly related, indicating that lower sensitivity to bluish hues is not exclusive to illumination discrimination. Accompanying the second study, the third study investigated the distribution of colour constancy across 40 chromatic illuminations of different hue using achromatic adjustments and colour naming. These measurements were compared to several determinants of colour constancy, including the daylight locus, colour categories, illumination discrimination, chromatic detection, relational colour constancy and metameric mismatching. In accordance with the observations in study 2, achromatic adjustments revealed a bias towards bluish daylight illumination. This blue bias and naming consistency explained most of the variance in achromatic adjustments, while illumination discrimination was not directly related to colour constancy. The fourth study examined colour memory biases. Past research shows that colours of objects are remembered as being more saturated than they are perceived. These works often used natural objects that exist in a variety of colours and hues, such as grass or bananas. The approach presented here directly compared perceived and memorized colours for the unique objects also used in the first study, and confirmed the previous finding that, on average, objects were remembered as more saturated than they were perceived.
