
    Affordable spectral measurements of translucent materials

    We present a spectral measurement approach for the bulk optical properties of translucent materials that uses only low-cost components. We focus on the translucent inks used in full-color 3D printing and develop a technique with high spectral resolution, which is important for accurate color reproduction. To this end, we develop a new acquisition technique for the three unknown material parameters, namely the absorption coefficient, the scattering coefficient, and the phase function anisotropy factor, that requires only three point measurements with a spectrometer. In essence, our technique is based on a three-dimensional appearance map, computed using Monte Carlo rendering, that allows conversion between the three observables and the material parameters. Our measurement setup works without laboratory equipment or expensive optical components. We validate our results on a 3D-printed color checker with various ink combinations. Our work paves a path toward more accurate appearance modeling and fabrication, even for low-budget environments or affordable embedding into other devices.
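    The inversion from three observables back to three material parameters via a precomputed appearance map can be sketched as a table lookup. Everything below is hypothetical: the analytic stand-in replaces the Monte Carlo rendering used in the paper, and the grid ranges are invented for illustration.

```python
import numpy as np

# Hypothetical stand-in for the Monte Carlo appearance map: maps material
# parameters (absorption, scattering, anisotropy) to three observables.
def simulate_observables(sigma_a, sigma_s, g):
    # Placeholder analytic model; the paper computes this map with
    # Monte Carlo rendering instead.
    albedo = sigma_s / (sigma_a + sigma_s)
    return np.array([albedo, (1.0 - g) * sigma_s, np.exp(-sigma_a)])

# Build a lookup table over a parameter grid (ranges are illustrative).
grid = [(a, s, g)
        for a in np.linspace(0.01, 1.0, 20)
        for s in np.linspace(0.1, 10.0, 20)
        for g in np.linspace(0.0, 0.9, 10)]
table = np.array([simulate_observables(*p) for p in grid])

def invert(observed):
    """Nearest-neighbour inversion from observables back to parameters."""
    i = np.argmin(np.sum((table - observed) ** 2, axis=1))
    return grid[i]

# Round-trip check: parameters on the grid are recovered exactly.
assert invert(simulate_observables(0.01, 0.1, 0.0)) == (0.01, 0.1, 0.0)
```

    A denser grid or interpolation between grid points would trade memory for accuracy; the nearest-neighbour step is only the simplest possible inversion.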

    Spectral Ray Tracing for Generation of Spatial Color Constancy Training Data

    Computational color constancy is a fundamental step in digital cameras that estimates the chromaticity of the illumination. Most automatic white balance (AWB) algorithms that perform computational color constancy assume that there is a single illuminant in the scene. This widely known assumption is frequently violated in the real world. Arguably, the main reason for the single-illuminant assumption is the limited number of available mixed-illuminant datasets and the laborious annotation process. Annotating mixed-illuminant images is orders of magnitude more laborious than the single-illuminant case, because the spatial complexity requires pixel-wise ground-truth illumination chromaticity for the various ratios in which the illuminants mix. Spectral ray tracing is a 3D rendering method that creates physically realistic images and animations using spectral representations of materials and light sources rather than a trichromatic representation such as red-green-blue (RGB). In this thesis, this physically correct image-signal generation method is used to create a spatially varying mixed-illuminant image dataset with pixel-wise ground-truth illumination chromaticity. In complex 3D scenes, materials are defined from a database of real-world spectral reflectance measurements, and light sources are defined from the spectral power distributions released by the International Commission on Illumination (CIE). Rendering is done with the Blender Cycles rendering engine over the visible wavelengths from 395 nm to 705 nm in equal 5 nm bins, resulting in a 63-channel full-spectrum image. The resulting full-spectrum images can be turned into the raw response of any camera as long as the spectral sensitivity of the camera module is known. This is a big advantage of spectral ray tracing, since color constancy is largely camera-module dependent.
The pixel-wise white balance gain is calculated as the linear average of the illuminant chromaticities, weighted by each illuminant's contribution to the mixed-illuminant raw image. The raw image signal and the pixel-wise white balance gain are the fundamental ingredients of a spatial color constancy dataset. This study implements an image generation pipeline that starts from the spectral definitions of illuminants and materials and ends with an sRGB image created from a 3D scene. Six different 3D Blender scenes are created, each with 7 virtual cameras located throughout the scene. In total, 406 single-illuminant and 1015 spatially varying mixed-illuminant images are created, including their pixel-wise ground-truth illumination chromaticity. The created dataset can be used to improve mixed-illumination color constancy algorithms and paves the way for further research and testing in the field.
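    The two conversions described above, full-spectrum image to camera raw and contribution-weighted pixel-wise gains, can be sketched as follows. The image data, sensitivities, and chromaticities are random placeholders; a real pipeline would use measured camera sensitivities and the rendered 63-channel images.

```python
import numpy as np

# 63 spectral bands from 395 nm to 705 nm in 5 nm steps, as in the thesis.
wavelengths = np.arange(395, 706, 5)           # shape (63,)
assert wavelengths.size == 63

# Hypothetical full-spectrum image (H x W x 63) and camera spectral
# sensitivities (63 x 3); real sensitivities come from the camera module.
rng = np.random.default_rng(0)
full_spectrum = rng.random((4, 4, 63))
sensitivity = rng.random((63, 3))

# Raw RGB response: per-pixel integration of the spectrum against the
# sensitivity curves (a discrete sum over the 5 nm bins).
raw = full_spectrum @ sensitivity              # shape (4, 4, 3)

# Pixel-wise white balance gain as the linear average of the illuminant
# chromaticities, weighted by each illuminant's per-pixel contribution
# (chromaticities and the contribution map are invented for illustration).
chroma_a = np.array([0.45, 0.35, 0.20])
chroma_b = np.array([0.25, 0.35, 0.40])
contribution = rng.random((4, 4, 1))           # fraction due to illuminant A
pixel_chroma = contribution * chroma_a + (1 - contribution) * chroma_b
gains = pixel_chroma[..., 1:2] / pixel_chroma  # normalised to green
```

    Because the raw response is a linear function of the spectrum, the same rendered image can be re-targeted to any camera by swapping the sensitivity matrix.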

    Color Constancy Adjustment using Sub-blocks of the Image

    The extreme presence of the source light in digital images degrades the performance of many image-processing algorithms, such as video analytics, object tracking, and image segmentation. This paper presents a color constancy adjustment technique that lessens the impact of large unvarying color areas of an image on the performance of existing statistics-based color correction algorithms. The proposed algorithm splits the input image into several non-overlapping blocks. It uses the Average Absolute Difference (AAD) of each block's color components as a measure of whether the block has adequate color information to contribute to the color adjustment of the whole image. Experiments show that excluding the unvarying color areas of the image significantly improves the performance of existing statistics-based color constancy methods. Experimental results on four benchmark image datasets validate that the proposed framework, applied to the Gray World, Max-RGB, and Shades of Gray statistics-based methods, produces images with significantly higher subjective color constancy, and competitive objective color constancy, compared with existing and state-of-the-art methods.
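    The block-selection idea can be sketched as below, here paired with a Gray World estimator. The block size and AAD threshold are hypothetical; the paper derives its own selection criterion from the AAD of each block's color components.

```python
import numpy as np

def block_aad(block):
    """Average absolute difference from the block mean, per color channel."""
    return np.mean(np.abs(block - block.mean(axis=(0, 1))), axis=(0, 1))

def gray_world_with_blocks(image, block=16, thresh=0.02):
    """Gray World gains computed only from blocks with enough color variation.

    `block` and `thresh` are illustrative values, not the paper's settings.
    """
    h, w, _ = image.shape
    selected = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = image[y:y + block, x:x + block]
            if np.all(block_aad(b) > thresh):   # skip unvarying color areas
                selected.append(b.mean(axis=(0, 1)))
    # Fall back to the whole image if every block is rejected.
    mean = np.mean(selected, axis=0) if selected else image.mean(axis=(0, 1))
    return mean[1] / mean                        # gains normalised to green
```

    The same selection step could feed Max-RGB or Shades of Gray statistics instead of the block means used here.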

    Programmable Image-Based Light Capture for Previsualization

    Previsualization is a class of techniques for creating approximate previews of a movie sequence in order to visualize a scene prior to shooting it on the set. Often these techniques are used to convey the artistic direction of the story in terms of cinematic elements, such as camera movement, angle, lighting, dialogue, and character motion. Essentially, a movie director uses previsualization (previs) to convey movie visuals as he sees them in his mind's eye. Traditional methods for previs include hand-drawn sketches, storyboards, scaled models, and photographs, which are created by artists to convey how a scene or character might look or move. A recent trend has been to use 3D graphics applications such as video game engines to perform previs, which is called 3D previs. This type of previs is generally used prior to shooting a scene in order to choreograph camera or character movements. To visualize a scene while it is being recorded on set, directors and cinematographers use a technique called on-set previs, which provides a real-time view with little to no processing. Other types of previs, such as technical previs, emphasize accurately capturing scene properties but lack any interactive manipulation and are usually employed by visual effects crews rather than cinematographers or directors. This dissertation's focus is on creating a new method for interactive visualization that automatically captures the on-set lighting and provides interactive manipulation of cinematic elements to facilitate the movie maker's artistic expression, validate cinematic choices, and provide guidance to production crews. Our method overcomes the drawbacks of all previous previs methods by combining photorealistic rendering with accurately captured scene details, interactively displayed on a mobile capture and rendering platform.
This dissertation describes a new hardware and software previs framework that enables interactive visualization of on-set post-production elements. The three-tiered framework, which is the main contribution of this dissertation, consists of: 1) a novel programmable camera architecture that provides programmability of low-level features and a visual programming interface; 2) new algorithms that analyze and decompose the scene photometrically; and 3) a previs interface that leverages the previous two tiers to perform interactive rendering and manipulation of the photometric and computer-generated elements. For this dissertation we implemented a programmable camera with a novel visual programming interface. We developed the photometric theory and implementation of our novel relighting technique, called Symmetric lighting, which can be used on our programmable camera to relight a scene containing multiple illuminants with respect to color, intensity, and location. We analyzed the performance of Symmetric lighting on synthetic and real scenes to evaluate its benefits and limitations with respect to the reflectance composition of the scene and the number and colors of the lights within it. Because our method is based on a Lambertian reflectance assumption, it works well under that assumption, but scenes with large amounts of specular reflection can have higher relighting errors, and additional steps are required to mitigate this limitation. Also, scenes that contain lights whose colors are too similar can lead to degenerate cases in the relighting. Despite these limitations, an important contribution of our work is that Symmetric lighting can also be leveraged to perform multi-illuminant white balancing and light color estimation in a scene with multiple illuminants, without limits on the color range or number of lights.
We compared our method to other white balance methods and show that our method is superior when at least one of the light colors is known a priori.
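    The multi-illuminant white balancing that such a decomposition enables can be illustrated under the stated Lambertian assumption. The light colors and per-pixel mixing ratio below are placeholders for the quantities the dissertation's technique estimates; this is not the Symmetric lighting algorithm itself.

```python
import numpy as np

# Two illuminant colors, assumed known for this sketch.
light_a = np.array([1.0, 0.9, 0.7])    # warm illuminant
light_b = np.array([0.7, 0.8, 1.0])    # cool illuminant

rng = np.random.default_rng(1)
alpha = rng.random((4, 4, 1))          # per-pixel contribution of light_a
reflectance = rng.random((4, 4, 3))    # Lambertian albedo (illustrative)

# Image formation under the two-light Lambertian model: each pixel sees a
# blend of the two illuminants scaled by its albedo.
illum = alpha * light_a + (1 - alpha) * light_b
image = reflectance * illum

# Pixel-wise white balance: divide out the per-pixel illuminant estimate,
# which recovers the reflectance exactly under this model.
balanced = image / illum
assert np.allclose(balanced, reflectance)
```

    The hard part, estimating `alpha` and the light colors from images, is what the dissertation's photometric decomposition provides; the division step above is then straightforward.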

    The Hyper-log-chromaticity space for illuminant invariance

    Variation in illumination conditions through a scene is a common issue for classification, segmentation, and recognition applications. Traffic monitoring and driver assistance systems struggle with the changing illumination conditions at night and throughout the day, with multiple sources (especially at night), and in the presence of shadows. The majority of existing algorithms for color constancy or shadow detection rely on multiple frames for comparison or to build a background model. The proposed approach uses a novel color space inspired by the log-chromaticity space and modifies the bilateral filter to equalize illumination across objects using a single frame. Neighboring pixels of the same color, but of different brightness, are assumed to belong to the same object or material. The utility of the algorithm is studied over day and night simulated scenes of varying complexity. The objective is not to produce an image for visual inspection but rather an alternate image with fewer illumination-related issues for other algorithms to process. The usefulness of the filter is demonstrated by applying two simple classifiers and comparing the class statistics. Both the hyper-log-chromaticity image and the filtered image improve the quality of the classification relative to the unprocessed image.
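    The standard log-chromaticity space that inspired the novel color space can be sketched as follows (the "hyper" variant itself is not reproduced here). Log ratios cancel any per-pixel brightness scale, so shading and illumination intensity drop out.

```python
import numpy as np

def log_chromaticity(rgb, eps=1e-6):
    """Standard two-channel log-chromaticity coordinates.

    `eps` guards against log(0); the choice of green as the divisor
    channel is conventional, not taken from the paper.
    """
    r = rgb[..., 0] + eps
    g = rgb[..., 1] + eps
    b = rgb[..., 2] + eps
    return np.stack([np.log(r / g), np.log(b / g)], axis=-1)

# A surface seen in shadow (intensity scaled by 0.3) maps to nearly the
# same coordinates, which is the illuminant-invariance property.
pixel = np.array([0.6, 0.4, 0.2])
assert np.allclose(log_chromaticity(pixel),
                   log_chromaticity(0.3 * pixel), atol=1e-3)
```

    A filter that compares pixels in such a space can therefore group same-material pixels across brightness changes from a single frame, which is the assumption the modified bilateral filter exploits.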

    Computational Aspects of Color Constancy

    We examine color constancy algorithms based on finite-dimensional linear models of surface reflectance and illumination from a computational point of view. It is shown that, within finite-dimensional models, the formulation and solution of color constancy are determined by the choice of basis functions, the number of spectral receptors, and the spatial constraints. We analyze several algorithms with examples and discuss the limitations of these algorithms for application to real images.
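    The core of a finite-dimensional linear model is that a reflectance is approximated as a weighted sum of a few basis functions, S(λ) ≈ Σⱼ wⱼ Bⱼ(λ), so color constancy reduces to solving for a small weight vector. A minimal sketch, with an invented polynomial basis (the literature typically uses PCA bases derived from measured reflectances):

```python
import numpy as np

# Sampled wavelengths across the visible range.
wavelengths = np.linspace(400, 700, 31)

# Hypothetical 3-function smooth basis; real bases come from measured data.
u = (wavelengths - 550) / 150
basis = np.stack([np.ones_like(u), u, u ** 2], axis=1)   # shape (31, 3)

# Any reflectance lying in the span of the basis is recovered exactly
# from its samples by least squares.
true_w = np.array([0.5, 0.2, -0.1])
samples = basis @ true_w
w, *_ = np.linalg.lstsq(basis, samples, rcond=None)
assert np.allclose(w, true_w)
```

    With more spectral receptors than basis functions the system is overdetermined and the fit is unique, which is exactly why the choice of basis dimension against receptor count governs whether the color constancy problem is solvable.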

    Integration and Segregation in Audition and Vision

    Perceptual systems can improve their performance by integrating relevant perceptual information and segregating away irrelevant information. Three studies exploring perceptual integration and segregation in audition and vision are reported in this thesis. In Chapter 1, we explore the role of similarity in informational masking. In informational masking tasks, listeners detect the presence of a signal tone presented simultaneously with a random-frequency multitone masker. Detection thresholds are high in the presence of an informational masker, even though listeners should be able to ignore the masker frequencies. The informational masker's effect may be due to the similarity between signal and masker components. We used a behavioral measure to demonstrate that the amount of frequency change over time could be the stimulus dimension underlying the similarity effect. In Chapter 2, we report a set of experiments on the visual system's ability to discriminate distributions of luminances. The distribution of luminances can serve as a cue to the presence of multiple illuminants in a scene. We presented observers with simple achromatic scenes with patches drawn from one or two luminance distributions. Performance depended on the number of patches from the second luminance distribution, as well as on knowledge of the location of these patches. Irrelevant geometric cues, which we expected to negatively affect performance, did not have an effect. An ideal observer model and a classification analysis showed that observers successfully integrated the information provided by the image's photometric cues. In Chapter 3, we investigated the role of photometric and geometric cues in lightness perception. We rendered achromatic scenes that were consistent with two oriented background context surfaces illuminated by a light source with a directional component. Observers made lightness matches to tabs rendered at different orientations in the scene.
We manipulated the photometric cues by changing the intensity of the illumination, and the geometric cues by changing the orientation of the context surfaces. Observers' matches varied with both manipulations, demonstrating that observers used both types of cues to account for the illumination in the scene. The two types of cues were found to have independent effects on the lightness matches.
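    The thesis describes its ideal observer only at a high level; the sketch below shows the generic likelihood-ratio form such an observer takes for the one-versus-two-distributions task, with all distribution parameters invented for illustration.

```python
import numpy as np

def normal_logpdf(x, mu, sigma):
    """Log density of a normal distribution, evaluated elementwise."""
    return -0.5 * np.log(2 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def loglik_single(x, mu, sigma):
    """Log likelihood that all patch luminances came from one distribution."""
    return normal_logpdf(x, mu, sigma).sum()

def loglik_mixture(x, mu1, s1, mu2, s2, p=0.5):
    """Log likelihood under an equal mixture of two luminance distributions."""
    per_sample = np.logaddexp(np.log(p) + normal_logpdf(x, mu1, s1),
                              np.log(1 - p) + normal_logpdf(x, mu2, s2))
    return per_sample.sum()

# Patches actually drawn half from each of two well-separated distributions;
# the ideal observer chooses the hypothesis with the higher likelihood.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.3, 0.05, 50), rng.normal(0.7, 0.05, 50)])
assert loglik_mixture(x, 0.3, 0.05, 0.7, 0.05) > loglik_single(x, 0.5, 0.2)
```

    Comparing human performance against such an observer quantifies how much of the available photometric information observers actually integrate.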

    Color image-based shape reconstruction of multi-color objects under general illumination conditions

    Humans have the ability to infer the surface reflectance properties and three-dimensional shape of objects from two-dimensional photographs under both simple and complex illumination fields. Unfortunately, the reported algorithms in the area of shape reconstruction require a number of simplifying assumptions that result in poor performance in uncontrolled imaging environments. Of these simplifications, the assumptions of constant surface reflectance, globally consistent illumination, and multiple surface views are the most likely to be contradicted in typical environments. In this dissertation, three automatic algorithms for the recovery of surface shape given non-constant reflectance from a single color image are presented. In addition, a novel method for the identification and removal of shadows from simple scenes is discussed. In existing shape reconstruction algorithms for surfaces of constant reflectance, constraints based on the assumed smoothness of the objects are not explicitly used. Through explicit incorporation of surface smoothness properties, the algorithms presented in this work are able to overcome the limitations of the previously reported algorithms and accurately estimate shape in the presence of varying reflectance. The three techniques developed for recovering the shape of multi-color surfaces differ in the way they exploit the surface smoothness property. They are summarized below:
    • Surface Recovery using Pre-Segmentation - this algorithm pre-segments the image into distinct color regions and employs smoothness constraints at the color-change boundaries to constrain and recover surface shape. This technique is computationally efficient and works well for images with distinct color regions, but does not perform well in the presence of high-frequency color textures that are difficult to segment.
    • Surface Recovery via Normal Propagation - this approach uses local gradient information to propagate a smooth surface solution from points of known orientation. While solution propagation eliminates the need for color-based image segmentation, the quality of the recovered surface can be degraded by high levels of image noise due to the reliance on local information.
    • Surface Recovery by Global Variational Optimization - this algorithm uses a normal-gradient smoothness constraint in a non-linear optimization strategy to iteratively solve for the globally optimal object surface. Because of its global nature, this approach is much less sensitive to noise than normal propagation, but requires significantly more computational resources.
    Results obtained by applying the above algorithms to various synthetic and real image data sets are presented for qualitative evaluation. A quantitative analysis of the algorithms is also discussed for quadratic shapes. The robustness of the three approaches to factors such as segmentation error and random image noise is also explored.
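    The ambiguity that varying reflectance introduces into shape recovery can be shown with the Lambertian shading model that underlies shape-from-shading: the same measured intensity can come from a bright surface tilted away from the light or a darker surface facing it. The numbers below are illustrative.

```python
import numpy as np

# Overhead directional light (unit vector).
light = np.array([0.0, 0.0, 1.0])

def shade(albedo, normal):
    """Lambertian intensity: albedo times clamped cosine of the light angle."""
    n = normal / np.linalg.norm(normal)
    return albedo * max(0.0, float(n @ light))

# Two very different surface configurations produce identical intensities,
# which is why smoothness constraints across color-change boundaries are
# needed once reflectance is allowed to vary.
bright_tilted = shade(0.8, np.array([0.6, 0.0, 0.8]))    # albedo 0.8, tilted
dark_frontal = shade(0.64, np.array([0.0, 0.0, 1.0]))    # albedo 0.64, frontal
assert abs(bright_tilted - dark_frontal) < 1e-9
```

    All three algorithms above resolve this ambiguity by assuming the underlying surface is smooth even where the albedo changes abruptly.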

    A full photometric and geometric model for attached webcam/matte screen devices

    We present a thorough photometric and geometric study of multimedia devices composed of a matte screen and an attached camera. We show that the light emitted by an image displayed on the monitor can be expressed in closed form at any point facing the screen, and that the geometric calibration of the camera attached to the screen can be simplified by introducing simple geometric constraints. These theoretical contributions are experimentally validated in a photometric stereo application with extended sources, in which a colored scene is reconstructed while watching a collection of gray-level images displayed on the screen, providing a cheap and entertaining way to acquire realistic 3D representations for, e.g., augmented reality.
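    The photometric stereo application can be illustrated with the classical Lambertian point-source formulation, which the paper extends to extended screen sources; the light directions and surface normal below are invented for illustration.

```python
import numpy as np

# Three known light directions (rows); at least three non-coplanar
# directions are needed to solve for a normal.
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])

# Ground-truth surface point: unit normal and albedo.
true_normal = np.array([0.2, -0.1, 0.97])
true_normal = true_normal / np.linalg.norm(true_normal)
albedo = 0.5

# Lambertian image formation (all dot products positive here, no clipping).
intensities = albedo * L @ true_normal

# Least-squares recovery: g = albedo * normal, so its length is the albedo
# and its direction is the normal.
g, *_ = np.linalg.lstsq(L, intensities, rcond=None)
recovered_albedo = np.linalg.norm(g)
recovered_normal = g / recovered_albedo
assert np.allclose(recovered_normal, true_normal)
assert abs(recovered_albedo - albedo) < 1e-9
```

    With a screen as the light source, each displayed gray-level image plays the role of one row of L, and the closed-form emission model supplies the effective lighting at each scene point.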