388 research outputs found

    Photometric Stereo Based on Constrained Regression (制約付き回帰に基づく照度差ステレオ)

    Degree type: Doctorate by coursework. Examination committee: (Chair) Associate Professor 山﨑 俊彦, The University of Tokyo; Professor 相澤 清晴, The University of Tokyo; Professor 池内 克史, The University of Tokyo; Professor 佐藤 真一, The University of Tokyo; Professor 佐藤 洋一, The University of Tokyo; Professor 苗村 健, The University of Tokyo. University of Tokyo (東京大学)

    Illumination Invariant Outdoor Perception

    This thesis proposes the use of a multi-modal sensor approach to achieve illumination invariance in images taken in outdoor environments. The approach is automatic in that it does not require user input for initialisation, and it does not rely on atmospheric radiative transfer models. While it is common to use pixel colour and intensity as features in high-level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material depends on the incident illumination, which can vary due to spatial and temporal factors. This variability causes identical materials to appear different depending on their location. Illumination invariant representations of the scene can potentially improve the performance of high-level vision algorithms, as they allow pixels to be discriminated based on the underlying material characteristics. The proposed approach to obtaining illumination invariance uses fused image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors. This has the effect of relighting the entire scene using a single illuminant that is common in colour and intensity for all pixels. The approach is extended to radiometric normalisation and to the multi-image scenario, so that the resultant dataset is both spatially and temporally illumination invariant. The proposed illumination invariance approach is evaluated on several datasets and shows that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, so expert knowledge is not required for its operation. This has potential implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
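
    The core relighting step described above, deriving per-pixel scaling factors from an illumination estimate and rescaling every pixel to a single common illuminant, can be illustrated with a minimal sketch. The array layout, the default choice of reference illuminant, and the function name below are illustrative assumptions rather than the thesis's actual implementation.

```python
import numpy as np

def relight_to_common_illuminant(image, illum_estimate, reference=None, eps=1e-6):
    """Rescale each pixel so the scene appears lit by one common illuminant.

    image          : HxWx3 float array of linear RGB values observed by the camera
    illum_estimate : HxWx3 float array, per-pixel estimate of the incident
                     illumination (colour and intensity), e.g. derived from
                     fused image and geometric data (an assumed input here)
    reference      : length-3 array, the common illuminant to relight to;
                     defaults to the mean of the estimated illumination
    """
    if reference is None:
        reference = illum_estimate.reshape(-1, 3).mean(axis=0)
    # Per-pixel scaling factors: dividing out the local illumination and
    # multiplying by the reference leaves an image that looks as if the whole
    # scene were lit by a single illuminant of common colour and intensity.
    scale = np.asarray(reference)[None, None, :] / np.maximum(illum_estimate, eps)
    return image * scale
```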

    Photometric Stereo with Automatic Calibration of the Camera Response Function (カメラ応答関数の自動校正を伴う照度差ステレオ)

    Degree type: Doctorate by coursework. Examination committee: (Chair) Professor 相澤 清晴, The University of Tokyo; Professor 佐藤 洋一, The University of Tokyo; Professor 佐藤 真一, National Institute of Informatics; Associate Professor 大石 岳史, The University of Tokyo; Associate Professor 山崎 俊彦, The University of Tokyo. University of Tokyo (東京大学)

    Flash Photography Enhancement via Intrinsic Relighting

    We enhance photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness from the flash image. We use the bilateral filter to decompose the images into detail and large-scale layers. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. This combines the advantages of available illumination and flash photography. Singapore-MIT Alliance (SMA)
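
    A minimal sketch of the large-scale/detail recombination described above, with flash-shadow detection and correction omitted. The bilateral-filter parameters, the multiplicative (ratio-based) detail layer, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def fuse_flash_no_flash(ambient, flash, d=9, sigma_color=0.1, sigma_space=15, eps=1e-4):
    """Combine the ambiance of the no-flash image with the detail of the flash image.

    ambient, flash : aligned HxWx3 float32 arrays with values in [0, 1].
    Returns an image with the large-scale lighting of `ambient` and the
    fine detail (texture, sharpness) of `flash`.
    """
    # Edge-preserving smoothing gives the "large scale" layer of each image.
    base_ambient = cv2.bilateralFilter(ambient, d, sigma_color, sigma_space)
    base_flash = cv2.bilateralFilter(flash, d, sigma_color, sigma_space)
    # Detail layer of the flash image, expressed as a multiplicative residual.
    detail_flash = (flash + eps) / (base_flash + eps)
    # Reconstruct: large scale of the available light, detail of the flash.
    return np.clip(base_ambient * detail_flash, 0.0, 1.0)
```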

    Photometric Reconstruction from Images: New Scenarios and Approaches for Uncontrolled Input Data

    The changes in surface shading caused by varying illumination constitute an important cue to discern fine details and recognize the shape of textureless objects. Humans perform this task subconsciously, but it is challenging for a computer because several variables are unknown and intermix in the light distribution that actually reaches the eye or camera. In this work, we study algorithms and techniques to automatically recover the surface orientation and reflectance properties from multiple images of a scene. Photometric reconstruction techniques have been investigated for decades but are still restricted to industrial applications and research laboratories. Making these techniques work on more general, uncontrolled input without specialized capture setups is the necessary next step, but it is not yet solved. We explore the current limits of photometric shape recovery in terms of input data and propose ways to overcome some of its restrictions. Many approaches, especially for non-Lambertian surfaces, rely on the illumination and the radiometric response function of the camera being known. The accuracy such algorithms can achieve depends heavily on the quality of an a priori calibration of these parameters. We propose two techniques to estimate the position of a point light source, experimentally compare their performance with the commonly employed method, and draw conclusions about which one to use in practice. We also discuss how well an absolute radiometric calibration can be performed on uncontrolled consumer images and show the application of a simple radiometric model to re-create night-time impressions from color images. A focus of this thesis is on Internet images, which are an increasingly important source of data for computer vision and graphics applications. For reconstruction in this setting, we present novel approaches that are able to recover surface orientation from Internet webcam images. We explore two different strategies to overcome the challenges posed by this kind of input data. One technique exploits orientation consistency and matches appearance profiles on the target with a partial reconstruction of the scene. This avoids an explicit light calibration and works for any reflectance that is observed on the partial reference geometry. The other technique employs an outdoor lighting model and reflectance properties represented as parametric basis materials. It yields a richer scene representation consisting of shape and reflectance, which is very useful for the simulation of new impressions or for editing operations, e.g. relighting. The proposed approach is the first to achieve such a reconstruction on webcam data. Both approaches are accompanied by evaluations on synthetic and real-world data showing qualitative and quantitative results. We also present a reconstruction approach for more controlled data in terms of the target scene. It relies on a reference object to relax a constraint common to many photometric stereo approaches: the fixed-camera assumption. The proposed technique allows the camera and light source to vary freely in each image. It again avoids a light calibration step and can be applied to non-Lambertian surfaces. In summary, this thesis contributes to both the calibration and the reconstruction aspects of photometric techniques. We overcome challenges in both controlled and uncontrolled settings, with a focus on the latter. All proposed approaches are shown to also operate on non-Lambertian objects.
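
    For context, the calibrated Lambertian photometric-stereo baseline that this line of work builds on and extends (to unknown lights, non-Lambertian reflectance, and uncontrolled webcam input) can be written in a few lines. The sketch below is only that textbook starting point, not one of the thesis's proposed methods; the variable names are illustrative.

```python
import numpy as np

def lambertian_photometric_stereo(intensities, lights):
    """Recover per-pixel normals and albedo from calibrated, fixed-camera images.

    intensities : NxP array, N images of the same P pixels under different lights
    lights      : Nx3 array, unit direction of the distant light in each image
    Solves I = L @ (albedo * normal) for every pixel in the least-squares sense.
    """
    # g has shape 3xP; each column is albedo * normal for one pixel.
    g, _, _, _ = np.linalg.lstsq(lights, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.maximum(albedo, 1e-8)
    return normals, albedo
```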

    Optical computing for fast light transport analysis


    Radiometric calibration methods from image sequences

    In many computer vision systems, an image of a scene is assumed to directly reflect the scene radiance. However, this is not the case for most cameras, as the radiometric response function, which maps scene radiance to image brightness, is nonlinear. In addition, the exposure settings of the camera are adjusted (often in auto-exposure mode) according to the dynamic range of the scene, changing the appearance of the scene in the images. Vignetting, the gradual fading of an image toward its periphery, also contributes to changing the scene appearance in images. In this dissertation, I present several algorithms to compute the radiometric properties of a camera, which enable us to find the relationship between the image brightness and the scene radiance. First, I introduce an algorithm to compute the vignetting function, the response function, and the exposure values that fully explain the radiometric image formation process from a set of images of a scene taken with different and unknown exposure values. One of the key features of the proposed method is that the movement of the camera is not limited when taking the pictures, whereas most existing methods limit the motion of the camera. Then I present a joint feature tracking and radiometric calibration scheme that performs radiometric calibration in an integrated manner; previous radiometric calibration techniques require correspondences as an input, which leads to a chicken-and-egg problem, since precise tracking requires accurate radiometric calibration. By combining both into an integrated approach, we solve this chicken-and-egg problem. Finally, I propose a radiometric calibration method suited for a set of images of an outdoor scene taken at regular intervals over a period of time. This type of data is challenging because the illumination changes from image to image, causing the camera exposure to change, so the conventional radiometric calibration framework cannot be used. The proposed methods are applied to radiometrically align images for seamless mosaics and 3D model textures, to create high dynamic range mosaics, and to build an adaptive stereo system.
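
    The radiometric image formation model underlying these methods can be summarised as B = f(k · V(x) · E(x)), where f is the camera response function, k the exposure, V(x) the vignetting falloff, and E(x) the scene radiance; once these quantities are calibrated, radiance is recovered by inverting that chain. The sketch below illustrates the inversion; the gamma curve standing in for a calibrated response, and the function names, are illustrative assumptions.

```python
import numpy as np

def recover_radiance(brightness, exposure, vignetting, inverse_response):
    """Invert the radiometric image formation model B = f(k * V(x) * E(x)).

    brightness       : HxW or HxWx3 array of observed image brightness in [0, 1]
    exposure         : scalar exposure value k for this image
    vignetting       : HxW array of vignetting attenuation V(x) in (0, 1]
    inverse_response : callable implementing f^-1, the inverse of the
                       calibrated radiometric response function
    Returns the scene radiance E(x) up to a global scale factor.
    """
    irradiance = inverse_response(brightness)       # undo the nonlinear response
    if irradiance.ndim == 3:
        vignetting = vignetting[..., None]          # broadcast over colour channels
    return irradiance / (exposure * vignetting)     # undo exposure and vignetting

# A gamma curve standing in for a calibrated inverse response (assumption).
inverse_gamma = lambda b: np.power(b, 2.2)
```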