270 research outputs found

    Multiple light source detection.


    Integration and Segregation in Audition and Vision

    Perceptual systems can improve their performance by integrating relevant perceptual information and segregating away irrelevant information. Three studies exploring perceptual integration and segregation in audition and vision are reported in this thesis. In Chapter 1, we explore the role of similarity in informational masking. In informational masking tasks, listeners detect the presence of a signal tone presented simultaneously with a random-frequency multitone masker. Detection thresholds are high in the presence of an informational masker, even though listeners should be able to ignore the masker frequencies. The informational masker's effect may be due to the similarity between signal and masker components. We used a behavioral measure to demonstrate that the amount of frequency change over time could be the stimulus dimension underlying the similarity effect. In Chapter 2, we report a set of experiments on the visual system's ability to discriminate distributions of luminances. The distribution of luminances can serve as a cue to the presence of multiple illuminants in a scene. We presented observers with simple achromatic scenes with patches drawn from one or two luminance distributions. Performance depended on the number of patches from the second luminance distribution, as well as on knowledge of the location of these patches. Irrelevant geometric cues, which we expected to negatively affect performance, did not have an effect. An ideal observer model and a classification analysis showed that observers successfully integrated information provided by the image photometric cues. In Chapter 3, we investigated the role of photometric and geometric cues in lightness perception. We rendered achromatic scenes that were consistent with two oriented background context surfaces illuminated by a light source with a directional component. Observers made lightness matches to tabs rendered at different orientations in the scene. We manipulated the photometric cues by changing the intensity of the illumination, and the geometric cues by changing the orientation of the context surfaces. Observers' matches varied with both manipulations, demonstrating that observers used both types of cues to account for the illumination in the scene. The two types of cues were found to have independent effects on the lightness matches.
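    The ideal-observer analysis mentioned for Chapter 2 can be illustrated with a toy likelihood-ratio computation. The sketch below is not the thesis's model; it simply assumes an observer who knows the two Gaussian luminance distributions and compares a one-distribution hypothesis against a two-distribution hypothesis for a set of patch luminances. All function names and parameter values here are hypothetical.

```python
import math
import random

def gauss_loglik(x, mu, sigma):
    # Log of the normal density N(mu, sigma^2) at x.
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def ideal_observer_llr(patches, mu1, mu2, sigma):
    # H1: every patch is drawn from N(mu1, sigma).
    ll_one = sum(gauss_loglik(x, mu1, sigma) for x in patches)
    # H2: each patch is drawn from the better-fitting of the two distributions.
    ll_two = sum(max(gauss_loglik(x, mu1, sigma), gauss_loglik(x, mu2, sigma))
                 for x in patches)
    return ll_two - ll_one  # large values favour "two luminance distributions"

random.seed(0)
one = [random.gauss(0.4, 0.05) for _ in range(16)]                 # single distribution
two = [random.gauss(0.4, 0.05) for _ in range(12)] + \
      [random.gauss(0.7, 0.05) for _ in range(4)]                  # 4 patches from a second one
print(ideal_observer_llr(one, 0.4, 0.7, 0.05))  # small
print(ideal_observer_llr(two, 0.4, 0.7, 0.05))  # much larger
```

    As in the experiments, the evidence for a second illuminant grows with the number of patches drawn from the second luminance distribution.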

    Accurate Analysis of the Spatial Pattern of Reflected Light and Surface Orientations Based on Color Illumination

    3D recovery approaches require a variety of cues to obtain shape information. The shape from shading (SFS) method uses shading information in images to estimate depth maps. Although shading contains detailed information, it gives rise to some well-known ambiguities, such as the convex-concave ambiguity. In this study, a system setup using red, green, and blue illumination, together with an algorithm that processes the reflections on the surface, is proposed for the accurate analysis of surface orientations and for resolving the ambiguity problems. Surface orientations that were erroneously predicted by six different methods were improved by the proposed system. Consequently, the correct orientation of the surface points was determined by removing the ambiguities in images taken without considering the location of the illumination, and all the tested methods produced successful results with the proposed system.
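    Using red, green, and blue illumination to disambiguate surface orientation is closely related to colour photometric stereo. As a minimal sketch (not the authors' algorithm), assuming a Lambertian surface and three known, linearly independent light directions, one per colour channel, the scaled surface normal can be recovered from a single RGB measurement by solving a 3x3 linear system; all names and the light configuration below are illustrative.

```python
import math

def solve3(A, b):
    # Cramer's rule for a 3x3 system A x = b (illustrative, not numerically robust).
    det = lambda m: (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
                     - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
                     + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    x = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        x.append(det(M) / d)
    return x

def normal_from_rgb(intensities, light_dirs):
    # Lambertian model: I_c = albedo * dot(n, L_c) for each colour channel c,
    # so the RGB triple gives three linear equations in g = albedo * n.
    g = solve3(light_dirs, intensities)
    albedo = math.sqrt(sum(v * v for v in g))
    return [v / albedo for v in g], albedo

# Hypothetical setup: red, green, and blue lights from three unit directions.
L = [[1, 0, 1], [0, 1, 1], [-1, -1, 1]]
L = [[v / math.sqrt(sum(u * u for u in d)) for v in d] for d in L]
n_true = [0.0, 0.0, 1.0]
I = [0.8 * sum(a * b for a, b in zip(n_true, d)) for d in L]  # synthesized measurement
n_est, rho = normal_from_rgb(I, L)   # recovers n_true and albedo 0.8
```

    A single RGB image under such illumination thus constrains the orientation at every pixel, which is the kind of extra information that removes the convex-concave ambiguity of plain shape from shading.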

    Deep Reflectance Maps

    Undoing the image formation process, and thereby decomposing appearance into its intrinsic properties, is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials, and illumination from images alone, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate reflectance maps of specular materials under natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by incorporating additional supervision in an indirect scheme that first predicts surface orientations and then predicts the reflectance map by a learning-based sparse data interpolation. To analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images. Project page: http://homes.esat.kuleuven.be/~krematas/DRM
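    The indirect scheme's final step, filling a full reflectance map from sparse per-pixel predictions, can be illustrated with a crude stand-in: nearest-neighbour lookup in surface-normal space. The paper uses a learned interpolation; the function below is only a hypothetical placeholder showing the data flow from (normal, colour) samples to a queried reflectance value.

```python
def interp_reflectance(query_normal, samples):
    # samples: list of (unit_normal, rgb) pairs, e.g. sparse per-pixel
    # predictions from the orientation network. We return the colour of the
    # sample whose normal is most aligned with the query (largest dot product).
    best = max(samples,
               key=lambda s: sum(a * b for a, b in zip(s[0], query_normal)))
    return best[1]

# Hypothetical sparse samples on a reflectance map.
samples = [([0.0, 0.0, 1.0], (1.0, 0.0, 0.0)),   # front-facing -> red
           ([1.0, 0.0, 0.0], (0.0, 1.0, 0.0))]   # side-facing  -> green
print(interp_reflectance([0.0, 0.1, 0.995], samples))  # closest to front-facing
```

    A reflectance map indexes appearance by orientation alone, which is why sparse (normal, colour) observations are enough to densify it.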

    Recovering light directions and camera poses from a single sphere

    This paper introduces a novel method for recovering both the light directions and the camera poses from a single sphere. Traditional methods for estimating light directions using spheres either assume that both the radius and the center of the sphere are known precisely, or depend on multiple calibrated views to recover these parameters. It is shown in this paper that the light directions can be uniquely determined from the specular highlights observed in a single view of a sphere, without knowing or recovering the exact radius and center of the sphere. Moreover, if the sphere is observed by multiple cameras, its images uniquely define the translation vector of each camera from a common world origin centered at the sphere center. It is shown that the relative rotations between the cameras can be recovered using two or more light directions estimated from each view. Closed-form solutions for recovering the light directions and camera poses are presented, and experimental results on both synthetic and real data show the practicality of the proposed method. © 2008 Springer Berlin Heidelberg. Postprint. The 10th European Conference on Computer Vision (ECCV 2008), Marseille, France, 12-18 October 2008. In Lecture Notes in Computer Science, 2008, v. 5302, pt. 1, p. 631-64
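    The core geometric step, turning a specular highlight into a light direction, reduces to a mirror reflection of the viewing direction about the sphere normal at the highlight point. The sketch below is a simplified version that, unlike the paper, assumes the sphere centre and radius are known; the camera sits at the origin, and all names are illustrative.

```python
import math

def light_from_highlight(view_dir, center, radius):
    # Intersect the camera ray (origin, view_dir) with the sphere to find the
    # highlight point, then mirror-reflect the view direction about the normal.
    d = [v / math.sqrt(sum(u * u for u in view_dir)) for v in view_dir]
    b = -2 * sum(di * ci for di, ci in zip(d, center))
    c = sum(ci * ci for ci in center) - radius**2
    t = (-b - math.sqrt(b * b - 4 * c)) / 2          # nearer intersection
    p = [t * di for di in d]                          # highlight point on the sphere
    n = [(pi - ci) / radius for pi, ci in zip(p, center)]  # outward unit normal
    to_eye = [-di for di in d]                        # from surface towards camera
    k = 2 * sum(ni * ei for ni, ei in zip(n, to_eye))
    return [k * ni - ei for ni, ei in zip(n, to_eye)]  # unit direction towards the light

# Highlight seen at the sphere's front pole: the light must lie towards the camera.
L = light_from_highlight([0.0, 0.0, 1.0], [0.0, 0.0, 5.0], 1.0)
```

    The paper's contribution is showing that this recovery remains unique even when the radius and centre are unknown; the sketch only demonstrates the reflection geometry itself.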

    Algorithms for the enhancement of dynamic range and colour constancy of digital images & video

    One of the main objectives in digital imaging is to mimic the capabilities of the human eye, and perhaps go beyond them in certain aspects. However, the human visual system is so versatile, complex, and only partially understood that no imaging technology to date has been able to accurately reproduce its capabilities. This gap has become a crucial shortcoming in digital imaging, since digital photography, video recording, and computer vision applications continue to demand more realistic and accurate image reproduction and analysis capabilities. For decades, researchers have tried to solve the colour constancy problem and to extend the dynamic range of digital imaging devices by proposing a number of algorithms and instrumentation approaches. Nevertheless, no unique solution has been identified; this is partially due to the wide range of computer vision applications that require colour constancy and high dynamic range imaging, and to the difficulty of matching the human visual system's effective colour constancy and dynamic range capabilities. The aim of the research presented in this thesis is to enhance overall image quality within the image signal processor of digital cameras by achieving colour constancy and extending dynamic range capabilities. This is achieved by developing a set of advanced image-processing algorithms that are robust to a number of practical challenges and feasible to implement within an image signal processor used in consumer electronics imaging devices. The experiments conducted in this research show that the proposed algorithms outperform state-of-the-art methods in the fields of dynamic range and colour constancy. Moreover, this unique set of image-processing algorithms shows that, if used within an image signal processor, they enable digital camera devices to mimic the human visual system's dynamic range and colour constancy capabilities: the ultimate goal of any state-of-the-art technique or commercial imaging device.
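    The basic structure of a colour constancy stage in an image signal processor can be illustrated with the classic grey-world algorithm, which is far simpler than the thesis's methods but shows the shape of an illuminant-correction step: estimate the illuminant colour from channel statistics, then apply per-channel gains. The function name and data layout are hypothetical.

```python
def gray_world(pixels):
    # Grey-world assumption: the average reflectance in a scene is achromatic,
    # so the per-channel means estimate the illuminant colour.
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3
    gains = [grey / m for m in means]                 # per-channel correction gains
    # Apply the gains, clamping to the valid [0, 1] range.
    return [[min(1.0, p[c] * gains[c]) for c in range(3)] for p in pixels]

# Toy image with a reddish colour cast; after correction the channel means agree.
corrected = gray_world([[0.6, 0.3, 0.3], [0.2, 0.1, 0.1]])
```

    Production algorithms replace the grey-world statistic with more robust illuminant estimates, but the gain-based correction step downstream is largely the same.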