
    The Computation of Surface Lightness in Simple and Complex Scenes

    The present thesis examined how reflectance properties and the complexity of surface mesostructure (small-scale surface relief) influence perceived lightness in centre-surround displays. Chapters 2 and 3 evaluated the role of surface relief, gloss, and interreflections in lightness constancy, which was examined across changes in background albedo and illumination level. For surfaces with visible mesostructure (“rocky” surfaces), lightness constancy across changes in background albedo was better for targets embedded in glossy versus matte surfaces. However, this improved lightness constancy for gloss was not observed when illumination varied. Control experiments compared the matte and glossy rocky surrounds to two control displays, which matched either pixel histograms or a phase-scrambled power spectrum. Lightness constancy was better for rocky glossy displays than for the histogram-matched displays, but not compared with phase-scrambled variants of these images with equated power spectra. The results were similar for surfaces rendered with 1, 2, 3 and 4 interreflections. These results suggest that lightness perception in complex centre-surround displays can be explained by the distribution of contrast across space and scale, independently of explicit information about surface shading or specularity. The results for surfaces without surface relief (“homogeneous” surfaces) differed qualitatively from those for rocky surfaces, exhibiting abrupt steps in perceived lightness at the points at which the targets transitioned from being increments to decrements. Chapter 4 examined whether homogeneous displays evoke more complex mid-level representations similar to conditions of transparency. Matching target lightness in a homogeneous display to that in a textured or rocky display required varying both the lightness and the transmittance of the test patch on the textured display to obtain the most satisfactory matches. However, transmittance was varied only to match the contrast of targets against homogeneous surrounds, and not to explicitly match the amount of transparency perceived in the displays. The results suggest that perceived target-surround edge contrast differs between homogeneous and textured displays. Varying the mid-level property of transparency in textured displays provides a natural means of equating both target lightness and the unique appearance of the edge contrast in homogeneous displays.
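
    The claim that lightness judgements track the distribution of contrast across space and scale can be made concrete with a simple multi-scale contrast measure. The sketch below is illustrative only and is not the model used in the thesis; it assumes a greyscale luminance image with values in [0, 1], and the function name and parameters are invented.

```python
# Hypothetical sketch: multi-scale contrast via a simple band-pass pyramid.
# Not the thesis's model; it only illustrates measuring the "distribution of
# contrast across space and scale" for a greyscale centre-surround image.
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_contrast(image, n_levels=4, sigma=1.5):
    """Return per-level RMS band contrast (a coarse-to-fine contrast profile)."""
    contrasts = []
    current = image.astype(float)
    for _ in range(n_levels):
        blurred = gaussian_filter(current, sigma)        # low-pass at this scale
        band = current - blurred                         # band-pass residual
        denom = np.maximum(blurred, 1e-6)                # avoid division by zero
        contrasts.append(np.sqrt(np.mean((band / denom) ** 2)))  # Weber-like local contrast
        current = blurred[::2, ::2]                      # subsample for the next, coarser level
    return contrasts

# Usage idea: compare the contrast profile of the target region under matte
# versus glossy surrounds, e.g. profile = multiscale_contrast(luminance_image).
```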

    Interactive global illumination on the CPU

    Computing realistic physically-based global illumination in real time remains one of the major goals in the fields of rendering and visualisation; one that has not yet been achieved due to its inherent computational complexity. This thesis focuses on CPU-based interactive global illumination approaches with the aim of developing generalisable, hardware-agnostic algorithms. Interactive ray tracing relies on spatial and cache coherency to achieve interactive rates, which conflicts with the needs of global illumination solutions, which require a large number of incoherent secondary rays to be computed. Methods that reduce the total number of rays that need to be processed, such as selective rendering, were investigated to determine how best they can be utilised. The impact that selective rendering has on interactive ray tracing was analysed and quantified, and two novel global illumination algorithms were developed, with the structured methodology used presented as a framework. Adaptive Interleaved Sampling is a generalisable approach that combines interleaved sampling with an adaptive approach, using efficient component-specific adaptive guidance methods to drive the computation. Results of up to 11 frames per second were demonstrated for multiple components, including participating media. Temporal Instant Caching is a caching scheme for accelerating the computation of diffuse interreflections to interactive rates. This approach achieved frame rates exceeding 9 frames per second for the majority of scenes. Validation of the results for both approaches showed little perceptual difference when comparing against a gold-standard path-traced image. Further research into caching led to the development of a new wait-free data access control mechanism for sharing the irradiance cache among multiple rendering threads on a shared-memory parallel system. By not serialising accesses to the shared data structure, the irradiance values were shared among all the threads without any overhead or contention when reading and writing simultaneously. This new approach achieved efficiencies between 77% and 92% for 8 threads when calculating static images and animations. This work demonstrates that, due to the flexibility of the CPU, CPU-based algorithms remain a valid and competitive choice for achieving global illumination interactively, and an alternative to the generally brute-force GPU-centric algorithms.
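
    For background, the irradiance cache referred to above is the classic Ward-style structure for reusing diffuse-interreflection estimates. The following is a minimal, single-threaded Python sketch of such a cache lookup; the thesis's actual contribution, the wait-free mechanism that lets many rendering threads read and write the cache concurrently without serialising, is not reproduced here, and the class and parameter names are invented.

```python
# Minimal, single-threaded sketch of a Ward-style irradiance cache lookup.
# The thesis's wait-free concurrent access scheme is NOT shown; this is only
# the basic interpolation step such a scheme would accelerate.
import numpy as np

class IrradianceCache:
    def __init__(self, alpha=0.15):
        self.records = []       # each record: (position, normal, irradiance E, validity radius R)
        self.alpha = alpha      # accuracy parameter: smaller means stricter reuse

    def add(self, p, n, E, R):
        self.records.append((np.asarray(p, float), np.asarray(n, float), E, R))

    def lookup(self, p, n):
        """Return interpolated irradiance at (p, n), or None if no record is usable."""
        p, n = np.asarray(p, float), np.asarray(n, float)
        num, den = 0.0, 0.0
        for pi, ni, Ei, Ri in self.records:
            # Ward's weight penalises positional distance and normal divergence.
            w = 1.0 / (np.linalg.norm(p - pi) / Ri + np.sqrt(max(0.0, 1.0 - float(n @ ni))))
            if w > 1.0 / self.alpha:
                num += w * Ei
                den += w
        return num / den if den > 0.0 else None  # None -> caller computes and add()s a new record
```

    In the thesis, a structure of this kind is shared among all rendering threads without locking, so simultaneous reads and writes proceed without contention.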

    Stereoscopic viewing, roughness and gloss perception

    This thesis presents a novel investigation into the effect stereoscopic vision has on the strength of perceived gloss on rough surfaces. We demonstrate that in certain cases disparity is necessary for accurate judgements of gloss strength. We first detail the process used to create a two-level taxonomy of property terms, which helped to inform the early direction of this work, before presenting the eleven words that we found categorised the property space. This shaped careful examination of the relevant literature, leading us to conclude that most studies of roughness, gloss, and stereoscopic vision have been performed with unrealistic surfaces and physically inaccurate lighting models. To improve on the stimuli used in these earlier studies, advanced offline rendering techniques were employed to create images of complex, naturalistic, and realistically glossy 1/f^β noise surfaces. These images were rendered using multi-bounce path tracing to account for interreflections and soft shadows, with a reflectance model that accounted for all common light phenomena. Using these images in a series of psychophysical experiments, we first show that random phase spectra can alter the strength of perceived gloss. These results are presented alongside pairs of the tested surfaces that have similar levels of perceptual gloss. These surface pairs are then used to conclude that naïve observers consistently underestimate how glossy a surface is without the correct surface and highlight disparity, but only on the rougher surfaces presented.
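
    For concreteness, a 1/f^β noise heightfield with a randomised phase spectrum, of the general kind described above, can be approximated with an inverse FFT. This generic sketch is not the thesis's rendering pipeline and its parameters are assumptions; because the spectrum is not forced to be Hermitian, taking the real part only approximates the target power spectrum.

```python
# Hedged sketch: a 1/f^beta noise heightfield with a random phase spectrum,
# built via an inverse FFT. Parameter values are placeholders, not the ones
# used for the thesis stimuli.
import numpy as np

def noise_surface(size=256, beta=1.8, seed=0):
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                                          # avoid division by zero at DC
    amplitude = 1.0 / f**beta                              # 1/f^beta amplitude spectrum
    phase = rng.uniform(0.0, 2.0 * np.pi, (size, size))    # random phase spectrum
    height = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
    return (height - height.mean()) / height.std()         # zero mean, unit variance

# Different seeds give surfaces with (approximately) the same power spectrum
# but different random phase spectra -- the manipulation referred to above.
```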

    Defining Reality in Virtual Reality: Exploring Visual Appearance and Spatial Experience Focusing on Colour

    Today, different actors in the design process have communication difficulties in visualizing and predicting how the not yet built environment will be experienced. Visually believable virtual environments (VEs) can make it easier for architects, users and clients to participate in the planning process. This thesis deals with the difficulties of translating reality into digital counterparts, focusing on visual appearance (particularly colour) and spatial experience. The goal is to develop knowledge of how different aspects of a VE, especially light and colour, affect the spatial experience; and thus to contribute to a better understanding of the prerequisites for visualizing believable spatial VR-models. The main aims are to 1) identify problems and test solutions for simulating realistic spatial colour and light in VR; and 2) develop knowledge of the spatial conditions in VR required to convey believable experiences; and evaluate different ways of visualizing spatial experiences. The studies are conducted from an architectural perspective; i.e. the whole of the spatial settings is considered, which is a complex task. One important contribution therefore concerns the methodology. Different approaches were used: 1) a literature review of relevant research areas; 2) a comparison between existing studies on colour appearance in 2D vs 3D; 3) a comparison between a real room and different VR-simulations; 4) elaborations with an algorithm for colour correction; 5) reflections in action on a demonstrator for correct appearance and experience; and 6) an evaluation of texture-styles with non-photorealistic expressions. The results showed various problems related to the translation and comparison of reality to VR. The studies pointed out the significance of inter-reflections; colour variations; perceived colour of light and shadowing for the visual appearance in real rooms. Some differences in VR were connected to arbitrary parameter settings in the software; heavily simplified chromatic information on illumination; and incorrect inter-reflections. The models were experienced differently depending on the application. Various spatial differences between reality and VR could be solved by visual compensation. The study with texture-styles pointed out the significance of varying visual expressions in VR-models.

    Augmenting Visual Feedback Using Sensory Substitution

    Direct interaction in virtual environments can be realized using relatively simple hardware, such as standard webcams and monitors. The result is a large gap between the stimuli existing in real-world interactions and those provided in the virtual environment. This leads to reduced efficiency and effectiveness when performing tasks. Conceivably these missing stimuli might be supplied through a visual modality, using sensory substitution. This work suggests a display technique that attempts to usefully and non-detrimentally employ sensory substitution to display proximity, tactile, and force information. We solve three problems with existing feedback mechanisms. Attempting to add information to existing visuals, we need to balance: not occluding the existing visual output; not causing the user to look away from the existing visual output, or otherwise distracting the user; and displaying as much new information as possible. We assume the user interacts with a virtual environment consisting of a manually controlled probe and a set of surfaces. Our solution is a pseudo-shadow: a shadow-like projection of the user's probe onto the surface being explored or manipulated. Instead of drawing the probe, we draw only the pseudo-shadow, and use it as a canvas on which to add other information. Static information is displayed by varying the parameters of a procedural texture rendered in the pseudo-shadow. The probe velocity and probe-surface distance modify this texture to convey dynamic information. Much of the computation occurs on the GPU, so the pseudo-shadow renders quickly enough for real-time interaction. As a result, this work contains three contributions: a simple collision detection and handling mechanism that can generalize to distance-based force fields; a way to display content during probe-surface interaction that reduces occlusion and spatial distraction; and a way to visually convey small-scale tactile texture.
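
    As a purely illustrative sketch of the mechanism described above, the following CPU code draws a procedural pattern confined to the pseudo-shadow, fading it with probe-surface distance and densifying it with probe speed. The thesis implements its own parameterisation on the GPU; all names and mappings here are invented.

```python
# Illustrative CPU sketch (hypothetical parameterisation) of a pseudo-shadow
# procedural texture whose appearance is modulated by probe-surface distance
# and probe speed. The thesis renders this on the GPU with its own mapping.
import numpy as np

def pseudo_shadow_texture(u, v, distance, speed, base_freq=8.0, max_distance=0.2):
    """Greyscale texture value in [0, 1] at shadow-local coordinates (u, v)."""
    proximity = np.clip(1.0 - distance / max_distance, 0.0, 1.0)   # 1 when the probe touches the surface
    freq = base_freq * (1.0 + 2.0 * speed)                          # faster probe -> denser pattern
    pattern = 0.5 + 0.5 * np.sin(2 * np.pi * freq * u) * np.sin(2 * np.pi * freq * v)
    return proximity * pattern                                       # fade out with distance from surface

# Sample the texture over a 64x64 patch of the projected probe footprint.
coords = np.linspace(0.0, 1.0, 64)
patch = pseudo_shadow_texture(coords[None, :], coords[:, None], distance=0.05, speed=0.5)
```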

    Mutual Illumination Photometric Stereo

    Many techniques have been developed in computer vision to recover three-dimensional shape from two-dimensional images. These techniques impose various combinations of assumptions and restrictions on conditions to produce a representation of shape (e.g. surface normals or a height map). Although great progress has been made, it is a problem that remains far from solved. In this thesis we propose a new approach to shape recovery, namely 'mutual illumination photometric stereo'. We exploit the presence of colourful mutual illumination in an environment to recover the shape of objects from a single image.
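
    The abstract gives too little detail to reproduce the single-image mutual-illumination method itself, but as background, here is a minimal sketch of the conventional least-squares photometric stereo formulation (several images under known distant lights) that the thesis departs from; the interface is illustrative only.

```python
# Background sketch only: conventional least-squares photometric stereo with
# several known distant lights. The thesis instead exploits colourful mutual
# illumination in a single image, which this code does not implement.
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """
    intensities: (k, h, w) array of k images captured under k known lights.
    light_dirs:  (k, 3) array of unit light directions.
    Returns per-pixel unit normals (h, w, 3) and albedo (h, w).
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)                           # stack pixels: (k, h*w)
    G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)       # solve L G = I, G = albedo * normal
    G = G.T.reshape(h, w, 3)
    albedo = np.linalg.norm(G, axis=-1)
    normals = G / np.maximum(albedo, 1e-8)[..., None]        # normalise to unit normals
    return normals, albedo
```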

    Automated inverse-rendering techniques for realistic 3D artefact compositing in 2D photographs

    The process of acquiring images of a scene and modifying the defining structural features of the scene through the insertion of artefacts is known in the literature as compositing. The process can take effect in the 2D domain (where the artefact originates from a 2D image and is inserted into a 2D image), or in the 3D domain (the artefact is defined as a dense 3D triangulated mesh, with textures describing its material properties). Compositing originated as a solution to enhancing, repairing, and more broadly editing photographs and video data alike in the film industry as part of the post-production stage. This is generally thought of as carrying out operations in a 2D domain (a single image with a known width, height, and colour data). The operations involved are sequential and entail separating the foreground from the background (matting), or identifying features from contour (feature matching and segmentation), with the purpose of introducing new data into the original. Since then, compositing techniques have gained more traction in the emerging fields of Mixed Reality (MR), Augmented Reality (AR), robotics and machine vision (scene understanding, scene reconstruction, autonomous navigation). When focusing on the 3D domain, compositing can be translated into a pipeline, here meaning a software solution formed of stand-alone modules or stages: the flow of execution runs in a single direction, each module can be used on its own as part of other solutions, each takes an input set and outputs data for the following stage, and each addresses a single type of problem. The incipient stage acquires the scene data, which then undergoes a number of processing steps aimed at inferring structural properties that ultimately allow for the placement of 3D artefacts anywhere within the scene, rendering a plausible and consistent result with regard to the physical properties of the initial input. This generic approach becomes challenging in the absence of user annotation and labelling of scene geometry, light sources and their respective magnitude and orientation, as well as a clear object segmentation and knowledge of surface properties. A single image, a stereo pair, or even a short image stream may not hold enough information regarding the shape or illumination of the scene; however, increasing the input data will only incur an extensive time penalty, which is an established challenge in the field. Recent state-of-the-art methods address the difficulty of inference in the absence of data; nonetheless, they do not attempt to solve the challenge of compositing artefacts between existing scene geometry, or cater for the inclusion of new geometry behind complex surface materials such as translucent glass or in front of reflective surfaces. The present work focuses on compositing in the 3D domain and brings forth a software framework that contributes solutions to a number of challenges encountered in the field, including the ability to render physically accurate soft shadows in the absence of user-annotated scene properties or RGB-D data. Another contribution is the timely manner in which the framework achieves a believable result compared with other compositing methods, which rely on offline rendering. Neither proprietary hardware nor user expertise is required in order to achieve fast and reliable results within the current framework.
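
    As a toy illustration of the pipeline notion defined above (stand-alone stages, one direction of execution, each stage consuming the previous stage's output and addressing a single problem), consider the following sketch; the stage names are hypothetical and do not mirror the thesis's actual modules.

```python
# Toy sketch of a single-direction compositing pipeline made of stand-alone
# stages. Stage names and the scene representation are invented placeholders.
from typing import Any, Callable, Dict, List

Scene = Dict[str, Any]

def estimate_geometry(scene: Scene) -> Scene:    # infer coarse scene structure from the photograph
    return {**scene, "geometry": "proxy mesh"}

def estimate_lighting(scene: Scene) -> Scene:    # infer light sources, magnitudes and orientations
    return {**scene, "lights": ["directional"]}

def composite_artefact(scene: Scene) -> Scene:   # place the 3D artefact and render its soft shadows
    return {**scene, "composite": True}

def run_pipeline(scene: Scene, stages: List[Callable[[Scene], Scene]]) -> Scene:
    for stage in stages:                         # execution flows in a single direction only
        scene = stage(scene)
    return scene

result = run_pipeline({"photo": "input.jpg"},
                      [estimate_geometry, estimate_lighting, composite_artefact])
```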

    Color in interior spaces

    Ankara: Department of Interior Architecture and Environmental Design, Institute of Fine Arts, Bilkent University, 1992. Thesis (Master's), Bilkent University, 1992. Includes bibliographical references (leaves 95-99). Color can be approached from different perspectives and disciplines, such as biology, theory, technology, and psychology. This thesis discusses color from the standpoint of interior spaces, which to some extent involves most of these disciplines. The aim of this study is to review the research on environmental color. It summarizes what is empirically known about our responses to color: how, if at all, color influences the perception of an interior, how it is affected under different light sources, and whether colors affect each other. The effects of light on color are verified using a 'color and light simulator'. Demirörs, Müge Bozbeyli. M.S.