
    Accidental Light Probes

    Recovering lighting in a scene from a single image is a fundamental problem in computer vision. While a mirror-ball light probe can capture omnidirectional lighting, light probes are generally unavailable in everyday images. In this work, we study recovering lighting from accidental light probes (ALPs) -- common, shiny objects like Coke cans, which often accidentally appear in daily scenes. We propose a physically-based approach to model ALPs and estimate lighting from their appearances in single images. The main idea is to model the appearance of ALPs with physically principled shading and to invert this process via differentiable rendering to recover the incident illumination. We demonstrate that placing an ALP into a scene allows high-fidelity lighting estimation. Our model can also recover lighting for existing images that happen to contain an ALP. (CVPR 2023. Project website: https://kovenyu.com/ALP)
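
    To make the inversion concrete, the sketch below fits unknown lighting to observed shading by gradient descent through a differentiable shading model. It is a minimal illustration only, assuming known unit normals and a second-order spherical-harmonic (SH) lighting model rather than the paper's full ALP appearance model; all names are hypothetical.

```python
# Minimal sketch: recover SH lighting coefficients by inverting a
# differentiable shading model. Hypothetical setup, not the ALP code.
import torch

def sh_basis(n):
    """Second-order real SH basis evaluated at unit normals n: (N,3)->(N,9)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return torch.stack([
        0.282095 * torch.ones_like(x),                  # Y_0^0
        0.488603 * y, 0.488603 * z, 0.488603 * x,       # Y_1^{-1,0,1}
        1.092548 * x * y, 1.092548 * y * z,             # Y_2^{-2,-1}
        0.315392 * (3.0 * z * z - 1.0),                 # Y_2^0
        1.092548 * x * z, 0.546274 * (x * x - y * y),   # Y_2^{1,2}
    ], dim=1)

torch.manual_seed(0)
normals = torch.nn.functional.normalize(torch.randn(500, 3), dim=1)
true_light = torch.randn(9)                 # "unknown" lighting to recover
observed = sh_basis(normals) @ true_light   # observed pixel intensities

light = torch.zeros(9, requires_grad=True)
opt = torch.optim.Adam([light], lr=0.05)
for _ in range(1000):
    opt.zero_grad()
    loss = ((sh_basis(normals) @ light - observed) ** 2).mean()
    loss.backward()                          # gradients via autodiff
    opt.step()
print(f"final photometric loss: {loss.item():.2e}")  # light converges to true_light
```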

    Appearance-design interfaces and tools for computer cinematography: Evaluation and application

    We define appearance design as the creation and editing of scene content, such as lighting and surface materials, in computer graphics. The appearance design process takes a significant amount of time relative to other production tasks and poses difficult artistic challenges. Many user interfaces have been proposed to make appearance design faster, easier, and more expressive, but no formal validation of these interfaces had been published prior to our body of work. With a focus on novice users, we present a series of investigations into the strengths and weaknesses of various appearance design user interfaces. In particular, we develop an experimental methodology for the evaluation of representative user interface paradigms in the areas of lighting and material design. We conduct three user studies in which subjects perform design tasks under controlled conditions. In these studies, we gain new insight into the effectiveness of each paradigm for novices, measured by objective performance as well as subjective feedback. We also offer observations on the common workflows and capabilities of novice users in these domains. Finally, we use the results of our lighting study to develop a new representation for artistic control of lighting in which light travels along nonlinear paths.
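
    As a rough illustration of the nonlinear-path idea (hypothetical, not the thesis implementation), the sketch below bends a light beam along a quadratic Bezier curve: the artist drags the control point, and the surface point is shaded as if light arrived along the curve's end tangent.

```python
# Toy sketch of a light whose beam follows a nonlinear (Bezier) path.
import numpy as np

def bezier_point(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def arrival_dir(p1, p2):
    """Curve tangent at t = 1: the direction the light travels on arrival."""
    d = 2 * (p2 - p1)                      # Bezier derivative at t = 1
    return d / np.linalg.norm(d)

src = np.array([0.0, 3.0, 0.0])            # light position
ctrl = np.array([2.0, 2.0, 0.0])           # artist-draggable control point
hit = np.array([1.0, 0.0, 0.0])            # shaded surface point
path = [bezier_point(src, ctrl, hit, t) for t in np.linspace(0, 1, 8)]
print("beam path:", np.round(path, 2))
# Shading uses the direction from the surface back along the beam:
print("incoming light direction:", np.round(-arrival_dir(ctrl, hit), 3))
```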

    Towards Predictive Rendering in Virtual Reality

    The generation of predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, generating predictive imagery remains an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. It first briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling through efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations of spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying BTFs to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects and more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the problems that remain to be solved before truly predictive image generation is achieved.
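
    One standard way to compress a BTF, in the spirit described above, is a truncated SVD/PCA over the resampled BTF matrix; the sketch below (toy data and sizes, not the thesis pipeline) shows the idea: rows are texels, columns are (view, light) conditions, and keeping the top k singular components yields per-texel coefficients plus a small shared basis that can be evaluated at render time.

```python
# Toy sketch: PCA/SVD compression of a BTF matrix (texels x conditions).
import numpy as np

rng = np.random.default_rng(0)
texels, conditions, k = 4096, 400, 16       # toy sizes; rank-16 code
btf = rng.standard_normal((texels, 32)) @ rng.standard_normal((32, conditions))

u, s, vt = np.linalg.svd(btf, full_matrices=False)
codes = u[:, :k] * s[:k]                    # per-texel coefficients (texels x k)
basis = vt[:k]                              # shared (view, light) basis (k x conditions)

recon = codes @ basis                       # decompression (or done in a shader)
err = np.linalg.norm(btf - recon) / np.linalg.norm(btf)
ratio = btf.size / (codes.size + basis.size)
print(f"relative error {err:.2e}, compression {ratio:.1f}x")
```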

    An empirically derived system for high-speed shadow rendering

    Shadows have captivated humanity since the dawn of time, with the current age being no exception – shadows are core to realism and ambience, be it to invoke a classic Baroque interplay of lights, darks and colours, as is the case in Rembrandt van Rijn's Militia Company of Captain Frans Banning Cocq, or to create a sense of mystery, as found in film noir and expressionist cinematography. Shadows, in this traditional sense, are regions of blocked light – the combined effect of placing an object between a light source and a surface. This dissertation focuses on real-time shadow generation as a subset of 3D computer graphics. Its main focus is the critical analysis of numerous real-time shadow rendering algorithms and the construction of an empirically derived system for the high-speed rendering of shadows. This critical analysis allows us to assess the relationship between shadow rendering quality and performance. It also allows for the isolation of key algorithmic weaknesses and possible bottleneck areas. Focusing on these bottleneck areas, we investigate several possibilities for improving the performance and quality of shadow rendering, both at the hardware and software level. Primary performance benefits are seen through effective culling, clipping, the use of hardware extensions, and management of the polygonal complexity and silhouette detection of shadow-casting meshes. Additional performance gains are achieved by combining the depth-fail stencil shadow volume algorithm with dynamic spatial subdivision. Using the performance data gathered during the analysis of the various shadow rendering algorithms, we define a fuzzy logic-based expert system to control the real-time selection of shadow rendering algorithms based on environmental conditions. This system ensures the following: nearby shadows are always of high quality; distant shadows are, under certain conditions, rendered at a lower quality; and the frames-per-second rendering performance is always maximised. Dissertation (MSc), University of Pretoria, 2009.
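
    The following toy sketch illustrates the flavor of such a fuzzy selection system; the membership functions and rules are invented for illustration and are not the dissertation's actual rule base.

```python
# Toy fuzzy selector: distance and frame-rate memberships vote for a
# shadow technique, trading quality for speed. Hypothetical rules only.
def tri(x, a, b, c):
    """Triangular fuzzy membership of x over (a, b, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pick_shadow_method(distance_m, fps):
    near = tri(distance_m, -1, 0, 15)      # distance memberships (metres)
    far = tri(distance_m, 10, 50, 1e9)
    slow = tri(fps, -1, 0, 40)             # frame-rate memberships
    fast = tri(fps, 30, 60, 1e9)
    scores = {
        "stencil_volume_zfail": min(near, fast),        # high quality, costly
        "shadow_map_high": max(min(near, slow), min(far, fast)),
        "shadow_map_low": min(far, slow),               # cheap fallback
    }
    return max(scores, key=scores.get)

for d, f in [(2, 75), (2, 20), (80, 20)]:
    print(f"{d} m at {f} fps -> {pick_shadow_method(d, f)}")
```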

    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been shown in different application domains supporting daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and the application areas where MLICs have been successfully used in research and daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the available tools used to support such analysis. Here, we discuss methods that strive to support the direct exploration of the captured MLIC, methods that generate relightable models from MLICs, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, and we point out visualization tools used for MLIC analysis. In chapter 3, we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms relying on MLICs, and we discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of (neural) RTI methods. Then, in chapter 5, we present a neural network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be used on single images, and we conclude our presentation.
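
    As a concrete example of one MLIC technique evaluated above, the sketch below implements textbook Lambertian photometric stereo: per-pixel normals and albedo recovered by least squares from intensities under known light directions (synthetic data; attached shadows are ignored for simplicity).

```python
# Minimal Lambertian photometric stereo on a synthetic MLIC.
import numpy as np

rng = np.random.default_rng(1)
m, pixels = 12, 1000
L = rng.standard_normal((m, 3))
L /= np.linalg.norm(L, axis=1, keepdims=True)       # known unit light directions
true_n = rng.standard_normal((pixels, 3))
true_n /= np.linalg.norm(true_n, axis=1, keepdims=True)
albedo = rng.uniform(0.2, 1.0, pixels)
I = L @ (albedo * true_n.T)        # m x pixels intensity stack (no shadowing)

g, *_ = np.linalg.lstsq(L, I, rcond=None)           # solve L g = I per pixel
est_albedo = np.linalg.norm(g, axis=0)              # |g| = albedo
est_n = (g / est_albedo).T                          # g / |g| = normal
dots = np.clip(np.sum(est_n * true_n, axis=1), -1.0, 1.0)
print("median angular error (deg):", np.degrees(np.median(np.arccos(dots))))
```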

    Augmented Reality Framework and Demonstrator

    Augmenting the real world with digital information can improve human perception in many ways. In recent years, a large amount of research has been conducted in the field of Augmented Reality (AR) and related technologies. Subsequently, different AR systems have been developed for use in areas such as medicine, education, the military, and entertainment. This thesis investigates augmented reality systems and the challenges of realistic rendering in AR environments. In addition, an object-oriented framework, named ThirdEye, has been designed and implemented to facilitate the process of developing augmented reality applications for experimental purposes. This framework has been developed in two versions, for desktop and mobile platforms. With ThirdEye, it is easier to port the same AR demo application to both platforms and to manage and modify all of its components than with the various existing libraries. Each feature that the ThirdEye framework includes may be provided separately by other existing libraries, but this framework provides those features in an easy-to-use manner. To evaluate the usability and performance of ThirdEye, and to demonstrate the challenges of simulating certain light effects in AR environments, such as shadows and refraction, several AR demos were developed using this framework. The performance of the implemented AR demos was benchmarked and bottlenecks in different components of the framework were investigated. This thesis explains the structure of the ThirdEye framework, its main components, and the employed technologies and Software Development Kits (SDKs). Furthermore, using a simple demo, it explains how this framework can be utilized to develop an AR application step by step. Lastly, several ideas for future development are described.
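
    As a small taste of the light effects such demos grapple with, the sketch below computes a refracted ray direction with the vector form of Snell's law, the basic building block for rendering refractive virtual objects over a camera image. The helper is hypothetical and not part of the ThirdEye API.

```python
# Vector-form Snell refraction, as used when ray tracing refractive objects.
import numpy as np

def refract(incident, normal, eta):
    """incident, normal: unit vectors; eta = n1 / n2. Returns None on TIR."""
    cos_i = -np.dot(normal, incident)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * incident + (eta * cos_i - cos_t) * normal

d = refract(np.array([0.707107, -0.707107, 0.0]),   # 45 deg into the surface
            np.array([0.0, 1.0, 0.0]), 1.0 / 1.5)   # air -> glass
print("refracted direction:", np.round(d, 4))        # bends toward the normal
```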

    Extraction and Integration of Physical Illumination in Dynamic Augmented Reality Environments

    Indiana University-Purdue University Indianapolis (IUPUI)
    Although current augmented, virtual, and mixed reality (AR/VR/MR) systems offer advanced and immersive experiences in the entertainment industry across countless media forms, these systems lack correct modeling of direct and indirect illumination, whereby virtual objects are rendered under the same lighting conditions as the real environment. Some systems use baked global illumination (GI), pre-recorded textures, and light probes, mostly produced offline, to stand in for real-time GI. Instead, illumination information can be extracted from the physical scene for interactively rendering virtual objects into the real world, producing a more realistic final scene in real time. This work approaches the problem of visual coherence in AR by proposing a system that detects the real-world lighting conditions in dynamic scenes and then uses the extracted illumination information to render the objects added to the scene. The system covers several major components to achieve a more realistic augmented reality outcome. First, incident light (direct illumination) is detected in the physical scene with computer vision techniques based on the topological structural analysis of 2D images, using a live-feed 360-degree camera mounted on an AR device that captures the entire radiance map. In addition, physics-based light polarization eliminates or reduces false-positive lights, such as white surfaces, reflections, or glare, which negatively affect the light detection process. Second, reflected light (indirect illumination) bouncing between real-world surfaces is simulated and rendered onto the virtual objects, reflecting their existence in the virtual world. Third, the shading characteristics and properties of each virtual object are defined to depict the correct lighting with suitable shadow casting. Fourth, the geometric properties of the real scene, including plane detection, 3D surface reconstruction, and simple meshing, are incorporated into the virtual scene for more realistic depth interactions between real and virtual objects. These components are assumed to work simultaneously in real time for photo-realistic AR. The system is tested under several lighting conditions to evaluate the accuracy of the results, based on the error between the shadows cast by real and virtual objects and their interactions. For system efficiency, the rendering time is compared with previous work. A further evaluation of human perception is conducted through a user study. The overall performance of the system is investigated to reduce its cost to a minimum.
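
    The incident-light detection step can be sketched as follows: threshold a 360-degree (equirectangular) frame, extract bright blobs with contour analysis (OpenCV's findContours implements the Suzuki-Abe topological structural analysis of binary images mentioned above), and map each blob centroid to a 3D light direction. This is a minimal illustration under stated assumptions, not the thesis system; the polarization-based filtering is omitted.

```python
# Minimal sketch: bright-light detection in an equirectangular 360 frame.
import numpy as np
import cv2

def detect_light_dirs(equirect_bgr, thresh=240):
    gray = cv2.cvtColor(equirect_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = gray.shape
    dirs = []
    for c in contours:
        if cv2.contourArea(c) < 20:          # ignore specks / stray glare
            continue
        m = cv2.moments(c)
        u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob centroid
        lon = (u / w) * 2 * np.pi - np.pi    # pixel -> spherical coordinates
        lat = np.pi / 2 - (v / h) * np.pi
        dirs.append((np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)))
    return dirs

frame = np.zeros((256, 512, 3), np.uint8)
cv2.circle(frame, (384, 64), 10, (255, 255, 255), -1)    # synthetic light blob
print(detect_light_dirs(frame))
```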