121 research outputs found

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Get PDF
    Creating visually pleasing images has always been one of the main goals of computer graphics. Two components are necessary to achieve this goal --- artists who design the visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while design has been deemed secondary. This has led to many inefficiencies: to create a stunning image, artists are often forced to fall back on traditional, creativity-stifling pipelines of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback, giving artists efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making the editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have until now been extremely hard to achieve. The non-physical aspect of our work lets artists express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    A Novel Framework for Highlight Reflectance Transformation Imaging

    Get PDF
    We propose a novel pipeline and related software tools for processing multi-light image collections (MLICs), acquired in different application contexts, to obtain shape and appearance information of the captured surfaces and to derive compact relightable representations of them. Our pipeline extends the popular Highlight Reflectance Transformation Imaging (H-RTI) framework, which is widely used in the Cultural Heritage domain. In particular, we support perspective camera modeling, per-pixel interpolated light direction estimation, and light normalization that corrects vignetting and uneven non-directional illumination. Furthermore, we propose two novel easy-to-use software tools to simplify all processing steps. The tools, in addition to supporting easy processing and encoding of pixel data, implement a variety of visualizations as well as multiple reflectance-model-fitting options. Experimental tests on synthetic and real-world MLICs demonstrate the usefulness of the novel algorithmic framework and the potential benefits of the proposed tools for end-user applications.
    Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; DSURF project (PRIN 2015) funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
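    As an illustration of the per-pixel light direction estimation step the abstract mentions, below is a minimal Python sketch of the standard H-RTI approach: the light direction at a glossy reference sphere is recovered by mirror-reflecting the view direction about the sphere normal at the highlight, and per-pixel directions are then interpolated between several spheres. The function names, the orthographic camera, and the inverse-distance weighting are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def light_from_highlight(sphere_center, sphere_radius, highlight_xy,
                             view_dir=np.array([0.0, 0.0, 1.0])):
        # Sphere normal at the highlight pixel (orthographic approximation).
        x = (highlight_xy[0] - sphere_center[0]) / sphere_radius
        y = (highlight_xy[1] - sphere_center[1]) / sphere_radius
        z = np.sqrt(max(0.0, 1.0 - x * x - y * y))
        n = np.array([x, y, z])
        # Mirror reflection of the view direction gives the light direction.
        l = 2.0 * np.dot(n, view_dir) * n - view_dir
        return l / np.linalg.norm(l)

    def interpolate_light_dir(sphere_positions, light_dirs, pixel_xy):
        # Inverse-distance weighting between the spheres' estimates
        # yields a smoothly varying per-pixel light direction.
        d = np.linalg.norm(sphere_positions - pixel_xy, axis=1) + 1e-6
        w = 1.0 / d
        l = (w[:, None] * light_dirs).sum(axis=0)
        return l / np.linalg.norm(l)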

    Multispectral RTI Analysis of Heterogeneous Artworks

    Get PDF
    We propose a novel multi-spectral reflectance transformation imaging (MS-RTI) framework for the acquisition and direct analysis of the reflectance behavior of heterogeneous artworks. Starting from free-form acquisitions, we compute per-pixel calibrated multi-spectral appearance profiles, which associate a reflectance value with each sampled light direction and frequency. Visualization, relighting, and feature extraction are performed directly on the appearance profile data, applying scattered data interpolation based on Radial Basis Functions to estimate per-pixel reflectance for novel lighting directions. We demonstrate how the proposed solution can convey more insight into the object materials and geometric details than classical multi-light methods that rely on low-frequency analytical model fitting, possibly combined with a separate handling of high-frequency components, and hence require constraining priors on material behavior. The flexibility of our approach is illustrated on two heterogeneous case studies, a painting and a dark shiny metallic sculpture, that showcase feature extraction, visualization, and analysis of high-frequency properties of artworks using multi-light, multi-spectral (Visible, UV and IR) acquisitions.
    Terms: "European Union (EU)" & "Horizon 2020" / Action: H2020-EU.3.6.3. - Reflective societies - cultural heritage and European identity / Acronym: Scan4Reco / Grant number: 665091; the DSURF (PRIN 2015) project funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
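    The RBF-based scattered data interpolation the abstract names can be sketched for a single pixel and spectral band as follows. This is a minimal Gaussian-RBF version, assuming unit light direction vectors; the kernel and its width are illustrative choices, not necessarily the ones used in the paper.

    import numpy as np

    def rbf_relight(light_dirs, samples, query_dir, sigma=0.5):
        # Fit Gaussian RBF weights to one pixel's appearance profile:
        # samples[i] is the reflectance observed under light_dirs[i].
        d = np.linalg.norm(light_dirs[:, None, :] - light_dirs[None, :, :], axis=2)
        phi = np.exp(-(d / sigma) ** 2)          # RBF system matrix (N x N)
        weights = np.linalg.solve(phi, samples)  # one weight per sample
        # Evaluate the interpolant at a novel light direction.
        dq = np.linalg.norm(light_dirs - query_dir, axis=1)
        return np.exp(-(dq / sigma) ** 2) @ weights

    Running this per pixel (and per spectral band) reproduces the captured samples exactly at the sampled directions and interpolates smoothly in between, which is what enables relighting directly on the appearance profile data without an analytical reflectance model.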

    Surface analysis and visualization from multi-light image collections

    Get PDF
    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been demonstrated in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and application areas where MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the tools available to support such analysis. Here, we cover methods that support direct exploration of the captured MLIC, methods that generate relightable models from MLICs, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, and we point out visualization tools used for MLIC analysis. In chapter 3, we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and we discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of (Neural)RTI methods. Then, in chapter 5, we present a neural-network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relighted from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be applied to single images, and conclude our presentation.
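    To make the autoencoder idea concrete, here is a hedged PyTorch sketch of a pixel-based encode/relight network in the spirit of NeuralRTI: an encoder compresses a pixel's N reflectance samples into a small code, and a decoder predicts RGB from the code plus a novel light direction. Layer sizes, activations, and the code dimension are illustrative assumptions, not the thesis's exact architecture.

    import torch
    import torch.nn as nn

    class PixelRTIAutoencoder(nn.Module):
        def __init__(self, n_samples, code_dim=9, hidden=64):
            super().__init__()
            # Encoder: per-pixel stack of reflectance samples -> compact code.
            self.encoder = nn.Sequential(
                nn.Linear(n_samples, hidden), nn.ELU(),
                nn.Linear(hidden, code_dim))
            # Decoder: code + 2D light direction (x, y on the hemisphere) -> RGB.
            self.decoder = nn.Sequential(
                nn.Linear(code_dim + 2, hidden), nn.ELU(),
                nn.Linear(hidden, hidden), nn.ELU(),
                nn.Linear(hidden, 3))

        def forward(self, pixel_samples, light_dir_xy):
            code = self.encoder(pixel_samples)
            return self.decoder(torch.cat([code, light_dir_xy], dim=-1))

    Trained to reproduce the captured samples at the captured light directions, only the per-pixel codes and the decoder need to be stored, giving the highly compressed relightable representation described above.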

    The delta radiance field

    Get PDF
    The wide availability of mobile devices capable of computing high-fidelity graphics in real time has sparked a renewed interest in the development and research of Augmented Reality applications. Within the large spectrum of mixed real and virtual elements, one specific area is dedicated to producing realistic augmentations with the aim of presenting virtual copies of real existing objects or soon-to-be-produced products. Surprisingly though, the current state of this area leaves much to be desired: augmented objects in current systems are often presented without any reconstructed lighting whatsoever and therefore convey an impression of being glued over a camera image rather than augmenting reality. In light of the advances in the movie industry, which has handled cases of mixed realities from one extreme end to the other, it is a legitimate question to ask why such advances did not fully carry over to Augmented Reality simulations as well. Generally understood to be real-time applications which reconstruct the spatial relation of real-world elements and virtual objects, Augmented Reality has to deal with several uncertainties. Among them, unknown illumination and real scene conditions are the most important. Any reconstruction of real-world properties obtained in an ad-hoc manner must likewise be incorporated, in an ad-hoc fashion, into an algorithm responsible for shading virtual objects and transferring virtual light to real surfaces. The immersiveness of an Augmented Reality simulation is, next to its realism and accuracy, primarily dependent on its responsiveness: any computation affecting the final image must be performed in real time. This condition rules out many of the methods used for movie production. The remaining real-time options face three problems: the shading of virtual surfaces under real natural illumination, the relighting of real surfaces according to the change in illumination caused by introducing a new object into the scene, and the believable global interaction of real and virtual light. This dissertation presents contributions that address these problems. Current state-of-the-art methods build on Differential Rendering techniques to fuse global illumination algorithms into AR environments. This simple approach has a computationally costly downside, which limits the options for believable light transfer even further. This dissertation explores new shading and relighting algorithms built on a mathematical foundation that replaces Differential Rendering. The result is not only a more efficient competitor to the current state of the art in global illumination relighting, but also advances the field with the ability to simulate effects which had not been demonstrated by contemporary publications until now.
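    For context, the Differential Rendering baseline this dissertation replaces can be summarized in a few lines: render the reconstructed scene twice, with and without the virtual object, and add the difference to the camera image so that shadows and bounce light cast by the object appear on real surfaces. A minimal numpy sketch of that compositing step (array names are illustrative; images are linear float arrays in [0, 1]):

    import numpy as np

    def differential_composite(camera, with_obj, without_obj, obj_mask):
        # Change in radiance caused by inserting the virtual object
        # (negative where it casts shadow, positive where it bounces light).
        delta = with_obj - without_obj
        out = camera + delta                    # relight the real surfaces
        out[obj_mask] = with_obj[obj_mask]      # object pixels come from the render
        return np.clip(out, 0.0, 1.0)

    The costly downside noted in the abstract is visible here: every frame requires two full global illumination solutions of the reconstructed scene before compositing can even begin.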

    Acquisition of Surface Light Fields from Videos

    Get PDF
    This thesis presents a new approach for estimating the Surface Light Field of real objects from video sequences acquired under fixed, uncontrolled illumination. The proposed method is based on separating the two main components of the object's surface appearance: the diffuse component, modeled as an RGB color, and the specular component, approximated by a parametric model that is a function of the viewer's position. The reconstructed surface appearance enables photorealistic, real-time visualization of the object as the viewpoint changes, allowing interactive 3D navigation.
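    A minimal sketch of evaluating such a diffuse-plus-specular decomposition at render time, assuming a Phong-like lobe as a stand-in for the thesis's unspecified parametric specular model (all names and the lobe shape are illustrative):

    import numpy as np

    def shade(diffuse_rgb, spec_rgb, normal, half_vec, shininess):
        # View-independent diffuse color plus a view-dependent specular
        # lobe evaluated from the per-view half vector.
        s = max(0.0, float(np.dot(normal, half_vec))) ** shininess
        return np.clip(diffuse_rgb + s * spec_rgb, 0.0, 1.0)

    Because the diffuse term is stored once per surface point and only the compact specular parameters vary with the viewer, the representation stays small enough for real-time playback as the camera moves.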

    Neural Relightable Participating Media Rendering

    Get PDF
    Learning neural radiance fields of a scene has recently enabled realistic novel view synthesis, but such fields can only synthesize images under the original, fixed lighting condition. They are therefore not flexible enough for highly desirable tasks such as relighting, scene editing, and scene composition. To tackle this problem, several recent methods propose to disentangle reflectance and illumination from the radiance field. These methods can cope with solid objects with opaque surfaces, but participating media are neglected. Moreover, they take into account only direct illumination or at most one-bounce indirect illumination, and thus suffer from energy loss because higher-order indirect illumination is ignored. We propose to learn neural representations for participating media with a complete simulation of global illumination. We estimate direct illumination via ray tracing and compute indirect illumination with spherical harmonics. Our approach avoids computing the lengthy indirect bounces and does not suffer from energy loss. Our experiments on multiple scenes show that our approach achieves superior visual quality and numerical performance compared to state-of-the-art methods, and it can generalize to solid objects with opaque surfaces as well.
    Comment: Accepted to NeurIPS 202
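    The spherical-harmonics representation of indirect illumination mentioned above works by storing a small coefficient vector and evaluating the SH basis in a query direction. Below is a minimal sketch of the standard real SH basis up to order 2 (9 coefficients); how the paper obtains the coefficients themselves is not covered here.

    import numpy as np

    def sh_basis_l2(d):
        # Real spherical harmonics basis up to l = 2 for a unit direction d.
        x, y, z = d
        return np.array([
            0.282095,
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3.0 * z * z - 1.0),
            1.092548 * x * z,
            0.546274 * (x * x - y * y)])

    def indirect_radiance(sh_coeffs, d):
        # Indirect illumination from direction d is a dot product with
        # the stored coefficients -- no per-ray indirect bounces needed.
        return sh_coeffs @ sh_basis_l2(d)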

    Relightable Neural Human Assets from Multi-view Gradient Illuminations

    Full text link
    Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can greatly facilitate related research. However, most existing human datasets only provide multi-view human images captured under the same illumination. Although valuable for modeling tasks, they are not readily usable for relighting problems. To promote research in both fields, in this paper we present UltraStage, a new 3D human dataset that contains more than 2,000 high-quality human assets captured under both multi-view and multi-illumination settings. Specifically, for each example, we provide 32 surrounding views illuminated with one white light and two gradient illuminations. In addition to regular multi-view images, the gradient illuminations help recover detailed surface normals and spatially-varying material maps, enabling various relighting applications. Inspired by recent advances in neural representation, we further interpret each example as a neural human asset which allows novel view synthesis under arbitrary lighting conditions. We show that our neural human assets achieve extremely high capture quality and are capable of representing fine details such as facial wrinkles and cloth folds. We also validate UltraStage on single-image relighting tasks, training neural networks with virtually relighted data from the neural assets and demonstrating realistic rendering improvements over prior art. UltraStage will be made publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks. The dataset is available at https://miaoing.github.io/RNHA.
    Comment: Project page: https://miaoing.github.io/RNH
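    The way gradient illuminations expose surface normals can be sketched with the classic ratio trick from gradient-illumination capture. This is a hedged illustration, not the paper's pipeline: it assumes a gradient-lit image whose R, G, B channels encode linear gradients along x, y, z, plus a uniformly lit image of the same view.

    import numpy as np

    def normals_from_gradients(grad_img, full_img, eps=1e-6):
        # grad_img: (H, W, 3), channels lit by linear gradients along x/y/z.
        # full_img: (H, W, 1), the same view under uniform illumination.
        ratio = grad_img / (full_img + eps)      # per-axis values in [0, 1]
        n = 2.0 * ratio - 1.0                    # map to [-1, 1]
        norm = np.linalg.norm(n, axis=-1, keepdims=True)
        return n / (norm + eps)                  # unit normal per pixel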

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    Get PDF
    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns mimicking deformations in other soft materials, such as leather, during animation. Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to the underlying mesh deformations in a physically plausible manner. Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control, but may fall short for complex deformations or where greater realism is required. Our research demonstrates the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended beyond skin to other soft materials.
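    One way to picture the core idea is a per-texel blend in which a mesostructure wrinkle layer deepens where the mesh is locally compressed, with a static procedural microstructure layer added on top. The sketch below is a speculative illustration of that blend, not the thesis's actual method; all array names and the linear gain are assumptions.

    import numpy as np

    def dynamic_heightmap(wrinkle_map, micro_map, compression, gain=1.0):
        # compression: (H, W) per-texel compression derived from a tension
        # map (positive where the surface is squeezed, where wrinkles form).
        w = np.clip(gain * np.maximum(compression, 0.0), 0.0, 1.0)
        # Wrinkles fade in with compression; microstructure stays static.
        return w * wrinkle_map + micro_map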

    A portable capturing system for image-based relighting.

    Get PDF
    Pang Wai Man. Thesis submitted in July 2002. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 108-114). Abstracts in English and Chinese.
    Contents:
    Chapter 1 Introduction
        1.1 Image-based Rendering and Modeling
            1.1.1 Image-based versus Geometry-based
        1.2 Capturing for Graphics
        1.3 Organization of this Thesis
    Chapter 2 Image-based Rendering and Relighting
        2.1 Theoretical Concepts
            2.1.1 Plenoptic Illumination Function
            2.1.2 Apparent BRDF
            2.1.3 Types of Lighting
            2.1.4 Image Superposition
        2.2 General Rendering Pipeline
        2.3 Rendering Techniques
            2.3.1 Nearest Neighbours and Interpolation
            2.3.2 Image Warping
        2.4 IBR Representations and Applications
            2.4.1 Navigation
            2.4.2 Relighting Representations
            2.4.3 High Dynamic Range Imaging
        2.5 Chapter Summary
    Chapter 3 Capturing Methods
        3.1 Spatial Tracking Approaches
            3.1.1 Mechanical-based Method
            3.1.2 Electromagnetic-based Method
            3.1.3 Vision-based Method
            3.1.4 Comparison
        3.2 High Dynamic Range Imaging
            3.2.1 Successive Exposure Capturing
            3.2.2 Spatially Varying Filter
            3.2.3 Specially Designed Hardware
        3.3 Chapter Summary
    Chapter 4 System Design and Implementation
        4.1 System Overview
        4.2 The Setup
        4.3 Capturing Procedures
            4.3.1 Calibrations
        4.4 Vision-based Tracking
            4.4.1 The Pin-hole Camera Model
            4.4.2 Basics of Camera Calibration
        4.5 Light Vector Tracking
            4.5.1 The Transformations
            4.5.2 Tracking Accuracy
            4.5.3 Tracking Range Enlargement
        4.6 Capturing Experiment
        4.7 Sampling Analysis
        4.8 Chapter Summary
    Chapter 5 Data Postprocessing
        5.1 Scattered Data Fitting
            5.1.1 Spherical Delaunay Triangulation
            5.1.2 Interpolation on Sphere
        5.2 Compression
        5.3 Chapter Summary
    Chapter 6 Relit Results
        6.1 Relighting with Multiple Directional Lights
        6.2 Relighting with Environmental Maps
    Chapter 7 Conclusion
        7.1 Future Research Aspects
    Appendix A System User Guide
        A.1 Equipment Configuration
        A.2 Operation Guide
        A.3 Software Components
            A.3.1 Image Capturing - lightcap
            A.3.2 Raw Frame Extraction - lfprocess
            A.3.3 Resampling and Compression - svscatterppm2urdf
    Bibliography
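    The image superposition principle underlying the thesis's relighting chapters (sections 2.1.4 and 6.1-6.2) follows from the linearity of light transport: an image under any combination of the captured lights is a weighted sum of the per-light basis images. A minimal numpy sketch (names are illustrative):

    import numpy as np

    def relight(basis_images, light_weights):
        # basis_images: (N, H, W, 3), one image per captured light direction.
        # light_weights: (N,), intensity assigned to each captured light.
        imgs = np.asarray(basis_images, dtype=np.float64)
        w = np.asarray(light_weights, dtype=np.float64)
        return np.tensordot(w, imgs, axes=(0, 0))   # (H, W, 3) relit image

    Relighting with multiple directional lights sets a few weights directly; environment-map relighting derives the weights by sampling the environment map at the captured light directions.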