
    EverLight: Indoor-Outdoor Editable HDR Lighting Estimation

    Because of the diversity of lighting environments, existing illumination estimation techniques have been designed explicitly for indoor or outdoor environments. Methods have focused either on capturing accurate energy (e.g., through parametric lighting models), which emphasizes shading and strong cast shadows, or on producing plausible texture (e.g., with GANs), which prioritizes plausible reflections. Approaches that provide editable lighting capabilities have been proposed, but these tend to rely on simplified lighting models, offering limited realism. In this work, we bridge the gap between these recent trends in the literature and propose a method that combines a parametric light model with 360° panoramas, ready to use as HDRI in rendering engines. We leverage recent advances in GAN-based LDR panorama extrapolation from a regular image, which we extend to HDR using parametric spherical Gaussians. To achieve this, we introduce a novel lighting co-modulation method that injects lighting-related features throughout the generator, tightly coupling the original or edited scene illumination with the panorama generation process. In our representation, users can easily edit light direction, intensity, number of lights, etc. to affect shading, while the generated panorama provides rich, complex reflections that blend seamlessly with the edits. Furthermore, our method covers both indoor and outdoor environments, demonstrating state-of-the-art results even when compared to domain-specific methods. Comment: 11 pages, 7 figures
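
    The abstract describes extending an LDR panorama to HDR with parametric spherical Gaussians. The following is a minimal sketch of that idea, not the authors' code: each light is a spherical Gaussian lobe a·exp(λ(v·μ − 1)) composited over an equirectangular panorama, and editing a light simply means changing its parameters before re-compositing. The function and parameter names are illustrative assumptions.

```python
# Sketch: lift an LDR panorama to HDR by adding parametric spherical Gaussian lights.
import numpy as np

def panorama_directions(height, width):
    """Unit view directions for every pixel of an equirectangular panorama."""
    theta = (np.arange(height) + 0.5) / height * np.pi        # polar angle
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi      # azimuth
    phi, theta = np.meshgrid(phi, theta)
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)                  # (H, W, 3)

def add_sg_lights(ldr_pano, lights):
    """lights: list of dicts with unit 'direction', scalar 'sharpness',
    and RGB 'intensity' (hypothetical parameter names)."""
    dirs = panorama_directions(*ldr_pano.shape[:2])
    hdr = np.power(ldr_pano.astype(np.float64), 2.2)           # rough linearization
    for lgt in lights:
        cos = dirs @ np.asarray(lgt["direction"], dtype=np.float64)
        lobe = np.exp(lgt["sharpness"] * (cos - 1.0))          # spherical Gaussian lobe
        hdr += lobe[..., None] * np.asarray(lgt["intensity"])
    return hdr

pano = np.random.rand(256, 512, 3)                             # stand-in LDR panorama
sun = {"direction": [0.0, 0.0, 1.0], "sharpness": 80.0, "intensity": [50.0, 45.0, 40.0]}
hdr_pano = add_sg_lights(pano, [sun])                          # edit = change sun's params
```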

    Free-viewpoint Indoor Neural Relighting from Multi-view Stereo

    We introduce a neural relighting algorithm for captured indoor scenes that allows interactive free-viewpoint navigation. Our method allows illumination to be changed synthetically, while coherently rendering cast shadows and complex glossy materials. We start with multiple images of the scene and a 3D mesh obtained by multi-view stereo (MVS) reconstruction. We assume that lighting is well explained as the sum of a view-independent diffuse component and a view-dependent glossy term concentrated around the mirror reflection direction. We design a convolutional network around input feature maps that facilitate learning of an implicit representation of scene materials and illumination, enabling both relighting and free-viewpoint navigation. We generate these input maps by exploiting the best elements of both image-based and physically-based rendering. We sample the input views to estimate diffuse scene irradiance, and compute the new illumination caused by user-specified light sources using path tracing. To facilitate the network's understanding of materials and synthesize plausible glossy reflections, we reproject the views and compute mirror images. We train the network on a synthetic dataset where each scene is also reconstructed with MVS. We show results of our algorithm relighting real indoor scenes and performing free-viewpoint navigation with complex and realistic glossy reflections, which so far remained out of reach for view-synthesis techniques.
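
    The shading assumption above, a view-independent diffuse term plus a glossy term concentrated around the mirror reflection direction, can be illustrated with a small sketch under our own simplifications (a Phong-style lobe, which is an illustrative choice and not necessarily the paper's):

```python
# Sketch: diffuse irradiance + glossy lobe around the mirror reflection direction.
import numpy as np

def mirror_direction(normal, view_dir):
    """Reflect the (unit) view direction about the (unit) surface normal."""
    return 2.0 * np.dot(normal, view_dir) * normal - view_dir

def shade(diffuse_irradiance, albedo, normal, view_dir, light_dir,
          light_radiance, gloss=64.0):
    """gloss and the lobe shape are illustrative, not taken from the paper."""
    r = mirror_direction(normal, view_dir)
    glossy = light_radiance * max(np.dot(r, light_dir), 0.0) ** gloss
    return albedo * diffuse_irradiance + glossy   # view-independent + view-dependent

n = np.array([0.0, 0.0, 1.0])
v = np.array([0.0, 0.6, 0.8])
l = np.array([0.0, -0.6, 0.8])                    # light aligned with the mirror direction
print(shade(diffuse_irradiance=0.5, albedo=0.7, normal=n, view_dir=v,
            light_dir=l, light_radiance=1.0))
```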

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space; we visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and to examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in material design systems. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study comparing our prototype with a traditional material design system that uses a gray-scale background and synthetic lighting. The results demonstrate that, with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method can provide the fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing the reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
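
    The first work above builds a low-dimensional perceptual space from crowd-sourced similarity data with multi-dimensional scaling. A minimal sketch of that kind of computation, assuming a pairwise dissimilarity matrix is already available (the data below is a random stand-in, not the study's measurements):

```python
# Sketch: embed materials in a low-dimensional perceptual space via MDS.
import numpy as np
from sklearn.manifold import MDS

n_materials = 20
rng = np.random.default_rng(0)
judgments = rng.random((n_materials, n_materials))     # stand-in crowd-sourced ratings
dissimilarity = (judgments + judgments.T) / 2.0        # symmetrize pairwise judgments
np.fill_diagonal(dissimilarity, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(dissimilarity)           # one 2D point per material
print(embedding.shape)                                 # (20, 2)
```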

    A generative framework for image-based editing of material appearance using perceptual attributes

    Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that allows users to modify the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes (Glossy and Metallic), and validate the perception of appearance in our edited images through a user study.
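
    As a rough structural sketch only, under our own simplifications and not the paper's architecture, the two-step idea can be pictured as a coarse network that takes the image plus a perceptual-attribute offset, followed by a second network that adds residual high-frequency detail. All layer sizes and names are hypothetical.

```python
# Sketch: attribute-conditioned coarse edit followed by a detail-refinement stage.
import torch
import torch.nn as nn

class CoarseEdit(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, image, attribute_delta):
        # Broadcast the scalar attribute change (e.g. "more Glossy") to a map.
        attr_map = attribute_delta.view(-1, 1, 1, 1).expand(-1, 1, *image.shape[-2:])
        return self.net(torch.cat([image, attr_map], dim=1))

class DetailRefine(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, coarse):
        return coarse + self.net(coarse)        # residual high-frequency details

image = torch.rand(1, 3, 64, 64)
delta = torch.tensor([0.5])                     # push the attribute up by 0.5
edited = DetailRefine()(CoarseEdit()(image, delta))
print(edited.shape)                             # torch.Size([1, 3, 64, 64])
```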

    Learning Visual Appearance: Perception, Modeling and Editing

    Visual appearance determines how we understand an object or image and is therefore a fundamental aspect of digital content creation. It is a general term that encompasses others, such as material appearance, defined as the impression we have of a material, which involves a physical interaction between light and matter and the way our visual system perceives it. However, computationally modeling the behavior of our visual system is a difficult task, among other reasons because there is no definitive, unified theory of human visual perception. Moreover, although we have developed algorithms capable of faithfully modeling the interaction between light and matter, there is a disconnect between the physical parameters these algorithms use and the perceptual parameters the human visual system understands. This makes manipulating these physical representations, and their interactions, a tedious and costly task, even for expert users. This thesis seeks to improve our understanding of the perception of material appearance and to use that knowledge to improve existing algorithms for visual content generation. Specifically, the thesis makes contributions in three areas: proposing new computational models to measure appearance similarity; investigating the interaction between illumination and geometry; and developing intuitive applications for appearance manipulation, in particular for human relighting and for editing material appearance. The first part of the thesis explores methods for measuring appearance similarity. Measuring how similar two materials, or images, are is a classic problem in visual computing fields such as computer vision and computer graphics. We first address the problem of material appearance similarity, proposing a deep learning method that combines images with subjective judgments of material similarity collected through user studies. We then explore the problem of similarity between icons; in this second case we use Siamese neural networks, and the style and identity imparted by the artists play a key role in the similarity measure. The second part advances our understanding of how confounding factors affect our perception of material appearance. Two key confounding factors are object geometry and scene illumination. We begin by investigating the effect of these factors on material recognition through several experiments and statistical studies. We also investigate the effect of object motion on the perception of material appearance. In the third part we explore intuitive applications for manipulating visual appearance. First, we address the problem of human relighting: we propose a new formulation of the problem and, based on it, design and train a deep neural network model to relight a scene. Finally, we address the problem of intuitive material editing: we collect human judgments on the perception of different attributes and present a deep neural network model capable of realistically editing materials simply by varying the values of the collected attributes.
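
    The similarity work summarized above learns a metric from human judgments, and the icon study uses Siamese neural networks. A minimal sketch, with hypothetical data and sizes rather than the thesis setup, of a shared-weight Siamese encoder trained with a contrastive loss on pairs labeled by human raters:

```python
# Sketch: Siamese encoder + contrastive loss for learned appearance similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, embed_dim=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, embed_dim)
    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def contrastive_loss(za, zb, same, margin=1.0):
    """same = 1 if the pair was judged similar by raters, 0 otherwise."""
    d = F.pairwise_distance(za, zb)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

encoder = Encoder()                               # shared weights for both branches
a, b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()        # stand-in human judgments
loss = contrastive_loss(encoder(a), encoder(b), labels)
loss.backward()
```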

    On object recognition for industrial augmented reality

    Market pressure, increased functionality, and the need to adapt to an already complex environment, among other factors, mean that workers face fast-changing and challenging tasks throughout the product lifecycle, tasks that push the limits of human cognition. Although some operations are automated nowadays, many must still be carried out by humans because of their complexity. In addition to management strategies and design for X, Industrial Augmented Reality (IAR) has shown the potential to benefit activities such as maintenance, assembly, manufacturing, and repair. It is also expected to upgrade manufacturing processes by simplifying decision-making, reducing time and user movements, diminishing errors, and decreasing mental and physical effort. Nevertheless, IAR has not succeeded in breaking out of the laboratories and establishing itself as a strong solution in industry, mainly because its technical and interaction components are far from ideal; its advance is limited by its enabling technologies. One of its biggest challenges lies in the methods for understanding the surroundings while accounting for the different domain variables that affect IAR implementations. Thus, inspired by systematic methodologies which propose that any problem-solving activity requires defining the characteristics that constrain the problem and the needs to be satisfied, a general frame of IAR was proposed through the identification of Domain Variables (DV): relevant characteristics of the industrial process identified in previous Augmented Reality (AR) applications. These DV describe the user, parts, environment, and task, and they have an impact on the technical implementation and on user performance and perception (Chapter 2). Subsequently, a detailed analysis of the influence of the DV on the technical implementations used to understand the surroundings was performed. The results of this analysis suggest that the DV influence the technical process in two ways: first, they define the boundaries of the technology's characteristics, and second, they cause some of the issues in the process of understanding the surroundings (Chapter 3). Further, an automatic method for creating synthetic datasets using solely the 3D model of the parts was proposed. It is hypothesized that the proposed variables are the main source of visual variation of an object in this context; thus, the proposed method is derived from physically recreated light-matter interactions of these relevant variables. This method aims to create fully labeled datasets for training and testing surrounding-understanding algorithms (Chapter 4). Finally, the proposed method is evaluated in a case study of object classification covering two settings: a particular industrial case and a general classification problem (using classes from ImageNet). Results suggest that models fine-tuned with the proposed method reach performance comparable to (no statistical difference from) models trained with photos. These results validate the proposed method as a viable alternative for training surrounding-understanding algorithms applied to industrial cases (Chapter 5).
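
    A hedged sketch of the evaluation idea above: fine-tune a pretrained classifier on fully labeled synthetic renders of the parts and then test it on photos. The directory path, backbone choice, and hyperparameters below are placeholders, not the thesis setup.

```python
# Sketch: fine-tune a pretrained classifier on a synthetic, fully labeled dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
synthetic = datasets.ImageFolder("renders/", transform=tf)   # placeholder render folder
loader = torch.utils.data.DataLoader(synthetic, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(synthetic.classes))   # new class head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                                 # one fine-tuning pass
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```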

    Deep Appearance Maps

    We propose a deep representation of appearance, i.e., the relation of color, surface orientation, viewer position, material and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance model parameters (e.g., Phong) or illumination (e.g., HDR environment maps). We suggest directly representing appearance itself as a network we call a Deep Appearance Map (DAM). This is a 4D generalization over 2D reflectance maps, which held the view direction fixed. First, we show how a DAM can be learned from images or video frames and later used to synthesize appearance, given new surface orientations and viewer positions. Second, we demonstrate how another network can be used to map from an image or video frames to a DAM network to reproduce this appearance, without using a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we show the example of an appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps.
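
    A minimal sketch, under our own simplifications rather than the paper's architecture, of representing appearance as a network: a small MLP that maps surface orientation and view direction to an RGB color and is fit to observed pixel colors (the data below is a random stand-in).

```python
# Sketch: a "deep appearance map"-style MLP from (normal, view direction) to RGB.
import torch
import torch.nn as nn

class DeepAppearanceMap(nn.Module):
    """Conceptually a 4D map; here normal and view direction are passed as 3D unit vectors."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())
    def forward(self, normal, view_dir):
        return self.mlp(torch.cat([normal, view_dir], dim=-1))

dam = DeepAppearanceMap()
normals = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)   # observed samples
views = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
colors = torch.rand(1024, 3)                                            # stand-in pixel colors
optim = torch.optim.Adam(dam.parameters(), lr=1e-3)
for _ in range(100):                                  # fit the map to the observations
    optim.zero_grad()
    loss = ((dam(normals, views) - colors) ** 2).mean()
    loss.backward()
    optim.step()
```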