
    Attribute-preserving gamut mapping of measured BRDFs

    Reproducing the appearance of real-world materials using current printing technology is problematic. The reduced number of available inks defines the printer's limited gamut, creating distortions in the printed appearance that are hard to control. Gamut mapping refers to the process of bringing an out-of-gamut material appearance into the printer's gamut while minimizing such distortions as much as possible. We present a novel two-step gamut mapping algorithm that allows users to specify which perceptual attribute of the original material they want to preserve (such as brightness or roughness). In the first step, we work in the low-dimensional intuitive appearance space recently proposed by Serrano et al. [SGM*16], and adjust achromatic reflectance via an objective function that strives to preserve certain attributes. From this intermediate representation, we then perform an image-based optimization that includes color information to bring the BRDF into gamut. We show, both objectively and through a user study, how our method yields superior results compared to the state of the art, with the additional advantage that the user can specify which visual attributes need to be preserved. Moreover, we show how this approach can also be used for attribute-preserving material editing.
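The first step described above can be sketched as a small constrained optimization: nudge an achromatic reflectance toward a box-shaped stand-in for the printer gamut while penalizing change in one chosen perceptual attribute. The gamut bound, the brightness proxy, and the weight below are illustrative assumptions, not the paper's actual appearance space or objective.

```python
import numpy as np
from scipy.optimize import minimize

GAMUT_MAX = 0.6  # assumed upper bound of printable achromatic reflectance

def brightness(r):
    # Hypothetical attribute model: mean reflectance as a proxy for brightness.
    return r.mean()

def objective(r, r_orig, weight=10.0):
    # Trade off closeness to the original against preserving the attribute.
    fidelity = np.sum((r - r_orig) ** 2)
    attribute = (brightness(r) - brightness(r_orig)) ** 2
    return fidelity + weight * attribute

r_orig = np.array([0.9, 0.2, 0.5])            # out-of-gamut sample
bounds = [(0.0, GAMUT_MAX)] * len(r_orig)     # printer gamut as box constraints
res = minimize(objective, x0=np.clip(r_orig, 0.0, GAMUT_MAX),
               args=(r_orig,), bounds=bounds)
r_mapped = res.x                              # in-gamut, attribute-preserving
```

Raising `weight` trades fidelity to the original reflectance for tighter preservation of the chosen attribute, mirroring the user control the paper describes.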

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space. We visualize each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and examine how the shape of an object influences this perception.
Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study to compare our prototype with a traditional material design system using a gray-scale background and synthetic lighting. The results demonstrate that with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method provides the fabrication recipe with higher reconstruction accuracy over a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.

    Pushing the Limits of 3D Color Printing: Error Diffusion with Translucent Materials

    Accurate color reproduction is important in many applications of 3D printing, from design prototypes to 3D color copies or portraits. Although full color is available via other technologies, multi-jet printers have greater potential for graphical 3D printing in terms of reproducing complex appearance properties. However, to date these printers cannot produce full color, and doing so poses substantial technical challenges, from the sheer amount of data to the translucency of the available color materials. In this paper, we propose an error diffusion halftoning approach to achieve full color with multi-jet printers, which operates on multiple isosurfaces or layers within the object. We propose a novel traversal algorithm for voxel surfaces, which allows the transfer of existing error diffusion algorithms from 2D printing. The resulting prints faithfully reproduce colors, color gradients and fine-scale details.
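The 2D algorithm that the paper's surface-traversal scheme lifts onto voxel isosurfaces is classic error diffusion. Below is a minimal Floyd-Steinberg sketch on a flat gray patch; the voxel traversal itself is not attempted here.

```python
import numpy as np

def floyd_steinberg(img):
    # Binarize a grayscale image, diffusing each pixel's quantization
    # error onto unvisited neighbours with the classic
    # 7/16, 3/16, 5/16, 1/16 weights.
    img = img.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((16, 16), 0.25)   # flat 25% gray test patch
dots = floyd_steinberg(gray)     # binary halftone preserving the mean tone
```

Because the diffused errors sum to (nearly) zero away from the image border, roughly a quarter of the output pixels fire, preserving the average tone of the patch.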

    The effect of motion on the perception of material appearance

    We analyze the effect of motion on the perception of material appearance. First, we create a set of stimuli containing 72 realistic materials, rendered with varying degrees of linear motion blur. Then we launch a large-scale study on Mechanical Turk to rate a given set of perceptual attributes, such as brightness, roughness, or the perceived strength of reflections. Our statistical analysis shows that certain attributes undergo a significant change, indicating that appearance perception varies under motion. In addition, we further investigate the perception of brightness for the particular cases of rubber and plastic materials. We create new stimuli with ten different luminance levels and seven degrees of motion, and launch a new user study to retrieve their perceived brightness. From the users' judgments, we build two-dimensional maps showing how perceived brightness varies as a function of the luminance and motion of the material.

    A Low-Dimensional Perceptual Space for Intuitive BRDF Editing

    Understanding and characterizing material appearance based on human perception is challenging because of the high dimensionality and nonlinearity of reflectance data. We refer to the process of identifying specific characteristics of material appearance within the same category as material estimation, in contrast to material categorization, which focuses on identifying inter-category differences [FNG15]. In this paper, we present a method to simulate the material estimation process based on human perception. We create a continuous perceptual space for measured tabulated data based on its underlying low-dimensional manifold. Unlike many previous works that only address individual perceptual attributes (such as gloss), we focus on extracting all possible dimensions that can explain the perceived differences between appearances. Additionally, we propose a new material editing interface that combines image navigation and sliders to visualize each perceptual dimension and facilitate the editing of tabulated BRDFs. We conduct a user study to evaluate the efficacy of the perceptual space and the interface in terms of appearance matching.
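As a rough illustration of how a low-dimensional space can be recovered from pairwise judgments, here is classical (Torgerson) MDS in plain NumPy. The paper's manifold construction is more sophisticated, and the dissimilarity values below are invented, not measured perceptual data.

```python
import numpy as np

def classical_mds(D, k=2):
    # Embed n items in k dimensions from an n x n matrix of pairwise
    # dissimilarities D, via the double-centered Gram matrix.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # top-k eigenpairs
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy dissimilarities between four hypothetical materials.
D = np.array([[0.0, 1.0, 2.0, 2.2],
              [1.0, 0.0, 1.1, 2.0],
              [2.0, 1.1, 0.0, 1.0],
              [2.2, 2.0, 1.0, 0.0]])
coords = classical_mds(D, k=2)   # 2D "perceptual" coordinates
```

Distances between the embedded points approximate the input dissimilarities, so nearby points correspond to materials judged similar; the paper builds its editing sliders along such recovered dimensions.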

    Three perceptual dimensions for specular and diffuse reflection

    Previous research investigated the perceptual dimensionality of achromatic reflection of opaque surfaces by using either simple analytic models of reflection or measured reflection properties of a limited sample of materials. Here we aim to extend this work to a broader range of simulated materials. In a first experiment, we used sparse multidimensional scaling techniques to represent a set of rendered stimuli in a perceptual space that is consistent with participants' similarity judgments. Participants were presented with one reference object and four comparisons, rendered with different material properties. They were asked to rank the comparisons according to their similarity to the reference, resulting in an efficient collection of a large number of similarity judgments. In order to interpret the space identified by multidimensional scaling, we ran a second experiment in which observers were asked to rate our experimental stimuli according to a list of 30 adjectives referring to their surface reflectance properties. Our results suggest that perception of achromatic reflection is based on at least three dimensions, which we labelled "Lightness", "Gloss" and "Metallicity", in accordance with the rating results. These dimensions are characterized by a relatively simple relationship with the parameters of the physically based rendering model used to generate our stimuli, indicating that they correspond to different physical properties of the rendered materials. Specifically, "Lightness" relates to diffuse reflections, "Gloss" to the presence of high-contrast, sharp specular highlights, and "Metallicity" to spread-out specular reflections.

    Learning Visual Appearance: Perception, Modeling and Editing

    Visual appearance determines how we understand an object or image and is therefore a fundamental aspect of digital content creation. It is a broad term that encompasses others, such as material appearance, defined as the impression we have of a material, which involves both the physical interaction between light and matter and the way our visual system perceives it. However, computationally modeling the behavior of our visual system is a difficult task, among other reasons because no definitive, unified theory of human visual perception exists. Moreover, although we have developed algorithms capable of faithfully modeling the interaction between light and matter, there is a disconnect between the physical parameters these algorithms use and the perceptual parameters the human visual system understands. This makes manipulating these physical representations, and their interactions, a tedious and costly task, even for expert users. This thesis seeks to improve our understanding of the perception of material appearance and to use that knowledge to improve existing algorithms for the generation of visual content. Specifically, the thesis makes contributions in three areas: proposing new computational models to measure appearance similarity; investigating the interaction between illumination and geometry; and developing intuitive applications for appearance manipulation, in particular for relighting humans and for editing material appearance. The first part of the thesis explores methods for measuring appearance similarity. Measuring how similar two materials, or two images, are is a classic problem in visual computing fields such as computer vision and computer graphics. We first address the problem of material appearance similarity.
We propose a deep-learning-based method that combines images with subjective judgments about material similarity, collected through user studies. We then explore the problem of similarity between icons; in this second case we use Siamese neural networks, and the style and identity imparted by the artists play a key role in the similarity measure. The second part advances our understanding of how confounding factors affect our perception of material appearance. Two key confounding factors are object geometry and scene illumination. We begin by investigating the effect of these factors on material recognition through several experiments and statistical studies. We also investigate the effect of object motion on the perception of material appearance. In the third part we explore intuitive applications for manipulating visual appearance. First, we address the problem of relighting humans: we propose a new formulation of the problem and, building on it, design and train a deep-neural-network model to relight a scene. Finally, we address the problem of intuitive material editing, for which we collect human judgments of the perception of different attributes and present a deep-neural-network model capable of realistically editing materials simply by varying the values of the collected attributes.

    Measurement and rendering of complex non-diffuse and goniochromatic packaging materials

    Realistic renderings of materials with complex optical properties, such as goniochromatism and non-diffuse reflection, are difficult to achieve. In the context of the print and packaging industries, accurate visualisation of the complex appearance of such materials is a challenge, both for communication and quality control. In this paper, we characterise the bidirectional reflectance of two homogeneous print samples displaying complex optical properties. We demonstrate that in-plane retro-reflective measurements from a single input photograph, along with genetic-algorithm-based BRDF fitting, allow us to estimate an optimal set of reflectance model parameters to use for rendering. While such a minimal set of measurements enables visually satisfactory renderings of the measured materials, we show that a few additional photographs lead to more accurate results, in particular for samples with goniochromatic appearance.
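A genetic-algorithm fit of an analytical reflectance model to sparse measurements can be sketched as follows. The simple Phong-style lobe, the synthetic "measurements", and the GA hyper-parameters are all illustrative assumptions, not the models or settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def phong_lobe(cos_theta, kd, ks, n):
    # Simple analytical lobe standing in for the fitted BRDF model.
    return kd + ks * cos_theta ** n

# Synthetic "measurements" generated from a known parameter set (kd, ks, n).
cos_t = np.linspace(0.0, 1.0, 50)
target = phong_lobe(cos_t, 0.2, 0.7, 30.0)

def fitness(p):
    # Negative mean squared error against the measurements (higher is better).
    kd, ks, n = p
    return -np.mean((phong_lobe(cos_t, kd, ks, n) - target) ** 2)

# Minimal generational GA: truncation selection, elitism, Gaussian mutation.
LO, HI = np.array([0.0, 0.0, 1.0]), np.array([1.0, 1.0, 100.0])
pop = rng.uniform(LO, HI, size=(40, 3))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]               # keep the best quarter
    children = elite[rng.integers(0, 10, 30)] \
        + rng.normal(0.0, [0.02, 0.02, 2.0], (30, 3))   # mutate random parents
    pop = np.vstack([elite, np.clip(children, LO, HI)])
best = pop[np.argmax([fitness(p) for p in pop])]        # fitted (kd, ks, n)
```

With elitism the best candidate never regresses, so the population steadily homes in on parameters that reproduce the synthetic measurements.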

    Perceptual quality of BRDF approximations: dataset and metrics

    Bidirectional Reflectance Distribution Functions (BRDFs) are pivotal to the perceived realism in image synthesis. While measured BRDF datasets are available, reflectance functions are most often approximated by analytical formulas for storage efficiency. These approximations are typically obtained by minimizing metrics such as L2, or weighted quadratic, distances, but these metrics do not usually correlate well with perceptual quality when the BRDF is used in a rendering context, which motivates a perceptual study. The contributions of this paper are threefold. First, we perform a large-scale user study to assess the perceptual quality of 2,026 BRDF approximations, resulting in 84,138 judgments across 1,005 unique participants. We explore this dataset and analyze perceptual scores based on material type and illumination. Second, we assess nine analytical BRDF models in their ability to approximate tabulated BRDFs. Third, we assess several image-based and BRDF-based (Lp, optimal transport and kernel distance) metrics in their ability to approximate perceptual similarity judgments.
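A BRDF-based metric of the kind evaluated here, a cosine-weighted Lp distance between tabulated slices, takes only a few lines. The weighting scheme and the synthetic slices below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def weighted_lp(brdf_a, brdf_b, cos_theta, p=2):
    # Cosine-weighted Lp distance between two tabulated 1D BRDF slices,
    # down-weighting grazing angles where measurements are least reliable.
    w = cos_theta
    diff = np.abs(brdf_a - brdf_b) ** p
    return (np.sum(w * diff) / np.sum(w)) ** (1.0 / p)

theta = np.linspace(0.0, np.pi / 2, 90, endpoint=False)
cos_t = np.cos(theta)
measured = 0.3 + 0.5 * cos_t ** 20     # stand-in for a measured slice
fit_good = 0.3 + 0.5 * cos_t ** 22     # close analytical approximation
fit_bad  = 0.3 + 0.1 * cos_t ** 22     # poor analytical approximation

d_good = weighted_lp(measured, fit_good, cos_t)
d_bad = weighted_lp(measured, fit_bad, cos_t)
```

Whether such numerical distances rank approximations the same way human observers do is exactly the question the paper's dataset is built to answer.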

    Printing Beyond Color: Spectral and Specular Reproduction

    For accurate printing (reproduction), two important appearance attributes to consider are color and gloss. These attributes relate to the two topics of this dissertation: spectral reproduction and specular (gloss) printing. In the conventional printing workflow, known as the metameric printing workflow and in predominant use today, high-quality prints -- in terms of colorimetric accuracy -- can be achieved only under a predefined illuminant (i.e. an illuminant that the printing pipeline is adjusted to, e.g. daylight). While this printing workflow is useful and sufficient for many everyday purposes, in some special cases, such as artwork (e.g. painting) reproduction, security printing, and accurate industrial color communication, in which accurate reproduction of an original image under a variety of illumination conditions (e.g. daylight, tungsten light, museum light) is required, metameric reproduction may produce satisfactory results only with luck. In these cases, another printing workflow, known as the spectral printing pipeline, must be used, with the ideal aim of an illuminant-invariant match between the original image and the reproduction. In this workflow, the reproduction of raw spectral data (i.e. reflectances in the visible wavelength range) is taken into account, rather than the reproduction of colorimetric values (colors) alone under a predefined illuminant. Due to the limitations of existing printing systems, the reproduction of all reflectances is not possible, even with multi-channel (multi-colorant) printers. Therefore, practical strategies are required to map non-reproducible reflectances into reproducible spectra and to choose appropriate combinations of printer colorants for the reproduction of the mapped reflectances.
For this purpose, an approach called Spatio-Spectral Gamut Mapping and Separation (SSGMS) was proposed, which results in almost artifact-free spectral reproduction under a set of various illuminants. The quality-control stage is usually the last stage in any printing pipeline. Nowadays, the quality of the printout is usually controlled only in terms of colorimetric accuracy and common printing artifacts. However, some gloss-related artifacts, such as gloss differential (inconsistent gloss appearance across an image, caused mostly by variations in deposited ink area coverage at different spots), are ignored, because no strategy to avoid them exists. In order to avoid such gloss-related artifacts and to control the glossiness of the printout locally, three printing strategies were proposed. In general, for perceptually accurate reproduction of the color and gloss appearance attributes, understanding the relationship between measured values and perceived magnitudes of these attributes is essential. There has been much research into the reproduction of colors within perceptually meaningful color spaces, but little research from the gloss perspective has been carried out. Most such studies are based on simulated display-based images (mostly with neutral colors) and do not take real objects into account. In this dissertation, three psychophysical experiments were conducted to investigate the relationship between measured gloss values (objective quantities) and perceived gloss magnitudes (subjective quantities), using real colored samples printed with the aforementioned printing strategies. These experiments revealed that this relationship can be explained by a power function, in accordance with Stevens' power law, over almost the entire gloss range.
Another psychophysical experiment was conducted to investigate the interrelation between perceived surface gloss and texture, using 2.5D samples printed with two different texture types and various gloss levels and texture elevations. According to the results of this experiment, different macroscopic texture types and levels (in terms of texture elevation) were found to influence the perceived surface gloss level slightly. No noticeable influence of surface gloss on the perceived texture level was observed, indicating texture constancy regardless of the printed gloss level. The SSGMS approach proposed for spectral reproduction, the three printing strategies presented for gloss printing, and the results of the psychophysical experiments conducted on gloss printing and appearance can be used to improve overall print quality in terms of color and gloss reproduction.
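The Stevens' power law relationship reported here, perceived magnitude P = a * S^b for measured gloss S, is typically fitted by linear regression in log-log space, since log P = log a + b log S. A sketch on fabricated data (the dissertation's actual measurements are not reproduced here):

```python
import numpy as np

# Fabricated gloss data following an assumed power law P = a * S**b
# with a = 1.8 and exponent b = 0.6.
S = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # measured gloss (objective)
P = 1.8 * S ** 0.6                            # perceived gloss (subjective)

# A power law is a straight line in log-log space, so an ordinary
# least-squares fit of log P against log S recovers the parameters.
b, log_a = np.polyfit(np.log(S), np.log(P), 1)
a = np.exp(log_a)
```

An exponent b below 1 means perceived gloss grows sublinearly with the measured value, which is the compressive behaviour Stevens' law predicts for many perceptual magnitudes.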