
    Guided Robust Matte-Model Fitting for Accelerating Multi-light Reflectance

    The generation of a basic matte model is at the core of many multi-light reflectance processing approaches, such as Photometric Stereo or Reflectance Transformation Imaging. To recover information on objects' shape and appearance, the matte model is used directly or combined with specialized methods for modeling high-frequency behaviors. Multivariate robust regression offers a general solution to reliably extract the matte component when source data is heavily contaminated by shadows, inter-reflections, specularity, or noise. However, robust multivariate modeling is usually very slow. In this paper, we accelerate robust fitting by drastically reducing the number of tested candidate solutions using a guided approach. Our method propagates already known solutions to nearby pixels using a similarity-driven flood-fill strategy, and exploits this knowledge to order possible candidate solutions and to determine convergence conditions. The method has been tested on objects with a variety of reflectance behaviors, showing state-of-the-art accuracy with respect to current solutions, and a significant speed-up without accuracy reduction with respect to multivariate robust regression.
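    To make the guided fitting idea above concrete, here is a minimal Python sketch (not the paper's implementation) of a robust per-pixel Lambertian fit b ≈ L·g that tests a solution propagated from an already-solved neighbor before blind random candidates; the `seed` argument, trial count, and thresholds are illustrative assumptions.

```python
# Hedged sketch: robust per-pixel matte fit b ~ L @ g, where L holds light
# directions and g is the albedo-scaled normal, seeded with a neighbor's
# solution in the spirit of the guided approach described above.
import numpy as np

def robust_matte_fit(L, b, seed=None, n_trials=50, inlier_thresh=0.05, rng=None):
    """L: (m, 3) light directions, b: (m,) observed intensities."""
    rng = np.random.default_rng(rng)
    best_g, best_err = None, np.inf
    candidates = []
    if seed is not None:                       # guided: test propagated solution first
        candidates.append(seed)
    for _ in range(n_trials):                  # blind candidates from 3-light subsets
        idx = rng.choice(len(b), size=3, replace=False)
        try:
            candidates.append(np.linalg.solve(L[idx], b[idx]))
        except np.linalg.LinAlgError:
            continue
    for g in candidates:
        inliers = np.abs(L @ g - b) < inlier_thresh
        if inliers.sum() < 3:
            continue
        g_ref, *_ = np.linalg.lstsq(L[inliers], b[inliers], rcond=None)
        err = np.median(np.abs(L @ g_ref - b))  # LMedS-style robust score
        if err < best_err:
            best_g, best_err = g_ref, err
        if seed is not None and err < 0.5 * inlier_thresh:
            break                               # early convergence when the seed fits
    return best_g                               # albedo = |g|, normal = g / |g|
```

    A flood-fill pass over the image would pass each solved pixel's `g` as `seed` to similar neighbors, drastically cutting the number of blind trials.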

    Multispectral RTI Analysis of Heterogeneous Artworks

    We propose a novel multi-spectral reflectance transformation imaging (MS-RTI) framework for the acquisition and direct analysis of the reflectance behavior of heterogeneous artworks. Starting from free-form acquisitions, we compute per-pixel calibrated multi-spectral appearance profiles, which associate a reflectance value to each sampled light direction and frequency. Visualization, relighting, and feature extraction are performed directly on appearance profile data, applying scattered data interpolation based on Radial Basis Functions to estimate per-pixel reflectance from novel lighting directions. We demonstrate how the proposed solution can convey more insights on the object materials and geometric details compared to classical multi-light methods that rely on low-frequency analytical model fitting, eventually mixed with a separate handling of high-frequency components, hence requiring constraining priors on material behavior. The flexibility of our approach is illustrated on two heterogeneous case studies, a painting and a dark shiny metallic sculpture, that showcase feature extraction, visualization, and analysis of high-frequency properties of artworks using multi-light, multi-spectral (Visible, UV, and IR) acquisitions.
    Funding: European Union (EU), Horizon 2020, action H2020-EU.3.6.3 (Reflective societies - cultural heritage and European identity), project Scan4Reco, grant number 665091; the DSURF (PRIN 2015) project funded by the Italian Ministry of University and Research; Sardinian Regional Authorities under projects VIGEC and Vis&VideoLa
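    The per-pixel relighting step described above is essentially scattered-data interpolation over the sampled light directions. Below is a hedged Python sketch of that idea; the Gaussian kernel and its width `eps` are illustrative choices, not the paper's exact parameters.

```python
# Sketch of RBF-based relighting for one pixel's appearance profile:
# reflectance samples r_i observed from unit light directions l_i are
# interpolated to estimate reflectance from a novel light direction.
import numpy as np

def rbf_weights(dirs, values, eps=4.0):
    """dirs: (m, 3) unit light directions, values: (m,) reflectance samples."""
    d2 = np.sum((dirs[:, None, :] - dirs[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-eps * d2)                  # Gaussian RBF kernel matrix
    return np.linalg.solve(Phi, values)      # interpolation weights

def relight(dirs, weights, new_dir, eps=4.0):
    """Estimate the pixel's reflectance for a novel unit direction new_dir."""
    d2 = np.sum((dirs - new_dir) ** 2, axis=-1)
    return np.exp(-eps * d2) @ weights
```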

    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and varying surface illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been demonstrated in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and application areas where MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the tools available to support this analysis. Here, we cover methods that support direct exploration of the captured MLIC, methods that generate relightable models from MLICs, non-photorealistic visualization methods that rely on MLICs, methods that estimate normal maps from MLICs, and visualization tools used for MLIC analysis. In chapter 3, we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of (Neural)RTI methods. Then, in chapter 5, we present a neural network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relighted from novel directions, particularly in the case of challenging glossy materials. Finally, in chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be used on single images, and conclude our presentation.
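    As a rough sketch of the pixel-based encoding and relighting idea behind NeuralRTI, the following PyTorch model compresses each pixel's stack of light-varying intensities into a few coefficients and decodes them together with a novel light direction; layer sizes and the latent dimension are illustrative assumptions, not the published architecture.

```python
# Hedged sketch of a NeuralRTI-style per-pixel autoencoder, assuming PyTorch.
import torch
import torch.nn as nn

class PixelRTI(nn.Module):
    def __init__(self, n_lights, k=9):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_lights, 64), nn.ELU(),
            nn.Linear(64, k),                    # k coefficients stored per pixel
        )
        self.decoder = nn.Sequential(
            nn.Linear(k + 2, 64), nn.ELU(),      # coefficients + 2D light direction
            nn.Linear(64, 1),                    # relit intensity
        )

    def forward(self, intensities, light_uv):
        code = self.encoder(intensities)         # compressed per-pixel representation
        return self.decoder(torch.cat([code, light_uv], dim=-1))

# Training would minimize the reconstruction error of the decoded intensities
# against the captured MLIC over all pixels and sampled light directions.
```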

    Automatic Reconstruction of Textured 3D Models

    Three-dimensional modeling and visualization of environments is an increasingly important problem. This work addresses the problem of automatic 3D reconstruction: we present a system for the unsupervised reconstruction of textured 3D models in the context of modeling indoor environments. We present solutions to all aspects of the modeling process and an integrated system for the automatic creation of large-scale 3D models.

    Learning Visual Appearance: Perception, Modeling and Editing

    Visual appearance determines how we understand an object or image, and is therefore a fundamental aspect of digital content creation. It is a general term, encompassing others such as material appearance, defined as the impression we have of a material, which involves both a physical interaction between light and matter and the ability of our visual system to perceive it. However, computationally modeling the behavior of our visual system is a difficult task, among other reasons because no definitive, unified theory of human visual perception exists. Moreover, although we have developed algorithms capable of faithfully modeling the interaction between light and matter, there is a disconnect between the physical parameters these algorithms use and the perceptual parameters the human visual system understands. This makes manipulating these physical representations, and their interactions, a tedious and costly task, even for expert users. This thesis seeks to improve our understanding of the perception of material appearance and to use that knowledge to improve existing algorithms for visual content generation. Specifically, the thesis contributes to three areas: proposing new computational models for measuring appearance similarity; investigating the interaction between illumination and geometry; and developing intuitive applications for appearance manipulation, in particular for relighting humans and for editing material appearance.
    The first part of the thesis explores methods for measuring appearance similarity. Being able to measure how similar two materials, or images, are is a classic problem in visual computing fields such as computer vision and computer graphics. We first address the problem of material appearance similarity, proposing a deep-learning-based method that combines images with subjective judgments of material similarity collected through user studies. We then explore the problem of similarity between icons; in this second case, siamese neural networks are used, and the style and identity conveyed by the artists play a key role in the similarity measure.
    The second part advances the understanding of how confounding factors affect our perception of material appearance. Two key confounding factors are object geometry and scene illumination. We begin by investigating the effect of these factors on material recognition through various experiments and statistical studies. We also investigate the effect of object motion on the perception of material appearance.
    In the third part, we explore intuitive applications for manipulating visual appearance. First, we address the problem of relighting humans. We propose a new formulation of the problem and, based on it, design and train a deep-neural-network-based model to relight a scene. Finally, we address the problem of intuitive material editing. To this end, we collect human judgments of the perception of different attributes and present a model, based on deep neural networks, capable of realistically editing materials simply by varying the values of the collected attributes.
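    The icon-similarity work mentioned above uses siamese neural networks trained on collected similarity judgments. The sketch below shows the generic pattern, assuming PyTorch; the backbone, embedding size, and contrastive loss are illustrative stand-ins, not the thesis's actual architecture.

```python
# Hedged sketch of a siamese embedding model with a contrastive loss,
# trained on pairs labeled similar/dissimilar by study participants.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):                     # shared weights for both branches
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(za, zb, same, margin=0.5):
    """same: 1 if the pair was judged similar in the user study, else 0."""
    d = (za - zb).pow(2).sum(-1).sqrt()
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()
```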

    Intuitive and Accurate Material Appearance Design and Editing

    Creating and editing high-quality materials for photorealistic rendering can be a difficult task due to the diversity and complexity of material appearance. Material design is the process by which artists specify the reflectance properties of a surface, such as its diffuse color and specular roughness. Even with the support of commercial software packages, material design can be a time-consuming trial-and-error task due to the counter-intuitive nature of complex reflectance models. Moreover, many material design tasks require the physical realization of virtually designed materials as the final step, which makes the process even more challenging due to rendering artifacts and the limitations of fabrication. In this dissertation, we propose a series of studies and novel techniques to improve the intuitiveness and accuracy of material design and editing. Our goal is to understand how humans visually perceive materials, simplify user interaction in the design process, and improve the accuracy of the physical fabrication of designs. Our first work focuses on understanding the perceptual dimensions of measured material data. We build a perceptual space based on a low-dimensional reflectance manifold that is computed from crowd-sourced data using a multi-dimensional scaling model. Our analysis shows the proposed perceptual space is consistent with the physical interpretation of the measured data. We also put forward a new material editing interface that takes advantage of the proposed perceptual space, visualizing each dimension of the manifold to help users understand how it changes the material appearance. Our second work investigates the relationship between translucency and glossiness in material perception. We conduct two human subject studies to test whether subsurface scattering impacts gloss perception and examine how the shape of an object influences this perception. Based on our results, we discuss why it is necessary to include transparent and translucent media in future research on gloss perception and material design. Our third work addresses user interaction in the material design system. We present a novel Augmented Reality (AR) material design prototype, which allows users to visualize their designs against a real environment and lighting. We believe introducing AR technology can make the design process more intuitive and improve the authenticity of the results for both novice and experienced users. To test this assumption, we conduct a user study comparing our prototype with a traditional material design system with a gray-scale background and synthetic lighting. The results demonstrate that, with the help of AR techniques, users perform better in terms of objectively measured accuracy and time, and they are subjectively more satisfied with their results. Finally, our last work turns to the challenge presented by the physical realization of designed materials. We propose a learning-based solution to map the virtually designed appearance to a meso-scale geometry that can be easily fabricated. Essentially, this is a fitting problem, but compared with previous solutions, our method can provide the fabrication recipe with higher reconstruction accuracy for a large fitting gamut. We demonstrate the efficacy of our solution by comparing our reconstructions with existing solutions and comparing fabrication results with the original design. We also provide an application of bi-scale material editing using the proposed method.
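    The perceptual space described above is built by embedding crowd-sourced similarity judgments with multi-dimensional scaling. A minimal sketch of that step follows, assuming scikit-learn; the 2D target dimension and the random stand-in data are illustrative.

```python
# Illustrative sketch: embed a dissimilarity matrix aggregated from
# user-study judgments into a low-dimensional perceptual space via MDS.
import numpy as np
from sklearn.manifold import MDS

n_materials = 20
rng = np.random.default_rng(0)
# Random symmetric dissimilarities, purely a stand-in for real crowd-sourced data.
D = rng.random((n_materials, n_materials))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # each material's position in the perceptual space
```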

    Towards Predictive Rendering in Virtual Reality

    The quest to generate predictive images, i.e., images representing radiometrically correct renditions of reality, has been a longstanding problem in computer graphics. The exactness of such images is extremely important for Virtual Reality applications like Virtual Prototyping, where users need to make decisions impacting large investments based on the simulated images. Unfortunately, the generation of predictive imagery is still an unsolved problem for manifold reasons, especially if real-time restrictions apply. First, existing scenes used for rendering are not modeled accurately enough to create predictive images. Second, even with huge computational effort, existing rendering algorithms are not able to produce radiometrically correct images. Third, current display devices need to convert rendered images into some low-dimensional color space, which prohibits the display of radiometrically correct images. Overcoming these limitations is the focus of current state-of-the-art research, and this thesis contributes to this task. First, it briefly introduces the necessary background and identifies the steps required for real-time predictive image generation. Then, existing techniques targeting these steps are presented and their limitations are pointed out. To solve some of the remaining problems, novel techniques are proposed. They cover various steps in the predictive image generation process, ranging from accurate scene modeling over efficient data representation to high-quality, real-time rendering. A special focus of this thesis lies on the real-time generation of predictive images using bidirectional texture functions (BTFs), i.e., very accurate representations for spatially varying surface materials. The techniques proposed in this thesis enable efficient handling of BTFs by compressing the huge amount of data contained in this material representation, applying them to geometric surfaces using texture and BTF synthesis techniques, and rendering BTF-covered objects in real-time. Further approaches proposed in this thesis target the inclusion of real-time global illumination effects or more efficient rendering using novel level-of-detail representations for geometric objects. Finally, this thesis assesses the rendering quality achievable with BTF materials, indicating a significant increase in realism but also confirming the remaining problems to be solved to achieve truly predictive image generation.
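    BTF compression of the kind discussed above is classically done by arranging the data as a texel-by-(view, light) matrix and keeping a truncated linear basis. The following sketch shows a generic PCA/SVD variant in Python; it is an assumed, simplified stand-in for the thesis's actual compression techniques.

```python
# Hedged sketch: truncated-SVD compression of a BTF arranged as a matrix
# with one row per texel and one column per (view, light) sample.
import numpy as np

def compress_btf(btf_matrix, n_components=16):
    """btf_matrix: (n_texels, n_view_light_pairs) reflectance samples."""
    mean = btf_matrix.mean(axis=0)
    U, S, Vt = np.linalg.svd(btf_matrix - mean, full_matrices=False)
    basis = Vt[:n_components]                 # per-(view, light) eigen-textures
    coeffs = (btf_matrix - mean) @ basis.T    # small per-texel coefficient sets
    return mean, basis, coeffs

def decompress_texel(mean, basis, coeffs, texel):
    """Reconstruct one texel's apparent BRDF samples from its coefficients."""
    return mean + coeffs[texel] @ basis
```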

    Ray Tracing Methods for Point Cloud Rendering

    State-of-the-art scanning and capturing devices are able to produce surface point cloud models of a wide range of real-world objects. The visualization and rendering of enormous point clouds with millions or billions of points is demanding. VR and AR applications can utilize embedded real-world objects to generate visually pleasing and immersive virtual worlds. In order to achieve convincing real-life equivalents in VR, rendering techniques that can replicate realistic material and lighting effects are needed. This can be achieved by utilizing ray tracing methods to render the virtual world onto a monitor or a head-mounted display. Virtual reality applications need real-time stereoscopic rendering with high frame rates and resolution to produce a realistic and comfortable experience. This sets high demands on a point cloud ray tracing pipeline, which needs efficient intersection testing between rays and point cloud models. An easily intersectable global surface can be reconstructed from the point cloud model with, e.g., triangle mesh reconstruction. However, this can be computationally demanding and even wasteful if parts of the model are out of view or occluded. Direct point cloud ray tracing methods consider local features of the point cloud to generate intersectable surfaces only when needed. In this thesis, we survey and compare different methods for directly ray tracing point cloud models without global surface reconstruction. Methods are compared with asymptotic complexity analysis, and it is concluded that direct ray tracing of point clouds can be computationally more efficient than global surface reconstruction.
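    The efficient ray/point-cloud intersection testing discussed above typically treats each point as an oriented disk (splat). Below is a minimal Python sketch of such a ray-splat test; the disk representation and names are illustrative, not a specific method from the thesis.

```python
# Hedged sketch: intersect a ray with one splat, i.e., an oriented disk
# defined by a point's position, normal, and radius.
import numpy as np

def intersect_splat(origin, direction, center, normal, radius):
    """Return hit distance t along the ray, or None if the ray misses."""
    denom = np.dot(direction, normal)
    if abs(denom) < 1e-8:                      # ray parallel to the splat plane
        return None
    t = np.dot(center - origin, normal) / denom
    if t <= 0:                                 # plane is behind the ray origin
        return None
    hit = origin + t * direction
    if np.linalg.norm(hit - center) > radius:  # outside the disk
        return None
    return t

# A full tracer would query only nearby splats via a BVH or octree and shade
# with the closest accepted hit, avoiding global mesh reconstruction entirely.
```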
