
    BxDF material acquisition, representation, and rendering for VR and design

    Photorealistic, physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how light is reflected, scattered, and absorbed. To reproduce appearance we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials has been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials it can represent, and different approaches vary widely in compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
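
    A data-driven BSDF, as used in the second case study, is essentially measured reflectance stored in a table indexed by incident and outgoing directions rather than an analytic formula. The Python sketch below illustrates only that idea under simple assumptions (an isotropic parameterisation over theta_in, theta_out, delta_phi, nearest-neighbour lookup); it is not the representation used by Radiance or Yulio.

```python
import numpy as np

class TabulatedBSDF:
    """Toy data-driven (tabulated) isotropic BSDF.

    Reflectance values live on a regular grid over (theta_in, theta_out,
    delta_phi) and are fetched by nearest-neighbour indexing. Real systems
    use denser grids, interpolation, and careful parameterisations.
    """

    def __init__(self, table):
        # table shape: (n_theta_in, n_theta_out, n_delta_phi)
        self.table = np.asarray(table, dtype=np.float64)

    def eval(self, theta_in, theta_out, delta_phi):
        n_ti, n_to, n_dp = self.table.shape
        i = min(int(theta_in  / (np.pi / 2) * n_ti), n_ti - 1)
        j = min(int(theta_out / (np.pi / 2) * n_to), n_to - 1)
        k = min(int((delta_phi % (2 * np.pi)) / (2 * np.pi) * n_dp), n_dp - 1)
        return self.table[i, j, k]

# Example: a constant (Lambertian-like) table with reflectance 0.5 / pi
bsdf = TabulatedBSDF(np.full((16, 16, 32), 0.5 / np.pi))
print(bsdf.eval(0.3, 0.7, 1.0))
```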

    Bidirectional Texture Functions: Acquisition, Rendering and Quality Evaluation

    As one of its primary objectives, computer graphics aims at simulating the complex reflection behaviour of fabrics. Characteristic surface reflectance effects of fabrics, such as highlights, anisotropy, or retro-reflection, make them difficult to synthesize. This problem can be addressed with Bidirectional Texture Functions (BTFs), 2D textures captured under varying light and view directions. However, the acquisition of Bidirectional Texture Functions requires an expensive setup and the measurement process is very time-consuming. Moreover, the size of BTF data can range from hundreds of megabytes to several gigabytes, since a large number of high-resolution images must be captured in the ideal case. Furthermore, three-dimensional textured models rendered with BTFs are subject to various types of distortion during acquisition, synthesis, compression, and processing. An appropriate image quality assessment scheme is a useful tool for evaluating image processing algorithms, especially algorithms designed to leave the image visually unchanged. In this contribution, we present an investigation aimed at locating a robust threshold for downsampling BTF images without losing perceptual quality. To this end, an experimental study on how decreasing the texture resolution influences the perceived quality of the rendered images is presented and discussed. Next, two basic improvements to the use of BTFs for rendering are presented: first, the cost of BTF acquisition is addressed by introducing a flexible, low-cost stepper-motor setup that allows a high-quality BTF database to be captured at user-defined, arbitrary angles; second, the number of acquired textures is adapted to the perceptual quality of the renderings so that the database does not become oversized and fits better in memory when rendered. Although visual attention is one of the essential attributes of the human visual system (HVS), it is neglected in most existing quality metrics. In this thesis, an objective quality metric based on extracting visual attention regions from images, together with an investigation of the influence of visual attention on perceived image quality, is proposed: the Visual Attention Based Image Quality Metric (VABIQM). The novel metric indicates that considering visual saliency can offer significant benefits in constructing objective quality metrics that predict visible quality differences in images rendered from compressed and uncompressed BTFs, and it also outperforms existing straightforward image quality metrics at detecting perceivable differences.
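
    The core idea behind an attention-weighted metric such as VABIQM, namely that distortions in visually salient regions should count more than distortions elsewhere, can be illustrated with a simple weighted-error measure. The sketch below is not the published VABIQM algorithm; it assumes a precomputed saliency map and uses a plain squared-error base metric purely for illustration.

```python
import numpy as np

def saliency_weighted_mse(reference, test, saliency, floor=0.05):
    """Squared error weighted by a visual-attention (saliency) map.

    reference, test : float arrays in [0, 1], shape (H, W) or (H, W, 3)
    saliency        : float array in [0, 1], shape (H, W); higher = more salient
    floor           : minimum weight so non-salient regions still contribute
    """
    err = (reference.astype(np.float64) - test.astype(np.float64)) ** 2
    if err.ndim == 3:                       # average over colour channels
        err = err.mean(axis=2)
    w = floor + (1.0 - floor) * saliency    # per-pixel weights in [floor, 1]
    return float((w * err).sum() / w.sum())

# Example with random data standing in for a rendering and a distorted copy
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
dist = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
sal = rng.random((64, 64))
print(saliency_weighted_mse(ref, dist, sal))
```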

    BRDF representation and acquisition

    Photorealistic rendering of real-world environments is important in a range of different areas, including visual special effects, interior/exterior modelling, architectural modelling, cultural heritage, computer games, and automotive design. Currently, rendering systems are able to produce photorealistic simulations of the appearance of many real-world materials. In the real world, viewer perception of objects depends on the lighting and on object, material, and surface characteristics: the way a surface interacts with light, how light is reflected, scattered, or absorbed by the surface, and the impact these characteristics have on material appearance. In order to reproduce this, it is necessary to understand how materials interact with light, which is why the representation and acquisition of material models has become such an active research area. This survey of the state of the art in BRDF representation and acquisition presents an overview of BRDF (Bidirectional Reflectance Distribution Function) models used to represent surface and material reflection characteristics, and describes current acquisition methods for the capture and rendering of photorealistic materials.
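
    For reference, the quantity this survey covers is defined as the ratio of reflected differential radiance to incident differential irradiance. The standard formulation below (not quoted from the survey itself) also lists the two constraints that physically plausible BRDF models are usually required to satisfy.

```latex
% Definition of the BRDF
f_r(\omega_i, \omega_o) \;=\; \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
\qquad \big[\mathrm{sr}^{-1}\big]

% Helmholtz reciprocity
f_r(\omega_i, \omega_o) \;=\; f_r(\omega_o, \omega_i)

% Energy conservation
\int_{\Omega} f_r(\omega_i, \omega_o)\,\cos\theta_o\,\mathrm{d}\omega_o \;\le\; 1 \quad \forall\,\omega_i
```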

    Acquisition and modeling of material appearance

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 131-143). In computer graphics, the realistic rendering of synthetic scenes requires a precise description of surface geometry, lighting, and material appearance. While 3D geometry scanning and modeling have advanced significantly in recent years, measurement and modeling of accurate material appearance have remained critical challenges. Analytical models are the main tools used to describe material appearance in most current applications. They provide compact and smooth approximations of real materials but lack the expressiveness to represent complex materials. Data-driven approaches based on exhaustive measurements are fully general, but the measurement process is difficult and the storage requirements are very high. In this thesis, we propose hybrid representations that are more compact and easier to acquire than exhaustive measurements, while preserving much of the generality of a data-driven approach. To represent complex bidirectional reflectance distribution functions (BRDFs), we present a new method to estimate a general microfacet distribution from measured data. We show that this representation is able to reproduce complex materials that are impossible to model with purely analytical models. We also propose a new method that significantly reduces the measurement cost and time of the bidirectional texture function (BTF) through a statistical characterization of texture appearance. Our reconstruction method combines naturally aligned images and alignment-insensitive statistics to produce visually plausible results. We demonstrate an acquisition system that is able to capture intricate materials such as fabrics in less than ten minutes with commodity equipment. In addition, we present a method to facilitate effective user design in the space of material appearance. We introduce a metric in the space of reflectance that corresponds roughly to perceptual measures. The main idea of our approach is to evaluate reflectance differences in terms of the rendered images they induce, instead of the reflectance function itself defined over the angular domains. With rendered images, we show that even a simple computational metric can provide good perceptual spacing and enable intuitive navigation of the reflectance space. By Wai Kit Addy Ngan, Ph.D.
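
    The thesis's central idea for its perceptual metric, comparing reflectance functions through the images they produce rather than through their raw angular values, can be sketched as follows. The `render` callback is a hypothetical stand-in for whatever renderer and fixed scene/lighting are chosen, and the cube-root compression is just one simple perceptually motivated transfer function, not necessarily the one used in the thesis.

```python
import numpy as np

def image_space_brdf_distance(brdf_a, brdf_b, render):
    """Compare two BRDFs by the rendered images they induce.

    brdf_a, brdf_b : opaque BRDF descriptions understood by `render`
    render         : callable mapping a BRDF to an HDR image (H, W, 3),
                     with scene, geometry and lighting held fixed
    """
    img_a = np.asarray(render(brdf_a), dtype=np.float64)
    img_b = np.asarray(render(brdf_b), dtype=np.float64)
    # Compress HDR values before differencing, roughly mimicking perception
    a = np.cbrt(np.clip(img_a, 0.0, None))
    b = np.cbrt(np.clip(img_b, 0.0, None))
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Usage sketch with a dummy renderer that just returns stored images
images = {"brdf1": np.ones((8, 8, 3)), "brdf2": np.ones((8, 8, 3)) * 1.2}
print(image_space_brdf_distance("brdf1", "brdf2", lambda k: images[k]))
```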

    Realistic Visualization of Animated Virtual Cloth

    Photo-realistic rendering of real-world objects is a broad research area with applications in various fields, such as computer-generated films, entertainment, and e-commerce. Within photo-realistic rendering, the rendering of cloth is a subarea involving many important aspects, ranging from material surface reflection properties and macroscopic self-shadowing to animation sequence generation and compression. In this thesis, besides an introduction to the topic and a broad overview of related work, different methods to handle the major aspects of cloth rendering are described. Material surface reflection properties play an important part in reproducing the look & feel of materials, that is, in identifying a material only by looking at it. The BTF (bidirectional texture function), as a function of viewing and illumination direction, is an appropriate representation of reflection properties. It captures effects caused by the mesostructure of a surface, such as roughness, self-shadowing, occlusion, inter-reflections, subsurface scattering, and colour bleeding. Unfortunately, a BTF data set of a material consists of hundreds to thousands of images, which far exceeds the main memory of current personal computers. This work describes the first usable method to efficiently compress and decompress BTF data for rendering at interactive to real-time frame rates. It is based on PCA (principal component analysis) of the BTF data set. While preserving the important visual aspects of the BTF, the achieved compression rates allow several different data sets to be stored in the main memory of consumer hardware while maintaining high rendering quality. Correct handling of complex illumination conditions plays another key role in the realistic appearance of cloth. Therefore, an extension of the BTF compression and rendering algorithm is described which supports distant direct HDR (high-dynamic-range) illumination stored in environment maps. To further enhance the appearance, macroscopic self-shadowing has to be taken into account: for the visualization of folds and a life-like 3D impression, this kind of shadow is essential. This work describes two methods to compute these shadows. The first is seamlessly integrated into the illumination part of the rendering algorithm and optimized for static meshes. The second handles dynamic objects and uses hardware-accelerated occlusion queries for visibility determination. In contrast to other algorithms, the presented algorithm, despite its simplicity, is fast and produces fewer artifacts than other methods. In addition, it supports changeable distant direct high-dynamic-range illumination. The human perception system is the ultimate target of any computer graphics application and can itself be treated as part of the rendering pipeline; the rendering can therefore be optimized by analysing human perception of certain visual aspects of the image. As part of this thesis, an experiment is introduced that evaluates human shadow perception in order to speed up shadow rendering, and optimization approaches are derived from it. Another subarea of cloth visualization in computer graphics is the animation of cloth and avatars for presentations. This work also describes two new methods for the automatic generation and compression of animation sequences.
The first method, which generates completely new, customizable animation sequences, is based on the concept of finding similarities between the animation frames of a given basis sequence. Identifying these similarities allows jumps within the basis sequence that can be used to generate endless new sequences. Transmitting animated 3D data over bandwidth-limited channels, such as wide-area networks or to less powerful clients, requires efficient compression schemes. The second method in the animation field included in this thesis is therefore a geometry data compression scheme. Similar to the BTF compression, it uses PCA in combination with clustering algorithms to segment similarly moving parts of the animated objects, achieving high compression rates together with very accurate reconstruction quality.
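
    The truncated-PCA machinery underlying both the BTF compression and the clustered geometry compression can be illustrated with a few lines of NumPy: the BTF is flattened into a matrix with one row per view/light combination, an SVD keeps the first k principal components, and rendering reconstructs individual images on demand. The matrix layout and the choice of k are illustrative only; the thesis additionally clusters the data and handles HDR environment lighting, which this sketch omits.

```python
import numpy as np

def compress_btf(btf, k):
    """Truncated-PCA compression of a BTF.

    btf : array of shape (n_images, n_pixels), one row per (view, light) pair
    k   : number of principal components to keep
    Returns (mean, basis, coeffs) such that btf ~= coeffs @ basis + mean.
    """
    mean = btf.mean(axis=0)
    centered = btf - mean
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                    # (k, n_pixels) "eigen-textures"
    coeffs = u[:, :k] * s[:k]         # (n_images, k) per-image weights
    return mean, basis, coeffs

def reconstruct_image(mean, basis, coeffs, image_index):
    """Decompress a single view/light image from its k coefficients."""
    return mean + coeffs[image_index] @ basis

# Toy example: 200 'images' of 32x32 pixels compressed to 8 components
rng = np.random.default_rng(1)
btf = rng.random((200, 32 * 32))
mean, basis, coeffs = compress_btf(btf, 8)
print(reconstruct_image(mean, basis, coeffs, 0).shape)   # (1024,)
```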

    Acquisition, Modeling, and Augmentation of Reflectance for Synthetic Optical Flow Reference Data

    This thesis is concerned with the acquisition, modeling, and augmentation of material reflectance to simulate high-fidelity synthetic data for computer vision tasks. The topic is covered in three chapters. I commence with exploring the upper limits of reflectance acquisition: I analyze state-of-the-art BTF reflectance field renderings and show that they can be applied to optical flow performance analysis, with performance closely matching that of real-world images. Next, I present two methods for fitting efficient BRDF reflectance models to measured BTF data. Combined, both methods retain all relevant reflectance information as well as the surface normal details at the pixel level. I further show that the resulting synthesized images are suited for optical flow performance analysis, with virtually identical performance for all material types. Finally, I present a novel method for augmenting real-world datasets with physically plausible precipitation effects, including ground surface wetting, water droplets on the windshield, and water spray and mist. This is achieved by projecting the real-world image data onto a reconstructed virtual scene, manipulating the scene and the surface reflectance, and performing unbiased light transport simulation of the precipitation effects.
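
    Fitting per-pixel BRDF parameters to measured BTF data, as in the second chapter, reduces in its simplest form to a per-texel regression of reflectance samples against an analytic model. The sketch below fits only a Lambertian albedo plus a fixed-exponent Blinn-Phong specular weight by linear least squares; it is a drastically simplified illustration, not the models or the fitting procedure used in the thesis.

```python
import numpy as np

def fit_texel_brdf(obs, cos_i, n_dot_h, shininess=50.0):
    """Least-squares fit of (k_d, k_s) for one texel.

    obs      : (m,) observed pixel values for m view/light samples
    cos_i    : (m,) cosine of the incident angle for each sample
    n_dot_h  : (m,) cosine between normal and half vector for each sample
    Model: obs ~= k_d * cos_i + k_s * (n_dot_h ** shininess) * cos_i
    """
    A = np.stack([cos_i, (n_dot_h ** shininess) * cos_i], axis=1)  # (m, 2)
    params, *_ = np.linalg.lstsq(A, obs, rcond=None)
    k_d, k_s = params
    return k_d, k_s

# Toy example: synthesize samples from known parameters and recover them
rng = np.random.default_rng(2)
cos_i = rng.uniform(0.1, 1.0, 500)
n_dot_h = rng.uniform(0.0, 1.0, 500)
obs = 0.6 * cos_i + 0.3 * (n_dot_h ** 50.0) * cos_i
print(fit_texel_brdf(obs, cos_i, n_dot_h))   # approximately (0.6, 0.3)
```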

    Interactive real-time three-dimensional visualisation of virtual textiles

    Virtual textile databases provide a cost-efficient alternative to existing hardcover sample catalogues. By taking advantage of the high-performance features offered by the latest generation of programmable graphics accelerator boards, it is possible to combine photometric stereo methods with 3D visualisation methods to implement a virtual textile database. In this thesis, we investigate and combine rotation-invariant texture retrieval with interactive visualisation techniques. We use a generic 3D surface representation that allows us to combine real-time interactive 3D visualisation methods with present-day texture retrieval methods. We begin by investigating the most suitable data format for the 3D surface representation and identify relief mapping combined with Bézier surfaces as the most suitable 3D surface representation for our needs, and go on to describe how these representations can be combined for real-time rendering. We then investigate ten different methods of implementing rotation-invariant texture retrieval using feature vectors. The results show that first-order statistics in the form of histogram data are very effective for discriminating colour albedo information, while rotation-invariant gradient maps are effective for distinguishing between different types of micro-geometry using either first- or second-order statistics. Engineering and Physical Sciences Research Council (EPSRC)
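
    The retrieval component described above builds rotation-invariant feature vectors from first-order statistics. A minimal version of that idea is sketched below, using a grey-level histogram of the albedo map together with a histogram of gradient magnitudes (both invariant to in-plane rotation) and L2 nearest-neighbour matching; the bin counts, the specific statistics, and the distance are illustrative choices, not the ten methods evaluated in the thesis.

```python
import numpy as np

def texture_feature(albedo, bins=32):
    """Rotation-invariant feature vector from first-order statistics.

    albedo : (H, W) grey-level albedo image with values in [0, 1]
    """
    # Histogram of albedo values (unchanged by any image rotation)
    h_albedo, _ = np.histogram(albedo, bins=bins, range=(0.0, 1.0), density=True)
    # Gradient magnitude is also rotation invariant; histogram its values
    gy, gx = np.gradient(albedo.astype(np.float64))
    mag = np.hypot(gx, gy)
    h_grad, _ = np.histogram(mag, bins=bins, range=(0.0, 1.0), density=True)
    return np.concatenate([h_albedo, h_grad])

def retrieve(query, database):
    """Return the key of the database texture closest to the query."""
    q = texture_feature(query)
    dists = {name: np.linalg.norm(q - texture_feature(img))
             for name, img in database.items()}
    return min(dists, key=dists.get)

# Toy usage with random images standing in for captured textiles
rng = np.random.default_rng(3)
db = {"fabric_a": rng.random((64, 64)), "fabric_b": rng.random((64, 64))}
print(retrieve(db["fabric_a"], db))   # "fabric_a"
```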

    Image based surface reflectance remapping for consistent and tool independent material appearance

    Physically-based rendering in computer graphics requires knowledge of material properties beyond 3D shapes, textures, and colours in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, making material appearance heavily tool-dependent. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline that is usually developed through collaboration among several departments, which exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour, and computational cost. This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy to tackle material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks that work under these constraints are proposed. The first, validated by means of perceptual studies, is aimed at remapping BRDF parameters and is useful when the parameters used for the reference rendering are available. The second provides consistent material appearance across different renderers, even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool and manipulates the output of other renderers in order to be consistent with the reference.
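
    The second framework treats the target renderer as a black box and adjusts its material parameters until its output matches the reference image. A minimal sketch of that loop is given below using derivative-free Nelder-Mead optimisation from SciPy; `render_with_params` is a hypothetical stand-in for invoking the target renderer, and the plain RMSE objective is a placeholder for the perceptually validated comparison developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def remap_parameters(reference, render_with_params, x0):
    """Find renderer parameters whose output matches a reference rendering.

    reference          : (H, W, 3) image produced by the chosen reference tool
    render_with_params : callable taking a parameter vector and returning an
                         image of the same shape (the target renderer,
                         treated as a black box)
    x0                 : initial guess for the parameter vector
    """
    ref = np.asarray(reference, dtype=np.float64)

    def objective(x):
        img = np.asarray(render_with_params(x), dtype=np.float64)
        return float(np.sqrt(np.mean((img - ref) ** 2)))   # RMSE in image space

    # Derivative-free search, since the renderer exposes no gradients
    result = minimize(objective, x0, method="Nelder-Mead",
                      options={"xatol": 1e-4, "fatol": 1e-6})
    return result.x

# Toy usage: the 'renderer' scales a flat grey image by its single parameter
flat = np.full((16, 16, 3), 1.0)
reference = 0.42 * flat
print(remap_parameters(reference, lambda x: x[0] * flat, x0=np.array([1.0])))
```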