101 research outputs found

    Compressing the illumination-adjustable images with principal component analysis.

    Pun-Mo Ho. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 90-95). Abstracts in English and Chinese.

    Contents:
    Chapter 1 Introduction --- p.1
        1.1 Background --- p.1
        1.2 Existing Approaches --- p.2
        1.3 Our Approach --- p.3
        1.4 Structure of the Thesis --- p.4
    Chapter 2 Related Work --- p.5
        2.1 Compression for Navigation --- p.5
            2.1.1 Light Field/Lumigraph --- p.5
            2.1.2 Surface Light Field --- p.6
            2.1.3 Concentric Mosaics --- p.6
            2.1.4 On the Compression --- p.7
        2.2 Compression for Relighting --- p.7
            2.2.1 Previous Approaches --- p.7
            2.2.2 Our Approach --- p.8
    Chapter 3 Image-Based Relighting --- p.9
        3.1 Plenoptic Illumination Function --- p.9
        3.2 Sampling and Relighting --- p.11
        3.3 Overview --- p.13
            3.3.1 Codec Overview --- p.13
            3.3.2 Image Acquisition --- p.15
            3.3.3 Experiment Data Sets --- p.16
    Chapter 4 Data Preparation --- p.18
        4.1 Block Division --- p.18
        4.2 Color Model --- p.23
        4.3 Mean Extraction --- p.24
    Chapter 5 Principal Component Analysis --- p.29
        5.1 Overview --- p.29
        5.2 Singular Value Decomposition --- p.30
        5.3 Dimensionality Reduction --- p.34
        5.4 Evaluation --- p.37
    Chapter 6 Eigenimage Coding --- p.39
        6.1 Transform Coding --- p.39
            6.1.1 Discrete Cosine Transform --- p.40
            6.1.2 Discrete Wavelet Transform --- p.47
        6.2 Evaluation --- p.49
            6.2.1 Statistical Evaluation --- p.49
            6.2.2 Visual Evaluation --- p.52
    Chapter 7 Relighting Coefficient Coding --- p.57
        7.1 Quantization and Bit Allocation --- p.57
        7.2 Evaluation --- p.62
            7.2.1 Statistical Evaluation --- p.62
            7.2.2 Visual Evaluation --- p.62
    Chapter 8 Relighting --- p.65
        8.1 Overview --- p.66
        8.2 First-Phase Decoding --- p.66
        8.3 Second-Phase Decoding --- p.68
            8.3.1 Software Relighting --- p.68
            8.3.2 Hardware-Assisted Relighting --- p.71
    Chapter 9 Overall Evaluation --- p.81
        9.1 Compression of IAIs --- p.81
            9.1.1 Statistical Evaluation --- p.81
            9.1.2 Visual Evaluation --- p.86
        9.2 Hardware-Assisted Relighting --- p.86
    Chapter 10 Conclusion --- p.89
    Bibliography --- p.90
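    The pipeline this contents list outlines (mean extraction, then PCA via singular value decomposition, Chapters 4-5) can be sketched generically in a few lines of numpy. This is only an illustration of PCA compression of an illumination-adjustable image stack, not the thesis's actual codec; all names are assumptions.

```python
import numpy as np

def pca_compress(images, k):
    """Compress a stack of images of one scene under varying illumination.

    images: (n_lights, n_pixels) array, one row per captured light direction.
    Returns the mean image, k eigenimages, and per-light relighting coefficients.
    """
    mean = images.mean(axis=0)              # "mean extraction" step
    centered = images - mean
    # Thin SVD: the rows of Vt are the principal components (eigenimages).
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenimages = Vt[:k]                    # dimensionality reduction: keep top k
    coeffs = centered @ eigenimages.T       # (n_lights, k) relighting coefficients
    return mean, eigenimages, coeffs

def reconstruct(mean, eigenimages, coeffs, light_index):
    """Rebuild the image for one captured light direction."""
    return mean + coeffs[light_index] @ eigenimages
```

    Storage falls from n_lights x n_pixels values to roughly k x n_pixels + n_lights x k, and the eigenimages themselves can then be coded with the DCT or DWT, which is what the eigenimage-coding chapter covers.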

    Inverse Rendering of Lambertian Surfaces Using Subspace Methods


    Surface analysis and visualization from multi-light image collections

    Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired from a fixed viewpoint under varying illumination, providing large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their usefulness has been shown in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: the acquisition setup, light calibration, and the application areas where MLICs have been successfully used in daily analysis work. We then discuss the use of MLICs for surface visualization and analysis and the available tools that support such analysis: methods that support direct exploration of the captured MLIC, methods that generate relightable models from an MLIC, non-photorealistic visualization methods that rely on MLICs, methods that estimate normal maps from MLICs, and visualization tools used for MLIC analysis. In Chapter 3, we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and we discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In Chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of (Neural)RTI methods. Then, in Chapter 5, we present a neural-network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. Using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relit from novel directions, particularly for challenging glossy materials. Finally, in Chapter 6, we present a method for detecting cracks on the surface of paintings from multi-light image acquisitions, which can also be applied to single images, and we conclude our presentation.
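    As a rough sketch of the pixel-based encoding and relighting idea behind a NeuralRTI-style autoencoder, the following PyTorch snippet compresses each pixel's captured samples into a short code and decodes it for an arbitrary light direction. The layer sizes, code dimension, and every name here are illustrative assumptions, not the published NeuralRTI architecture.

```python
import torch
import torch.nn as nn

class PixelAutoencoder(nn.Module):
    """Per-pixel codec: encode all captured samples of one pixel into a short
    code, then decode (code, light direction) into an RGB value."""

    def __init__(self, n_lights, code_dim=9, hidden=64):
        super().__init__()
        # The encoder sees the pixel's appearance under every captured light.
        self.encoder = nn.Sequential(
            nn.Linear(n_lights * 3, hidden), nn.ELU(),
            nn.Linear(hidden, code_dim),
        )
        # The decoder relights the pixel for a 2D light direction (lx, ly).
        self.decoder = nn.Sequential(
            nn.Linear(code_dim + 2, hidden), nn.ELU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pixel_samples, light_dir):
        code = self.encoder(pixel_samples)   # the compressed representation
        return self.decoder(torch.cat([code, light_dir], dim=-1))
```

    After training (e.g., with an MSE loss against the captured images), only the per-pixel codes and the small decoder need to be stored, which is where the compression comes from.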

    Acquisition of Surface Light Fields from Videos

    This thesis presents a new approach for estimating the Surface Light Field of real objects from video sequences acquired under fixed, uncontrolled illumination. The proposed method is based on separating the two main components of the object's surface appearance: the diffuse component, modeled as an RGB color, and the specular component, approximated by a parametric model that is a function of the observer's position. The reconstructed surface appearance allows photorealistic, real-time visualization of the object as the observer's position changes, enabling interactive 3D navigation.
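    As a rough illustration of the diffuse/specular separation described above, the sketch below evaluates one surface point as a view-independent RGB diffuse term plus a parametric specular lobe that depends on the observer's direction. The Phong-like lobe and all names are illustrative assumptions, not the thesis's actual model.

```python
import numpy as np

def surface_light_field(view_dir, diffuse_rgb, spec_rgb, spec_dir, shininess):
    """Appearance of one surface point for a given (unit) viewing direction.

    diffuse_rgb: view-independent color (the diffuse component).
    spec_rgb, spec_dir, shininess: parameters of a specular lobe centered on
    a fixed per-point highlight direction (illumination is fixed, so the
    lighting can be baked into these parameters).
    """
    lobe = max(0.0, float(np.dot(view_dir, spec_dir))) ** shininess
    return diffuse_rgb + lobe * spec_rgb
```

    Under fixed illumination, each video frame contributes one (view direction, color) sample per surface point, so one plausible fitting strategy is to take a robust per-point minimum over views as the diffuse color and fit the lobe parameters to the residual.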

    Neural Reflectance Decomposition

    Creating relightable objects from images or image collections is a fundamental challenge in computer vision and graphics. This problem is also known as inverse rendering. One of the main challenges in this task is its high ambiguity: while the creation of images from 3D objects is well defined as rendering, multiple properties such as shape, illumination, and surface reflectivity influence each other, and these influences are integrated to form the final image. Reversing these integrated dependencies is highly ill-posed and ambiguous. However, solving the task is essential, as the automated creation of relightable objects has various applications in online shopping, augmented reality (AR), virtual reality (VR), games, and movies. In this thesis, we propose two approaches to solve this task. First, a network architecture is discussed that generalizes the decomposition of a two-shot capture of an object from large training datasets. Here, the degree of novel view synthesis is limited, as only a single perspective is used in the decomposition. Therefore, a second set of approaches is proposed, which decomposes a collection of 360-degree images into shape, reflectance, and illumination. These multi-view images are optimized per object, and the result can be directly used in standard rendering software or games. We achieve this by extending recent research on neural fields, which can store information in a 3D neural volume, to store reflectance. Leveraging volume rendering techniques, we can optimize a reflectance field from in-the-wild image collections without any ground-truth (GT) supervision. Our proposed methods achieve state-of-the-art decomposition quality and enable novel capture setups in which objects can be under varying illumination or in different locations, as is typical for online image collections.
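    For the volume-rendering step mentioned above, here is a minimal numpy sketch of the standard emission-absorption quadrature used to render neural fields along a ray; in a reflectance-field setting, the per-sample colors would come from shading the decomposed reflectance under some illumination. The function and argument names are illustrative.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """Accumulate color along one ray (standard NeRF-style quadrature).

    densities: (n,) per-sample volume density sigma_i
    colors:    (n, 3) per-sample shaded color, e.g. reflectance times lighting
    deltas:    (n,) distances between consecutive samples along the ray
    """
    alpha = 1.0 - np.exp(-densities * deltas)       # per-sample opacity
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)  # accumulated RGB
```

    Because every step is differentiable, the same accumulation lets gradients from an image loss flow back into the shape, reflectance, and illumination estimates.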

    Enhancing Mesh Deformation Realism: Dynamic Mesostructure Detailing and Procedural Microstructure Synthesis

    We propose a solution for generating dynamic heightmap data to simulate deformations of soft surfaces, with a focus on human skin. The solution incorporates mesostructure-level wrinkles and utilizes procedural textures to add static microstructure details. It offers flexibility beyond human skin, enabling the generation of patterns that mimic deformations in other soft materials, such as leather, during animation. Existing solutions for simulating wrinkles and deformation cues often rely on specialized hardware, which is costly and not easily accessible. Moreover, relying solely on captured data limits artistic direction and hinders adaptability to changes. In contrast, our proposed solution provides dynamic texture synthesis that adapts to the underlying mesh deformations in a physically plausible way. Various methods have been explored to synthesize wrinkles directly in the geometry, but they suffer from limitations such as self-intersections and increased storage requirements. Manual intervention by artists using wrinkle maps and tension maps provides control, but may fall short for complex deformations or where greater realism is needed. Our research presents the potential of procedural methods to enhance the generation of dynamic deformation patterns, including wrinkles, with greater creative control and without reliance on captured data. Incorporating static procedural patterns improves realism, and the approach can be extended beyond skin to other soft materials.
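    As a rough sketch of how a tension map can drive procedural wrinkles of the kind described above, the snippet below measures per-edge compression of the deforming mesh and uses it to fade in a procedural ridge pattern. The sine ridge stands in for a real noise basis, and all names are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np

def edge_tension(rest_pos, deformed_pos, edges):
    """Per-edge length ratio: values < 1 mark compressed surface regions.

    rest_pos, deformed_pos: (n_vertices, 3) positions; edges: (n_edges, 2) indices.
    """
    rest = np.linalg.norm(rest_pos[edges[:, 0]] - rest_pos[edges[:, 1]], axis=1)
    now = np.linalg.norm(deformed_pos[edges[:, 0]] - deformed_pos[edges[:, 1]], axis=1)
    return now / np.maximum(rest, 1e-8)

def wrinkle_height(u, v, tension, frequency=40.0, amplitude=1.0):
    """Dynamic heightmap value at UV (u, v): ridges appear under compression."""
    compression = np.clip(1.0 - tension, 0.0, 1.0)
    ridges = 0.5 * (1.0 + np.sin(frequency * u) * np.cos(frequency * v))
    return amplitude * compression * ridges
```

    A static procedural microstructure layer (e.g., high-frequency noise) would simply be added on top of this dynamic term.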

    A portable capturing system for image-based relighting.

    Pang Wai Man. Thesis submitted in July 2002. Thesis (M.Phil.)--Chinese University of Hong Kong, 2003. Includes bibliographical references (leaves 108-114). Abstracts in English and Chinese.

    Contents:
    Abstract --- p.ii
    Acknowledgments --- p.iv
    Chapter 1 Introduction --- p.1
        1.1 Image-based Rendering and Modeling --- p.1
            1.1.1 Image-based versus Geometry-based --- p.5
        1.2 Capturing for Graphics --- p.6
        1.3 Organization of this Thesis --- p.8
    Chapter 2 Image-based Rendering and Relighting --- p.10
        2.1 Theoretical Concepts --- p.11
            2.1.1 Plenoptic Illumination Function --- p.11
            2.1.2 Apparent BRDF --- p.13
            2.1.3 Types of Lighting --- p.14
            2.1.4 Image Superposition --- p.16
        2.2 General Rendering Pipeline --- p.18
        2.3 Rendering Techniques --- p.21
            2.3.1 Nearest Neighbours and Interpolation --- p.21
            2.3.2 Image Warping --- p.23
        2.4 IBR Representations and Applications --- p.26
            2.4.1 Navigation --- p.28
            2.4.2 Relighting Representations --- p.35
            2.4.3 High Dynamic Range Imaging --- p.38
        2.5 Chapter Summary --- p.42
    Chapter 3 Capturing Methods --- p.44
        3.1 Spatial Tracking Approaches --- p.45
            3.1.1 Mechanical-based Method --- p.46
            3.1.2 Electromagnetic-based Method --- p.48
            3.1.3 Vision-based Method --- p.50
            3.1.4 Comparison --- p.51
        3.2 High Dynamic Range Imaging --- p.53
            3.2.1 Successive Exposure Capturing --- p.53
            3.2.2 Spatially Varying Filter --- p.53
            3.2.3 Specially Designed Hardware --- p.55
        3.3 Chapter Summary --- p.56
    Chapter 4 System Design and Implementation --- p.58
        4.1 System Overview --- p.58
        4.2 The Setup --- p.60
        4.3 Capturing Procedures --- p.61
            4.3.1 Calibrations --- p.61
        4.4 Vision-based Tracking --- p.64
            4.4.1 The Pin-hole Camera Model --- p.65
            4.4.2 Basics of Camera Calibration --- p.66
        4.5 Light Vector Tracking --- p.70
            4.5.1 The Transformations --- p.70
            4.5.2 Tracking Accuracy --- p.71
            4.5.3 Tracking Range Enlargement --- p.72
        4.6 Capturing Experiment --- p.74
        4.7 Sampling Analysis --- p.74
        4.8 Chapter Summary --- p.78
    Chapter 5 Data Postprocessing --- p.80
        5.1 Scattered Data Fitting --- p.81
            5.1.1 Spherical Delaunay Triangulation --- p.83
            5.1.2 Interpolation on Sphere --- p.86
        5.2 Compression --- p.88
        5.3 Chapter Summary --- p.90
    Chapter 6 Relit Results --- p.91
        6.1 Relighting with Multiple Directional Lights --- p.92
        6.2 Relighting with Environmental Maps --- p.94
    Chapter 7 Conclusion --- p.101
        7.1 Future Research Aspects --- p.102
    Appendix A System User Guide --- p.104
        A.1 Equipment Configuration --- p.104
        A.2 Operation Guide --- p.105
        A.3 Software Components --- p.106
            A.3.1 Image Capturing - lightcap --- p.106
            A.3.2 Raw Frame Extraction - lfprocess --- p.107
            A.3.3 Resampling and Compression - svscatterppm2urdf --- p.107
    Bibliography --- p.108
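    The "image superposition" idea that underlies the relighting chapters rests on the linearity of light transport: an image under any mix of the captured directional lights is a weighted sum of the captured basis images. A minimal numpy sketch of this (the names are illustrative):

```python
import numpy as np

def relight(basis_images, weights):
    """Relight a scene by superposing captured basis images.

    basis_images: (n_lights, H, W, 3) photos, one per captured light direction.
    weights: (n_lights,) intensities of the virtual lights; for environment-map
    relighting these come from sampling the map at the captured directions.
    """
    return np.tensordot(weights, basis_images, axes=1)  # (H, W, 3)
```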