3,580 research outputs found
Efficient image-based rendering
Recent advancements in real-time ray tracing and deep learning have significantly enhanced the realism of computer-generated images. However, conventional 3D computer graphics (CG) can still be time-consuming and resource-intensive, particularly when creating photo-realistic simulations of complex or animated scenes. Image-based rendering (IBR) has emerged as an alternative approach that utilizes pre-captured images from the real world to generate realistic images in real-time, eliminating the need for extensive modeling. Although IBR has its advantages, it faces challenges in providing the same level of control over scene attributes as traditional CG pipelines and in accurately reproducing complex scenes and objects with different materials, such as transparent objects. This thesis addresses these issues by harnessing the power of deep learning and incorporating the fundamental principles of graphics and physically-based rendering. It offers an efficient solution that enables interactive manipulation of real-world dynamic scenes captured from sparse views, lighting positions, and times, as well as a physically-based approach that facilitates accurate reproduction of the view-dependent effects resulting from the interaction between transparent objects and their surrounding environment. Additionally, this thesis develops a visibility metric that can identify artifacts in reconstructed IBR images without observing the reference image, thereby contributing to the design of an effective IBR acquisition pipeline. Lastly, a perception-driven rendering technique is developed to provide high-fidelity visual content on virtual reality displays while retaining computational efficiency.
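The abstract above describes IBR only at a high level. As a rough illustration of its core idea, synthesizing a novel view by reusing pre-captured photographs instead of rendering a 3D model, here is a minimal Python sketch; the inverse-distance weighting and the omission of any geometric warp are simplifying assumptions for illustration, not the method developed in the thesis.

    import numpy as np

    def blend_nearest_views(target_pos, view_positions, view_images, k=4):
        """Toy IBR: synthesize a novel view as a distance-weighted blend of
        the k nearest pre-captured source images. Practical IBR systems warp
        each source image through proxy geometry before blending; this
        sketch keeps only the view-selection and weighting step."""
        view_positions = np.asarray(view_positions)          # (N, 3) camera centers
        dists = np.linalg.norm(view_positions - target_pos, axis=1)
        nearest = np.argsort(dists)[:k]                      # k closest captured views
        weights = 1.0 / (dists[nearest] + 1e-8)              # inverse-distance weights
        weights /= weights.sum()
        stack = np.stack([view_images[i] for i in nearest])  # (k, H, W, 3)
        return np.tensordot(weights, stack, axes=1)          # weighted average image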
INFORMATION TECHNOLOGY FOR NEXT-GENERATION OF SURGICAL ENVIRONMENTS
Minimally invasive surgeries (MIS) are fundamentally constrained by image quality, access to the operative field, and the visualization environment on which the surgeon relies for real-time information. Although minimally invasive access benefits the patient, it also leads to more challenging procedures, which require better skills and training. Endoscopic surgeries rely heavily on 2D interfaces, introducing additional challenges due to the loss of depth perception, the lack of 3-dimensional imaging, and the reduction of degrees of freedom. By using state-of-the-art technology within a distributed computational architecture, it is possible to incorporate multiple sensors, hybrid display devices, and 3D visualization algorithms within a flexible surgical environment. Such environments can assist the surgeon with valuable information that goes far beyond what is currently available. In this thesis, we will discuss how 3D visualization and reconstruction, stereo displays, high-resolution display devices, and tracking techniques are key elements in the next generation of surgical environments.
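The abstract points to the loss of depth perception in 2D endoscopy; the stereo displays and 3D reconstruction it mentions rest on the standard pinhole stereo relation Z = f * B / d, which recovers depth from the disparity between the two camera views. A minimal sketch follows; the numbers in the usage note are illustrative, not taken from the thesis.

    def depth_from_disparity(disparity_px, focal_px, baseline_mm):
        """Depth of a scene point from stereo disparity: Z = f * B / d.
        disparity_px : horizontal pixel offset between left/right views
        focal_px     : focal length expressed in pixels
        baseline_mm  : distance between the two camera centers
        Returns depth in the same unit as the baseline."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a visible point")
        return focal_px * baseline_mm / disparity_px

    # e.g. a 20 px disparity with f = 800 px and a 4 mm stereo baseline:
    # depth_from_disparity(20, 800, 4.0) -> 160.0 mm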
CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features
Modeling 3D scenes by volumetric feature grids is one of the promising directions of neural approximations to improve Neural Radiance Fields (NeRF). Instant-NGP (INGP) introduced multi-resolution hash encoding from a lookup table of trainable feature grids, which enables learning high-quality neural graphics primitives in a matter of seconds. However, this improvement comes at the cost of a larger storage size. In this paper, we address this challenge by introducing instant learning of compression-aware NeRF features (CAwa-NeRF), which allows exporting zip-compressed feature grids at the end of model training with negligible extra time overhead, without changing either the storage architecture or the parameters used in the original INGP paper. Moreover, the proposed method is not limited to INGP and can be adapted to any model. Extensive simulations show that our instant learning pipeline achieves impressive results on different kinds of static scenes, such as single-object scenes with masked backgrounds and real-life scenes captured in our studio. In particular, for single-object scenes with masked backgrounds, CAwa-NeRF compresses the feature grids down to 6% (1.2 MB) of the original size without any loss in PSNR (33 dB), or down to 2.4% (0.53 MB) with a slight loss (32.31 dB).
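CAwa-NeRF builds on INGP's multi-resolution hash encoding, in which a 3D point indexes per-level feature tables through a spatial hash. The sketch below shows a single-level, nearest-corner lookup using the hash primes from the INGP paper; the full method trilinearly interpolates the eight surrounding corners and concatenates features across levels, and nothing here reflects CAwa-NeRF's compression-aware training itself.

    import numpy as np

    # Spatial-hash primes from the Instant-NGP paper (Mueller et al., 2022).
    PRIMES = (1, 2654435761, 805459861)

    def hash_grid_lookup(xyz, table, level_res):
        """One level of INGP-style hash encoding, nearest corner only.
        xyz       : point in [0, 1)^3
        table     : (T, F) trainable feature table for this level
        level_res : integer grid resolution of this level"""
        corner = [int(v * level_res) for v in xyz]            # enclosing voxel corner
        h = 0
        for c, p in zip(corner, PRIMES):
            h ^= (c * p) & 0xFFFFFFFFFFFFFFFF                 # XOR of coord * prime, mod 2^64
        return table[h % len(table)]                          # hashed feature entry

    # e.g. table = np.random.randn(2**14, 2); hash_grid_lookup((0.3, 0.7, 0.1), table, 64)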
PEA265: Perceptual Assessment of Video Compression Artifacts
The most widely used video encoders share a common hybrid coding framework that includes block-based motion estimation/compensation and block-based transform coding. Despite their high coding efficiency, the encoded videos often exhibit visually annoying artifacts, denoted as Perceivable Encoding Artifacts (PEAs), which significantly degrade the visual Quality-of-Experience (QoE) of end users. To monitor and improve visual QoE, it is crucial to develop subjective and objective measures that can identify and quantify various types of PEAs. In this work, we make the first attempt to build a large-scale subject-labelled database composed of H.265/HEVC compressed videos containing various PEAs. The database, namely the PEA265 database, includes 4 types of spatial PEAs (i.e., blurring, blocking, ringing, and color bleeding) and 2 types of temporal PEAs (i.e., flickering and floating), each containing at least 60,000 image or video patches with positive and negative labels. To objectively identify these PEAs, we train Convolutional Neural Networks (CNNs) using the PEA265 database. We find that the state-of-the-art ResNeXt is capable of identifying each type of PEA with high accuracy. Furthermore, we define PEA pattern and PEA intensity measures to quantify the PEA level of a compressed video sequence. We believe that the PEA265 database and our findings will benefit the future development of video quality assessment methods and perceptually motivated video encoders.
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system allows for rendering complex scenes at the push of a button, and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.
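The light transport simulation Iray performs amounts to estimating the rendering equation (Kajiya, 1986), which path tracers evaluate by Monte Carlo sampling of light paths:

    L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
        + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o) \, L_i(\mathbf{x}, \omega_i) \, (\omega_i \cdot \mathbf{n}) \, d\omega_i

Here L_o is the outgoing radiance at surface point x in direction omega_o, L_e is emitted radiance, f_r is the BRDF, and the integral accumulates incoming radiance L_i over the hemisphere Omega around the surface normal n.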
Geometry Compression of 3D Static Point Clouds based on TSPLVQ
In this paper, we address the challenging problem of 3D point cloud compression, required to ensure efficient transmission and storage. We introduce a new hierarchical geometry representation based on adaptive Tree-Structured Point-Lattice Vector Quantization (TSPLVQ). This representation yields hierarchically structured 3D content that improves compression performance for static point clouds. The novelty of the proposed scheme lies in the adaptive selection of the optimal quantization scheme for the geometric information, which better leverages the intrinsic correlations within the point cloud. Based on its adaptive multiscale structure, two quantization schemes are used to recursively project the 3D point cloud into a series of embedded truncated cubic lattices. At each step of the process, the optimal quantization scheme is selected according to a rate-distortion cost in order to achieve the best trade-off between coding rate and geometric distortion, so that compression flexibility and performance can be greatly improved. Experimental results show the merit of the proposed multi-scale method for lossy compression of geometry.
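As a rough illustration of the hierarchical idea only (the paper's actual contribution, the rate-distortion-driven choice between two quantization schemes at each step, is omitted here), the following sketch recursively quantizes a point cloud into embedded cubic cells, an octree-style special case of the tree-structured lattice quantization described above.

    import numpy as np

    def quantize_recursive(points, origin, size, depth, max_depth, occupied):
        """Recursively project points into embedded cubic lattices; at the
        finest level, each occupied cell represents its points by the cell
        center. Fixed 2x2x2 splits stand in for TSPLVQ's adaptive scheme
        selection. Assumes points lie in [origin, origin + size)^3."""
        if len(points) == 0:
            return
        if depth == max_depth:
            occupied.append(origin + size / 2)            # quantized representative
            return
        half = size / 2
        bits = ((points - origin) >= half).astype(int)    # (N, 3) child index per point
        for child in range(8):
            b = np.array([(child >> k) & 1 for k in range(3)])
            mask = np.all(bits == b, axis=1)
            quantize_recursive(points[mask], origin + b * half, half,
                               depth + 1, max_depth, occupied)

    # Usage: occupied = []; quantize_recursive(pts, np.zeros(3), 1.0, 0, 6, occupied)
    # The per-node occupancy pattern is what an encoder would entropy-code.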