Efficient volumetric mapping of multi-scale environments using wavelet-based compression
Volumetric maps are widely used in robotics due to their desirable properties
in applications such as path planning, exploration, and manipulation. Constant
advances in mapping technology are needed to keep up with improvements in
sensors, which generate increasingly vast amounts of precise measurements.
Handling this data in a computationally and memory-efficient
manner is paramount to representing the environment at the desired scales and
resolutions. In this work, we express the desirable properties of a volumetric
mapping framework through the lens of multi-resolution analysis. This shows
that wavelets are a natural foundation for hierarchical and multi-resolution
volumetric mapping. Based on this insight we design an efficient mapping system
that uses wavelet decomposition. The efficiency of the system enables the use
of uncertainty-aware sensor models, improving the quality of the maps.
Experiments on both synthetic and real-world data provide mapping accuracy and
runtime performance comparisons with state-of-the-art methods on both RGB-D and
3D LiDAR data. The framework is open-sourced to allow the robotics community at
large to explore this approach.
Comment: 11 pages, 6 figures, 2 tables, accepted to RSS 2023, code is open-source: https://github.com/ethz-asl/wavema
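The multi-resolution structure the abstract appeals to can be illustrated with the simplest wavelet. Below is a hedged one-dimensional Haar sketch (the actual system operates on 3D map blocks; the names here are ours, not from the wavemap code):

```python
import numpy as np

def haar_decompose(values):
    """One Haar level: coarse averages (a half-resolution map) plus details."""
    values = np.asarray(values, dtype=float)
    avg = (values[0::2] + values[1::2]) / 2.0      # coarser-resolution map
    detail = (values[0::2] - values[1::2]) / 2.0   # coefficients to refine it
    return avg, detail

def haar_reconstruct(avg, detail):
    """Invert one level exactly: the transform loses no information."""
    out = np.empty(avg.size * 2)
    out[0::2] = avg + detail
    out[1::2] = avg - detail
    return out

# a row of occupancy values; near-zero details compress well
occupancy = [0.9, 0.8, 0.1, 0.0, 0.5, 0.5, 0.2, 0.4]
avg, detail = haar_decompose(occupancy)
```

Querying the map at a coarser scale then amounts to stopping the reconstruction early, which is why wavelets are a natural fit for hierarchical, multi-resolution queries.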
Efficient From-Point Visibility for Global Illumination in Virtual Scenes with Participating Media
Visibility determination is one of the fundamental building blocks of photorealistic image synthesis. Because visibility is extremely costly to compute, nearly all of the rendering time is spent on it. In this work, we present new methods for storing, computing, and approximating visibility in scenes with scattering media; they accelerate the computation considerably while still delivering high-quality, artifact-free results.
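The visibility quantity at stake in scenes with scattering media is the transmittance along a ray, governed by the Beer-Lambert law. A minimal ray-marching sketch (our own illustration with a hypothetical extinction function `sigma_t`, not a method from this thesis):

```python
import math

def transmittance(sigma_t, t0, t1, steps=100):
    """Estimate exp(-integral of sigma_t dt) over [t0, t1], midpoint rule."""
    dt = (t1 - t0) / steps
    optical_depth = sum(sigma_t(t0 + (i + 0.5) * dt) * dt for i in range(steps))
    return math.exp(-optical_depth)

# homogeneous medium: matches the closed form exp(-sigma * distance)
Tr = transmittance(lambda t: 0.5, 0.0, 2.0)
```

Evaluating or storing this quantity cheaply per ray is precisely why visibility dominates rendering time in participating media.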
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
Doctor of Philosophy
dissertation. While boundary representations, such as nonuniform rational B-spline (NURBS) surfaces, have traditionally served the needs of the modeling community well, they have not seen widespread adoption in the wider engineering discipline. There is a common perception that NURBS are slow to evaluate and complex to implement. Whereas computer-aided design commonly deals with surfaces, the engineering community must deal with materials that have thickness. Traditional visualization techniques have avoided NURBS, and there has been little cross-talk between the rich spline approximation community and the larger engineering field. Recently there has been a strong desire to marry the modeling and analysis phases of the iterative design cycle, be it in car design, turbulent flow simulation around an airfoil, or lighting design. Research has demonstrated that employing a single representation throughout the cycle has key advantages. Furthermore, novel manufacturing techniques employing heterogeneous materials require the introduction of volumetric modeling representations. There is little question that fields such as scientific visualization and mechanical engineering could benefit from the powerful approximation properties of splines. In this dissertation, we remove several hurdles to the application of NURBS to problems in engineering and demonstrate how their unique properties can be leveraged to solve problems of interest
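The perception that NURBS are slow to evaluate often traces back to the recursive Cox-de Boor definition of the B-spline basis. A direct, unoptimized sketch of that recursion (illustrative only; production evaluators use non-recursive formulations):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0.0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

# quadratic basis on an open knot vector; the basis functions sum to 1
knots = [0, 0, 0, 1, 2, 3, 3, 3]
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(5))
```

The partition-of-unity property checked here is one of the approximation properties the dissertation argues the engineering community could exploit.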
New Geometric Data Structures for Collision Detection
We present new geometric data structures for collision detection and more, including: Inner Sphere Trees, the first data structure to compute the penetration volume efficiently; Protosphere, a new algorithm to compute space-filling sphere packings for arbitrary objects; Kinetic AABBs, a bounding volume hierarchy that is optimal in the number of updates when the objects deform; and the Kinetic Separation-List, an algorithm that performs continuous collision detection for complex deformable objects in real time. Moreover, we present applications of these new approaches to hand animation, real-time collision avoidance for robots in dynamic environments, and haptic rendering, including a user study that explores the influence of the degrees of freedom in complex haptic interactions. Last but not least, we present a new benchmarking suite for both performance and quality benchmarks, and a theoretical analysis of the running time of bounding-volume-based collision detection algorithms
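Bounding volume hierarchies such as the Kinetic AABBs above prune collision tests with a cheap per-node overlap check. A minimal sketch of that axis-aligned box test (an illustration of the general principle, not the dissertation's data structures):

```python
from dataclasses import dataclass

@dataclass
class AABB:
    lo: tuple  # minimum corner (x, y, z)
    hi: tuple  # maximum corner (x, y, z)

def aabb_overlap(a, b):
    """Boxes intersect iff their intervals overlap on all three axes."""
    return all(a.lo[k] <= b.hi[k] and b.lo[k] <= a.hi[k] for k in range(3))

a = AABB((0, 0, 0), (1, 1, 1))
b = AABB((0.5, 0.5, 0.5), (2, 2, 2))
c = AABB((2, 2, 2), (3, 3, 3))
```

At an inner node of the hierarchy, a failed overlap test lets the traversal skip every primitive pair beneath the two subtrees, which is where the running-time analysis of such algorithms starts.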
From 3D Models to 3D Prints: an Overview of the Processing Pipeline
Due to the wide diffusion of 3D printing technologies, geometric algorithms
for Additive Manufacturing are being invented at an impressive speed. Each
single step, in particular along the Process Planning pipeline, can now count
on dozens of methods that prepare the 3D model for fabrication, while analysing
and optimizing geometry and machine instructions for various objectives. This
report provides a classification of this huge state of the art, and elicits the
relation between each single algorithm and a list of desirable objectives
during Process Planning. The objectives themselves are listed and discussed,
along with possible needs for tradeoffs. Additive Manufacturing technologies
are broadly categorized to explicitly relate classes of devices and supported
features. Finally, this report offers an analysis of the state of the art while
discussing open and challenging problems from both an academic and an
industrial perspective.
Comment: European Union (EU); Horizon 2020; H2020-FoF-2015; RIA - Research and Innovation action; Grant agreement N. 68044
Depth-of-Field Rendering Using Progressive Lens Sampling in Direct Volume Rendering
Thesis (Ph.D.) -- Graduate School of Seoul National University: College of Engineering, Department of Electrical and Computer Engineering, February 2021.
Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, the voxel's scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive the depth between nested structures. Various volume rendering techniques to improve depth perception are mainly based on illustrative rendering, while physically based techniques such as depth of field effects are difficult to apply due to long computation times. With the development of immersive systems such as virtual and augmented reality and the growing interest in perceptually motivated medical visualization, it has become necessary to implement depth of field in direct volume rendering.
This study proposes a novel method for applying depth of field effects to volume ray casting to improve depth perception. By performing ray casting with multiple rays per pixel, objects at the in-focus distance are rendered sharply and objects at out-of-focus distances are blurred. To achieve these effects, a thin-lens camera model is used to simulate rays passing through different parts of the lens, and an effective lens sampling method generates an aliasing-free image with the minimum number of lens samples, which directly affects performance. The proposed method is implemented on top of the GPU-based volume ray casting pipeline without preprocessing, so all acceleration techniques of volume ray casting can be applied without restriction.
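The thin-lens model above can be sketched as follows: each lens sample offsets the ray origin across the lens disk while keeping the ray aimed at the in-focus point, so geometry on the focal plane stays sharp and everything else blurs. A hedged illustration (the names and camera conventions are ours, not the dissertation's code):

```python
import math

def sample_disk(radius, u1, u2):
    """Polar mapping of two uniform samples to a uniform point on a disk."""
    r = radius * math.sqrt(u1)        # sqrt keeps the area density uniform
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

def thin_lens_ray(dx, dy, lens_radius, focus_dist, u1, u2):
    """Turn the pinhole ray direction (dx, dy, 1) into a thin-lens ray.

    The point where the pinhole ray crosses the focal plane is hit by every
    lens sample, so it renders sharply; points off that plane land on
    different pixels per sample and blur.
    """
    # in-focus point: pinhole ray scaled so its z-component is focus_dist
    fx, fy, fz = dx * focus_dist, dy * focus_dist, focus_dist
    ox, oy = sample_disk(lens_radius, u1, u2)   # ray origin on the lens
    vx, vy, vz = fx - ox, fy - oy, fz
    norm = math.sqrt(vx * vx + vy * vy + vz * vz)
    return (ox, oy, 0.0), (vx / norm, vy / norm, vz / norm)
```

With `lens_radius = 0` this degenerates to the ordinary pinhole ray, which is why the method slots into an existing ray casting pipeline without preprocessing.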
We also propose multi-pass rendering using progressive lens sampling as an acceleration technique. More lens samples are progressively used for ray generation over multiple render passes. Each pixel has a different final render pass depending on the predicted maximum blurring size based on the circle of confusion. This technique makes it possible to apply a different number of lens samples for each pixel, depending on the degree of blurring of the depth of field effects over distance. This acceleration method reduces unnecessary lens sampling and increases the cache hit rate of the GPU, allowing us to generate the depth of field effects at interactive frame rates in direct volume rendering.
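The per-pixel final render pass can be illustrated with the standard thin-lens circle-of-confusion formula; the samples-per-blur-area heuristic below is our own assumption for illustration, not the dissertation's exact rule:

```python
import math

def coc_radius_px(depth, focus_dist, focal_len, aperture, px_per_unit):
    """Thin-lens circle-of-confusion radius (in pixels) for a point at depth."""
    diameter = aperture * focal_len * abs(depth - focus_dist) \
        / (depth * (focus_dist - focal_len))
    return 0.5 * diameter * px_per_unit

def final_pass(coc_px, samples_per_pass, max_passes):
    """Pixels that blur more need more lens samples, hence more passes."""
    needed = max(1, math.ceil(coc_px ** 2))  # samples ~ blur area (assumed)
    return min(math.ceil(needed / samples_per_pass), max_passes)
```

In-focus pixels finish after the first pass, while strongly blurred pixels keep accumulating lens samples, which is the source of the reported savings.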
In experiments on various data sets, the proposed method generated realistic depth of field effects in real time. These results demonstrate that our method produces depth of field effects of similar quality to offline image synthesis and is up to 12 times faster than the existing depth of field method in direct volume rendering.
CHAPTER 1 INTRODUCTION
1.1 Motivation
1.2 Dissertation Goals
1.3 Main Contributions
1.4 Organization of Dissertation
CHAPTER 2 RELATED WORK
2.1 Depth of Field on Surface Rendering
2.1.1 Object-Space Approaches
2.1.2 Image-Space Approaches
2.2 Depth of Field on Volume Rendering
2.2.1 Blur Filtering on Slice-Based Volume Rendering
2.2.2 Stochastic Sampling on Volume Ray Casting
CHAPTER 3 DEPTH OF FIELD VOLUME RAY CASTING
3.1 Fundamentals
3.1.1 Depth of Field
3.1.2 Camera Models
3.1.3 Direct Volume Rendering
3.2 Geometry Setup
3.3 Lens Sampling Strategy
3.3.1 Sampling Techniques
3.3.2 Disk Mapping
3.4 CoC-Based Multi-Pass Rendering
3.4.1 Progressive Lens Sample Sequence
3.4.2 Final Render Pass Determination
CHAPTER 4 GPU IMPLEMENTATION
4.1 Overview
4.2 Rendering Pipeline
4.3 Focal Plane Transformation
4.4 Lens Sample Transformation
CHAPTER 5 EXPERIMENTAL RESULTS
5.1 Number of Lens Samples
5.2 Number of Render Passes
5.3 Render Pass Parameter
5.4 Comparison with Previous Methods
CHAPTER 6 CONCLUSION
Bibliography
Appendix