Large-Scale Pixel-Precise Deferred Vector Maps
Rendering vector maps is a key challenge for high-quality geographic visualization systems. In this paper, we present a novel approach to visualizing vector maps over detailed terrain models in a pixel-precise way. Our method uses a deferred line rendering technique to display vector maps directly in a screen-space shading stage over the 3D terrain visualization. Because it avoids traditional polygonal geometry rendering, our algorithm outperforms conventional vector map rendering algorithms for geographic information systems, and it supports advanced line anti-aliasing as well as slope distortion correction. Furthermore, our deferred line rendering enables interactively customizable advanced vector styling methods as well as a tool for interactive pixel-based editing operations.
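The core idea of such a deferred approach, evaluating line coverage per pixel in screen space instead of rasterizing polygonal geometry, can be sketched as follows. This is a minimal illustration rather than the paper's implementation; segment endpoints are assumed to be already projected to screen space, and `half_width` and `feather` are hypothetical styling parameters.

```python
import math

def point_segment_distance(px, py, ax, ay, bx, by):
    """Distance from pixel centre (px, py) to the screen-space segment a-b."""
    abx, aby = bx - ax, by - ay
    apx, apy = px - ax, py - ay
    ab_len2 = abx * abx + aby * aby
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if ab_len2 == 0 else max(0.0, min(1.0, (apx * abx + apy * aby) / ab_len2))
    cx, cy = ax + t * abx, ay + t * aby
    return math.hypot(px - cx, py - cy)

def line_coverage(px, py, seg, half_width=1.5, feather=1.0):
    """Anti-aliased coverage: 1 inside the line core, falling to 0 over `feather` pixels."""
    d = point_segment_distance(px, py, *seg)
    # smoothstep falloff between half_width and half_width + feather
    t = max(0.0, min(1.0, (half_width + feather - d) / feather))
    return t * t * (3.0 - 2.0 * t)
```

In a deferred shading stage this coverage value would be evaluated per pixel against nearby segments and used directly as an alpha for compositing the styled line over the terrain buffer.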
LOCALIS: Locally-adaptive Line Simplification for GPU-based Geographic Vector Data Visualization
Visualization of large vector line data is a core task in geographic and
cartographic systems. Vector maps are often displayed at different cartographic
generalization levels, traditionally by using several discrete levels-of-detail
(LODs). This limits the generalization levels to a fixed and predefined set of
LODs, and generally does not support smooth LOD transitions. However, fast GPUs
and novel line rendering techniques can be exploited to integrate dynamic
vector map LOD management into GPU-based algorithms for locally-adaptive line
simplification and real-time rendering. We propose a new technique that
interactively visualizes large line vector datasets at variable LODs. It is
based on the Douglas-Peucker line simplification principle, generating an
exhaustive set of line segments whose specific subsets represent the lines at
any variable LOD. At run time, an appropriate and view-dependent error metric
supports screen-space adaptive LOD selection and the display of the correct
subset of line segments accordingly. Our implementation shows that we can
simplify and display large line datasets interactively. We can successfully
apply line style patterns, dynamic LOD selection lenses, and anti-aliasing
techniques to our line rendering.
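The Douglas-Peucker principle behind this kind of variable-LOD scheme can be sketched by precomputing, for every vertex, the error at which that vertex first becomes significant; any tolerance then selects a vertex subset without re-running the simplification. This is a minimal CPU sketch under that assumption, not the LOCALIS implementation, which operates on GPU-resident line segments.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (py - ay) - dy * (px - ax)) / length

def dp_errors(points):
    """Assign each vertex the Douglas-Peucker error at which it is introduced."""
    n = len(points)
    err = [0.0] * n
    err[0] = err[-1] = float('inf')   # endpoints survive every tolerance
    stack = [(0, n - 1)]
    while stack:
        i, j = stack.pop()
        if j <= i + 1:
            continue
        # find the vertex farthest from the chord (i, j)
        k, dmax = i + 1, -1.0
        for m in range(i + 1, j):
            d = perpendicular_distance(points[m], points[i], points[j])
            if d > dmax:
                k, dmax = m, d
        err[k] = dmax
        stack.append((i, k))
        stack.append((k, j))
    return err

def simplify(points, tol):
    """Select the subset of vertices representing the line at tolerance `tol`."""
    return [p for p, e in zip(points, dp_errors(points)) if e >= tol]
```

Because the per-vertex errors are computed once, varying `tol` per view (or even per screen region) yields a locally adaptive LOD by pure filtering, which is the kind of operation that maps well onto a GPU.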
The Iray Light Transport Simulation and Rendering System
While ray tracing has become increasingly common and path tracing is well
understood by now, a major challenge lies in crafting an easy-to-use and
efficient system implementing these technologies. Following a purely
physically-based paradigm while still allowing for artistic workflows, the Iray
light transport simulation and rendering system allows for rendering complex
scenes by the push of a button and thus makes accurate light transport
simulation widely available. In this document we discuss the challenges and
implementation choices that follow from our primary design decisions,
demonstrating that such a rendering system can be made a practical, scalable,
and efficient real-world application that has been adopted by various companies
across many fields and is in use by many industry professionals today.
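As a toy illustration of the Monte Carlo machinery underlying any path-tracing system (not Iray's actual implementation), the following sketch estimates hemispherical irradiance with cosine-weighted importance sampling. For constant incoming radiance the estimator reproduces the analytic value, pi, since the sampling density exactly cancels the cosine term.

```python
import math
import random

def sample_cosine_hemisphere(rng):
    """Cosine-weighted direction about +z; pdf(w) = cos(theta) / pi."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def estimate_irradiance(radiance, n_samples, seed=0):
    """Monte Carlo estimate of E = integral of L(w) cos(theta) dw over the hemisphere."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w = sample_cosine_hemisphere(rng)
        cos_theta = w[2]
        pdf = cos_theta / math.pi
        total += radiance(w) * cos_theta / pdf
    return total / n_samples
```

A production system like the one described above layers scheduling, scene management, and hardware abstraction on top of estimators of this shape; the hard part is the system engineering, not the estimator itself.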
A Comparative Evaluation of Lighting Results for Glossy Surfaces Between Object-Space Shading and Screen-Space Shading
The field of computer graphics places a premium on obtaining an optimal balance between the fidelity of visual representation and the performance of rendering. The level of fidelity for traditional shading techniques that operate in screen space is generally tied to the screen resolution and thus the number of pixels that we render. Special application areas, such as stereo rendering for virtual reality head-mounted displays, demand high output update rates and screen pixel resolutions, which can then lead to significant performance penalties. This means that it would be beneficial to utilize a rendering technique that could be decoupled from the output update rate and resolution without too severely affecting the achieved rendering quality.
One technique capable of meeting this goal is to perform a 3D model's surface shading in an object-specific space. In this thesis we have implemented such a shading method, with the lighting computations over a model's surface being done on a model-specific, uniquely parameterized texture map we call a light map. As the shading is computed per light map texel, the costs do not depend on the output resolution or update rate. Additionally, we utilize the texture sampling hardware built into the Graphics Processing Units ubiquitous in modern computing systems to gain high-quality anti-aliasing of the shading results. The end result is a surface appearance that is theoretically expected to be close to that produced by highly supersampled screen-space shading techniques.
In addition to the object-space lighting technique, we also implemented a traditional screen-space version of our shading algorithm. Both of these techniques were used in a user study we organized to test against the theoretical expectation. The results from the study indicated that the object-space shaded images are perceptually close to identical to heavily supersampled screen-space images.
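The decoupling the thesis describes can be illustrated with a minimal sketch: lighting is computed once per light-map texel, and the output image merely resamples the result, here with software bilinear filtering standing in for the GPU texture hardware. The Lambertian model and all names are illustrative assumptions, not the thesis code.

```python
import math

def shade_light_map(normals, light_dir):
    """Compute Lambert shading once per light-map texel.

    Cost scales with the texel count, not with output resolution or frame rate.
    `normals` is a 2D grid of per-texel surface normals."""
    lx, ly, lz = light_dir
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    return [[max(0.0, nx * lx + ny * ly + nz * lz) for (nx, ny, nz) in row]
            for row in normals]

def bilinear_sample(tex, u, v):
    """Bilinear filtering of the shaded light map at (u, v) in [0, 1]^2."""
    h, w = len(tex), len(tex[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
    bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Rendering a frame then reduces to sampling `bilinear_sample` per output pixel, so raising the output resolution or refresh rate leaves the shading workload unchanged.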
Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering
The rendering scheme in neural radiance field (NeRF) is effective in
rendering a pixel by casting a ray into the scene. However, NeRF yields blurred
rendering results when the training images are captured at non-uniform scales,
and produces aliasing artifacts if the test images are taken in distant views.
To address this issue, Mip-NeRF proposes a multiscale representation as a
conical frustum to encode scale information. Nevertheless, this approach is
only suitable for offline rendering since it relies on integrated positional
encoding (IPE) to query a multilayer perceptron (MLP). To overcome this
limitation, we propose mip voxel grids (Mip-VoG), an explicit multiscale
representation with a deferred architecture for real-time anti-aliasing
rendering. Our approach includes a density Mip-VoG for scene geometry and a
feature Mip-VoG with a small MLP for view-dependent color. Mip-VoG encodes
scene scale using the level of detail (LOD) derived from ray differentials and
uses quadrilinear interpolation to map a queried 3D location to its features
and density from two neighboring downsampled voxel grids. To our knowledge, our
approach is the first to offer multiscale training and real-time anti-aliasing
rendering simultaneously. We conducted experiments on multiscale datasets, and
the results show that our approach outperforms state-of-the-art real-time
rendering baselines.
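The quadrilinear lookup described above, trilinear interpolation in each of the two neighboring downsampled grids blended by the fractional LOD, can be sketched as follows. The dense-list grid layout and border clamping are assumptions for illustration, not the Mip-VoG implementation.

```python
def lerp(a, b, t):
    return a * (1.0 - t) + b * t

def trilinear(grid, x, y, z):
    """Trilinear lookup in a dense voxel grid indexed grid[z][y][x], clamped at borders."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    def axis(v, n):
        v = max(0.0, min(v, n - 1.0))
        i0 = int(v)
        return i0, min(i0 + 1, n - 1), v - i0
    x0, x1, fx = axis(x, nx)
    y0, y1, fy = axis(y, ny)
    z0, z1, fz = axis(z, nz)
    c00 = lerp(grid[z0][y0][x0], grid[z0][y0][x1], fx)
    c10 = lerp(grid[z0][y1][x0], grid[z0][y1][x1], fx)
    c01 = lerp(grid[z1][y0][x0], grid[z1][y0][x1], fx)
    c11 = lerp(grid[z1][y1][x0], grid[z1][y1][x1], fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)

def quadrilinear(mips, x, y, z, lod):
    """Fourth interpolation axis: blend the two mip levels bracketing `lod`.

    (x, y, z) are in level-0 voxel units; level l halves the coordinates l times.
    `lod` would come from ray differentials in the method described above."""
    lod = max(0.0, min(lod, len(mips) - 1.0))
    l0 = int(lod)
    l1 = min(l0 + 1, len(mips) - 1)
    f = lod - l0
    s0 = trilinear(mips[l0], x / 2 ** l0, y / 2 ** l0, z / 2 ** l0)
    s1 = trilinear(mips[l1], x / 2 ** l1, y / 2 ** l1, z / 2 ** l1)
    return lerp(s0, s1, f)
```

The same lookup serves both the density grid and the feature grid; only the stored values differ, with the feature result subsequently decoded by the small MLP for view-dependent color.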
Research Proposal for an Experiment to Search for the Decay μ → eee
We propose an experiment (Mu3e) to search for the lepton flavour violating
decay mu+ -> e+e-e+. We aim for an ultimate sensitivity of one in 10^16
mu-decays, four orders of magnitude better than previous searches. This
sensitivity is made possible by exploiting modern silicon pixel detectors
providing high spatial resolution and hodoscopes using scintillating fibres and
tiles providing precise timing information at high particle rates.
Comment: Research proposal submitted to the Paul Scherrer Institute Research
Committee for Particle Physics at the Ring Cyclotron, 104 pages.