VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area in the field of scientific
visualization. The Extreme Vertices Model (EVM) has proven to be a complete
intermediate model for visualizing and manipulating volume data using a
surface rendering approach. However, integrating the advantages of surface
rendering with the superior visual-exploration capabilities of volume
rendering would produce a very complete visualization and editing system for
volume data. We therefore define an enhanced EVM-based model that
incorporates the volumetric information required to achieve a nearly direct
volume visualization technique. VolumeEVM maintains the same EVM-based data
structure plus a sorted list of density values corresponding to the interior
voxels of the EVM-based VoIs. Defining a function that relates the interior
voxels of the EVM to the set of densities was thus mandatory. This report
presents the definition of this new surface/volume integrated model based on
the well-known EVM encoding and proposes implementations of the main
software-based direct volume rendering techniques through the proposed model.
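The coupling described above — an EVM-encoded volume of interest plus a density list for its interior voxels, tied together by a voxel-to-density function — can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: a boolean occupancy mask stands in for the decoded EVM interior, and densities are stored in a fixed scan order so each interior voxel's rank in that scan indexes its density.

```python
import numpy as np

class VolumeEVMSketch:
    """Illustrative stand-in for the surface/volume coupling: the EVM
    itself would encode the VoI boundary; here a boolean mask plays the
    role of the decoded interior occupancy."""

    def __init__(self, interior_mask, volume):
        self.mask = interior_mask
        # Densities of interior voxels, stored in (z, y, x) scan order.
        self.densities = volume[interior_mask]
        # rank[z, y, x] = position of that voxel in the density list.
        self.rank = (np.cumsum(interior_mask.ravel())
                       .reshape(interior_mask.shape) - 1)

    def density_at(self, z, y, x):
        """The function relating an interior EVM voxel to its density."""
        if not self.mask[z, y, x]:
            return None  # outside the VoI: no density stored
        return self.densities[self.rank[z, y, x]]
```

A direct-volume-rendering pass over the model would then fetch densities only for voxels the EVM marks as interior, which is the point of keeping the surface and volume representations integrated.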
Recommended from our members
EWA Splatting
In this paper, we present a framework for high quality splatting based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter, combining a reconstruction kernel with a low-pass filter. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping, we call our technique EWA splatting. Our framework allows us to derive EWA splat primitives for volume data and for point-sampled surface data. It provides high image quality without aliasing artifacts or excessive blurring for volume data and, additionally, features anisotropic texture filtering for point-sampled surfaces. It also handles nonspherical volume kernels efficiently; hence, it is suitable for regular, rectilinear, and irregular volume datasets. Moreover, our framework introduces a novel approach to compute the footprint function, facilitating efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in rendering surface and volume data.
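The resampling-filter idea combines two Gaussians, and convolving Gaussians simply adds their covariance matrices. A minimal sketch of that screen-space combination is below; the function names and the isotropic unit-variance low-pass filter are assumptions for illustration, with `J` standing for the local affine approximation (Jacobian) of the object-to-screen mapping used in EWA-style derivations.

```python
import numpy as np

def ewa_resampling_covariance(V, J, h=1.0):
    """Screen-space covariance of an EWA-style resampling filter.

    V : 3x3 covariance of the elliptical Gaussian reconstruction kernel
        (object space).
    J : 2x3 local affine approximation of the object-to-screen mapping.
    h : standard deviation of the isotropic Gaussian low-pass filter.

    Convolving two Gaussians adds their covariances, so the projected
    reconstruction kernel J V J^T is combined with the low-pass filter
    by matrix addition.
    """
    V_screen = J @ V @ J.T                   # projected kernel (2x2)
    return V_screen + (h ** 2) * np.eye(2)   # + screen-space low-pass

def ewa_weight(x, mean, cov):
    """Evaluate the (unnormalized) elliptical Gaussian splat at pixel x."""
    d = np.asarray(x, float) - mean
    return float(np.exp(-0.5 * d @ np.linalg.solve(cov, d)))
```

Rasterizing a splat then amounts to evaluating `ewa_weight` over the pixels inside the ellipse defined by the combined covariance.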
Simple-BEV: What Really Matters for Multi-Sensor BEV Perception?
Building 3D perception systems for autonomous vehicles that do not rely on
high-density LiDAR is a critical research problem because of the expense of
LiDAR systems compared to cameras and other sensors. Recent research has
developed a variety of camera-only methods, where features are differentiably
"lifted" from the multi-camera images onto the 2D ground plane, yielding a
"bird's eye view" (BEV) feature representation of the 3D space around the
vehicle. This line of work has produced a variety of novel "lifting" methods,
but we observe that other details in the training setups have shifted at the
same time, making it unclear what really matters in top-performing methods. We
also observe that using cameras alone is not a real-world constraint,
considering that additional sensors like radar have been integrated into real
vehicles for years already. In this paper, we first of all attempt to elucidate
the high-impact factors in the design and training protocol of BEV perception
models. We find that batch size and input resolution greatly affect
performance, while lifting strategies have a more modest effect -- even a
simple parameter-free lifter works well. Second, we demonstrate that radar data
can provide a substantial boost to performance, helping to close the gap
between camera-only and LiDAR-enabled systems. We analyze the radar usage
details that lead to good performance, and invite the community to reconsider
this commonly neglected part of the sensor platform.
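A "simple parameter-free lifter" of the kind the abstract alludes to can be sketched as plain bilinear sampling: project each 3D BEV cell center into the camera image and sample the feature map there, with no learned lifting weights. This is an illustrative reconstruction under assumed conventions (row-major `(H, W, C)` features, a 3x4 projection matrix `P`), not the paper's actual code.

```python
import numpy as np

def lift_to_bev(feat, P, bev_points):
    """Parameter-free lifting by bilinear sampling (illustrative sketch).

    feat       : (H, W, C) camera feature map.
    P          : 3x4 camera projection matrix.
    bev_points : (N, 3) 3D centers of BEV cells.
    Returns (N, C) features; cells that project outside the image or
    behind the camera receive zeros.
    """
    H, W, C = feat.shape
    pts_h = np.hstack([bev_points, np.ones((len(bev_points), 1))])
    proj = pts_h @ P.T                            # homogeneous pixels
    z = proj[:, 2]
    uv = proj[:, :2] / np.maximum(z[:, None], 1e-6)
    out = np.zeros((len(bev_points), C))
    valid = ((z > 0)
             & (uv[:, 0] >= 0) & (uv[:, 0] <= W - 1)
             & (uv[:, 1] >= 0) & (uv[:, 1] <= H - 1))
    u, v = uv[valid, 0], uv[valid, 1]
    u0, v0 = np.floor(u).astype(int), np.floor(v).astype(int)
    u1, v1 = np.minimum(u0 + 1, W - 1), np.minimum(v0 + 1, H - 1)
    fu, fv = u - u0, v - v0
    # Bilinear interpolation of the four neighbouring feature vectors.
    out[valid] = (feat[v0, u0] * ((1 - fu) * (1 - fv))[:, None]
                + feat[v0, u1] * (fu * (1 - fv))[:, None]
                + feat[v1, u0] * ((1 - fu) * fv)[:, None]
                + feat[v1, u1] * (fu * fv)[:, None])
    return out
```

Features from multiple cameras would be accumulated (e.g. averaged over valid views) per BEV cell; the point of the sketch is that nothing in the lifting step itself needs to be learned.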
Multiresolution Ray Tracing For Point-Based Geometry
The primary concern in this thesis is the incorporation of multiresolution-based optimization into ray tracing algorithms specially tailored for point-based geometry.
Visual Analysis of Large Particle Data
Particle simulations are a proven and widely used numerical method in research and engineering. For example, particle simulations are employed to study fuel atomization in aircraft turbines, and the formation of the universe is investigated through simulations of dark-matter particles. The amounts of data produced are immense: current simulations contain trillions of particles that move over time and interact with one another. Visualization offers great potential for the exploration, validation, and analysis of scientific datasets as well as of the underlying
models. However, its focus is usually on structured data with a regular topology. Particles, in contrast, move freely through space and time, a viewpoint known in physics as the Lagrangian frame of reference. Particles can be converted from the Lagrangian frame into a regular Eulerian frame of reference, such as a uniform grid, but for large numbers of particles this conversion incurs considerable cost, and it usually loses precision while increasing memory consumption. In this dissertation I investigate new visualization techniques that operate directly in the Lagrangian view, enabling efficient and effective visual analysis of large particle data.
Fast Normal Approximation of Point Clouds in Screen Space
Displaying large point clouds of mainly planar point distributions still comes with severe restrictions regarding
surface normal and surface reconstruction. Point data must be clustered or traversed to extract the local
neighborhood needed to retrieve surface information. We propose using the rendering pipeline to
circumvent a pre-computation of the neighborhood in world space and instead perform a fast approximation of the surface
in screen space. We present and compare three different methods for surface reconstruction within a post-process,
ranging from simple approximations to the definition of a tensor surface. All of these methods are
designed to run at interactive frame rates. We also present a correction method that increases reconstruction quality
while preserving interactive frame rates. Our results indicate that the on-the-fly computation of surface normals
is not a limiting factor on modern GPUs. As the surface information is generated during the post-process, only the
target display size is the limiting factor; the performance is independent of the point cloud's size.
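The simplest of the screen-space approximations described above can be illustrated with finite differences on a per-pixel position buffer: the cross product of the screen-x and screen-y derivatives yields a surface normal with no world-space neighborhood search. The sketch below is a CPU/NumPy illustration of the idea (a real implementation would run in a fragment shader on the G-buffer), and the buffer layout is an assumption.

```python
import numpy as np

def screen_space_normals(pos):
    """Approximate per-pixel surface normals from a screen-space
    position buffer of shape (H, W, 3) using central differences.

    This mirrors the simplest post-process variant: neighboring pixels
    stand in for the local neighborhood, so cost depends only on the
    display resolution, not on the point cloud's size.
    """
    dx = np.gradient(pos, axis=1)     # position change along screen x
    dy = np.gradient(pos, axis=0)     # position change along screen y
    n = np.cross(dx, dy)              # normal of the local tangent plane
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.maximum(norm, 1e-12)
```

Depth discontinuities (silhouettes between foreground and background points) break the planarity assumption at those pixels, which is where a correction pass of the kind the abstract mentions would come in.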
- …