Interactive Visualization of the Largest Radioastronomy Cubes
3D visualization is an important data analysis and knowledge discovery tool;
however, interactive visualization of large 3D astronomical datasets poses a
challenge for many existing data visualization packages. We present a solution
to interactively visualize larger-than-memory 3D astronomical data cubes by
utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the
data volume into smaller sub-volumes that are distributed over the rendering
workstations. A GPU-based ray casting volume rendering is performed to generate
images for each sub-volume, which are composited to generate the whole volume
output, and returned to the user. Datasets including the HI Parkes All Sky
Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26
GB) data cubes were used to demonstrate our framework's performance. The
framework can render the GASS data cube with a maximum render time below 0.3
seconds at an output resolution of 1024 x 1024 pixels using 3 rendering
workstations and 8 GPUs. Our framework will scale to visualize larger datasets,
even of terabyte order, given suitable hardware infrastructure.
Comment: 15 pages, 12 figures. Accepted, New Astronomy, July 201
MFA-DVR: Direct Volume Rendering of MFA Models
3D volume rendering is widely used to reveal insightful intrinsic patterns of
volumetric datasets across many domains. However, the complex structures and
varying scales of volumetric data can make efficiently generating high-quality
volume rendering results a challenging task. Multivariate functional
approximation (MFA) is a new data model that addresses some of the critical
challenges: high-order evaluation of both value and derivative anywhere in the
spatial domain, compact representation for large-scale volumetric data, and
uniform representation of both structured and unstructured data. In this paper,
we present MFA-DVR, the first direct volume rendering pipeline utilizing the
MFA model, for both structured and unstructured volumetric datasets. We
demonstrate improved rendering quality using MFA-DVR on both synthetic and real
datasets through a comparative study. We show that MFA-DVR not only generates
more faithful volume renderings than local filters but also performs
high-order interpolation faster on both structured and unstructured datasets.
MFA-DVR is implemented in the existing volume rendering pipeline of the
Visualization Toolkit (VTK), making it accessible to the scientific
visualization community.
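The high-order evaluation the MFA model offers can be illustrated with a uniform cubic B-spline segment, which yields both value and derivative anywhere in the parameter domain. This is a simplified one-dimensional sketch, not the actual MFA encoding; the function name and segment-local parameterization are assumptions.

```python
def cubic_bspline(t, c):
    """Evaluate a uniform cubic B-spline segment and its derivative at
    local parameter t in [0, 1], given four control points c[0..3]."""
    # cubic B-spline basis functions for one segment
    b = ((1 - t)**3 / 6.0,
         (3*t**3 - 6*t**2 + 4) / 6.0,
         (-3*t**3 + 3*t**2 + 3*t + 1) / 6.0,
         t**3 / 6.0)
    # their exact derivatives, so gradients need no finite differencing
    db = (-(1 - t)**2 / 2.0,
          (3*t**2 - 4*t) / 2.0,
          (-3*t**2 + 2*t + 1) / 2.0,
          t**2 / 2.0)
    value = sum(w * p for w, p in zip(b, c))
    deriv = sum(w * p for w, p in zip(db, c))
    return value, deriv
```

The analytic derivative is what makes gradient-based shading possible without the finite-difference stencils that local filters require.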
Interactive isosurface ray tracing of time-varying tetrahedral volumes
We describe a system for interactively rendering isosurfaces of tetrahedral finite-element scalar fields using coherent ray tracing techniques on the CPU. By employing state-of-the-art methods in polygonal ray tracing, namely aggressive packet/frustum traversal of a bounding volume hierarchy, we can accommodate large and time-varying unstructured data. In conjunction with this acceleration structure, we introduce a novel technique for intersecting ray packets with tetrahedral primitives. Ray tracing is flexible, allowing for dynamic changes in isovalue and time step, visualization of multiple isosurfaces, shadows, and depth-peeling transparency effects. The resulting system offers the intuitive simplicity of isosurfacing, guaranteed-correct visual results, and ultimately a scalable, dynamic, and consistently interactive solution for visualizing unstructured volumes.
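Because the scalar field inside a tetrahedral finite element is linear, it is also linear along any ray segment crossing the cell, so the isosurface crossing can be found analytically. A minimal sketch of this per-cell test (a hypothetical scalar function, not the paper's packet-based implementation):

```python
def isosurface_hit(t_in, t_out, s_in, s_out, isovalue):
    """Return the ray parameter where the isosurface is crossed inside one
    tetrahedron, or None. The scalar field is linear within the cell, so
    it is linear in t along the segment [t_in, t_out] with endpoint
    scalars s_in and s_out; solve s(u) = isovalue for u in [0, 1]."""
    if s_in == s_out:
        return None                      # constant along segment: no crossing
    u = (isovalue - s_in) / (s_out - s_in)
    if 0.0 <= u <= 1.0:
        return t_in + u * (t_out - t_in)
    return None                          # crossing lies outside this cell
```

A packet version would evaluate the same formula for all rays in the packet at once, masking out rays that miss the cell.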
Direct volume rendering of unstructured tetrahedral meshes using CUDA and OpenMP
Direct volume visualization is an important method in many areas, including computational fluid dynamics and medicine. Achieving interactive rates for direct volume rendering of large unstructured volumetric grids is a challenging problem, but parallelizing direct volume rendering algorithms can help achieve this goal. Using the Compute Unified Device Architecture (CUDA), we propose a GPU-based volume rendering algorithm based on a cell-projection ray-casting algorithm originally designed for CPU implementations. We also propose a multicore-parallelized version of the cell-projection algorithm using OpenMP. In both algorithms, we favor image quality over rendering speed. Our algorithm has a low memory footprint, allowing us to render large datasets, and supports progressive rendering. We compared the GPU implementation with the serial and multicore implementations and observed significant speed-ups that, together with progressive rendering, enable interactive rates for large datasets.
Stochastic Volume Rendering of Multi-Phase SPH Data
In this paper, we present a novel method for the direct volume rendering of large smoothed-particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time-dependent, and multivariate data both as a post-process and in situ. To address the computational complexity, we introduce stochastic volume rendering, which considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are determined both in a view-dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free-surface and multi-phase flows by including a multi-material model with volumetric and surface shading in the stochastic volume rendering.
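The stochastic subsampling idea can be sketched as an unbiased estimator: each particle is considered with some probability and surviving contributions are reweighted to keep the expectation exact. A hypothetical, simplified 1D sketch (the actual method chooses probabilities view-dependently and per ray-marching step):

```python
import random

def stochastic_step_density(particles, kernel, x, p):
    """Unbiased density estimate at position x: each (position, mass)
    particle is kept with probability p, and surviving kernel
    contributions are reweighted by 1/p so the expected value matches
    the full sum over all particles."""
    total = 0.0
    for pos, mass in particles:
        if random.random() < p:
            total += mass * kernel(x - pos) / p
    return total
```

Setting p = 1 recovers the exact SPH sum; lowering p trades variance (noise) for speed, which is what enables the continuous preview-to-quality scaling described above.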
Volumetric real-time particle-based representation of large unstructured tetrahedral polygon meshes
In this paper we propose a particle-based volume rendering approach for unstructured, three-dimensional, tetrahedral polygon meshes. We stochastically generate millions of particles per second and project them on the screen in real time. In contrast to previous rendering techniques for tetrahedral volume meshes, our method does not require prior depth sorting of the geometry. Instead, the rendered image is generated by choosing the particles closest to the camera. Furthermore, we use spatial superimposing: each pixel is constructed from multiple subpixels. This approach not only increases projection accuracy but also allows combining subpixels into one superpixel, which creates the well-known translucency effect of volume rendering. We show that our method is fast enough for the visualization of unstructured three-dimensional grids under hard real-time constraints and that it scales well to high numbers of particles.
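The subpixel scheme can be sketched as a depth test per subpixel followed by averaging each block of subpixels into one superpixel, which produces the translucency effect. A minimal CPU sketch with assumed integer subpixel coordinates and scalar colors (not the authors' real-time implementation):

```python
def splat_particles(particles, width, height, sub=2):
    """Project (x, y, z, color) particles, keeping the closest particle
    per subpixel, then average sub x sub blocks into superpixels."""
    W, H = width * sub, height * sub
    depth = [[float('inf')] * W for _ in range(H)]
    color = [[0.0] * W for _ in range(H)]
    for x, y, z, c in particles:          # x, y already in subpixel coords
        if z < depth[y][x]:               # depth test: closest particle wins
            depth[y][x] = z
            color[y][x] = c
    # average each sub x sub block -> one superpixel (translucency effect)
    return [[sum(color[py*sub + j][px*sub + i]
                 for j in range(sub) for i in range(sub)) / sub**2
             for px in range(width)]
            for py in range(height)]
```

Because each subpixel only needs the nearest particle, no global depth sort of the tetrahedra is required, matching the claim above.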
A Distributed GPU-based Framework for Real-time 3D Volume Rendering of Large Astronomical Data Cubes
We present a framework to interactively volume-render three-dimensional data
cubes using distributed ray-casting and volume bricking over a cluster of
workstations powered by one or more graphics processing units (GPUs) and a
multi-core CPU. The main design target for this framework is an in-core
visualization solution able to provide three-dimensional interactive views of
terabyte-sized data cubes. We tested the presented framework using a
computing cluster comprising 64 nodes with a total of 128 GPUs. The framework
proved to be scalable to render a 204 GB data cube with an average of 30 frames
per second. Our performance analyses also compare between using NVIDIA Tesla
1060 and 2050 GPU architectures and the effect of increasing the visualization
output resolution on the rendering performance. Although our initial focus, and
the examples presented in this work, concern volume rendering of spectral data
cubes from radio astronomy, we contend that our approach is applicable to other
disciplines where close-to-real-time volume rendering of terabyte-order 3D data
sets is a requirement.
Comment: 13 pages, 7 figures. Accepted for publication in Publications of the
Astronomical Society of Australia.
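The volume-bricking step described above can be sketched as partitioning the cube into bricks and statically assigning them across GPUs. A hypothetical round-robin assignment (the paper's actual distribution strategy may differ):

```python
import itertools
import math

def assign_bricks(dims, brick, n_gpus):
    """Partition a volume of 'dims' voxels into bricks of size 'brick'
    and assign brick indices round-robin to GPUs (static load balancing)."""
    counts = [math.ceil(d / b) for d, b in zip(dims, brick)]
    bricks = list(itertools.product(*[range(c) for c in counts]))
    return {g: [b for j, b in enumerate(bricks) if j % n_gpus == g]
            for g in range(n_gpus)}
```

Each GPU then renders only its bricks, and the partial images are merged by a compositing step such as front-to-back "over" blending.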
Visual Analysis of Large Particle Data (Visuelle Analyse großer Partikeldaten)
Particle simulations are a proven and widely used numerical method in research and engineering. For example, particle simulations are used to study fuel atomization in aircraft turbines, and the formation of the universe is investigated by simulating dark-matter particles. The volumes of data produced are immense: current simulations contain trillions of particles that move and interact with each other over time. Visualization offers great potential for the exploration, validation, and analysis of scientific datasets and of the underlying
models. However, the focus is usually on structured data with a regular topology. In contrast, particles move freely through space and time; this viewpoint is known in physics as the Lagrangian frame of reference. Although particles can be converted from the Lagrangian frame into a regular Eulerian frame, such as a uniform grid, this conversion involves considerable effort for large numbers of particles. Moreover, it usually leads to a loss of precision together with increased memory consumption. In this dissertation, I will explore new visualization techniques based specifically on the Lagrangian view, enabling efficient and effective visual analysis of large particle data.