VolumeEVM: A new surface/volume integrated model
Volume visualization is a very active research area within scientific visualization. The Extreme Vertices Model (EVM) has proven to be a complete intermediate model for visualizing and manipulating volume data with a surface rendering approach. However, combining the advantages of surface rendering with the superior visual-exploration capabilities of volume rendering would produce a far more complete visualization and editing system for volume data. We therefore define an enhanced EVM-based model that incorporates the volumetric information required to achieve a nearly direct volume visualization technique. VolumeEVM retains the same EVM-based data structure, extended with a sorted list of density values corresponding to the interior voxels of the EVM-encoded volumes of interest (VoIs). A function relating the interior voxels of the EVM to this set of densities had to be defined. This report presents the definition of this new surface/volume integrated model, based on the well-known EVM encoding, and proposes implementations of the main software-based direct volume rendering techniques through the proposed model.
Interactive Visualization of the Largest Radioastronomy Cubes
3D visualization is an important data analysis and knowledge discovery tool; however, interactive visualization of large 3D astronomical datasets poses a challenge for many existing data visualization packages. We present a solution for interactively visualizing larger-than-memory 3D astronomical data cubes by utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the data volume into smaller sub-volumes that are distributed over the rendering workstations. GPU-based ray-casting volume rendering is performed to generate an image for each sub-volume; these images are composited into the whole-volume output and returned to the user. Datasets including the HI Parkes All Sky Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26 GB) data cubes were used to demonstrate our framework's performance. The framework can render the GASS data cube with a maximum render time below 0.3 seconds at an output resolution of 1024 x 1024 pixels using 3 rendering workstations and 8 GPUs. Our framework will scale to visualize larger datasets, even of terabyte order, if suitable hardware infrastructure is available.
Comment: 15 pages, 12 figures, Accepted New Astronomy July 201
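The compositing step described above can be sketched with the standard front-to-back "over" operator. The function below is an illustrative single-pixel version; the premultiplied-alpha layout and all names are assumptions, not the framework's actual API:

```python
# Hedged sketch of sub-volume compositing: each workstation returns an RGBA
# fragment for its sub-volume; fragments sorted front-to-back along the view
# direction are merged per pixel with the "over" operator.

def composite_over(layers):
    """Composite premultiplied-alpha (r, g, b, a) layers, front to back."""
    out_r = out_g = out_b = out_a = 0.0
    for r, g, b, a in layers:  # layers already sorted front-to-back
        t = 1.0 - out_a        # remaining transparency
        out_r += t * r
        out_g += t * g
        out_b += t * b
        out_a += t * a
    return (out_r, out_g, out_b, out_a)

# Two sub-volume fragments for one pixel, nearest first:
print(composite_over([(0.4, 0.0, 0.0, 0.5), (0.0, 0.3, 0.0, 0.6)]))
# → (0.4, 0.15, 0.0, 0.8)
```

Because "over" is associative, fragments can be merged pairwise across the cluster rather than gathered onto a single node.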
Time-varying volume visualization
Volume rendering is a very active research field in Computer Graphics because of its wide range of applications across the sciences, from medicine to flow mechanics. In this report, we survey the state of the art in time-varying volume rendering. We introduce several basic concepts and then establish criteria to classify the studied works: IVR versus DVR, 4D versus 3D+time, compression techniques, the architectures involved, the use of parallelism, and image-space versus object-space coherence. We also address related problems such as transfer functions and the computation of 2D cross-sections of time-varying volume data. All the reviewed papers are classified into several tables based on these criteria and, finally, several conclusions are presented.
CAVASS: A Computer-Assisted Visualization and Analysis Software System
The Medical Image Processing Group at the University of Pennsylvania has been developing (and distributing with source code) medical image analysis and visualization software systems for a long period of time. Our most recent system, 3DVIEWNIX, was first released in 1993. Since that time, a number of significant advancements have taken place with regard to computer platforms and operating systems, networking capability, the rise of parallel processing standards, and the development of open-source toolkits. CAVASS, developed by our group, is the next generation of 3DVIEWNIX. CAVASS will be freely available and open source, and it is integrated with toolkits such as the Insight Toolkit and the Visualization Toolkit. CAVASS runs on Windows, Unix, Linux, and Mac from a single code base. Rather than requiring expensive multiprocessor systems, it seamlessly provides parallel processing of the more time-consuming algorithms via inexpensive clusters of workstations. Most importantly, CAVASS is directed at the visualization, processing, and analysis of 3-dimensional and higher-dimensional medical imagery, so support for Digital Imaging and Communications in Medicine (DICOM) data and the efficient implementation of algorithms are given paramount importance.
Multi-dimensional volume rendering for PC-based medical simulation
Ph.D. thesis (Doctor of Philosophy)
Interactive High Performance Volume Rendering
This thesis is about Direct Volume Rendering on high performance computing systems. Because direct rendering methods do not create a lower-dimensional geometric representation, the whole scientific dataset must be kept in memory, so this family of algorithms has a tremendous resource demand. Direct Volume Rendering algorithms are in general well suited to implementation on dedicated graphics hardware. Nevertheless, high performance computing systems often do not provide resources for hardware-accelerated rendering, so the visualization algorithm must be implemented for the available general-purpose hardware.
Ever-growing datasets, which imply copying large amounts of data from the compute system to the scientist's workstation, together with the need to review intermediate simulation results, make porting Direct Volume Rendering to high performance computing systems highly relevant. The contribution of this thesis is twofold.
As the first contribution, after devising a software architecture for general implementations of Direct Volume Rendering on highly parallel platforms, parallelization issues and implementation details for various modern architectures are discussed. This contribution results in a highly parallel implementation that targets several platforms.
The second contribution concerns the display phase of the "Distributed Volume Rendering Pipeline". Rendering on a high performance computing system typically implies displaying the rendered result at a remote location. This thesis presents a remote rendering technique that is capable of hiding latency and can thus be used in an interactive environment.
A Distributed GPU-based Framework for real-time 3D Volume Rendering of Large Astronomical Data Cubes
We present a framework to interactively volume-render three-dimensional data cubes using distributed ray casting and volume bricking over a cluster of workstations, each powered by one or more graphics processing units (GPUs) and a multi-core CPU. The main design target for this framework is an in-core visualization solution capable of providing three-dimensional interactive views of terabyte-sized data cubes. We tested the presented framework using a computing cluster comprising 64 nodes with a total of 128 GPUs. The framework proved scalable, rendering a 204 GB data cube at an average of 30 frames per second. Our performance analyses also compare the NVIDIA Tesla 1060 and 2050 GPU architectures and examine the effect of increasing the visualization output resolution on rendering performance. Although our initial focus, and the examples presented in this work, concern volume rendering of spectral data cubes from radio astronomy, we contend that our approach is applicable to other disciplines where close-to-real-time volume rendering of terabyte-order 3D datasets is a requirement.
Comment: 13 pages, 7 figures, has been accepted for publication in Publications of the Astronomical Society of Australia
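The volume bricking mentioned above amounts to partitioning the data cube into contiguous sub-volumes, one per rendering node. A minimal sketch, assuming bricks are cut along the slowest axis (the framework's actual brick layout may differ):

```python
# Illustrative sketch of volume bricking: split `depth` slices of a data cube
# into `n_bricks` contiguous (start, stop) slice ranges, spreading any
# remainder so brick sizes differ by at most one slice.

def brick_ranges(depth, n_bricks):
    """Split `depth` slices into `n_bricks` contiguous half-open ranges."""
    base, extra = divmod(depth, n_bricks)
    ranges, start = [], 0
    for i in range(n_bricks):
        stop = start + base + (1 if i < extra else 0)  # spread remainder
        ranges.append((start, stop))
        start = stop
    return ranges

print(brick_ranges(10, 3))
# → [(0, 4), (4, 7), (7, 10)]
```

Each node then loads only its brick, which is what keeps a terabyte-order cube renderable in-core across the cluster.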
Three architectures for volume rendering
Volume rendering is a key technique in scientific visualization that lends itself to significant exploitable parallelism. The high computational demands of real-time volume rendering and continued technological advances in VLSI give impetus to the development of special-purpose volume rendering architectures. This paper presents and characterizes three recently developed volume rendering engines based on the ray-casting method. A taxonomy of the algorithmic variants of ray casting and the details of each ray-casting architecture are discussed. The paper then compares the machines' features and provides an outlook on future developments in the area of volume rendering hardware.
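All three engines surveyed above build on ray casting. For reference, the core per-ray accumulation that such architectures accelerate in hardware can be sketched in software as follows (uniform sampling, emission-absorption model; names are illustrative, and real engines add interpolation, gradient estimation, and shading):

```python
# Hedged sketch of the per-ray loop in ray casting: march front to back,
# map each sampled density through a transfer function, and accumulate
# colour and opacity with early ray termination.

def cast_ray(sample_densities, transfer):
    """Accumulate (colour, alpha) along one ray, front to back."""
    color, alpha = 0.0, 0.0
    for d in sample_densities:       # densities at successive ray samples
        c, a = transfer(d)           # transfer function: density -> (c, a)
        color += (1.0 - alpha) * c * a
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:             # early ray termination
            break
    return color, alpha

# Toy transfer function mapping density directly to grey value and opacity:
print(cast_ray([0.2, 0.8, 0.5], lambda d: (d, d)))
```

The independence of rays from one another is the exploitable parallelism the abstract refers to: each ray (or pixel) can be assigned to a separate processing element.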
Large Model Visualization: Techniques and Applications
The size of datasets in scientific computing is rapidly increasing. This increase is driven by the growth in processing power over the past years, which in turn has been invested in increasing the accuracy and the size of the models. A similar trend has enabled a significant improvement of medical scanners; modern scanners can generate more than 1000 slices at a resolution of 512x512 in daily practice. Even in computer-aided engineering, typical models easily contain several million polygons. Unfortunately, data complexity is growing faster than the rendering performance of modern computer systems. This is not only due to the slower-growing performance of the graphics subsystems, but in particular to the significantly slower-growing memory bandwidth for transferring geometry and image data from main memory to the graphics accelerator.
Large model visualization addresses this growing divide between data complexity and rendering performance. Most methods focus on reducing geometric or pixel complexity, which in turn reduces the memory bandwidth requirements.
In this dissertation, we discuss new approaches from three different research areas. All approaches target the reduction of processing complexity to achieve interactive visualization of large datasets. In the second part, we introduce applications of the presented approaches. Specifically, we introduce the new VIVENDI system for interactive virtual endoscopy, along with other applications
from mechanical engineering, scientific computing, and architecture.