
    Rendering techniques for multimodal data

    Many different direct volume rendering methods have been developed to visualize 3D scalar fields on uniform rectilinear grids. However, little work has been done on simultaneously rendering several properties of the same 3D region measured with different registration devices or at different instants of time. The demand for this type of visualization is rapidly increasing in scientific applications such as medicine, in which the visual integration of multiple modalities allows a better comprehension of the anatomy and a perception of its relationships with activity. This paper presents different strategies of Direct Multimodal Volume Rendering (DMVR). It is restricted to voxel models with a known 3D rigid alignment transformation. The paper evaluates at which steps of the rendering pipeline the data fusion must be realized in order to accomplish the desired visual integration and to provide fast re-renders when some fusion parameters are modified. In addition, it analyzes how existing monomodal visualization algorithms can be extended to multiple datasets and compares their efficiency and computational cost.
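
    As a purely illustrative sketch of the idea discussed above, the C++ fragment below contrasts two points in a ray-casting loop where fusion of two aligned modalities could take place: merging the scalar properties before classification, or classifying each modality separately and blending the resulting colors. The types, transfer-function signatures and the blend weight w are hypothetical placeholders, not the paper's actual API.

    struct RGBA { float r, g, b, a; };

    // Property-level fusion: merge the two scalar samples first, then
    // classify the fused value with a single transfer function.
    RGBA fuseBeforeClassification(float sampleA, float sampleB, float w,
                                  RGBA (*transferFunc)(float)) {
        float fused = (1.0f - w) * sampleA + w * sampleB;  // weighted merge of properties
        return transferFunc(fused);
    }

    // Color-level fusion: classify each modality with its own transfer
    // function, then blend the resulting RGBA contributions.
    RGBA fuseAfterClassification(float sampleA, float sampleB, float w,
                                 RGBA (*tfA)(float), RGBA (*tfB)(float)) {
        RGBA ca = tfA(sampleA);
        RGBA cb = tfB(sampleB);
        return { (1.0f - w) * ca.r + w * cb.r,
                 (1.0f - w) * ca.g + w * cb.g,
                 (1.0f - w) * ca.b + w * cb.b,
                 (1.0f - w) * ca.a + w * cb.a };
    }

    Fusing before classification needs only one classification per sample but a shared transfer function; fusing afterwards keeps per-modality transfer functions at the cost of extra classification work, which is the kind of trade-off the abstract's pipeline-stage analysis refers to.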

    Importance driven environment map sampling

    In this paper we present an automatic and efficient method for supporting Image Based Lighting (IBL) in bidirectional methods which improves both the sampling of the environment and the detection and sampling of important regions of the scene, such as windows and doors. These often have a small area in proportion to that of the entire scene, so paths which pass through them are generated with a low probability. The method proposed in this paper improves this by taking into account view importance, and modifies the lighting distribution to use light transport information. This also automatically constructs a sampling distribution in locations which are relevant to the camera position, thereby improving sampling. Results are presented when our method is applied to bidirectional rendering techniques; in particular we show results for Bidirectional Path Tracing, Metropolis Light Transport and Progressive Photon Mapping. Efficiency results demonstrate speed-ups of orders of magnitude (depending on the rendering method used) when compared to other methods.
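
    The sketch below illustrates, under stated assumptions, the general mechanism of weighting an environment map by a per-texel importance estimate and sampling texels proportionally. The luminance and importance arrays and the simple linear walk over the discrete distribution are illustrative simplifications, not the authors' actual construction.

    #include <vector>
    #include <random>
    #include <cstddef>

    struct EnvSample { std::size_t texel; float prob; };

    // Draw an environment-map texel with probability proportional to
    // luminance * importance (hypothetical per-texel importance values).
    EnvSample sampleEnvMap(const std::vector<float>& luminance,
                           const std::vector<float>& importance,
                           std::mt19937& rng) {
        std::vector<float> weight(luminance.size());
        float total = 0.0f;
        for (std::size_t i = 0; i < luminance.size(); ++i) {
            weight[i] = luminance[i] * importance[i];
            total += weight[i];
        }
        std::uniform_real_distribution<float> uni(0.0f, total);
        float u = uni(rng);
        std::size_t chosen = 0;
        for (; chosen + 1 < weight.size() && u > weight[chosen]; ++chosen)
            u -= weight[chosen];                 // linear walk over the discrete CDF
        return { chosen, weight[chosen] / total };
    }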

    Design of a multimodal rendering system

    This paper addresses the rendering of aligned regular multimodal datasets. It presents a general framework of multimodal data fusion that includes several data merging methods. We also analyze the requirements of a rendering system able to provide these different fusion methods. On the basis of these requirements, we propose a novel design for a multimodal rendering system. The design has been implemented and tested, proving to be efficient and flexible.
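
    A minimal, hypothetical sketch of how such a system could expose interchangeable merging methods behind one abstraction; the class and method names are illustrative only and do not reflect the paper's actual design.

    struct RGBA { float r, g, b, a; };

    // Common interface: a merging method combines co-located samples
    // from two aligned modalities into a single color contribution.
    class FusionMethod {
    public:
        virtual ~FusionMethod() = default;
        virtual RGBA fuse(float sampleA, float sampleB) const = 0;
    };

    class WeightedFusion : public FusionMethod {
    public:
        explicit WeightedFusion(float w) : w_(w) {}
        RGBA fuse(float sampleA, float sampleB) const override {
            // Placeholder rule: blend the scalars, then map to a grey color.
            float v = (1.0f - w_) * sampleA + w_ * sampleB;
            return { v, v, v, v };
        }
    private:
        float w_;
    };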

    Hardware and software improvements of volume splatting

    This paper proposes different hardware-based accelerations of the three classical splatting strategies: composite-every-sample, object-space sheet-buffer and image-space sheet-buffer.
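
    For illustration, the fragment below sketches the composite-every-sample strategy in plain C++: projected voxel footprints (splats) are traversed in visibility order and composited immediately into the image, front to back. The Splat structure and the square footprint are simplifications of the usual Gaussian footprint kernel, and the hardware acceleration that is the paper's contribution is not shown.

    #include <vector>

    struct RGBA  { float r, g, b, a; };
    struct Splat { int x, y, radius; RGBA color; };   // projected voxel footprint

    void compositeEverySample(const std::vector<Splat>& splatsFrontToBack,
                              std::vector<RGBA>& image, int width, int height) {
        for (const Splat& s : splatsFrontToBack) {
            for (int y = s.y - s.radius; y <= s.y + s.radius; ++y) {
                for (int x = s.x - s.radius; x <= s.x + s.radius; ++x) {
                    if (x < 0 || y < 0 || x >= width || y >= height) continue;
                    RGBA& dst = image[y * width + x];
                    float t = 1.0f - dst.a;               // remaining transparency
                    dst.r += t * s.color.a * s.color.r;   // front-to-back compositing
                    dst.g += t * s.color.a * s.color.g;
                    dst.b += t * s.color.a * s.color.b;
                    dst.a += t * s.color.a;
                }
            }
        }
    }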

    Stream programming framework for global illumination techniques using a GPU

    Stream processors are becoming an affordable alternative for implementing hardware-assisted rendering techniques that were usually relegated to offline use. We built a stream processing framework based on the Stream Programming Model concepts, and selected the Photon Mapping algorithm and an NVIDIA GPU (Graphics Processing Unit) as a test-case implementation of a global illumination technique. We defined a set of C++ classes to encapsulate the components (kernels and streams) of this new paradigm, using OpenGL and the Cg language. Our application combines the Photon Splatting method and a BVH (Bounding Volume Hierarchy) acceleration structure into a rendering pipeline relying almost entirely on the GPU. Finally, we evaluated its performance using a Cornell Box model.
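
    The abstract does not give the classes themselves, so the following is only a guess at the shape of such a kernel/stream abstraction, with the GPU back end (OpenGL and Cg in the paper) reduced to a plain CPU loop; all names are hypothetical.

    #include <vector>
    #include <functional>
    #include <utility>

    // A stream is an ordered collection of homogeneous records.
    template <typename T>
    using Stream = std::vector<T>;

    // A kernel maps an input stream to an output stream element by element;
    // on a GPU this role would be played by a fragment program run over a texture.
    template <typename In, typename Out>
    class Kernel {
    public:
        explicit Kernel(std::function<Out(const In&)> fn) : fn_(std::move(fn)) {}
        Stream<Out> run(const Stream<In>& input) const {
            Stream<Out> output;
            output.reserve(input.size());
            for (const In& x : input) output.push_back(fn_(x));
            return output;
        }
    private:
        std::function<Out(const In&)> fn_;
    };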

    Performance and quality analysis of convolution-based volume illumination

    Convolution-based techniques for volume rendering are among the fastest in the on-the-fly volumetric illumination category. Such methods, however, are still considerably slower than conventional local illumination techniques. In this paper we describe how to adapt two commonly used strategies for reducing aliasing artifacts, namely pre-integration and supersampling, to such techniques. These strategies can help reduce the sampling rate of the lighting information (and thus the number of convolutions), bringing considerable performance benefits. We present a comparative analysis of their effectiveness in offering performance improvements. We also analyze the (negligible) differences they introduce when comparing their output to the reference method. These strategies can be highly beneficial in setups where direct volume rendering of continuously streaming data is desired and continuous recomputation of full lighting information is too expensive, or where memory constraints make it preferable not to keep additional precomputed volumetric data in memory. In such situations these strategies make single-pass, convolution-based volumetric illumination models viable for a broader range of applications, and this paper provides practical guidelines for using and tuning such strategies for specific use cases.
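
    As a rough, hypothetical sketch of the lighting-sampling-rate reduction described above: the ray is marched with a fine step for opacity, but the expensive lighting term (the convolution result) is refreshed only every lightingStride samples and reused in between. The callbacks and the piecewise-constant reuse are simplifications for illustration, not the paper's implementation.

    #include <functional>

    float integrateRay(int numSamples, int lightingStride,
                       const std::function<float(int)>& sampleOpacity,
                       const std::function<float(int)>& computeLighting) {
        float radiance = 0.0f;
        float transmittance = 1.0f;
        float lighting = 0.0f;
        for (int i = 0; i < numSamples; ++i) {
            if (i % lightingStride == 0)
                lighting = computeLighting(i);   // costly convolution, evaluated sparsely
            float alpha = sampleOpacity(i);      // cheap density lookup at every step
            radiance += transmittance * alpha * lighting;
            transmittance *= (1.0f - alpha);
        }
        return radiance;
    }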