10 research outputs found

    Calculating intersections of surfaces in screen space

    When surfaces intersect, one may wish to highlight the intersection curve in order to make the shape of the penetrating surfaces more visible. Highlighting the intersection is especially helpful when the surfaces are rendered transparent, because transparency makes the intersections less evident. A technique is discussed for locating intersections in screen space using only the information locally available to a pixel. The technique is designed to exploit parallelism at the pixel level and was implemented on the Pixel-Planes 5 graphics supercomputer.
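    As an illustration of the per-pixel idea described above (a CPU sketch, not the Pixel-Planes 5 implementation itself), the following C++ fragment marks pixels where the depth values of two surfaces nearly coincide, which is where their intersection curve passes. The function name and tolerance value are hypothetical.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Mark pixels where two surfaces' depths nearly coincide, using only
        // the per-pixel depth values ("local information").
        std::vector<bool> markIntersections(const std::vector<float>& depthA,
                                            const std::vector<float>& depthB,
                                            float tolerance)
        {
            std::vector<bool> hit(depthA.size(), false);
            for (size_t i = 0; i < depthA.size(); ++i)
                hit[i] = std::fabs(depthA[i] - depthB[i]) < tolerance;
            return hit;
        }

        int main()
        {
            // Two tiny 1D "depth buffers": surface A slopes toward the viewer,
            // surface B away from it, so they cross near the middle pixel.
            std::vector<float> a = {0.10f, 0.20f, 0.30f, 0.40f, 0.50f};
            std::vector<float> b = {0.50f, 0.40f, 0.30f, 0.20f, 0.10f};
            std::vector<bool> hit = markIntersections(a, b, 0.05f);
            for (size_t i = 0; i < hit.size(); ++i)
                std::printf("pixel %zu: %s\n", i, hit[i] ? "intersection" : "-");
            return 0;
        }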

    GPU-Based Tiled Ray Casting Using Depth Peeling

    HA-Buffer: Coherent Hashing for single-pass A-buffer

    Identifying all the surfaces projecting into a pixel has several important applications in computer graphics, such as transparency and CSG. These applications further require ordering the surfaces in each pixel by their distance to the viewer. In real-time rendering engines, this is often achieved by recording sorted lists of the fragments produced by the rasterization pipeline. The major challenge is that the number of fragments is not known in advance, which results in computational and memory overheads due to the necessarily dynamic nature of the data structure. In addition, many fragments that are not useful for the final image (due to opacity accumulation, for instance) still have to be stored and sorted, hurting performance. This paper proposes a novel approach that records and simultaneously sorts all fragments in a single geometry pass. The storage overhead per fragment is typically lower than 8 bits per record, and no pointers are involved. Since fragments are progressively sorted in memory, it is possible to assess during rendering whether a new fragment is useful. The approach combines the advantages of previous approaches at similar levels of performance, and is implemented in a single fragment shader of 24 lines of GLSL.
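    The following C++ sketch illustrates the general idea of keeping a pixel's fragments sorted as they arrive and rejecting fragments that lie behind an already opaque prefix; it is not the HA-buffer hashing scheme itself, and the structure, names, and opacity cutoff are assumptions made for illustration only.

        #include <cstdio>
        #include <vector>

        // A pixel's fragment list kept sorted front-to-back by depth. Each arriving
        // fragment is inserted at its sorted position; if everything in front of it
        // is already (nearly) opaque, the fragment cannot contribute and is dropped.
        struct Fragment { float depth, r, g, b, a; };

        struct PixelFragments {
            std::vector<Fragment> frags;             // sorted by increasing depth

            void insert(const Fragment& f, float opacityCutoff = 0.99f) {
                float opacityInFront = 0.f;
                auto it = frags.begin();
                for (; it != frags.end() && it->depth < f.depth; ++it)
                    opacityInFront += (1.f - opacityInFront) * it->a;
                if (opacityInFront >= opacityCutoff) return;   // useless fragment, reject
                frags.insert(it, f);                           // keeps the list sorted
            }

            // Resolve the pixel with front-to-back "over" compositing.
            void resolve(float out[3]) const {
                float accum = 0.f;
                out[0] = out[1] = out[2] = 0.f;
                for (const Fragment& f : frags) {
                    float w = (1.f - accum) * f.a;
                    out[0] += w * f.r; out[1] += w * f.g; out[2] += w * f.b;
                    accum += w;
                }
            }
        };

        int main() {
            PixelFragments p;
            p.insert({0.3f, 1.f, 0.f, 0.f, 0.5f});   // translucent red
            p.insert({0.1f, 0.f, 1.f, 0.f, 0.5f});   // translucent green, nearer
            p.insert({0.8f, 0.f, 0.f, 1.f, 1.0f});   // opaque blue, farthest
            float rgb[3];
            p.resolve(rgb);
            std::printf("composited color: %.3f %.3f %.3f\n", rgb[0], rgb[1], rgb[2]);
            return 0;
        }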

    The Simulation System for Propagation of Fire and Smoke

    This work presents a solution for a real-time fire suppression control system. It also serves as a support tool that allows creating virtual ship models and testing them against a range of representative fire scenarios. Model testing includes generating predictions faster than real time using the simulation network model developed by Hughes Associates, Inc., visualizing those predictions, and interactively modifying the model settings through the user interface. In the example, the ship geometry represents the ex-USS Shadwell, test area 688, imitating a submarine. Applying the designed visualization techniques to the example model showed that the system can process, store, and render data much faster than real time (on average, 40 times faster).

    Visualização de conjuntos de dados grandes formados por esferas

    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Ciência da Computação, Florianópolis, 2012. This work deals with the graphical visualization of datasets whose main characteristics are being composed of spheres and containing a large volume of information. The main difficulties in working with such numerous data are, besides performance, displaying the information clearly. To address these problems, a basic renderer was created based on the sprite technique, with optimizations at the object and image levels. Rendering algorithms with transparency support were then analyzed, among which depth peeling proved adequate. The algorithm was adapted to allow generating an incomplete image for the sake of performance, and the resulting error was quantified so that acceptable thresholds can be established. Finally, a rendering mode using the ambient occlusion technique was implemented for better spatial comprehension of the data, using a deferred shading optimization. The performance obtained was sufficient for interactive visualization of the datasets.
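    As a rough illustration of the depth peeling idea mentioned above (a CPU sketch, not the dissertation's GPU implementation), each pass below extracts the nearest fragment behind the previous layer and composites it front-to-back; stopping after a fixed number of layers yields the kind of deliberately incomplete image whose error the work quantifies. The data and layer count are made up.

        #include <cstdio>
        #include <limits>
        #include <vector>

        struct Fragment { float depth, r, g, b, a; };

        // Depth peeling for one pixel: each "peel" finds the nearest fragment
        // strictly behind the last extracted layer and composites it under the
        // result so far. Limiting maxLayers trades completeness for speed.
        void peelAndComposite(const std::vector<Fragment>& frags, int maxLayers,
                              float out[3])
        {
            float accum = 0.f;
            out[0] = out[1] = out[2] = 0.f;
            float lastDepth = -std::numeric_limits<float>::infinity();
            for (int layer = 0; layer < maxLayers; ++layer) {
                const Fragment* next = nullptr;
                for (const Fragment& f : frags)          // peel the next-nearest layer
                    if (f.depth > lastDepth && (!next || f.depth < next->depth))
                        next = &f;
                if (!next) break;                        // nothing left to peel
                float w = (1.f - accum) * next->a;
                out[0] += w * next->r; out[1] += w * next->g; out[2] += w * next->b;
                accum += w;
                lastDepth = next->depth;
            }
        }

        int main() {
            std::vector<Fragment> frags = {
                {0.6f, 0.f, 0.f, 1.f, 1.0f},   // opaque blue sphere behind
                {0.2f, 1.f, 0.f, 0.f, 0.4f},   // translucent red sphere in front
                {0.4f, 0.f, 1.f, 0.f, 0.4f},   // translucent green sphere between
            };
            float rgb[3];
            peelAndComposite(frags, 2, rgb);   // only 2 layers: incomplete on purpose
            std::printf("%.3f %.3f %.3f\n", rgb[0], rgb[1], rgb[2]);
            return 0;
        }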

    Efficient automatic correction and segmentation based 3D visualization of magnetic resonance images

    In recent years, the demand for automated processing techniques for digital medical image volumes has increased substantially. Existing algorithms, however, still often require manual interaction, and newly developed automated techniques are often intended for a narrow segment of processing needs. The goal of this research was to develop algorithms suitable for fast and effective correction and advanced visualization of digital MR image volumes with minimal human operator interaction. This research has resulted in a number of techniques for automated processing of MR image volumes, including a novel MR inhomogeneity correction algorithm called derivative surface fitting (dsf), an automatic tissue detection algorithm (atd), and a new fast technique for interactive 3D visualization of segmented volumes called gravitational shading (gs). These newly developed algorithms provided a foundation for the automated MR processing pipeline incorporated into the UniViewer medical imaging software developed in our group and available to the public, which allowed extensive testing and evaluation of the proposed techniques. Dsf was compared with two previously published methods on 17 digital image volumes; it demonstrated faster correction speeds and uniform image quality improvement, and was the only algorithm that did not remove anatomic detail. Gs was compared with the previously published algorithm fsvr and improved rendering quality while preserving real-time frame rates. These results show that the automated pipeline design principles used in this dissertation provide the necessary tools for developing a fast and effective system for the automated correction and visualization of digital MR image volumes.

    Medical Volume Visualization Beyond Single Voxel Values

    Three-dimensional medical imaging: Algorithms and computer systems

    This paper presents an introduction to the field of three-dimensional medical imaging. It presents medical imaging terms and concepts, summarizes the basic operations performed in three-dimensional medical imaging, and describes sample algorithms for accomplishing these operations. The paper contains a synopsis of the architectures and algorithms used in eight machines to render three-dimensional medical images, with particular emphasis on their distinctive contributions. It compares the performance of the machines along several dimensions, including image resolution, elapsed time to form an image, imaging algorithms used in the machine, and the degree of parallelism used in the architecture. The paper concludes with general trends for future developments in this field and references on three-dimensional medical imaging.

    Flexible occlusion rendering for improved views of three-dimensional medical images

    The goal of this work is to enable more rapid and accurate diagnosis of pathology from three-dimensional (3D) medical images by augmenting standard volume rendering techniques to display otherwise-occluded features within the volume. When displaying such data sets with volume rendering, appropriate selection of the transfer function is critical for determining which features of the data will be displayed. In many cases, however, no transfer function is able to produce the most useful views for diagnosis of pathology. Flexible Occlusion Rendering (FOR) is an addition to standard ray-cast volume rendering that modulates accumulated color and opacity along each ray upon detecting features indicating the separation between objects of the same intensity range. For contrast-enhanced MRI and CT data, these separation features are intensity peaks. To detect these peaks, a dual-threshold method is used to reduce sensitivity to noise. To further reduce noise and enable control over the spatial scale of the features detected, a smoothed version of the original data set is used for feature detection, while the original data is rendered at high resolution. Separating the occlusion feature detection from the volume rendering transfer function enables robust occlusion determination and seamless transition from occluded to non-occluded views of surfaces during virtual fly-throughs. FOR has been applied to virtual arthroscopy of joints from MRI data. For example, survey views of entire shoulder socket surfaces have been rendered to enable rapid evaluation by automatically removing the occluding material of the humeral head; such views are not possible with standard volume rendering. FOR has also been successfully applied to virtual ureteroscopy of the renal collecting system from CT data and to knee fracture visualization from CT data.
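    The following C++ sketch illustrates the dual-threshold peak idea described above, not the dissertation's actual FOR implementation: while marching a ray through intensity samples, a separation is flagged once the value rises above a high threshold and then drops below a low one (hysteresis suppresses noise), at which point the accumulated opacity is cleared so material behind the peak is no longer occluded. The thresholds, the constant per-sample opacity, and the sample values are illustrative assumptions.

        #include <cstdio>
        #include <vector>

        // March one ray of intensity samples, accumulating opacity, and reset
        // the accumulation when a dual-threshold intensity peak (a separation
        // between objects) has been crossed.
        bool marchWithPeakReset(const std::vector<float>& samples,
                                float tLow, float tHigh,
                                float sampleOpacity, float& accumOpacity)
        {
            bool abovePeak = false, foundSeparation = false;
            accumOpacity = 0.f;
            for (float v : samples) {
                if (v > tHigh) {
                    abovePeak = true;                  // entered a bright peak
                } else if (abovePeak && v < tLow) {    // left it: separation found
                    abovePeak = false;
                    foundSeparation = true;
                    accumOpacity = 0.f;                // un-occlude what lies behind
                }
                accumOpacity += (1.f - accumOpacity) * sampleOpacity;
            }
            return foundSeparation;
        }

        int main() {
            // Intensities along one ray: tissue, a bright contrast-filled gap, tissue.
            std::vector<float> ray = {0.2f, 0.3f, 0.9f, 0.95f, 0.3f, 0.25f};
            float accum;
            bool sep = marchWithPeakReset(ray, 0.4f, 0.8f, 0.3f, accum);
            std::printf("separation=%d, final opacity=%.2f\n", sep, accum);
            return 0;
        }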

    Real-Time deep image rendering and order independent transparency

    In computer graphics, some operations can be performed in either object space or image space. Image space computation can be advantageous, especially with the high parallelism of GPUs, improving speed, accuracy and ease of implementation. For many image space techniques, the information contained in regular 2D images is limiting. Recent graphics hardware features, namely atomic operations and dynamic memory location writes, now make it possible to capture and store all per-pixel fragment data from the rasterizer in a single pass in what we call a deep image. A deep image provides a state where all fragments are available and gives a more complete image-based geometry representation, providing new possibilities in image-based rendering techniques. This thesis investigates deep images and their growing use in real-time image space applications. A focus is new techniques for improving the performance of fundamental operations, including construction, storage, fast fragment sorting and sampling. A core and driving application is order-independent transparency (OIT). A number of deep image sorting improvements are presented, through which an order of magnitude performance increase is achieved, significantly advancing the ability to perform transparency rendering in real time. In the broader context of image-based rendering we look at deep images as a discretized 3D geometry representation and discuss sampling techniques for raycasting and antialiasing with an implicit fragment connectivity approach. Using these ideas a more computationally complex application is investigated: image-based depth of field (DoF). Deep images are used to provide partial occlusion, and in particular a form of deep image mipmapping allows a fast approximate defocus blur of up to full screen size.
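    As a sketch of one common pointer-free deep image layout (not necessarily the exact representation used in the thesis), the C++ example below counts fragments per pixel, prefix-sums the counts to obtain per-pixel offsets, and scatters all fragments into one packed array; sorting each pixel's range by depth would then be a separate step. The struct and function names are hypothetical.

        #include <cstdio>
        #include <numeric>
        #include <vector>

        struct Frag { int pixel; float depth; };

        // A packed deep image: offset[p] gives the first slot of pixel p in the
        // single fragment array, so pixel p owns packed[offset[p] .. offset[p+1]).
        struct DeepImage {
            std::vector<int>  offset;
            std::vector<Frag> packed;
        };

        DeepImage buildDeepImage(const std::vector<Frag>& frags, int numPixels)
        {
            std::vector<int> count(numPixels, 0);
            for (const Frag& f : frags) ++count[f.pixel];            // pass 1: count

            DeepImage d;
            d.offset.assign(numPixels + 1, 0);
            std::partial_sum(count.begin(), count.end(),
                             d.offset.begin() + 1);                  // prefix sum

            d.packed.resize(frags.size());
            std::vector<int> cursor(d.offset.begin(), d.offset.end() - 1);
            for (const Frag& f : frags)
                d.packed[cursor[f.pixel]++] = f;                     // pass 2: scatter
            return d;
        }

        int main() {
            std::vector<Frag> frags = {{1, 0.4f}, {0, 0.2f}, {1, 0.1f},
                                       {2, 0.9f}, {1, 0.7f}};
            DeepImage d = buildDeepImage(frags, 3);
            for (int p = 0; p < 3; ++p) {
                std::printf("pixel %d:", p);
                for (int i = d.offset[p]; i < d.offset[p + 1]; ++i)
                    std::printf(" %.1f", d.packed[i].depth);
                std::printf("\n");
            }
            return 0;
        }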