1,117 research outputs found

    Evaluation of depth of field for depth perception in DVR

    Pre-print. In this paper we present a user study on the use of depth of field (DoF) for depth perception in Direct Volume Rendering (DVR). DVR with Phong shading and perspective projection is used as the baseline; depth of field is then added to assess its impact on the correct perception of ordinal depth. Accuracy and response time serve as the metrics for evaluating the usefulness of depth of field. The onsite user study has two parts, static and dynamic, and eye tracking is used to monitor the subjects' gaze. Our results show that although depth of field does not act as a proper depth cue in all conditions, it can reinforce the perception of which feature is in front of the other. The best results (high accuracy and fast response time) for correct perception of ordinal depth occur when the front feature (of the two features users were asked to choose between) is in focus and perspective projection is used.

    Validating Stereoscopic Volume Rendering

    The evaluation of stereoscopic displays for surface-based renderings is well established in terms of accurate depth perception and of tasks that require an understanding of the spatial layout of the scene. In comparison, direct volume rendering (DVR), which typically produces images with a large number of low-opacity, overlapping features, is only beginning to be critically studied on stereoscopic displays. The properties of the resulting images and the choice of parameters for DVR algorithms make assessing the effectiveness of stereoscopic displays for DVR particularly challenging; as a result, the existing literature is sparse and its results inconclusive. In this thesis, stereoscopic volume rendering is analysed for tasks that require depth perception, including stereo-acuity tasks, spatial search tasks and observer preference ratings. The evaluations focus on aspects of the DVR rendering pipeline and assess how the parameters of volume resolution, reconstruction filter and transfer function may alter task performance and the perceived quality of the produced images. The results suggest that the transfer function and the choice of reconstruction filter can affect performance on tasks with stereoscopic displays when all other parameters are kept consistent; both were also found to affect the sensitivity and bias of the participants' responses. The studies further show that properties of the reconstruction filters such as post-aliasing and smoothing do not correlate well with either task performance or quality ratings. The contributions include guidelines and recommendations on the choice of parameters for increased task performance and quality scores, as well as image-based methods of analysing stereoscopic DVR images.
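    For readers unfamiliar with the classification step named above, the following minimal sketch (ours, in Python/NumPy, not taken from the thesis) shows how a piecewise-linear 1D transfer function maps scalar samples to RGBA; the control points are hypothetical.

```python
import numpy as np

# Hypothetical control points: (normalized scalar value, R, G, B, A).
# A real transfer function would be authored for the data set at hand.
CONTROL_POINTS = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.00],   # lowest densities: fully transparent
    [0.3, 0.8, 0.5, 0.3, 0.05],   # soft material: faint, low opacity
    [0.7, 1.0, 1.0, 0.9, 0.80],   # dense material: bright, mostly opaque
    [1.0, 1.0, 1.0, 1.0, 1.00],
])

def classify(scalars: np.ndarray) -> np.ndarray:
    """Piecewise-linear lookup: map scalar samples in [0, 1] to RGBA."""
    xs = CONTROL_POINTS[:, 0]
    return np.stack(
        [np.interp(scalars, xs, CONTROL_POINTS[:, c]) for c in range(1, 5)],
        axis=-1,
    )

print(classify(np.array([0.1, 0.5, 0.9])))  # opacity rises with density
```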

    Doctor of Philosophy in Computing

    Dissertation. The aim of direct volume rendering is to facilitate exploration and understanding of three-dimensional scalar fields, referred to as volume datasets. Understanding is improved by improving depth perception, whereas exploration is facilitated by speeding up volume rendering; this dissertation considers both. The impact of depth of field (DoF) on depth perception in direct volume rendering is evaluated in a user study in which the test subjects had to choose which of two features, located at different depths, appeared to be in front in a volume-rendered image. Whereas DoF was expected to improve perception in all cases, the study revealed that applying DoF to the back feature reduced depth perception, whereas applying it to the front feature produced a marked improvement. We then worked on improving the speed of volume rendering on distributed-memory machines. Distributed volume rendering has three stages: loading, rendering, and compositing. This dissertation focuses on image compositing, specifically on optimizing the communication in image compositing algorithms. To that end, we developed the Task Overlapped Direct Send Tree image compositing algorithm, which runs on both CPU- and GPU-accelerated supercomputers and focuses on communication avoidance and on overlapping communication with computation; the Dynamically Scheduled Region-Based image compositing algorithm, which uses spatial and temporal awareness to efficiently schedule communication among compositing nodes; and a rendering and compositing pipeline that allows both rendering and image compositing to be done on the GPUs of GPU-accelerated supercomputers. We tested these on CPU- and GPU-accelerated supercomputers and explain how these improvements yield better performance than image compositing algorithms that focus only on load balancing and algorithms that have no spatial and temporal awareness of the rendering and compositing stages.
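    All such compositing algorithms ultimately rely on the standard front-to-back "over" operator to merge the partial images produced by the rendering nodes. The sketch below (a hedged illustration in Python/NumPy, not the dissertation's code; image contents and node names are hypothetical) shows that core operation for two depth-ordered partial images with premultiplied alpha.

```python
import numpy as np

def over(front: np.ndarray, back: np.ndarray) -> np.ndarray:
    """Front-to-back 'over' compositing of two RGBA images (H x W x 4)
    whose color channels are premultiplied by alpha."""
    alpha_front = front[..., 3:4]          # broadcast over the RGBA channels
    return front + (1.0 - alpha_front) * back

# Two hypothetical partial renderings from different compositing nodes,
# ordered front to back along the viewing direction.
rng = np.random.default_rng(1)
node_near = rng.random((2, 2, 4)) * 0.5
node_far = rng.random((2, 2, 4)) * 0.5
final = over(node_near, node_far)          # associative, but not commutative
```

    In a direct-send scheme, each node would receive the fragments covering its assigned image region and fold them together with `over` in depth order; the dissertation's contributions concern how that communication is scheduled and overlapped with computation.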

    Enhanced perception in volume visualization

    Due to the nature of scientific data sets, generating convenient visualizations can be a difficult task, yet it is crucial to correctly convey the relevant information in the data. When working with complex volume models, such as anatomical ones, it is important to provide accurate representations, since a misinterpretation can lead to serious mistakes when diagnosing a disease or planning surgery. In these cases, enhancing the perception of the features of interest usually helps to properly understand the data. Over the years, researchers have focused on different methods to improve the visualization of volume data sets. For instance, the definition of good transfer functions is a key issue in Volume Visualization, since transfer functions determine how materials are classified. Other approaches are based on simulating realistic illumination models to enhance spatial perception, or on using illustrative effects to provide the level of abstraction needed to correctly interpret the data. This thesis contributes new approaches to enhance visual and spatial perception in Volume Visualization. Thanks to the computing capabilities of modern graphics hardware, the proposed algorithms can modify the illumination model and simulate illustrative motifs in real time. To enhance local details, which help in perceiving the shape and surfaces of the volume, our first contribution is an algorithm that employs a common sharpening operator (unsharp masking) to modify the lighting: the overall contrast of the visualization is enhanced by brightening salient features and darkening the deeper regions of the volume model. The thesis also covers the enhancement of depth perception in Direct Volume Rendering. To this end, we propose two algorithms to simulate ambient occlusion: a screen-space technique that uses depth information to estimate the amount of light occluded, and a view-independent method that uses the density values of the data set to estimate the occlusion. Additionally, depth perception is enhanced by adding halos around the structures of interest. Maximum Intensity Projection (MIP) images provide a good understanding of the high-intensity features of the data, but lack any contextual information. To enhance depth perception in this case, we present a novel technique based on changing how intensity is accumulated. Furthermore, the perception of the spatial arrangement of the displayed structures is enhanced by adding certain colour cues. The last contribution is a new manipulation tool designed to add contextual information when cutting the volume. Based on traditional illustrative effects, this method allows the user to directly extrude structures from the cross-section of the cut; the clipped structures are displayed at different heights, preserving the information needed to correctly perceive them.
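    To make the screen-space variant concrete, here is a minimal, unoptimized sketch (Python/NumPy; our illustration of the general idea rather than the thesis's exact algorithm): neighbours in the depth buffer that lie closer to the camera than the current pixel are counted as occluders.

```python
import numpy as np

def ssao_occlusion(depth: np.ndarray, radius: int = 4,
                   bias: float = 0.01) -> np.ndarray:
    """For each pixel, the fraction of neighbours within `radius` that
    are closer to the camera (beyond `bias`) approximates how much
    ambient light is blocked.  Edges wrap around (np.roll) to keep the
    sketch short."""
    offsets = [(dy, dx)
               for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if (dy, dx) != (0, 0)]
    occlusion = np.zeros_like(depth)
    for dy, dx in offsets:
        neighbour = np.roll(depth, (dy, dx), axis=(0, 1))
        occlusion += (neighbour < depth - bias).astype(depth.dtype)
    return occlusion / len(offsets)        # 0 = open, 1 = fully occluded

depth = np.random.default_rng(2).random((64, 64))  # stand-in depth buffer
ambient_factor = 1.0 - ssao_occlusion(depth)       # multiply into ambient term
```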

    An evaluation of reconstruction filters for a path-searching task in 3D

    The choice of reconstruction filter used to interpolate between sample points when generating images from volumetric data sets can have an impact on image quality. A range of reconstruction filters exists, as well as methods for determining the quality of these filters. While it is well documented that stereoscopy can improve performance on spatial search tasks, it is not clear how artifacts introduced by the choice of reconstruction filter affect performance on these tasks. In this study we report the results of a path-tracing experiment in which we assess the effectiveness of stereoscopy and of three reconstruction filters in terms of accuracy and response time. Our results suggest that the reconstruction filter can have a significant effect on path-tracing tasks and that stereoscopy can significantly improve accuracy while slightly increasing response time.
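    As background on what differs between such filters, the sketch below (ours, not the paper's code) evaluates two common reconstruction kernels in 1D: the tent (linear) kernel and the cubic B-spline, whose stronger smoothing suppresses post-aliasing at the cost of fine detail.

```python
import numpy as np

def tent(x: np.ndarray) -> np.ndarray:
    """Linear interpolation kernel, support [-1, 1]."""
    x = np.abs(x)
    return np.where(x < 1.0, 1.0 - x, 0.0)

def cubic_bspline(x: np.ndarray) -> np.ndarray:
    """Cubic B-spline kernel, support [-2, 2]; smooths more than tent."""
    x = np.abs(x)
    return np.where(x < 1.0, (4.0 - 6.0 * x**2 + 3.0 * x**3) / 6.0,
                    np.where(x < 2.0, (2.0 - x) ** 3 / 6.0, 0.0))

def reconstruct(samples: np.ndarray, t: float, kernel) -> float:
    """Continuous value at position t from unit-spaced samples."""
    return float(np.sum(samples * kernel(t - np.arange(len(samples)))))

signal = np.array([0.0, 1.0, 0.0, 1.0, 0.0])       # alternating test signal
for kernel in (tent, cubic_bspline):
    print(kernel.__name__, reconstruct(signal, 2.0, kernel))
# tent reproduces the sample exactly (0.0); the B-spline pulls it toward
# the local mean (~0.33): more smoothing, fewer post-aliasing artifacts,
# at the cost of blurring fine detail.
```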

    ์ง์ ‘ ๋ณผ๋ฅจ ๋ Œ๋”๋ง์—์„œ ์ ์ง„์  ๋ Œ์ฆˆ ์ƒ˜ํ”Œ๋ง์„ ์‚ฌ์šฉํ•œ ํ”ผ์‚ฌ๊ณ„ ์‹ฌ๋„ ๋ Œ๋”๋ง

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Dept. of Electrical and Computer Engineering, 2021. 2. Yeong-Gil Shin. Direct volume rendering is a widely used technique for extracting information from 3D scalar fields acquired by measurement or numerical simulation. To visualize the structure inside the volume, a voxel's scalar value is often represented by a translucent color. This translucency of direct volume rendering makes it difficult to perceive the depth between nested structures. Various volume rendering techniques for improving depth perception are mainly based on illustrative rendering, while physically based techniques such as depth of field effects are difficult to apply due to their long computation times. With the development of immersive systems such as virtual and augmented reality and the growing interest in perceptually motivated medical visualization, it has become necessary to implement depth of field in direct volume rendering. This study proposes a novel method for applying depth of field effects to volume ray casting to improve depth perception. By performing ray casting with multiple rays per pixel, objects at the in-focus distance are rendered sharply, while objects at out-of-focus distances are blurred. To achieve these effects, a thin-lens camera model is used to simulate rays passing through different parts of the lens, and an effective lens sampling method is used to generate an aliasing-free image with the minimum number of lens samples, which directly affect performance. The proposed method is implemented without preprocessing on top of the GPU-based volume ray casting pipeline, so all acceleration techniques of volume ray casting can be applied without restriction. We also propose multi-pass rendering with progressive lens sampling as an acceleration technique: more lens samples are used progressively for ray generation over multiple render passes, and each pixel has a different final render pass depending on its predicted maximum blur size, based on the circle of confusion. This makes it possible to apply a different number of lens samples per pixel, depending on how strongly the depth of field effect blurs with distance. The acceleration method reduces unnecessary lens sampling and increases the GPU's cache hit rate, allowing us to generate depth of field effects at interactive frame rates in direct volume rendering. In experiments on various data sets, the proposed method generated realistic depth of field effects in real time. These results demonstrate that our method produces depth of field effects of similar quality to offline image synthesis and is up to 12 times faster than the existing depth of field method for direct volume rendering.
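    The geometric core of the method can be sketched compactly. The code below (Python/NumPy) is our hedged illustration of a generic thin-lens ray setup and a standard circle-of-confusion estimate, using a simple polar disk mapping; the dissertation's own lens sampling strategy, disk mapping and GPU pipeline are more elaborate, and all names and the camera-space convention (lens in the z = 0 plane, viewing along +z) are assumptions.

```python
import numpy as np

def thin_lens_ray(pixel_dir, focal_dist, aperture, u, v):
    """One depth-of-field ray in camera space (lens on the z = 0 plane,
    looking along +z).  `pixel_dir` is the normalized pinhole ray
    direction for the pixel; (u, v) in [0, 1)^2 is one lens sample."""
    # Simple polar disk mapping of the unit-square sample onto the lens.
    r, theta = aperture * np.sqrt(u), 2.0 * np.pi * v
    origin = np.array([r * np.cos(theta), r * np.sin(theta), 0.0])
    # All lens rays for this pixel pass through the same point on the
    # focal plane, so geometry at focal_dist stays sharp.
    focus_point = (focal_dist / pixel_dir[2]) * pixel_dir
    direction = focus_point - origin
    return origin, direction / np.linalg.norm(direction)

def coc_radius(depth, focal_dist, aperture):
    """Approximate world-space blur radius on the focal plane for a point
    at `depth`; pixels with a small predicted radius can be finished in
    an early render pass (the progressive-sampling idea)."""
    return aperture * abs(depth - focal_dist) / depth

# Average several lens samples per pixel; here, 4 samples for one pixel.
rng = np.random.default_rng(0)
pixel_dir = np.array([0.1, 0.0, 1.0]) / np.linalg.norm([0.1, 0.0, 1.0])
rays = [thin_lens_ray(pixel_dir, 5.0, 0.2, u, v)
        for u, v in rng.random((4, 2))]
```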

    Noise-based volume rendering for the visualization of multivariate volumetric data
