6 research outputs found

    혈관 구조 분석 기반 혈류선 추출과 불투명도 변조를 이용한 혈류 가시화 기법 (Blood Flow Visualization Using Flowline Extraction Based on Vascular Structure Analysis and Opacity Modulation)

    Doctoral dissertation, Seoul National University Graduate School, Department of Electrical and Computer Engineering, February 2016. Advisor: 신영길.

    With recent advances in the acquisition and simulation of blood flow data, blood flow visualization has become widely used in medical imaging for the diagnosis and treatment of pathological vessels. The integral-line-based method is most commonly employed to depict hemodynamic data because it exhibits the long-term flow behavior useful for flow analysis. This method generates integral lines, used as the basis for the graphical representation, by tracing the trajectory of a massless particle released into the vector field through numerical integration. However, several problems remain unsolved when this method is applied to thin, curved vascular structures. The first is locating a seeding plane, which is performed manually in existing methods and thus yields inconsistent visual results. The second is the early termination of a line integration due to locally reversed flow and narrow tubular structures, which results in flowlines that are short compared with the vessel length. The last is the line occlusion caused by the dense depiction of flowlines. Additionally, in blood flow visualization for clinical use, it is essential to clearly exhibit abnormal flow relevant to vessel diseases.

    In this dissertation, we present an enhanced method that overcomes the problems of integration-based flow visualization and depicts hemodynamics in a more informative way to assist the diagnostic process. Using the fact that blood flow passes through the inlet or outlet but is blocked by the vessel wall, we first identify the vessel inlet or outlet by an orthogonality metric between the flow velocity vector and the vessel surface normal vector. We then generate seed points on the detected inlet or outlet by Poisson disk sampling. This automatic seeding leads to a consistent and faster flow depiction by skipping the manual placement of a seeding plane to initiate the line integration. In addition, we resolve early terminated line integration by choosing the tracing direction adaptively based on the flow direction at each seed point and by performing additional seeding near the terminated location. This yields length-extended flowlines, which contribute to a faithful flow visualization.

    Based on the observation that blood flow usually follows the vessel track when there is no obstacle or leak along the passage, we define the representative flowline of each branch by the vessel centerline. We then render flowlines by assigning opacity according to their shape similarity with the vessel centerline, so that flowlines similar to the centerline are shown transparently while dissimilar ones are shown opaquely. Accordingly, our opacity modulation method makes flowlines with unusual flow patterns more noticeable while minimizing visual clutter and line occlusion. Finally, we introduce HSV (hue, saturation, value) color coding to simultaneously exhibit flow attributes such as local speed and residence time. This color coding gives a more realistic fading effect to older particles or line segments by attenuating the saturation according to the residence time, helping users intuitively comprehend multiple attributes at once.
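    As an illustration of the seeding stage described above, the sketch below marks boundary faces of a vessel surface mesh as inlet/outlet where the flow is roughly parallel to the surface normal, then draws Poisson-disk-distributed seed points from the detected region. It is a minimal NumPy sketch under assumed inputs (per-face velocity vectors, unit normals, and face centers); the threshold and disk radius are illustrative values, not those used in the dissertation.

```python
import numpy as np

def detect_inlet_outlet(face_centers, face_normals, face_velocities, cos_threshold=0.7):
    """Mark boundary faces where flow crosses the surface (inlet or outlet).

    At the vessel wall the velocity is nearly tangential (or zero), so the
    |cosine| between velocity and surface normal is small; at an inlet or
    outlet the flow passes through the surface and the |cosine| is near 1.
    """
    v_norm = np.linalg.norm(face_velocities, axis=1, keepdims=True)
    v_hat = face_velocities / np.maximum(v_norm, 1e-12)
    cos_angle = np.abs(np.sum(v_hat * face_normals, axis=1))
    return cos_angle > cos_threshold  # boolean mask of inlet/outlet faces

def poisson_disk_seeds(candidate_points, radius, rng=None):
    """Naive dart-throwing Poisson disk sampling over candidate points."""
    rng = np.random.default_rng() if rng is None else rng
    seeds = []
    for i in rng.permutation(len(candidate_points)):
        p = candidate_points[i]
        if all(np.linalg.norm(p - s) >= radius for s in seeds):
            seeds.append(p)
    return np.array(seeds)

# Usage with hypothetical mesh arrays (one row per boundary face):
# mask = detect_inlet_outlet(centers, normals, velocities)
# seeds = poisson_disk_seeds(centers[mask], radius=0.5)  # radius is illustrative
```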
    Experimental results show that our technique is well suited to depicting blood flow in vascular structures.

    Table of Contents
    Chapter 1 Introduction: 1.1 Motivation; 1.2 Problem Statement; 1.3 Main Contribution; 1.4 Organization of the Dissertation
    Chapter 2 Related Works: 2.1 Flow and Velocity Vector; 2.2 Flow Visualization; 2.3 Blood Flow Visualization (2.3.1 Geometric Method; 2.3.2 Feature-Based Method; 2.3.3 Partition-Based Method)
    Chapter 3 Integration based Flowline Extraction: 3.1 Overview; 3.2 Seeding; 3.3 Barycentric Coordinate Conversion; 3.4 Cell Searching; 3.5 Velocity Vector Calculation; 3.6 Advection; 3.7 Step Size Adaptation
    Chapter 4 Blood Flow Visualization using Flow and Geometric Analysis: 4.1 Preprocessing; 4.2 Inlet or Outlet based Seeding; 4.3 Tracing (4.3.1 Flow based Bidirectional Tracing; 4.3.2 Additional Seeding for Length Extended Line Integration); 4.4 Opacity Modulation (4.4.1 Global Opacity; 4.4.2 Local Opacity; 4.4.3 Opacity Adjustment; 4.4.4 Blending); 4.5 HSV Color Coding; 4.6 Vessel Rendering (4.6.1 Vessel Smoothing; 4.6.2 Vessel Contour Enhancement); 4.7 Flowline Drawing (4.7.1 Line Illumination; 4.7.2 Line Halo); 4.8 Animation
    Chapter 5 Experimental Results: 5.1 Evaluation on Seeding; 5.2 Evaluation on Tracing; 5.3 Evaluation on Opacity Modulation; 5.4 Parameter Study
    Chapter 6 Conclusion
    Bibliography
    Abstract in Korean (초록)
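    To make the opacity modulation described in the abstract above concrete, the following is one plausible realization, not the dissertation's formulation: each flowline is scored by a simple shape dissimilarity to the vessel centerline of its branch (here a mean closest-point distance, an assumption), and the score is mapped so that centerline-like flowlines fade out while deviating ones are drawn opaquely. The dissertation separates global and local opacity terms, which are not reproduced here.

```python
import numpy as np

def shape_dissimilarity(flowline, centerline):
    """Mean distance from flowline points to their closest centerline points.

    A stand-in similarity measure for illustration only.
    """
    d = np.linalg.norm(flowline[:, None, :] - centerline[None, :, :], axis=2)
    return d.min(axis=1).mean()

def modulate_opacity(flowlines, centerline, alpha_min=0.05, alpha_max=1.0):
    """Map dissimilarity to opacity: centerline-like flowlines become
    transparent, unusual (deviating) flowlines become opaque."""
    scores = np.array([shape_dissimilarity(f, centerline) for f in flowlines])
    lo, hi = scores.min(), scores.max()
    t = (scores - lo) / (hi - lo + 1e-12)   # 0 = most similar, 1 = most unusual
    return alpha_min + t * (alpha_max - alpha_min)

# Usage with hypothetical polylines (arrays of 3D points):
# alphas = modulate_opacity([line_a, line_b], branch_centerline)
```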

    Dynamic Volume Rendering of Functional Medical Data on Dissimilar Hardware Platforms

    In the last 30 years, medical imaging has become one of the most used diagnostic tools in the medical profession. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) technologies have become widely adopted because of their ability to capture the human body in a non-invasive manner. A volumetric dataset is a series of orthogonal 2D slices captured at a regular interval, typically along the axis of the body from the head to the feet. Volume rendering is a computer graphics technique that allows volumetric data to be visualized and manipulated as a single 3D object. Iso-surface rendering, image splatting, shear warp, texture slicing, and raycasting are all volume rendering methods, each with associated advantages and disadvantages. Raycasting is widely regarded as the highest-quality renderer of these methods.

    Originally, CT and MRI hardware was limited to providing a single 3D scan of the human body. The technology has since improved to allow sets of scans capable of capturing anatomical movements such as a beating heart. The capture of anatomical data over time is referred to as functional imaging. Functional MRI (fMRI) is used to capture changes in the human body over time. While fMRI can be used to capture any anatomical data over time, one of its more common uses is to capture brain activity. The fMRI scanning process is typically broken up into a time-consuming high-resolution anatomical scan and a series of quick low-resolution scans capturing activity. The low-resolution activity data is mapped onto the high-resolution anatomical data to show changes over time.

    Academic research has advanced volume rendering and, specifically, fMRI volume rendering. Unfortunately, academic research is typically a one-off solution to a singular medical case or dataset, so any advances are problem-specific rather than general capabilities. Additionally, academic volume renderers are often designed to work on a specific device and operating system under controlled conditions. This prevents volume rendering from being used across the ever-expanding range of computing devices, such as desktops, laptops, immersive virtual reality systems, and mobile computers like phones and tablets.

    This research will investigate the feasibility of creating a generic software capability to perform real-time 4D volume rendering, via raycasting, on desktop, mobile, and immersive virtual reality platforms. Implementing a GPU-based 4D volume raycasting method for mobile devices will harness the power of the increasing number of mobile computational devices used by medical professionals. Developing support for immersive virtual reality can enhance medical professionals' interpretation of 3D physiology with the additional depth information provided by stereoscopic 3D. The results of this research will help expand the use of 4D volume rendering beyond the traditional desktop computer in the medical field.

    Developing the same 4D volume rendering capabilities across dissimilar platforms has many challenges. Each platform relies on its own coding languages, libraries, and hardware support. There are tradeoffs between using languages and libraries native to each platform and using a generic cross-platform system, such as a game engine. Native libraries will generally be more efficient at application run time, but they require a different coding implementation for each platform.
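    Raycasting, noted above as the highest-quality of these methods, amounts to marching a ray through the volume for every pixel and accumulating color with front-to-back alpha compositing, which is why it is computationally expensive. The sketch below shows that core loop for a single ray under assumed inputs (a scalar sampling function and a transfer function mapping samples to RGBA); it is illustrative only and not the prototype's GPU implementation.

```python
import numpy as np

def march_ray(sample_fn, transfer_fn, origin, direction, t_near, t_far, step):
    """Front-to-back alpha compositing along one ray through a volume.

    sample_fn(p)   -> scalar value at 3D position p (e.g. a trilinear lookup)
    transfer_fn(v) -> (r, g, b, a) color and per-step opacity for a sample v
    """
    color = np.zeros(3)
    alpha = 0.0
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    t = t_near
    while t < t_far and alpha < 0.99:          # early ray termination
        p = np.asarray(origin, dtype=float) + t * d
        r, g, b, a = transfer_fn(sample_fn(p))
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        t += step
    return color, alpha
```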
    The decision was made to use platform-native languages and libraries in this research, whenever practical, in an attempt to achieve the best possible frame rates. 4D volume raycasting also provides unique challenges independent of the platform, specifically fMRI data loading, volume animation, and multiple-volume rendering. Additionally, real-time raycasting had never been successfully performed on a mobile device; previous research relied on less computationally expensive methods, such as orthogonal texture slicing, to achieve real-time frame rates. These challenges are addressed as the contributions of this research.

    The first contribution was exploring the feasibility of generic functional data input across desktop, mobile, and immersive virtual reality. To visualize 4D fMRI data it was necessary to build in the capability to read Neuroimaging Informatics Technology Initiative (NIfTI) files. The NIfTI format was designed to overcome limitations of 3D file formats like DICOM and to store functional imagery with a single high-resolution anatomical scan and a set of low-resolution anatomical scans. Allowing input of the NIfTI binary data required creating custom C++ routines, as no freely available object-oriented APIs existed. The NIfTI input code was built using C++ and the C++ Standard Library to be both lightweight and cross-platform.

    Multi-volume rendering is another challenge of fMRI data visualization and a contribution of this work. fMRI data is typically broken into a single high-resolution anatomical volume and a series of low-resolution volumes that capture anatomical changes. Visualizing two volumes at the same time is known as multi-volume visualization. It was therefore necessary to correctly align and scale the volumes relative to each other, and to develop a compositing method that combines data from both volumes into a single cohesive representation.

    Three prototype applications were built for the different platforms to test the feasibility of 4D volume raycasting: one each for desktop, mobile, and virtual reality. Although the backend implementations had to differ between the three platforms, the raycasting functionality and features were identical, so the same fMRI dataset resulted in the same 3D visualization independent of the platform itself. Each platform uses the same NIfTI data loader and provides support for dataset coloring and windowing (tissue density manipulation). The fMRI data can be viewed changing over time either by animating through the time steps, like a movie, or by using an interface slider to "scrub" through the different time steps of the data.

    The prototype applications' data load times and frame rates were tested to determine whether they achieved the real-time interaction goal, defined as 10 frames per second (fps) or better, based on the work of Miller [1]. The desktop version was evaluated on a 2013 MacBook Pro running OS X 10.12 with a 2.6 GHz Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GeForce GT 750M graphics card. The immersive application was tested in the C6 CAVE™, a 96-graphics-node computer cluster composed of NVIDIA Quadro 6000 graphics cards running Red Hat Enterprise Linux. The mobile application was evaluated on a 2016 9.7" iPad Pro running iOS 9.3.4; the iPad had a 64-bit Apple A9X dual-core processor with 2 GB of built-in memory. Two fMRI brain activity datasets with different voxel resolutions were used as test datasets.
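    Returning to the data-input and alignment contributions above: the prototype's NIfTI reader was written in C++, but the data layout it handles (one high-resolution anatomical volume plus a low-resolution 4D functional series) and the volume-to-volume alignment via the stored affines can be illustrated with a short Python sketch using the nibabel package, which was not part of the described work. File names here are hypothetical.

```python
import numpy as np
import nibabel as nib

# Hypothetical file names; the prototype used custom C++ NIfTI readers instead.
anat = nib.load("anatomical.nii.gz")   # single high-resolution 3D volume
func = nib.load("functional.nii.gz")   # low-resolution 4D series (x, y, z, t)

anat_data = anat.get_fdata()           # e.g. shape (256, 256, 176)
func_data = func.get_fdata()           # e.g. shape (64, 64, 36, 120)

# Each affine maps voxel indices to scanner (world) coordinates; composing them
# gives the functional-voxel-to-anatomical-voxel transform needed to align and
# scale the two volumes for multi-volume compositing.
func_to_anat = np.linalg.inv(anat.affine) @ func.affine
print(anat_data.shape, func_data.shape)
print(func_to_anat)
```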
    Datasets were tested using the 3D structural data, the 4D functional data, and a combination of the two. Frame rates for the desktop implementation were consistently above 10 fps, indicating that real-time 4D volume raycasting is possible on desktop hardware. The mobile and virtual reality platforms were able to perform real-time 3D volume raycasting consistently, a marked improvement for mobile 3D volume raycasting, which previously achieved under one frame per second [2]. Both the VR and mobile platforms were able to raycast the 4D-only data at real-time frame rates but did not consistently meet 10 fps when rendering both the 3D structural and 4D functional data simultaneously. However, 7 frames per second was the lowest frame rate recorded, indicating that hardware advances will allow consistent real-time raycasting of 4D fMRI data in the near future.

    Hybride Gefäßsegmentierung in 4D-Phasenkontrast-MRT-Daten (Hybrid Vessel Segmentation in 4D Phase-Contrast MRI Data)


    CFD-basierte hämodynamische Untersuchung patientenspezifischer intrakranieller Aneurysmen (CFD-Based Hemodynamic Analysis of Patient-Specific Intracranial Aneurysms)

    Magdeburg, Univ., Faculty of Process and Systems Engineering (Fak. für Verfahrens- und Systemtechnik), Diss., 2015. By Philipp Ber

    Fast interactive exploration of 4D MRI flow data

    One- or two-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart disease. Recent progress in rapid MRI has made it possible to acquire volumetric, three-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories, or possible paths into specific regions. The combined anatomical and functional information allows for advanced hemodynamic analysis in application areas such as stroke risk assessment, congenital and acquired heart disease, aneurysms, abdominal collaterals, and cranial blood flow.

    The complexity of 4D MRI flow datasets and of the flow-related image analysis tasks makes the development of fast, comprehensive data exploration software for advanced flow analysis challenging. Most existing tools address only individual aspects of the analysis pipeline, such as pre-processing, quantification, or visualization, or are difficult for clinicians to use. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast, intuitive interaction and visualization methods.

    The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays, and flow curves offer detailed insight into local hemodynamics. The application of the analysis pipeline is shown for six cases from clinical practice, illustrating its usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and that even inexperienced users achieve good results within reasonable processing times.
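    The animated 3D path lines mentioned above are trajectories of particles advected through the time-varying velocity field. As a minimal illustration (not the described software's implementation), the sketch below integrates a single path line with a fixed-step forward Euler scheme, assuming a callable velocity(p, t) interpolated from the 4D flow data; names and step sizes are hypothetical.

```python
import numpy as np

def trace_pathline(velocity, seed, t0, t1, dt):
    """Integrate one path line through a time-varying velocity field.

    velocity(p, t) -> 3D velocity at position p and time t (assumed to be
    interpolated from the 4D MRI flow data). Forward Euler is used for
    brevity; a production tracer would use an adaptive Runge-Kutta scheme.
    """
    points = [np.asarray(seed, dtype=float)]
    t = t0
    while t < t1:
        p = points[-1]
        v = np.asarray(velocity(p, t), dtype=float)
        if np.linalg.norm(v) < 1e-9:      # stop in stagnant or masked regions
            break
        points.append(p + dt * v)
        t += dt
    return np.array(points)

# Usage with a hypothetical velocity interpolator:
# line = trace_pathline(sampler.velocity, seed=(10.0, 5.0, 2.0),
#                       t0=0.0, t1=0.8, dt=0.01)
```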