Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy
Purpose: To develop an algorithm for real-time volumetric image
reconstruction and 3D tumor localization based on a single x-ray projection
image for lung cancer radiotherapy. Methods: Given a set of volumetric images
of a patient at N breathing phases as the training data, we perform deformable
image registration between a reference phase and the other N-1 phases,
resulting in N-1 deformation vector fields (DVFs). These DVFs can be
represented efficiently by a few eigenvectors and coefficients obtained from
principal component analysis (PCA). By varying the PCA coefficients, we can
generate new DVFs, which, when applied on the reference image, lead to new
volumetric images. We then can reconstruct a volumetric image from a single
projection image by optimizing the PCA coefficients such that its computed
projection matches the measured one. The 3D location of the tumor can be
derived by applying the inverted DVF on its position in the reference image.
Our algorithm was implemented on graphics processing units (GPUs) to achieve
real-time efficiency. We generated the training data using a realistic and
dynamic mathematical phantom with 10 breathing phases. The testing data were
360 cone beam projections corresponding to one gantry rotation, simulated using
the same phantom with a 50% increase in breathing amplitude. Results: The
average relative image intensity error of the reconstructed volumetric images
is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 mm +/- 0.5 mm.
On an NVIDIA Tesla C1060 GPU card, the average computation time for
reconstructing a volumetric image from each projection is 0.24 seconds (range:
0.17 and 0.35 seconds). Conclusions: We have shown the feasibility of
reconstructing volumetric images and localizing tumor positions in 3D in near
real-time from a single x-ray image.
Comment: 8 pages, 3 figures, submitted to Medical Physics Letters
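The PCA step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the DVF matrix here is synthetic random data, and the sizes (`n_phases`, `n_voxels`, `k`) are made-up stand-ins for the 10-phase phantom in the abstract.

```python
import numpy as np

# Hypothetical setup: N-1 deformation vector fields (DVFs), each flattened
# to a vector of length 3 * n_voxels (x, y, z displacement per voxel).
rng = np.random.default_rng(0)
n_phases, n_voxels = 9, 1000            # e.g. 10 breathing phases -> 9 DVFs
dvfs = rng.normal(size=(n_phases, 3 * n_voxels))

# PCA via SVD of the mean-centered DVF matrix.
mean_dvf = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)

# Keep a few leading eigenvectors; new DVFs are generated by varying the
# coefficients w in: dvf = mean_dvf + w @ components.
k = 3
components = vt[:k]                              # (k, 3 * n_voxels)
coeffs = (dvfs - mean_dvf) @ components.T        # training coefficients

# Reconstruct one training DVF from its k coefficients; at reconstruction
# time the coefficients would instead be optimized so the computed
# projection of the warped reference image matches the measured one.
recon = mean_dvf + coeffs[0] @ components
```

Varying the entries of `coeffs` and re-applying the components is what lets the method span a whole family of plausible DVFs from only a handful of numbers per projection.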
Simultaneous Multiple Surface Segmentation Using Deep Learning
The task of automatically segmenting 3-D surfaces representing boundaries of
objects is important for quantitative analysis of volumetric images, and plays
a vital role in biomedical image analysis. Recently, graph-based methods with a
global optimization property have been developed and optimized for various
medical imaging applications. Despite their widespread use, these require human
experts to design transformations, image features, surface smoothness priors,
and re-design for a different tissue, organ or imaging modality. Here, we
propose a Deep Learning based approach for segmentation of the surfaces in
volumetric medical images, by learning the essential features and
transformations from training data, without any human expert intervention. We
employ a regional approach to learn the local surface profiles. The proposed
approach was evaluated on simultaneous intraretinal layer segmentation of
optical coherence tomography (OCT) images of normal retinas and retinas
affected by age related macular degeneration (AMD). The proposed approach was
validated on 40 retina OCT volumes including 20 normal and 20 AMD subjects. The
experiments showed statistically significant improvement in accuracy for our
approach compared to state-of-the-art graph based optimal surface segmentation
with convex priors (G-OSC). A single Convolution Neural Network (CNN) was used
to learn the surfaces for both normal and diseased images. The mean unsigned
surface positioning error of 2.31 voxels (95% CI 2.02-2.60 voxels) obtained by
the G-OSC method was reduced to a mean error with 95% CI 1.14-1.40 voxels by
our new approach. On average, our approach takes 94.34 s, requiring 95.35 MB
memory, which is much faster than the 2837.46 s and 6.87 GB memory required by
the G-OSC method on the same computer system.
Comment: 8 pages
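The accuracy metric quoted above, the mean unsigned surface positioning error, is straightforward to compute when each surface is stored as one boundary position per image column. A tiny illustration with made-up values:

```python
import numpy as np

# Hypothetical surfaces: one boundary z-position (in voxels) per (x, y)
# column, as in OCT intraretinal layer segmentation.
true_surface = np.array([[10.0, 11.0],
                         [12.0, 13.0]])
pred_surface = np.array([[10.5, 10.0],
                         [12.0, 14.0]])

# Mean unsigned surface positioning error: average absolute column-wise
# distance between predicted and reference surface positions.
muspe = np.abs(pred_surface - true_surface).mean()   # 0.625 voxels here
```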
Synergistic Visualization And Quantitative Analysis Of Volumetric Medical Images
The medical diagnosis process starts with an interview with the patient and continues with the physical exam. In practice, the medical professional may require additional screenings for a precise diagnosis. Medical imaging is one of the most frequently used non-invasive screening methods for acquiring insight into the human body. Medical imaging is not only essential for accurate diagnosis; it can also enable early prevention. Medical data visualization refers to projecting medical data into a human-understandable format on mediums such as 2D or head-mounted displays, without introducing any interpretation that might lead to clinical intervention. In contrast to visualization, quantification refers to extracting information from the medical scan to enable clinicians to make fast and accurate decisions. Despite the extraordinary progress in both medical visualization and quantitative radiology, efforts to improve these two complementary fields are often pursued independently, and their synergistic combination is under-studied. Existing image-based software platforms mostly fail to enter routine clinical use for lack of a unified strategy that guides clinicians both visually and quantitatively. Hence, there is an urgent need for a bridge connecting medical visualization and automatic quantification algorithms in the same software platform. In this thesis, we aim to fill this research gap by visualizing medical images interactively from anywhere and performing fast, accurate, and fully automatic quantification of the medical imaging data. To this end, we propose several innovative and novel methods.
Specifically, we solve the following sub-problems of the ultimate goal: (1) direct web-based out-of-core volume rendering, (2) robust, accurate, and efficient learning-based algorithms to segment highly pathological medical data, (3) automatic landmarking to aid diagnosis and surgical planning, and (4) novel artificial intelligence algorithms to determine the sufficient and necessary data for deriving large-scale problems.
Brain Tumor Synthetic Segmentation in 3D Multimodal MRI Scans
The magnetic resonance (MR) analysis of brain tumors is widely used for
diagnosis and examination of tumor subregions. The overlapping area among the
intensity distribution of healthy, enhancing, non-enhancing, and edema regions
makes the automatic segmentation a challenging task. Here, we show that a
convolutional neural network trained on high-contrast images can transform the
intensity distribution of brain lesions within their internal subregions.
Specifically, a generative adversarial network (GAN) is extended to synthesize
high-contrast images. A comparison of these synthetic images and real images of
brain tumor tissue in MR scans showed significant segmentation improvement and
decreased the number of real channels for segmentation. The synthetic images
are used as a substitute for real channels and can bypass real modalities in
the multimodal brain tumor segmentation framework. Segmentation results on
BraTS 2019 dataset demonstrate that our proposed approach can efficiently
segment the tumor areas. In the end, we predict patient survival time based on
volumetric features of the tumor subregions as well as the age of each case
through several regression models
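The final survival-prediction step described above can be sketched as an ordinary least-squares regression on volumetric features plus age. The feature values, true weights, and sample count below are synthetic stand-ins, not data from the paper:

```python
import numpy as np

# Hypothetical features: e.g. three tumor-subregion volumes plus patient age.
rng = np.random.default_rng(1)
n = 50
features = rng.normal(size=(n, 4))
true_w = np.array([2.0, -1.0, 0.5, 3.0])
survival = features @ true_w + 100.0       # noiseless toy target (days)

# Ordinary least squares with an intercept column.
X = np.hstack([features, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, survival, rcond=None)
# w[:4] recovers the feature weights; w[4] recovers the intercept.
```

In practice one would compare several such regression models (linear, tree-based, etc.) on held-out cases, as the abstract indicates.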
Semi-Automatic Image Segmentation for Volumetric Visualization of Pelvis CT-Scan Images
The current development of computerized tomography (CT) has enabled us to obtain cross-sectional images using multi-slicing techniques in a matter of seconds. The obtained images represent several tissue structures in the cross-sectional slice being imaged. One challenge in aiding diagnosis with CT images is extracting an anatomic structure of interest using image segmentation and computer-assisted volumetric visualization. For volumetric visualization of pelvis bones extracted from multi-slice CT images, all images containing part of the pelvis bone structure must be segmented. In this research, an image segmentation technique based on active contours is implemented for semi-automatic multi-slice image segmentation. The segmentation is initialized by manually defining a 2D curve model on the first slice image. This model curve is then deformed until the final 2D curve fits the boundary edges of the pelvis bone in the image. The final 2D curve from the previous slice is used as the initialization curve for the next slice, and this process continues until the final slice. The method was compared with threshold-based segmentation using homogeneous intensity distributions and with manual segmentation. Quantitative analysis of the segmentation results on each slice and qualitative analysis of the volumetric visualization are performed in this research.
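The slice-to-slice propagation loop described above can be sketched as follows. This is a structural illustration only: the synthetic volume is made up, and a simple threshold-based refinement stands in for the actual active-contour (snake) deformation the paper uses.

```python
import numpy as np

def fit_contour(slice_img, init_mask, thresh=0.5):
    # Stand-in for snake evolution: shrink the initial region onto bright
    # voxels of the current slice. A real implementation would iteratively
    # deform the curve toward image edges.
    return init_mask & (slice_img > thresh)

depth, h, w = 4, 8, 8
volume = np.zeros((depth, h, w))
volume[:, 2:6, 2:6] = 1.0              # a bright "bone" through all slices

# Manual initialization on the first slice (deliberately too large).
mask = np.zeros((h, w), dtype=bool)
mask[1:7, 1:7] = True

masks = []
for z in range(depth):
    # The fitted result on each slice seeds the contour for the next one.
    mask = fit_contour(volume[z], mask)
    masks.append(mask)

segmented = np.stack(masks)            # the final volumetric segmentation
```

The key design choice the abstract describes is exactly this seeding: only the first slice needs manual input, and anatomical continuity between adjacent slices carries the contour through the rest of the stack.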
Interactive volumetric segmentation for textile micro-tomography data using wavelets and nonlocal means
This work addresses segmentation of volumetric images of woven carbon fiber textiles from micro-tomography data. We propose a semi-supervised algorithm to classify carbon fibers that requires sparse input as opposed to completely labeled images. The main contributions are: (a) design of effective discriminative classifiers, for three-dimensional textile samples, trained on wavelet features for segmentation; (b) coupling of previous step with nonlocal means as simple, efficient alternative to the Potts model; and (c) demonstration of reuse of classifier to diverse samples containing similar content. We evaluate our work by curating test sets of voxels in the absence of a complete ground truth mask. The algorithm obtains an average 0.95 F1 score on test sets and average F1 score of 0.93 on new samples. We conclude with discussion of failure cases and propose future directions toward analysis of spatiotemporal high-resolution micro-tomography images
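The evaluation described above scores voxel classifications with the F1 measure on curated test sets. A minimal illustration of that metric, with synthetic labels:

```python
import numpy as np

# Hypothetical curated voxel labels: 1 = carbon fiber, 0 = background.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])

tp = np.sum((y_pred == 1) & (y_true == 1))   # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))   # false positives
fn = np.sum((y_pred == 0) & (y_true == 1))   # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)   # 0.8 for these labels
```

Working from sparse curated voxel sets rather than full ground-truth masks is what makes this evaluation feasible for large micro-tomography volumes.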
CEG 739: Medical Image Analysis
This course discusses applications of image analysis in medical imaging. Methods for analysis of both 2-D and 3-D (volumetric) images are covered.
Deep convolutional neural networks for segmenting 3D in vivo multiphoton images of vasculature in Alzheimer disease mouse models
The health and function of tissue rely on its vasculature network to provide
reliable blood perfusion. Volumetric imaging approaches, such as multiphoton
microscopy, are able to generate detailed 3D images of blood vessels that could
contribute to our understanding of the role of vascular structure in normal
physiology and in disease mechanisms. The segmentation of vessels, a core image
analysis problem, is a bottleneck that has prevented the systematic comparison
of 3D vascular architecture across experimental populations. We explored the
use of convolutional neural networks to segment 3D vessels within volumetric in
vivo images acquired by multiphoton microscopy. We evaluated different network
architectures and machine learning techniques in the context of this
segmentation problem. We show that our optimized convolutional neural network
architecture, which we call DeepVess, yielded a segmentation accuracy that was
better than both the current state-of-the-art and a trained human annotator,
while also being orders of magnitude faster. To explore the effects of aging
and Alzheimer's disease on capillaries, we applied DeepVess to 3D images of
cortical blood vessels in young and old mouse models of Alzheimer's disease and
wild type littermates. We found little difference in the distribution of
capillary diameter or tortuosity between these groups, but did note a decrease
in the number of longer capillary segments in aged animals as
compared to young, in both wild type and Alzheimer's disease mouse models.
Comment: 34 pages, 9 figures