Esophageal tumor segmentation in CT images using a Dilated Dense Attention Unet (DDAUnet)
Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, and the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge, such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Obtaining this additional information is time-consuming, and the results are error-prone and may be non-deterministic. In this paper we investigate if, and to what extent, a simplified clinical workflow based on CT alone allows one to automatically segment the esophageal tumor with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network receptive field. We collected a dataset of 792 scans from 288 distinct patients, including varying anatomies with air pockets, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 ± 0.20, a mean surface distance of 5.4 ± 20.2 mm and a 95% Hausdorff distance of 14.7 ± 25.0 mm for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
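The spatial and channel attention gates described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the weight matrix, pooling choices, and the simple sigmoid gates are all illustrative stand-ins for the learned layers of DDAUnet.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w):
    """Gate each feature channel by a descriptor from global average pooling.

    fmap: (C, H, W) feature maps; w: (C, C) stand-in for learned weights.
    """
    desc = fmap.mean(axis=(1, 2))           # global average pool -> (C,)
    gate = sigmoid(w @ desc)                # per-channel gate in (0, 1)
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    """Gate spatial positions by a channel-pooled saliency map."""
    desc = fmap.mean(axis=0)                # channel pool -> (H, W)
    gate = sigmoid(desc - desc.mean())      # stand-in for a learned 1x1 conv
    return fmap * gate[None, :, :]

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))     # toy dense-block output
w = 0.1 * rng.standard_normal((8, 8))
attended = spatial_attention(channel_attention(feat, w))
print(attended.shape)                       # gating preserves the feature-map shape
```

Because both gates lie in (0, 1), attention can only attenuate features, never amplify them; the network learns which channels and regions to suppress.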
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
Ovarian cancer has the lowest survival rate among all gynecologic cancers, predominantly due to late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluate a set of algorithms to segment OCT images of mouse ovaries. We examine five preprocessing techniques and seven segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% ± 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 94.8% ± 1.2% compared with manual segmentation. Even so, further optimization could improve performance in segmenting OCT images of the ovaries.
Funding: National Science Foundation Graduate Research Fellowship Program [DGE-1143953]; National Institutes of Health/National Cancer Institute [1R01CA195723]; University of Arizona Cancer Center [3P30CA023074]
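The benefit of Gaussian-filter preprocessing on noisy images can be sketched with a toy experiment. The synthetic disc, noise level, threshold, and accuracy metric below are all illustrative, not the paper's data or methods (the paper uses active contours rather than the simple threshold segmenter used here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic noisy image: a bright disc (stand-in for an ovary cross-section)
rng = np.random.default_rng(1)
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
img = truth * 1.0 + rng.normal(0, 0.6, truth.shape)   # speckle-like noise

def segment(im, smooth):
    """Threshold segmenter with optional Gaussian preprocessing."""
    den = gaussian_filter(im, sigma=smooth) if smooth else im
    return den > 0.5

def pixel_accuracy(pred, gt):
    return (pred == gt).mean()

raw = pixel_accuracy(segment(img, 0), truth)    # no preprocessing
filt = pixel_accuracy(segment(img, 2), truth)   # Gaussian preprocessing
print(raw, filt)
```

Smoothing suppresses the per-pixel noise that causes isolated misclassifications, at the cost of slightly blurring the object boundary; for strong speckle the trade-off favors filtering, consistent with the paper's finding.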
Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks
Quantitative assessment of the abdominal region from clinically acquired CT
scans requires the simultaneous segmentation of abdominal organs. Thanks to the
availability of high-performance computational resources, deep learning-based
methods have resulted in state-of-the-art performance for the segmentation of
3D abdominal CT scans. However, the complex characterization of organs with
fuzzy boundaries prevents the deep learning methods from accurately segmenting
these anatomical organs. Specifically, the voxels on the boundary of organs are
more vulnerable to misprediction due to the highly-varying intensity of
inter-organ boundaries. This paper investigates the possibility of improving
the abdominal image segmentation performance of the existing 3D encoder-decoder
networks by leveraging organ-boundary prediction as a complementary task. To
address the problem of abdominal multi-organ segmentation, we train the 3D
encoder-decoder network to simultaneously segment the abdominal organs and
their corresponding boundaries in CT scans via multi-task learning. The network
is trained end-to-end using a loss function that combines two task-specific
losses, i.e., complete organ segmentation loss and boundary prediction loss. We
explore two different network topologies based on the extent of weights shared
between the two tasks within a unified multi-task framework. To evaluate the
utilization of complementary boundary prediction task in improving the
abdominal multi-organ segmentation, we use three state-of-the-art
encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The
effectiveness of utilizing the organs' boundary information for abdominal
multi-organ segmentation is evaluated on two publicly available abdominal CT
datasets. A maximum relative improvement of 3.5% and 3.6% is observed in Mean
Dice Score for the Pancreas-CT and BTCV datasets, respectively.
Comment: 15 pages, 16 figures, journal paper
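The multi-task objective described above, a segmentation loss combined with a boundary-prediction loss, can be sketched as follows. The Dice-style loss, the morphological boundary extraction, and the weighting factor are illustrative choices, not the paper's exact formulation:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a probability map and a binary target."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_of(mask):
    """One-pixel boundary of a binary mask via morphological erosion."""
    return mask & ~binary_erosion(mask)

# Toy organ mask and soft network predictions for both task heads
gt = np.zeros((32, 32), bool)
gt[8:24, 8:24] = True
pred_seg = gt.astype(float) * 0.9 + 0.05          # near-correct organ prediction
pred_bnd = boundary_of(gt).astype(float) * 0.8 + 0.05

lam = 0.5                                          # boundary-task weight (illustrative)
total = (dice_loss(pred_seg, gt.astype(float))
         + lam * dice_loss(pred_bnd, boundary_of(gt).astype(float)))
print(total)
```

Training against the boundary target explicitly penalizes errors on the thin inter-organ interface voxels that, as the abstract notes, are the most vulnerable to misprediction.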
Fast widefield techniques for fluorescence and phase endomicroscopy
Thesis (Ph.D.)--Boston University
Endomicroscopy is a recent development in biomedical optics which gives researchers and physicians microscope-resolution views of intact tissue to complement macroscopic visualization during endoscopy screening. This thesis presents HiLo endomicroscopy and oblique back-illumination endomicroscopy, fast widefield imaging techniques with fluorescence and phase contrast, respectively.
Fluorescence imaging in thick tissue is often hampered by strong out-of-focus background signal. Laser scanning confocal endomicroscopy has been developed for optically-sectioned imaging free from background, but reliance on mechanical scanning fundamentally limits the frame rate and represents significant complexity and expense. HiLo is a fast, simple, widefield fluorescence imaging technique which rejects out-of-focus background signal without the need for scanning. It works by acquiring two images of the sample under uniform and structured illumination and synthesizing an optically sectioned result with real-time image processing.
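The two-image HiLo scheme can be illustrated with a toy simulation: an in-focus object retains the structured-illumination modulation while out-of-focus background washes it out, so the modulation depth recovers in-focus low frequencies and a high-pass of the uniform image supplies the rest. All objects, sizes, and filter parameters below are illustrative, not the thesis's processing pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
# Synthetic scene: in-focus bar plus smooth out-of-focus background
obj = np.zeros((64, 64))
obj[28:36, 20:44] = 1.0
bg = gaussian_filter(rng.random((64, 64)), 8) * 0.5

uniform = obj + bg
grid = 0.5 * (1 + np.sin(2 * np.pi * np.arange(64) / 8))  # illumination pattern
structured = obj * grid[None, :] + bg   # only the in-focus object is modulated

sigma = 4
hi = uniform - gaussian_filter(uniform, sigma)             # high-pass: inherently sectioned
lo = gaussian_filter(np.abs(uniform - structured), sigma)  # modulation depth: in-focus lows
eta = 1.0                                                  # band-merging scale (illustrative)
hilo = lo + eta * np.clip(hi, 0, None)
```

The fused `hilo` image is bright where the object is in focus and suppresses the unmodulated background, with no mechanical scanning required.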
Oblique back-illumination microscopy (OBM) is a label-free technique which allows, for the first time, phase gradient imaging of sub-surface morphology in thick scattering tissue with a reflection geometry. OBM works by back-illuminating the sample with the oblique diffuse reflectance from light delivered via off-axis optical fibers. The use of two diametrically opposed illumination fibers allows simultaneous and independent measurement of phase gradients and absorption contrast. Video-rate single-exposure operation using wavelength multiplexing is demonstrated.
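The separation of phase gradient and absorption from the two opposed illuminations comes down to a difference/sum computation: in a simple model each fiber's image is a shared absorption map modulated by an antisymmetric phase-gradient term, so the normalized difference cancels absorption and the sum cancels the phase term. The signal model below is a toy sketch, not the thesis's reconstruction:

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.clip(rng.normal(0, 0.1, (32, 32)), -0.5, 0.5)  # ground-truth phase gradient
absorb = 0.5 + 0.5 * rng.random((32, 32))             # shared absorption map

left = absorb * (1 + g)    # image under one off-axis fiber
right = absorb * (1 - g)   # image under the diametrically opposed fiber

phase = (left - right) / (left + right)   # absorption cancels -> phase gradient
amp = (left + right) / 2                  # phase modulation cancels -> absorption
```

Under this model the recovery is exact, which is why a single simultaneous acquisition (e.g. via wavelength multiplexing) suffices for both contrasts.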
Geometry-Aware Latent Representation Learning for Modeling Disease Progression of Barrett's Esophagus
Barrett's Esophagus (BE) is the only precursor known to Esophageal
Adenocarcinoma (EAC), a type of esophageal cancer with poor prognosis upon
diagnosis. Therefore, diagnosing BE is crucial in preventing and treating
esophageal cancer. While supervised machine learning supports BE diagnosis,
high interobserver variability in histopathological training data limits these
methods. Unsupervised representation learning via Variational Autoencoders
(VAEs) shows promise, as they map input data to a lower-dimensional manifold
with only useful features, characterizing BE progression for improved
downstream tasks and insights. However, the VAE's Euclidean latent space
distorts point relationships, hindering disease progression modeling. Geometric
VAEs provide additional geometric structure to the latent space, with RHVAE
assuming a Riemannian manifold and S-VAE a hyperspherical manifold.
Our study shows that S-VAE outperforms the vanilla VAE with better
reconstruction losses, representation classification accuracies, and
higher-quality generated images and interpolations in lower-dimensional
settings. By disentangling rotation information from the latent space, we
improve results further using a group-based architecture. Additionally, we take
initial steps towards S-AE, a novel autoencoder model generating
qualitative images without a variational framework, but retaining the
benefits of autoencoders such as stability and reconstruction quality.
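The point about Euclidean interpolation distorting relationships on a hyperspherical latent space can be made concrete with spherical linear interpolation (slerp): geodesic interpolation stays on the manifold, while straight-line interpolation cuts through its interior. This is a generic geometric sketch, not code from the study:

```python
import numpy as np

def slerp(a, b, t):
    """Great-circle (geodesic) interpolation between two unit vectors."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    if omega < 1e-9:                 # nearly identical directions
        return a
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
geodesic_mid = slerp(a, b, 0.5)      # stays on the unit sphere
linear_mid = 0.5 * (a + b)           # Euclidean midpoint leaves the manifold
print(np.linalg.norm(geodesic_mid), np.linalg.norm(linear_mid))
```

Decoding points along the geodesic keeps interpolants on the latent manifold where the model was trained, which is one reason geometry-aware latent spaces yield higher-quality interpolations for progression modeling.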