
    Esophageal tumor segmentation in CT images using a Dilated Dense Attention Unet (DDAUnet)

    Manual or automatic delineation of the esophageal tumor in CT images is known to be very challenging. This is due to the low contrast between the tumor and adjacent tissues, the anatomical variation of the esophagus, as well as the occasional presence of foreign bodies (e.g. feeding tubes). Physicians therefore usually exploit additional knowledge such as endoscopic findings, clinical history, and additional imaging modalities like PET scans. Acquiring this additional information is time-consuming, while the process is error-prone and may yield non-deterministic results. In this paper we investigate whether, and to what extent, a simplified clinical workflow based on CT alone allows the esophageal tumor to be segmented automatically with sufficient quality. For this purpose, we present a fully automatic end-to-end esophageal tumor segmentation method based on convolutional neural networks (CNNs). The proposed network, called Dilated Dense Attention Unet (DDAUnet), leverages spatial and channel attention gates in each dense block to selectively concentrate on determinant feature maps and regions. Dilated convolutional layers are used to manage GPU memory and increase the network's receptive field. We collected a dataset of 792 scans from 288 distinct patients, including varying anatomies with air pockets, feeding tubes and proximal tumors. Repeatability and reproducibility studies were conducted for three distinct splits of training and validation sets. The proposed network achieved a DSC value of 0.79 ± 0.20, a mean surface distance of 5.4 ± 20.2 mm and a 95% Hausdorff distance of 14.7 ± 25.0 mm for 287 test scans, demonstrating promising results with a simplified clinical workflow based on CT alone. Our code is publicly available via https://github.com/yousefis/DenseUnet_Esophagus_Segmentation.
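
    The attention-plus-dilation design described above can be illustrated compactly. The following PyTorch fragment is a minimal sketch, under our own naming, of a channel gate, a spatial gate, and a dilated convolution stage as they might sit inside one dense block; the class names, reduction factor, and layer sizes are illustrative assumptions and are not taken from the authors' repository.

        import torch
        import torch.nn as nn

        class ChannelAttention(nn.Module):
            # Channel gate: global average pooling followed by a small
            # bottleneck that rescales each feature map by a weight in [0, 1].
            def __init__(self, channels, reduction=8):
                super().__init__()
                self.gate = nn.Sequential(
                    nn.AdaptiveAvgPool3d(1),
                    nn.Conv3d(channels, channels // reduction, kernel_size=1),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(channels // reduction, channels, kernel_size=1),
                    nn.Sigmoid(),
                )

            def forward(self, x):
                return x * self.gate(x)

        class SpatialAttention(nn.Module):
            # Spatial gate: a 1x1x1 convolution collapses all channels into a
            # single per-voxel attention map in [0, 1].
            def __init__(self, channels):
                super().__init__()
                self.gate = nn.Sequential(
                    nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

            def forward(self, x):
                return x * self.gate(x)

        class DilatedAttentionStage(nn.Module):
            # One stage of a dense block: a dilated 3x3x3 convolution widens
            # the receptive field at no extra parameter cost, then both gates
            # reweight the resulting feature maps.
            def __init__(self, channels, dilation=2):
                super().__init__()
                self.conv = nn.Conv3d(channels, channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
                self.channel_gate = ChannelAttention(channels)
                self.spatial_gate = SpatialAttention(channels)

            def forward(self, x):
                y = torch.relu(self.conv(x))
                return self.spatial_gate(self.channel_gate(y))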

    Improved Abdominal Multi-Organ Segmentation via 3D Boundary-Constrained Deep Neural Networks

    Quantitative assessment of the abdominal region from clinically acquired CT scans requires the simultaneous segmentation of abdominal organs. Thanks to the availability of high-performance computational resources, deep learning-based methods have achieved state-of-the-art performance for the segmentation of 3D abdominal CT scans. However, the complex characterization of organs with fuzzy boundaries prevents deep learning methods from accurately segmenting these anatomical organs. Specifically, voxels on the boundary of an organ are more vulnerable to misprediction due to the highly varying intensity of inter-organ boundaries. This paper investigates the possibility of improving the abdominal image segmentation performance of existing 3D encoder-decoder networks by leveraging organ-boundary prediction as a complementary task. To address the problem of abdominal multi-organ segmentation, we train a 3D encoder-decoder network to simultaneously segment the abdominal organs and their corresponding boundaries in CT scans via multi-task learning. The network is trained end-to-end using a loss function that combines two task-specific losses, i.e., a complete-organ segmentation loss and a boundary prediction loss. We explore two different network topologies based on the extent of weights shared between the two tasks within a unified multi-task framework. To evaluate the utility of the complementary boundary prediction task in improving abdominal multi-organ segmentation, we use three state-of-the-art encoder-decoder networks: 3D UNet, 3D UNet++, and 3D Attention-UNet. The effectiveness of utilizing the organs' boundary information for abdominal multi-organ segmentation is evaluated on two publicly available abdominal CT datasets. Maximum relative improvements of 3.5% and 3.6% in mean Dice score are observed for the Pancreas-CT and BTCV datasets, respectively.
    Comment: 15 pages, 16 figures, journal paper
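
    The multi-task objective described above, two task-specific losses trained end-to-end, admits a short sketch. The fragment below shows one plausible reading in PyTorch/SciPy: a soft Dice term for the complete-organ mask, a binary cross-entropy term for the boundary map, and a boundary label derived from the organ mask by erosion. The function names and the weighting factor `lam` are our own assumptions; the paper's exact formulation may differ.

        import numpy as np
        import torch
        import torch.nn.functional as F
        from scipy.ndimage import binary_erosion

        def boundary_map(mask):
            # Derive a boundary label from a binary organ mask: voxels that
            # vanish under one erosion step lie on the organ surface.
            mask = mask.astype(bool)
            return (mask ^ binary_erosion(mask)).astype(np.float32)

        def soft_dice_loss(prob, target, eps=1e-6):
            # Soft Dice loss over all voxels; prob holds probabilities.
            inter = (prob * target).sum()
            return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

        def multitask_loss(seg_logits, bnd_logits, seg_gt, bnd_gt, lam=0.5):
            # Combined objective: complete-organ segmentation loss plus a
            # weighted boundary prediction loss. `lam` is a hypothetical
            # hyperparameter, not a value from the paper; bnd_gt must be a
            # float tensor for the BCE term.
            seg = soft_dice_loss(torch.sigmoid(seg_logits), seg_gt)
            bnd = F.binary_cross_entropy_with_logits(bnd_logits, bnd_gt)
            return seg + lam * bnd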

    Signal Processing and Restoration


    Fast widefield techniques for fluorescence and phase endomicroscopy

    Thesis (Ph.D.)--Boston University. Endomicroscopy is a recent development in biomedical optics which gives researchers and physicians microscope-resolution views of intact tissue to complement macroscopic visualization during endoscopy screening. This thesis presents HiLo endomicroscopy and oblique back-illumination endomicroscopy, fast widefield imaging techniques with fluorescence and phase contrast, respectively. Fluorescence imaging in thick tissue is often hampered by strong out-of-focus background signal. Laser scanning confocal endomicroscopy has been developed for optically-sectioned imaging free from background, but its reliance on mechanical scanning fundamentally limits the frame rate and adds significant complexity and expense. HiLo is a fast, simple, widefield fluorescence imaging technique which rejects out-of-focus background signal without the need for scanning. It works by acquiring two images of the sample under uniform and structured illumination and synthesizing an optically sectioned result with real-time image processing. Oblique back-illumination microscopy (OBM) is a label-free technique which allows, for the first time, phase gradient imaging of sub-surface morphology in thick scattering tissue with a reflection geometry. OBM works by back-illuminating the sample with the oblique diffuse reflectance from light delivered via off-axis optical fibers. The use of two diametrically opposed illumination fibers allows simultaneous and independent measurement of phase gradients and absorption contrast. Video-rate single-exposure operation using wavelength multiplexing is demonstrated.
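
    The HiLo synthesis step mentioned above lends itself to a compact sketch. The NumPy fragment below is a deliberately simplified illustration of the two-image fusion: in-focus low frequencies are estimated from the difference between the uniform and structured exposures (only in-focus regions retain the illumination pattern), while high frequencies come from the uniform image directly. Real HiLo processing involves a more careful demodulation and band-matching step; sigma and eta here are illustrative tuning parameters, not values from the thesis.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hilo(uniform, structured, sigma=2.0, eta=1.0):
            # Lo: low-frequency in-focus content. Out-of-focus light is nearly
            # identical in both exposures, so the local amplitude of their
            # difference acts as a crude optical-sectioning weight.
            lo = gaussian_filter(np.abs(uniform - structured), sigma)
            # Hi: high-frequency content of the uniform image, which is
            # inherently optically sectioned.
            hi = uniform - gaussian_filter(uniform, sigma)
            # eta balances the two bands so they fuse into one image.
            return eta * lo + hi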

    Geometry-Aware Latent Representation Learning for Modeling Disease Progression of Barrett's Esophagus

    Barrett's Esophagus (BE) is the only known precursor to Esophageal Adenocarcinoma (EAC), a type of esophageal cancer with poor prognosis upon diagnosis. Diagnosing BE is therefore crucial in preventing and treating esophageal cancer. While supervised machine learning supports BE diagnosis, high interobserver variability in the histopathological training data limits these methods. Unsupervised representation learning via Variational Autoencoders (VAEs) shows promise, as VAEs map input data to a lower-dimensional manifold that retains only useful features, characterizing BE progression for improved downstream tasks and insights. However, the VAE's Euclidean latent space distorts relationships between points, hindering disease progression modeling. Geometric VAEs provide additional geometric structure in the latent space: RHVAE assumes a Riemannian manifold and S-VAE a hyperspherical manifold. Our study shows that the S-VAE outperforms the vanilla VAE, with better reconstruction losses, higher representation classification accuracies, and higher-quality generated images and interpolations in lower-dimensional settings. By disentangling rotation information from the latent space, we improve results further using a group-based architecture. Additionally, we take initial steps towards S-AE, a novel autoencoder model that generates high-quality images without a variational framework while retaining benefits of autoencoders such as stability and reconstruction quality.
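
    The hyperspherical latent space discussed above can be sketched without the variational machinery, in the spirit of the S-AE the authors mention. The PyTorch fragment below is a minimal illustration under our own assumptions (layer sizes and names are hypothetical): the encoder output is projected onto the unit sphere, and interpolation follows the sphere's geodesics rather than straight lines.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class SphericalAE(nn.Module):
            # Autoencoder with a hyperspherical latent space: the code is
            # L2-normalized so every latent point lies on the unit sphere.
            def __init__(self, in_dim=784, latent_dim=16):
                super().__init__()
                self.enc = nn.Sequential(
                    nn.Linear(in_dim, 256), nn.ReLU(),
                    nn.Linear(256, latent_dim))
                self.dec = nn.Sequential(
                    nn.Linear(latent_dim, 256), nn.ReLU(),
                    nn.Linear(256, in_dim))

            def forward(self, x):
                z = F.normalize(self.enc(x), dim=-1)  # project onto ||z|| = 1
                return self.dec(z), z

        def slerp(z0, z1, t):
            # Geodesic (spherical linear) interpolation between two latent
            # codes; linear interpolation would leave the manifold.
            cos = (z0 * z1).sum(-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7)
            omega = torch.acos(cos)
            return (torch.sin((1 - t) * omega) * z0
                    + torch.sin(t * omega) * z1) / torch.sin(omega)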