170 research outputs found

    Multiclass Bone Segmentation of PET/CT Scans for Automatic SUV Extraction

    In this thesis I present an automated framework for the segmentation of bone structures from dual-modality PET/CT scans and the subsequent extraction of SUV measurements. The first stage of this framework is a variant of the 3D U-Net architecture that segments three bone structures: vertebral body, pelvis, and sternum. The dataset for this model consists of annotated slices from CT scans retrieved from a study of post-HSCT patients imaged with the 18F-FLT radiotracer; these volumes are under-sampled because of the low radiation dose used during scanning. The mean Dice scores obtained by the proposed model are 0.9162, 0.9163, and 0.8721 for the vertebral body, pelvis, and sternum classes, respectively. The next step of the proposed framework identifies the individual vertebrae, a particularly difficult task due to the low resolution of the CT scans in the axial dimension. To address this issue, I present an iterative algorithm for instance segmentation of vertebral bodies, based on anatomical priors of the spine for detecting the starting point of each vertebra. The spatial information contained in the CT and PET scans is used to translate the resulting masks into the PET image space and extract SUV measurements. I then present a CNN model based on the DenseNet architecture that, for the first time, classifies the spatial distribution of SUV within the marrow cavities of the vertebral bodies as normal engraftment or possible relapse. With an AUC of 0.931 and an accuracy of 92% obtained on real patient data, this method shows good potential as a future automated tool to assist in monitoring the recovery process of HSCT patients.
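
    The per-class Dice scores quoted above measure volume overlap between a predicted label map and the ground truth. A minimal sketch of the metric on toy integer label maps (not the thesis data):

```python
import numpy as np

def dice_score(pred, target, cls):
    """Dice coefficient for one class label in two integer label maps."""
    p = (pred == cls)
    t = (target == cls)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # class absent in both maps: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom

# Toy 2-D label maps: 0 = background, 1 = vertebral body, 2 = pelvis
pred   = np.array([[1, 1, 0], [2, 2, 0]])
target = np.array([[1, 0, 0], [2, 2, 2]])
print(dice_score(pred, target, 1))  # 2*1/(2+1) ≈ 0.667
print(dice_score(pred, target, 2))  # 2*2/(2+3) = 0.8
```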

    Automatic Semantic Segmentation of the Lumbar Spine: Clinical Applicability in a Multi-parametric and Multi-centre Study on Magnetic Resonance Images

    One of the major difficulties in medical image segmentation is the high variability of the images, caused by their origin (multi-centre), the acquisition protocols (multi-parametric), and the variability of human anatomy, the severity of the illness, and the effects of age and gender, among other factors. The problem addressed in this work is the automatic semantic segmentation of lumbar spine Magnetic Resonance images using convolutional neural networks. The purpose is to assign a class label to each pixel of an image. Classes were defined by radiologists and correspond to structural elements such as vertebrae, intervertebral discs, nerves, blood vessels, and other tissues. The proposed network topologies are variants of the U-Net architecture, defined by several complementary blocks: three types of convolutional blocks, spatial attention models, deep supervision, and a multilevel feature extractor. This document describes the topologies and analyses the results of the neural network designs that obtained the most accurate segmentations. Several of the proposed designs outperform the standard U-Net used as a baseline, especially when used in ensembles, where the outputs of multiple neural networks are combined according to different strategies.
    Comment: 19 pages, 9 Figures, 8 Tables; Supplementary Material: 6 pages, 8 Tables
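
    The ensemble strategies mentioned above combine the per-pixel outputs of several networks. Two common schemes, sketched on hypothetical toy probabilities (the paper's own combination rules may differ):

```python
import numpy as np

# Hypothetical per-pixel class probabilities from three U-Net variants
# (shape: models x pixels x classes); real ensembles act on full images.
probs = np.array([
    [[0.70, 0.30], [0.40, 0.60]],
    [[0.60, 0.40], [0.45, 0.55]],
    [[0.30, 0.70], [0.10, 0.90]],
])

# Strategy 1: average the softmax outputs, then take the arg-max.
avg_labels = probs.mean(axis=0).argmax(axis=-1)

# Strategy 2: majority vote over each model's hard predictions.
votes = probs.argmax(axis=-1)                      # models x pixels
vote_labels = np.apply_along_axis(
    lambda v: np.bincount(v, minlength=2).argmax(), 0, votes)

print(avg_labels)   # [0 1]
print(vote_labels)  # [0 1]
```

The two strategies can disagree: averaging keeps each model's confidence, while voting weights every model equally regardless of how certain it is.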

    Light-Convolution Dense Selection U-Net (LDS U-Net) for Ultrasound Lateral Bony Feature Segmentation

    Scoliosis is a widespread medical condition in which the spine becomes severely deformed and bends over time. It mostly affects young adults and may have a permanent impact on them. Periodic assessment, using a suitable modality, is necessary for its early detection. The conventionally employed modalities are X-ray, which uses ionising radiation, and MRI, which is expensive. Hence, a non-radiating 3D ultrasound imaging technique has been developed as a safe and economical alternative. However, ultrasound produces low-contrast images that are full of speckle noise, and skilled intervention is necessary for their processing. Given the prevalence of scoliosis and the limited scalability of human expert intervention, an automatic, fast, low-computation assessment technique is being developed for mass scoliosis diagnosis. In this paper, a novel hybridized lightweight convolutional neural network architecture is presented for automatic lateral bony feature identification, which can help to develop a fully fledged automatic scoliosis detection system. The proposed architecture, Light-Convolution Dense Selection U-Net (LDS U-Net), can accurately segment ultrasound spine lateral bony features from noisy images thanks to its ability to select only the useful information and to extract rich deep-layer features from the input image. The proposed model is tested on a dataset of 109 spine ultrasound images. The segmentation result of the proposed network is compared with basic U-Net, Attention U-Net, and MultiResUNet using various popular segmentation indices. The results show that LDS U-Net provides better segmentation performance than the other models. Additionally, LDS U-Net requires fewer parameters and less memory, making it suitable for large-batch scoliosis screening without a high computational requirement.
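
    The abstract does not specify the "dense selection" mechanism; the following is only a generic additive attention-gate sketch in the spirit of Attention U-Net, with hypothetical shapes and random weights, to illustrate how a skip connection can be gated so that only useful responses pass through:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def selection_gate(skip, gating, w_s, w_g, w_out):
    """Toy additive attention gate: produce a per-position mask in (0, 1)
    from the skip features and a coarser gating signal, then use it to
    suppress uninformative skip-connection responses."""
    scores = sigmoid((skip @ w_s + gating @ w_g) @ w_out)  # shape: positions x 1
    return skip * scores                                   # broadcast over channels

rng = np.random.default_rng(0)
skip   = rng.normal(size=(4, 8))   # 4 positions, 8 channels from the encoder
gating = rng.normal(size=(4, 8))   # coarser decoder features, already upsampled
w_s, w_g = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
w_out = rng.normal(size=(8, 1))
gated = selection_gate(skip, gating, w_s, w_g, w_out)
print(gated.shape)  # (4, 8)
```

Because the mask values lie strictly between 0 and 1, the gated features are never larger in magnitude than the raw skip features; the network learns to shrink irrelevant ones toward zero.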

    Deep Reinforcement Learning in Medical Object Detection and Segmentation

    Medical object detection and segmentation are crucial pre-processing steps in the clinical workflow for diagnosis and therapy planning. Although deep learning methods have achieved considerable performance in this field, they suffer from several shortcomings, such as computational limitations, sub-optimal parameter optimization, and weak generalization. Deep reinforcement learning, one of the newest classes of artificial intelligence algorithms, has great potential to address these limitations of traditional deep learning methods while obtaining accurate detection and segmentation results. Deep reinforcement learning follows a cognition-like process to propose the region of a desired object, thereby facilitating accurate object detection and segmentation. In this thesis, we deploy deep reinforcement learning in two challenging and representative medical object detection and segmentation tasks: 1) Sequential-Conditional Reinforcement Learning (SCRL) for vertebral body detection and segmentation, which models the spine anatomy with deep reinforcement learning; 2) a Weakly-Supervised Teacher-Student network (WSTS) for liver tumor segmentation from non-enhanced images, which transfers tumor knowledge from enhanced images with deep reinforcement learning. Experiments indicate that our methods are effective and outperform state-of-the-art deep learning methods. This thesis therefore improves object detection and segmentation accuracy and offers researchers a novel approach to medical image analysis based on deep reinforcement learning.
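
    A reinforcement-learning formulation of object localization typically treats the detector as an agent that moves a window and is rewarded for increasing its overlap with the target. A toy 1-D sketch with hypothetical numbers and a greedy one-step-lookahead policy (not the SCRL/WSTS methods, which learn their policies from images):

```python
# Toy 1-D "localization as sequential decisions": an agent shifts a
# fixed-width window along an axis; the reward is the change in overlap
# (IoU) with the hidden target interval.
TARGET = (40, 60)          # ground-truth object extent
ACTIONS = {0: +5, 1: -5}   # move right / move left

def iou(a, b):
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union

def step(window, action):
    moved = (window[0] + ACTIONS[action], window[1] + ACTIONS[action])
    reward = iou(moved, TARGET) - iou(window, TARGET)  # positive if closer
    return moved, reward

window = (10, 30)
for _ in range(10):
    # Greedy policy: pick the action with the best one-step reward.
    action = max(ACTIONS, key=lambda a: step(window, a)[1])
    nxt, r = step(window, action)
    if r < 0:
        break          # every action reduces overlap: stop
    window = nxt
print(window, iou(window, TARGET))  # (40, 60) 1.0
```

A learned agent replaces the one-step lookahead with a value or policy network, so it can act from image observations where the reward cannot be probed directly.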

    Automatic Bone Structure Segmentation of Under-Sampled CT/FLT-PET Volumes for HSCT Patients

    In this thesis I present a pipeline for the instance segmentation of vertebral bodies from joint CT/FLT-PET image volumes that have been purposefully under-sampled along the axial direction to limit radiation exposure to vulnerable HSCT patients. The under-sampled image data makes the segmentation of individual vertebral bodies a challenging task, as the boundaries between the vertebrae in the thoracic and cervical spine regions are not well resolved in the CT modality, escaping detection by both humans and algorithms. I train a multi-view, multi-class U-Net to perform semantic segmentation of the vertebral body, sternum, and pelvis object classes. These bone structures contain marrow cavities that, when viewed in the FLT-PET modality, allow us to investigate hematopoietic cellular proliferation in HSCT patients non-invasively. The proposed convnet model achieves a Dice score of 0.9245 for the vertebral body object class and shows qualitatively similar performance on the pelvis and sternum object classes. The final instance segmentation is realized by combining the initial vertebral body semantic segmentation with the associated FLT-PET image data, where the vertebral boundaries become well-resolved by the 28th day post-transplant. The vertebral boundary detection algorithm is a hand-crafted spatial filter that enforces vertebra span as an anatomical prior, and it performs similarly to a human for the detection of all but one vertebral boundary in the entirety of the HSCT patient dataset. In addition to the segmentation model, I propose, design, and test a “drop-in” replacement up-sampling module that allows state-of-the-art super-resolution convnets to be used for purely asymmetric upscaling tasks (tasks where only one image dimension is scaled while the other is held to unity).
    While the asymmetric SR convnet I develop falls short of its initial goal of enhancing the unresolved vertebral boundaries of the under-sampled CT image data, it objectively upscales medical image data more accurately than naïve interpolation methods and may be useful as a pre-processing step for other medical imaging tasks involving anisotropic pixels or voxels.
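
    The naïve interpolation baseline that the SR convnet is compared against can be sketched as purely asymmetric upscaling: interpolate only along the under-sampled (axial) dimension while the in-plane dimension stays at scale 1. Toy array, assumed linear interpolation:

```python
import numpy as np

def upscale_axial(img, factor):
    """Linearly interpolate along axis 0 only; axis 1 is held to unity."""
    rows, cols = img.shape
    new_rows = (rows - 1) * factor + 1          # endpoints preserved
    old = np.arange(rows)
    new = np.linspace(0, rows - 1, new_rows)
    out = np.empty((new_rows, cols))
    for c in range(cols):                       # 1-D interpolation per column
        out[:, c] = np.interp(new, old, img[:, c])
    return out

img = np.array([[0.0, 10.0],
                [4.0,  2.0]])
print(upscale_axial(img, 2))
# Rows doubled along axis 0 only; columns untouched:
# [[ 0. 10.]
#  [ 2.  6.]
#  [ 4.  2.]]
```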

    Medical Image Segmentation with Deep Convolutional Neural Networks

    Medical imaging is the technique and process of creating visual representations of the body of a patient for clinical analysis and medical intervention. Healthcare professionals rely heavily on medical images and image documentation for proper diagnosis and treatment. However, manual interpretation and analysis of medical images is time-consuming and can be inaccurate when the interpreter is not well trained. Fully automatic segmentation of regions of interest from medical images has been researched for years to improve the efficiency and accuracy of interpreting such images. With the advance of deep learning, various neural network models have achieved great success in semantic segmentation and have sparked research interest in medical image segmentation using deep learning. We propose three convolutional frameworks to segment tissues from different types of medical images. Comprehensive experiments and analyses are conducted on various segmentation neural networks to demonstrate the effectiveness of our methods. Furthermore, the datasets built for training our networks, together with full implementations, are published.