    A Novel Deep Learning Framework for Internal Gross Target Volume Definition from 4D Computed Tomography of Lung Cancer Patients

    In this paper, we study the reliability of a novel deep learning framework for internal gross target volume (IGTV) delineation from four-dimensional computed tomography (4DCT), applied to patients with lung cancer treated by Stereotactic Body Radiation Therapy (SBRT). 77 patients who underwent SBRT followed by 4DCT scans were included in a retrospective study. The IGTV_DL was delineated using a novel deep learning algorithm with a linear exhaustive optimal combination framework; for comparison, three other IGTVs based on commonly used methods were also delineated. We compared the relative volume difference (RVI), matching index (MI) and encompassment index (EI) for the above IGTVs. Multiple-parameter regression analysis was then used to assess tumor volume and motion range as clinical factors influencing the variation in MI. Experimental results demonstrated that the deep learning algorithm with the linear exhaustive optimal combination framework has a higher probability of achieving the optimal MI than other currently widely used methods. For patients who, after simple breathing training, kept their respiratory frequency at 10 breaths per minute, the four-phase combination of 0%, 30%, 50% and 90% can be considered a potential candidate for an optimal combination to synthesize the IGTV across all respiration amplitudes.
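
    As a rough illustration of the exhaustive phase-combination idea described above, the sketch below unions binary GTV masks for candidate phase subsets and scores each union against a reference IGTV with a Dice-style overlap. The mask shapes, the four-phase subset size and the overlap definition are illustrative assumptions, not the authors' implementation.

    ```python
    import itertools
    import numpy as np

    def matching_index(candidate: np.ndarray, reference: np.ndarray) -> float:
        """Dice-style overlap between a candidate IGTV and a reference IGTV
        (a stand-in for the paper's matching index; the exact definition may differ)."""
        inter = np.logical_and(candidate, reference).sum()
        denom = candidate.sum() + reference.sum()
        return 2.0 * inter / denom if denom > 0 else 0.0

    def best_phase_combination(phase_gtvs: dict, reference_igtv: np.ndarray, n_phases: int = 4):
        """Exhaustively test all n-phase combinations and return the one whose
        union best matches the reference IGTV (e.g., the union of all 10 phases)."""
        best_combo, best_mi = None, -1.0
        for combo in itertools.combinations(sorted(phase_gtvs), n_phases):
            union = np.zeros_like(reference_igtv, dtype=bool)
            for phase in combo:
                union |= phase_gtvs[phase].astype(bool)
            mi = matching_index(union, reference_igtv)
            if mi > best_mi:
                best_combo, best_mi = combo, mi
        return best_combo, best_mi

    # Toy usage: ten 4DCT phases (0%, 10%, ..., 90%) as random binary masks.
    rng = np.random.default_rng(0)
    phases = {f"{p}%": rng.random((32, 32, 32)) > 0.7 for p in range(0, 100, 10)}
    reference = np.any(np.stack(list(phases.values())), axis=0)
    combo, mi = best_phase_combination(phases, reference)
    print(combo, round(mi, 3))
    ```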

    Statistical Shape Modelling and Segmentation of the Respiratory Airway

    The human respiratory airway consists of the upper (nasal cavity, pharynx) and the lower (trachea, bronchi) respiratory tracts. Accurate segmentation of these two airway tracts can lead to better diagnosis and interpretation of airway-specific diseases, and to improved localization of abnormal metabolic or pathological sites found within and/or surrounding the respiratory regions. Due to the complexity and variability of the anatomical structure of the upper respiratory airway, along with the difficulty of distinguishing the nasal cavity from non-respiratory regions such as the paranasal sinuses, existing algorithms struggle to accurately segment the upper airway without manual intervention. This thesis presents an implicit non-parametric framework for constructing a statistical shape model (SSM) of the upper and lower respiratory tract, capable of distinct shape generation and adaptable for segmentation. An SSM of the nasal cavity was successfully constructed using 50 nasal CT scans. The performance of the SSM was evaluated for compactness, specificity and generality; an average distance error of 1.47 mm was measured for the generality assessment. The constructed SSM was further combined with a modified locally constrained random walk algorithm to segment the nasal cavity. The proposed algorithm was evaluated on 30 CT images and outperformed comparative state-of-the-art and conventional algorithms. For the lower airway, a separate algorithm was proposed to automatically segment the trachea and bronchi, designed to tolerate the image characteristics inherent in low-contrast CT. The algorithm was evaluated on 20 clinical low-contrast CT volumes from PET-CT patient studies and demonstrated better segmentation performance (87.1±2.8 DSC and a distance error of 0.37±0.08 mm) than comparative state-of-the-art algorithms.
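
    The thesis uses an implicit non-parametric SSM, so the sketch below is only a conventional stand-in: it builds a linear (PCA) shape model from pre-aligned landmark vectors and computes the compactness and leave-one-out generality measures mentioned above. The landmark representation, the PCA formulation and the synthetic data are assumptions for illustration, not the thesis framework.

    ```python
    import numpy as np

    def build_pca_ssm(shapes: np.ndarray):
        """shapes: (n_shapes, n_points*3) pre-aligned landmark vectors.
        Returns the mean shape, principal modes and per-mode variances."""
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # SVD of the centered data matrix gives the modes of variation.
        u, s, vt = np.linalg.svd(centered, full_matrices=False)
        variances = (s ** 2) / (shapes.shape[0] - 1)
        return mean, vt, variances

    def compactness(variances: np.ndarray, n_modes: int) -> float:
        """Fraction of total shape variance captured by the first n modes."""
        return variances[:n_modes].sum() / variances.sum()

    def generality_error(shapes: np.ndarray, n_modes: int) -> float:
        """Leave-one-out reconstruction error (mean point distance), an analogue
        of the average distance error reported for the generality assessment."""
        errors = []
        for i in range(shapes.shape[0]):
            train = np.delete(shapes, i, axis=0)
            mean, modes, _ = build_pca_ssm(train)
            coeffs = (shapes[i] - mean) @ modes[:n_modes].T
            recon = mean + coeffs @ modes[:n_modes]
            diff = (shapes[i] - recon).reshape(-1, 3)
            errors.append(np.linalg.norm(diff, axis=1).mean())
        return float(np.mean(errors))

    # Toy usage: 50 synthetic shapes of 100 3D landmarks each.
    rng = np.random.default_rng(1)
    shapes = rng.normal(size=(50, 100 * 3))
    _, _, var = build_pca_ssm(shapes)
    print(compactness(var, 10), generality_error(shapes, 10))
    ```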

    Multi-Contrast Computed Tomography Atlas of Healthy Pancreas

    With the substantial diversity in population demographics, such as differences in age and body composition, the volumetric morphology of the pancreas varies greatly, resulting in distinctive variations in shape and appearance. Such variations increase the difficulty of generalizing population-wide pancreas features. A volumetric spatial reference is needed to accommodate this morphological variability for organ-specific analysis. Here, we propose a high-resolution computed tomography (CT) atlas framework specifically optimized for the pancreas across multi-contrast CT. We introduce a deep learning-based pre-processing technique to extract the abdominal regions of interest (ROIs) and leverage a hierarchical registration pipeline to align the pancreas anatomy across populations. Briefly, DEEDS affine and non-rigid registration are performed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas template, multi-contrast CT scans of 443 subjects (without reported history of pancreatic disease, age: 15-50 years old) are processed. Compared with other state-of-the-art registration tools, the combination of DEEDS affine and non-rigid registration achieves the best performance for pancreas label transfer across all contrast phases. We further perform an external evaluation on another research cohort of 100 de-identified portal venous scans with 13 labeled organs, achieving the best label transfer performance with a Dice score of 0.504 in an unsupervised setting. The qualitative representation (e.g., average mapping) of each phase shows a clear boundary of the pancreas and its distinctive contrast appearance. The deformation surface renderings across scales (e.g., small to large volume) further illustrate the generalizability of the proposed atlas template.
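
    The registration itself relies on the DEEDS tool and is not reproduced here; the sketch below only illustrates how the label-transfer step could be scored with a per-organ Dice once subject labels have been propagated to atlas space. The array shapes, the synthetic label maps and the helper names are illustrative assumptions.

    ```python
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice overlap between two binary masks."""
        inter = np.logical_and(a, b).sum()
        denom = a.sum() + b.sum()
        return 2.0 * inter / denom if denom > 0 else 0.0

    def label_transfer_dice(warped_labels: np.ndarray, reference_labels: np.ndarray) -> dict:
        """Per-organ Dice between labels propagated through registration and the
        reference segmentation in atlas space (label 0 = background)."""
        organs = sorted(set(np.unique(reference_labels)) - {0})
        return {int(o): dice(warped_labels == o, reference_labels == o) for o in organs}

    # Toy usage with synthetic label maps (in practice these would be NIfTI volumes,
    # e.g. loaded with nibabel, taken after the affine + non-rigid registration stage).
    rng = np.random.default_rng(2)
    atlas = rng.integers(0, 14, size=(64, 64, 64))                # 13 organs + background
    warped = np.where(rng.random((64, 64, 64)) < 0.9, atlas, 0)   # imperfect transfer
    scores = label_transfer_dice(warped, atlas)
    print("mean Dice:", round(float(np.mean(list(scores.values()))), 3))
    ```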

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks, and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Reproducibility Study of Tumor Biomarkers Extracted from Positron Emission Tomography Images with 18F-Fluorodeoxyglucose

    Introduction and aim: Cancer is one of the main causes of death worldwide. Tumor diagnosis, staging, surveillance, prognosis and assessment of the response to therapy are critical for planning and analyzing optimal cancer treatment strategies. 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography (PET) imaging has provided reliable prognostic factors in several cancer types, by extracting quantitative measures from the images obtained in clinics. The recent addition of digital equipment to the clinical armamentarium of PET raises concerns about inter-device data variability. Consequently, assessing the reproducibility of the tumor features commonly used in clinics and research, extracted from images acquired on an analog and a new digital PET scanner, is of paramount importance for the use of multi-scanner studies in longitudinal patient evaluation. The aim of this study was to evaluate the inter-equipment reliability of a set of 25 lesional features commonly used in clinics and research.

    Material and methods: To assess feature agreement, a dual imaging protocol was designed. Whole-body 18F-FDG PET images from 53 oncological patients were acquired, after a single 18F-FDG injection, on two devices alternately: a Philips Vereos Digital PET/CT (VEREOS, with three different reconstruction protocols; digital) and a Philips GEMINI TF-16 (GEMINI, with a single standard reconstruction protocol; analog). A nuclear medicine physician identified 283 18F-FDG avid lesions. All lesions (from both devices) were then automatically segmented using a Bayesian classifier optimized for this study. In total, 25 features (first-order statistics and geometric features) were computed and compared. The intraclass correlation coefficient (ICC) was used as the measure of agreement.

    Results: High agreement (ICC > 0.75) was obtained for most of the lesion features extracted from both devices' imaging data, for all (GEMINI vs VEREOS) reconstructions. The most frequently used lesion features, maximum standardized uptake value, metabolic tumor volume, and total lesion glycolysis, reached maximum ICCs of 0.90, 0.98 and 0.97, respectively.

    Conclusions: Under controlled acquisition and reconstruction parameters, most of the features studied can be used for research and clinical work whenever multi-scanner (e.g. VEREOS and GEMINI) studies are involved, particularly during longitudinal patient evaluation.
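
    The agreement analysis above relies on the ICC. As a hedged illustration, the sketch below implements the Shrout and Fleiss ICC(2,1) (two-way random effects, absolute agreement, single measurement), a common choice for inter-device agreement; whether this is the exact variant used in the study, and the toy two-device data, are assumptions.

    ```python
    import numpy as np

    def icc_2_1(ratings: np.ndarray) -> float:
        """ICC(2,1): two-way random effects, absolute agreement, single measurement
        (Shrout & Fleiss). ratings has shape (n_targets, k_raters), e.g. one row per
        lesion with the feature value measured on each device/reconstruction."""
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)
        col_means = ratings.mean(axis=0)

        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_total = ((ratings - grand) ** 2).sum()
        ss_err = ss_total - ss_rows - ss_cols

        msr = ss_rows / (n - 1)              # between-targets mean square
        msc = ss_cols / (k - 1)              # between-raters (devices) mean square
        mse = ss_err / ((n - 1) * (k - 1))   # residual mean square

        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Toy usage: 283 lesions measured by two devices with a small device offset.
    rng = np.random.default_rng(3)
    gemini = rng.normal(10.0, 3.0, size=283)
    vereos = gemini + rng.normal(0.1, 0.5, size=283)
    print(round(icc_2_1(np.column_stack([gemini, vereos])), 3))
    ```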

    Segmentation of striatal brain structures from high resolution PET images

    Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical Engineering and Computers.

    We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (mid-sagittal plane) and, finally, extracting the right and left striata from both hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the "voxel affinity matrix") and the clustering of that graph. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method about the relationships between image voxels. The features defining the similarity of voxel pairs were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods, a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size. On the other hand, the weighted kernel k-means iteratively classifies, with the aid of the image features, a given data set into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters of the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared merged in the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
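
    To make the affinity construction concrete, the toy sketch below builds a dense pixel affinity matrix from intensity similarity, spatial proximity and a connectivity radius on a small 2D patch, and partitions it with scikit-learn's spectral clustering, a normalized-cuts-style method. The kernel bandwidths, the 2D patch and the use of SpectralClustering rather than the dissertation's weighted kernel k-means are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def voxel_affinity(image: np.ndarray, sigma_i: float = 0.1, sigma_x: float = 2.0,
                       radius: float = 3.0) -> np.ndarray:
        """Dense affinity matrix over a small 2D patch: the weight between two pixels
        combines intensity similarity and spatial proximity, and is zeroed beyond a
        connectivity radius (a toy analogue of the 'voxel affinity matrix')."""
        coords = np.argwhere(np.ones_like(image, dtype=bool)).astype(float)
        intens = image.ravel().astype(float)
        d_int = (intens[:, None] - intens[None, :]) ** 2
        d_pos = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d_int / sigma_i ** 2) * np.exp(-d_pos / sigma_x ** 2)
        w[np.sqrt(d_pos) > radius] = 0.0  # enforce spatial connectivity
        return w

    # Toy usage: a 16x16 patch with two intensity regions, partitioned into 2 clusters.
    rng = np.random.default_rng(4)
    patch = np.zeros((16, 16))
    patch[:, 8:] = 1.0
    patch += rng.normal(0, 0.05, patch.shape)
    labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                                random_state=0).fit_predict(voxel_affinity(patch))
    print(labels.reshape(patch.shape))
    ```

    A dense affinity matrix is only feasible for tiny patches like this one; the scaling problem it illustrates is exactly why the dissertation favors weighted kernel k-means over eigenvalue-based normalized cuts for full high resolution PET volumes.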