
    False-Positive Malignant Diagnosis of Nodule Mimicking Lesions by Computer-Aided Thyroid Nodule Analysis in Clinical Ultrasonography Practice

    This study aims to test computer-aided diagnosis (CAD) for thyroid nodules in clinical ultrasonography (US) practice, with a focus on identifying thyroid entities associated with CAD system misdiagnoses. Two hundred patients referred for thyroid US were prospectively enrolled. An experienced radiologist evaluated the thyroid nodules and saved axial images for further offline blinded analysis using a commercially available CAD system. To represent clinical practice, not only true nodules but also mimicking lesions were included. Fine needle aspiration biopsy (FNAB) was performed according to current guidelines. US features and thyroid entities significantly associated with CAD system misdiagnosis were identified, along with the diagnostic accuracy of the radiologist and the CAD system. The diagnostic specificity of the radiologist was significantly (p < 0.05) higher than that of the CAD system (88.1% vs. 40.5%), while no significant difference was found in sensitivity (88.6% vs. 80%). Focal inhomogeneities and true nodules in thyroiditis, nodules with coarse calcification, and inspissated colloid cystic nodules were significantly (p < 0.05) associated with false-positive CAD system misdiagnosis. The commercially available CAD system is promising for excluding thyroid malignancies; however, it currently may not be able to reduce unnecessary FNABs, mainly due to false-positive diagnoses of nodule mimicking lesions.
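    The sensitivity and specificity figures above reduce to simple ratios over confusion-matrix counts. As a minimal sketch (the counts below are hypothetical, chosen only to reproduce the reported radiologist values; the study's actual 2x2 tables are not given here):

```python
def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts that reproduce the reported radiologist values:
# 31 of 35 malignant nodules called malignant, 37 of 42 benign called benign.
print(round(sensitivity(31, 4), 3))   # 0.886
print(round(specificity(37, 5), 3))   # 0.881
```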

    Diagnosis of Thyroid Nodules: Performance of a Deep Learning Convolutional Neural Network Model vs. Radiologists

    Computer-aided diagnosis (CAD) systems hold potential to improve the diagnostic accuracy of thyroid ultrasound (US). We aimed to develop a deep learning-based US CAD system (dCAD) for the diagnosis of thyroid nodules and to compare its performance with those of a support vector machine (SVM)-based US CAD system (sCAD) and radiologists. dCAD was developed using US images of 4919 thyroid nodules from three institutions. Its diagnostic performance was prospectively evaluated between June 2016 and February 2017 in 286 nodules and compared with those of sCAD and radiologists, using logistic regression with the generalized estimating equation. Subgroup analyses were performed according to experience level and separately for small thyroid nodules (1-2 cm). There was no difference between radiologists and dCAD in overall sensitivity, specificity, positive predictive value (PPV), negative predictive value, or accuracy (all p > 0.05). Radiologists and dCAD showed higher specificity, PPV, and accuracy than sCAD (all p < 0.001). In small nodules, experienced radiologists showed higher specificity, PPV, and accuracy than sCAD (all p < 0.05). In conclusion, dCAD showed diagnostic performance comparable to that of radiologists overall and assessed thyroid nodules more effectively than sCAD, without loss of sensitivity.
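    Paired comparisons of two readers on the same nodules are often illustrated with McNemar's test on the discordant pairs. The study itself used logistic regression with the generalized estimating equation, so the following is only a simplified, hypothetical sketch of the paired-comparison idea:

```python
import math

def mcnemar(b, c):
    """McNemar's test with continuity correction for two readers on the
    same cases: b = cases only reader A classified correctly,
    c = cases only reader B classified correctly.
    Returns (chi-square statistic with 1 df, two-sided p-value)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))  # chi-square (1 df) survival function
    return stat, p

# Hypothetical discordant counts: 25 vs. 5.
stat, p = mcnemar(25, 5)
print(p < 0.05)  # True: the two readers differ significantly
```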

    Differentiation of the Follicular Neoplasm on the Gray-Scale US by Image Selection Subsampling along with the Marginal Outline Using Convolutional Neural Network

    We differentiated thyroid follicular adenoma from carcinoma in 8-bit bitmap ultrasonography (US) images using a deep-learning approach. For the data sets, we gathered small boxed images selected adjacent to the marginal outline of nodules and applied a convolutional neural network (CNN), making the final differentiation by statistical aggregation, that is, a decision by majority. Using a newly devised, scalable, parameterized normalization treatment, we collected evidence that discriminative features are retained on the margin of thyroid nodules: the overall differentiation accuracy on the test data was 89.51%, with 93.19% accuracy for benign adenoma and 71.05% for carcinoma, on 230 benign adenoma and 77 carcinoma US images. Only 39 benign adenomas and 39 carcinomas were used to train the CNN model; with these extremely small training sets, we tested 191 benign adenomas and 38 carcinomas. We present numerical results including the area under the receiver operating characteristic curve (AUROC).
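    The patch-level decision by majority described above can be sketched as follows (the patch predictions and class labels are illustrative; the paper's CNN and margin-sampling scheme are not reproduced here):

```python
from collections import Counter

def majority_vote(patch_predictions):
    """Aggregate per-patch CNN class predictions into a single
    nodule-level decision by simple majority; ties resolve to the
    class predicted first."""
    return Counter(patch_predictions).most_common(1)[0][0]

patches = ["adenoma", "carcinoma", "adenoma", "adenoma", "carcinoma"]
print(majority_vote(patches))  # adenoma
```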

    Automatic Detection of Thyroid Nodule Characteristics From 2D Ultrasound Images

    Thyroid cancer is one of the most common types of cancer worldwide, and ultrasound (US) imaging is the modality normally used for thyroid cancer diagnostics. The American College of Radiology Thyroid Imaging Reporting and Data System (ACR TIRADS) has been widely adopted to identify and classify US image characteristics for thyroid nodules. This paper presents novel methods for detecting the characteristic descriptors derived from TIRADS. Our methods return descriptions of nodule margin irregularity, margin smoothness, and calcification, as well as shape and echogenicity, using conventional computer vision and deep learning techniques. We evaluate our methods using datasets of 471 US images of thyroid nodules acquired from US machines of different makes and labeled by multiple radiologists. The proposed methods achieved overall accuracies of 88.00%, 93.18%, and 89.13% in classifying nodule calcification, margin irregularity, and margin smoothness, respectively. Further tests with limited data also show a promising overall accuracy of 90.60% for echogenicity and 100.00% for nodule shape. This study provides automated annotation of thyroid nodule characteristics from 2D ultrasound images. The experimental results showed promising performance of our methods for thyroid nodule analysis. The automatic detection of correct characteristics not only offers supporting evidence for diagnosis, but also generates patient reports rapidly, thereby decreasing the workload of radiologists and enhancing productivity.
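    One conventional computer-vision proxy for margin smoothness is circularity computed on the nodule's binary segmentation mask. This is an illustrative stand-in, not the descriptor pipeline the paper actually implements:

```python
import numpy as np

def circularity(mask):
    """4*pi*area / perimeter^2 on a binary mask: near 1 for a smooth
    disk (the pixel-count perimeter is only approximate), lower for
    irregular margins. Boundary pixels are foreground pixels with at
    least one 4-connected background neighbor."""
    mask = mask.astype(bool)
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = (mask & ~interior).sum()
    return 4 * np.pi * mask.sum() / perimeter ** 2

# A disk scores higher than a plus-shaped (irregular) region.
y, x = np.ogrid[-20:21, -20:21]
disk = x ** 2 + y ** 2 <= 400
plus = np.zeros((41, 41), bool)
plus[18:23, :] = True
plus[:, 18:23] = True
print(circularity(disk) > circularity(plus))  # True
```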

    Risk Stratification of Thyroid Nodule: From Ultrasound Features to TIRADS

    Since the 1990s, ultrasound (US) has played a major role in the assessment of thyroid nodules and their risk of malignancy. Over the last decade, the most eminent international societies have published US-based systems for the risk stratification of thyroid lesions, namely, Thyroid Imaging Reporting And Data Systems (TIRADSs). The introduction of TIRADSs into clinical practice has significantly increased the diagnostic power of US, to a level approaching that of fine-needle aspiration cytology (FNAC). At present, we are probably approaching a new era in which US could be the primary tool to diagnose thyroid cancer. However, before US can assume this new dominant role, further evidence is needed. This Special Issue, which includes reviews and original articles, aims to pave the way for the future in the field of thyroid US. Highly experienced thyroidologists with a focus on US have been invited to contribute to achieving this goal.

    Segmentation and classification of lung nodules from thoracic CT scans: methods based on dictionary learning and deep convolutional neural networks.

    Lung cancer is a leading cause of cancer death in the world. Key to patient survival is early diagnosis. Studies have demonstrated that screening high-risk patients with Low-dose Computed Tomography (CT) is invaluable for reducing morbidity and mortality. Computer Aided Diagnosis (CADx) systems can assist radiologists and care providers in reading and analyzing lung CT images to segment, classify, and keep track of nodules for signs of cancer. In this thesis, we propose a CADx system for this purpose. To predict lung nodule malignancy, we propose a new deep learning framework that combines Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) to learn the best in-plane and inter-slice visual features for diagnostic nodule classification. Since a nodule's volumetric growth and shape variation over time may reveal information regarding its malignancy, a dictionary learning based approach is separately proposed to segment the nodule's shape at two time points from two scans taken one year apart. The output of a CNN classifier trained to learn the visual appearance of malignant nodules is then combined with the derived measures of shape change and volumetric growth to assign a probability of malignancy to the nodule. Due to the limited number of CT scans of benign and malignant nodules available in the image database from the National Lung Screening Trial (NLST), we chose to initially train a deep neural network on the larger LUNA16 Challenge database, which was built for the purpose of eliminating false positives from detected nodules in thoracic CT scans. Discriminative features learned in this application were transferred to predict malignancy. The algorithm for segmenting nodule shapes in serial CT scans utilizes a sparse combination of training shapes (SCoTS). This algorithm captures a sparse representation of a shape in input data through a linear span of previously delineated shapes in a training repository.
The model updates the shape prior over level set iterations and captures variability in shapes through a sparse combination of the training data. The level set evolution is therefore driven by a data term as well as a term capturing valid prior shapes. During evolution, the influence of the shape prior is adjusted based on shape reconstruction, with the assigned weight determined by the degree of sparsity of the representation. The discriminative nature of sparse representation affords us the opportunity to compare nodules' variations at consecutive time points and to predict malignancy. The proposed segmentation algorithm has been experimentally validated on 542 3-D lung nodules from the LIDC-IDRI database, which includes radiologist-delineated nodule boundaries. The effectiveness of the proposed deep learning and dictionary learning architectures for malignancy prediction has been demonstrated on CT data from 370 biopsied subjects collected from the NLST database. Each subject in this database had at least two serial CT scans at two separate time points one year apart. The proposed RNN CAD system achieved an ROC Area Under the Curve (AUC) of 0.87 when validated on CT data from nodules at the second sequential time point, and 0.83 with the dictionary learning method; when nodule shape change and appearance were combined, classifier performance improved to AUC = 0.89.
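The sparse combination at the heart of SCoTS, representing a new shape as a sparse linear span of training shapes, can be sketched as an l1-regularized least-squares problem solved with ISTA. The dictionary below is a random toy stand-in, not actual delineated nodule shapes:

```python
import numpy as np

def sparse_code(D, x, lam=0.05, n_iter=500):
    """Approximately solve min_a 0.5*||D a - x||^2 + lam*||a||_1 with
    ISTA. Columns of D are vectorized training shapes; x is the
    vectorized target shape."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # 1 / Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = a - step * (D.T @ (D @ a - x))   # gradient step on the data term
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 8))
D /= np.linalg.norm(D, axis=0)        # unit-norm "training shapes"
x = 2.0 * D[:, 2]                     # target is a scaled training shape
a = sparse_code(D, x)
print(int(np.argmax(np.abs(a))))      # 2: the matching shape dominates
```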

    Automated Strategies in Multimodal and Multidimensional Ultrasound Image-based Diagnosis

    Medical ultrasonography is an effective technique in traditional anatomical and functional diagnosis. However, it requires visual examination by experienced clinicians, which is a laborious, time-consuming, and highly subjective procedure. Computer-aided diagnosis (CADx) systems have been extensively used in clinical practice to support the interpretation of images; nevertheless, current ultrasound CADx systems still entail substantial user-dependency and are unable to extract image data for prediction modelling. The aim of this thesis is to propose a set of fully automated strategies to overcome the limitations of ultrasound CADx. These strategies address multiple modalities (B-Mode, Contrast-Enhanced Ultrasound-CEUS, Power Doppler-PDUS, and Acoustic Angiography-AA) and dimensions (2-D and 3-D imaging). The enabling techniques presented in this work are designed, developed, and quantitatively validated to efficiently improve overall patient diagnosis. This work is subdivided into two macro-sections: in the first part, two fully automated algorithms for the reliable quantification of 2-D B-Mode ultrasound skeletal muscle architecture and morphology are proposed. In the second part, two fully automated algorithms are presented for the objective assessment and characterization of tumor vasculature in 3-D CEUS and PDUS thyroid tumors and in preclinical AA imaging of cancer growth. In the first part, the MUSA (Muscle UltraSound Analysis) algorithm is designed to measure muscle thickness, fascicle length, and pennation angle; the TRAMA (TRAnsversal Muscle Analysis) algorithm is proposed to extract and analyze the Visible Cross-Sectional Area (VCSA). The MUSA and TRAMA algorithms have been validated on two datasets of 200 images each; automatic measurements have been compared with expert operators' manual measurements.
A preliminary statistical analysis was performed to prove the ability of texture analysis on the automatic VCSA to distinguish between healthy and pathological muscles. In the second part, quantitative assessment of tumor vasculature is proposed in two automated algorithms: one for the objective characterization of thyroid nodules in 3-D CEUS/Power Doppler, and one for studying the evolution of fibrosarcoma invasion in preclinical 3-D AA imaging. Vasculature analysis relies on the quantification of architecture and vessel tortuosity. Vascular features obtained from CEUS and PDUS images of 20 thyroid nodules (10 benign, 10 malignant) have been used in a multivariate statistical analysis supported by histopathological results. Vasculature parametric maps of implanted fibrosarcoma were extracted from 8 rats investigated with 3-D AA at four time points (TPs), in control and tumor areas; results have been compared with previous manual findings in a longitudinal tumor growth study. The MUSA and TRAMA algorithms achieve a 100% segmentation success rate. The absolute difference between manual and automatic measurements is below 2% for muscle thickness and 4% for the VCSA (values between 5-10% are acceptable in clinical practice), suggesting that automatic and manual measurements can be used interchangeably. Texture feature extraction on the automatic VCSAs reveals that texture descriptors can distinguish healthy from pathological muscles with a 100% success rate for all four muscles. Vascular features extracted from 20 thyroid nodules in 3-D CEUS and PDUS volumes can be used to distinguish benign from malignant tumors with a 100% success rate for both ultrasound techniques. Malignant tumors present higher values of architecture and tortuosity descriptors; 3-D CEUS and PDUS imaging present the same accuracy in differentiating benign from malignant nodules.
Vascular parametric maps extracted from the 8 rats along the 4 TPs in 3-D AA imaging show that parameters extracted from the control area are statistically different from those within the tumor volume. Tumor angiogenic vessels present a smaller diameter and higher tortuosity. Tumor evolution is characterized by significant vascular tree growth and a constant vessel diameter along the four TPs, confirming the previous findings. In conclusion, the proposed automated strategies perform well in segmentation, feature extraction, muscle disease detection, and tumor vascular characterization. These techniques can be extended to the investigation of other organs and diseases and embedded in ultrasound CADx, providing a user-independent, reliable diagnosis.
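Vessel tortuosity, quantified throughout the second part, is commonly summarized by the distance metric: centerline arc length divided by chord length. A minimal sketch (the thesis's exact tortuosity descriptors are not specified here, so this is an assumed, standard formulation):

```python
import numpy as np

def distance_metric(centerline):
    """Tortuosity of a vessel centerline, given as an (N, 3) array of
    points: arc length divided by the straight-line (chord) distance
    between the endpoints. 1.0 for a straight vessel, larger otherwise."""
    pts = np.asarray(centerline, float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    chord = np.linalg.norm(pts[-1] - pts[0])
    return arc / chord

straight = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]
wiggly = [[0, 0, 0], [1, 1, 0], [2, 0, 0], [3, -1, 0], [4, 0, 0]]
print(distance_metric(straight))      # 1.0
print(distance_metric(wiggly) > 1.0)  # True
```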

    Learning Algorithms for Fat Quantification and Tumor Characterization

    Obesity is one of the most prevalent health conditions. About 30% of the world's and over 70% of the United States' adult populations are either overweight or obese, causing an increased risk for cardiovascular diseases, diabetes, and certain types of cancer. Among all cancers, lung cancer is the leading cause of death, whereas pancreatic cancer has the poorest prognosis among all major cancers. Early diagnosis of these cancers can save lives. This dissertation contributes towards the development of computer-aided diagnosis tools to aid clinicians in establishing the quantitative relationship between obesity and cancers. With respect to obesity and metabolism, the first part of the dissertation focuses on the segmentation and quantification of white and brown adipose tissue. For cancer diagnosis, we perform analysis on two important cases: lung cancer and Intraductal Papillary Mucinous Neoplasm (IPMN), a precursor to pancreatic cancer. The dissertation proposes an automatic body region detection method trained with only a single example. A new fat quantification approach is then proposed, based on geometric and appearance characteristics. For the segmentation of brown fat, a PET-guided CT co-segmentation method is presented. With different variants of Convolutional Neural Networks (CNN), supervised learning strategies are proposed for the automatic diagnosis of lung nodules and IPMN. To address the unavailability of the large number of labeled examples required for training, unsupervised learning approaches for cancer diagnosis without explicit labeling are also proposed. We evaluate the proposed approaches (both supervised and unsupervised) on two different tumor diagnosis challenges: lung and pancreas, with 1018 CT and 171 MRI scans, respectively.
The proposed segmentation, quantification, and diagnosis approaches explore the important adiposity-cancer association and help pave the way towards improved diagnostic decision making in routine clinical practice.