
    Computer-aided diagnosis of lung nodule using gradient tree boosting and Bayesian optimization

    We aimed to evaluate a computer-aided diagnosis (CADx) system for lung nodule classification, focusing on (i) the usefulness of gradient tree boosting (XGBoost) and (ii) the effectiveness of parameter optimization using Bayesian optimization (Tree-structured Parzen Estimator, TPE) and random search. Ninety-nine lung nodules (62 lung cancers and 37 benign lung nodules) were included from public databases of CT images. A variant of local binary pattern was used for calculating feature vectors. A support vector machine (SVM) or XGBoost was trained using the feature vectors and their labels. TPE or random search was used for parameter optimization of SVM and XGBoost. Leave-one-out cross-validation was used for optimizing and evaluating the performance of our CADx system. Performance was evaluated using the area under the curve (AUC) of receiver operating characteristic analysis. AUC was calculated 10 times, and its average was obtained. The best averaged AUCs of SVM and XGBoost were 0.850 and 0.896, respectively; both were obtained using TPE. XGBoost was generally superior to SVM. Optimal parameters for achieving high AUC were obtained with fewer trials when using TPE compared with random search. In conclusion, XGBoost was better than SVM for classifying lung nodules, and TPE was more efficient than random search for parameter optimization. (Comment: 29 pages, 4 figures)
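The AUC figure of merit used throughout this study can be computed directly from classifier scores as the probability that a randomly chosen positive case outscores a randomly chosen negative one, with ties counted as half. A minimal pure-Python sketch (not the authors' code):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via pairwise comparison (the Mann-Whitney statistic):
    the fraction of positive/negative pairs in which the positive case
    scores higher, counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# e.g. scores for three malignant and two benign nodules:
# 5 of the 6 pairs are correctly ordered
print(roc_auc([0.9, 0.8, 0.4], [0.5, 0.3]))
```

For real workloads a rank-based O(n log n) implementation (e.g. scikit-learn's `roc_auc_score`) is preferable; the pairwise form above is the definition itself.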

    Temporal subtraction CT with nonrigid image registration improves detection of bone metastases by radiologists: results of a large-scale observer study

    To determine whether temporal subtraction (TS) CT obtained with nonrigid image registration improves detection of various bone metastases during serial clinical follow-up examinations by numerous radiologists. Six board-certified radiologists retrospectively and sequentially scrutinized CT images of patients with a history of malignancy. These radiologists selected 50 positive and 50 negative subjects, with and without bone metastases, respectively. Furthermore, for each subject, they selected by consensus a pair of previous and current CT images satisfying predefined criteria. Previous images were nonrigidly transformed to match the current images and subtracted from them to automatically generate TS images. Subsequently, 18 radiologists independently interpreted the 100 CT image pairs to identify bone metastases, both without and with TS images, with the two interpretations separated by an interval of at least 30 days. Jackknife free-response receiver operating characteristic (JAFROC) analysis was conducted to assess observer performance. Compared with interpretation without TS images, interpretation with TS images was associated with a significantly higher mean figure of merit (0.710 vs. 0.658; JAFROC analysis, P = 0.0027). Mean lesion-based sensitivity was significantly higher for interpretation with TS than without (46.1% vs. 33.9%; P = 0.003). The mean false-positive count per subject was also significantly higher for interpretation with TS than without (0.28 vs. 0.15; P < 0.001). At the subject level, mean sensitivity was significantly higher for interpretation with TS images than without (73.2% vs. 65.4%; P = 0.003). There was no significant difference in mean specificity (0.93 vs. 0.95; P = 0.083). TS significantly improved overall performance in the detection of various bone metastases.
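The core of temporal subtraction is: warp the previous scan onto the current one, then subtract, so that new lesions stand out as residuals. The study used nonrigid registration; the sketch below substitutes a brute-force integer translation as a deliberately simplified stand-in, just to show the register-then-subtract pipeline:

```python
import numpy as np

def register_shift(prev, curr, max_shift=4):
    """Brute-force integer translation minimizing the sum of squared
    differences (a toy stand-in for the paper's nonrigid registration)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((curr - np.roll(prev, (dy, dx), axis=(0, 1))) ** 2)
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

def temporal_subtraction(prev, curr):
    """Align the previous image to the current one and subtract:
    newly appeared lesions show up as bright residuals."""
    dy, dx = register_shift(prev, curr)
    return curr - np.roll(prev, (dy, dx), axis=(0, 1))
```

A real implementation would use a deformable registration library (e.g. SimpleITK or elastix) in place of `register_shift`; everything else in the pipeline is the same subtraction step.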

    Visceral fat obesity is the key risk factor for the development of reflux erosive esophagitis in 40–69-years subjects

    [Background] Visceral fat obesity can be defined quantitatively by abdominal computed tomography; however, the usefulness of measuring visceral fat area to assess the etiology of gastroesophageal reflux disease has not been fully elucidated. [Methods] A total of 433 healthy subjects aged 40–69 years (234 men, 199 women) were included in the study. The relationship between obesity-related factors (total fat area, visceral fat area, subcutaneous fat area, waist circumference, and body mass index) and the incidence of reflux erosive esophagitis was investigated. Lifestyle factors and stomach conditions relevant to the onset of erosive esophagitis were also analyzed. [Results] The prevalence of reflux erosive esophagitis was 27.2% (118/433; 106 men, 12 women). Visceral fat area was higher in subjects with erosive esophagitis than in those without (116.6 cm² vs. 64.9 cm², respectively). The incidence of erosive esophagitis was higher in subjects with visceral fat obesity (visceral fat area ≥ 100 cm²) than in those without (61.2% vs. 12.8%, respectively). Visceral fat obesity had the highest odds ratio (OR) among the obesity-related factors. Multivariate analysis showed that visceral fat area was associated with the incidence of erosive esophagitis (OR = 2.18), indicating that it is an independent risk factor for erosive esophagitis. In addition, daily alcohol intake (OR = 1.54), open-type gastric atrophy (OR = 0.29), and never-smoking history (OR = 0.49) were also independently associated with the development of erosive esophagitis. [Conclusions] Visceral fat obesity is the key risk factor for the development of reflux erosive esophagitis in subjects aged 40–69 years.
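Odds ratios like those reported here come from a 2×2 exposure/outcome table (the multivariate ORs additionally adjust for covariates via logistic regression). A minimal sketch of the unadjusted calculation with a Wald-type confidence interval; the counts below are purely hypothetical, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a * d) / (b * c)

def or_ci95(a, b, c, d):
    """Approximate 95% confidence interval, computed on the log-odds scale."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))

# hypothetical counts: visceral fat obesity vs. erosive esophagitis
print(odds_ratio(60, 40, 20, 80))  # 6.0
```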

    Evaluation, using a high-speed video camera, of slice dropping during the interpretation of clinical images with frame-advance (stack-mode) viewers

    Kyoto University doctoral dissertation (new-system course doctorate), Doctor of Medical Science, Degree No. 甲第17419号 (医博第3762号); call number 新制||医||997 (University Library). Graduate School of Medicine (Medicine), Kyoto University. Examiners: Prof. Hiroyuki Yoshihara, Prof. Hidenao Fukuyama, Prof. Masahiro Hiraoka. Conferred under Article 4, Paragraph 1 of the Degree Regulations.

    Using a high-speed movie camera to evaluate slice dropping in clinical image interpretation with stack mode viewers.

    The purpose of this study was to objectively verify the rate of slice omission during paging on picture archiving and communication system (PACS) viewers by recording the images shown on the computer displays of these viewers with a high-speed movie camera. This study was approved by the institutional review board. A sequential number from 1 to 250 was superimposed on each slice of a series of clinical Digital Imaging and Communications in Medicine (DICOM) data. The slices were displayed using several DICOM viewers, including in-house developed freeware and clinical PACS viewers. The freeware viewer and one of the clinical PACS viewers included functions to prevent slice dropping. The series was displayed in stack mode and paged in both automatic and manual paging modes. The display was recorded with a high-speed movie camera and played back at a slow speed to check whether slices were dropped. The paging speeds were also measured. With a paging speed faster than half the refresh rate of the display, some viewers dropped up to 52.4% of the slices, while other well-designed viewers did not when used with the correct settings. Slice dropping during paging was objectively confirmed using a high-speed movie camera. To prevent slice dropping, the viewer must be specially designed for the purpose and used with the correct settings, or the paging speed must be slower than half of the display refresh rate.
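The "half the refresh rate" limit can be reasoned about with a small simulation. The sketch below assumes each on-screen update consumes two refresh cycles (a simplifying assumption of ours that is consistent with the finding, not the authors' model): paging at or below refresh/2 shows every slice, while paging faster skips some.

```python
def displayed_slices(n_slices, paging_hz, refresh_hz, cycles_per_update=2):
    """Simulate which slice indices actually reach the screen when paging
    through n_slices at paging_hz on a display refreshing at refresh_hz,
    assuming each screen update takes cycles_per_update refresh cycles."""
    shown = set()
    k = 0  # screen-update counter
    while True:
        # slice visible at the k-th update (integer arithmetic, no drift)
        idx = (k * paging_hz * cycles_per_update) // refresh_hz
        if idx >= n_slices:
            break
        shown.add(idx)
        k += 1
    return shown

def drop_rate(n_slices, paging_hz, refresh_hz):
    """Fraction of slices never displayed."""
    return 1.0 - len(displayed_slices(n_slices, paging_hz, refresh_hz)) / n_slices
```

Under this model, paging a 250-slice series at 30 slices/s on a 60 Hz display shows every slice, while paging at 60 slices/s shows only every other one.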

    Computer-aided diagnosis of lung nodule classification between benign nodule, primary lung cancer, and metastatic lung cancer at different image size using deep convolutional neural network with transfer learning.

    We developed a computer-aided diagnosis (CADx) method for classification between benign nodule, primary lung cancer, and metastatic lung cancer, and evaluated the following: (i) the usefulness of a deep convolutional neural network (DCNN) for CADx of the ternary classification, compared with a conventional method (hand-crafted imaging features plus machine learning); (ii) the effectiveness of transfer learning; and (iii) the effect of image size as the DCNN input. Of the 1240 patients in a previously built database, computed tomography images and clinical information of 1236 patients were included. For the conventional method, CADx was performed using rotation-invariant uniform-pattern local binary patterns on three orthogonal planes with a support vector machine. For the DCNN method, CADx was evaluated using the VGG-16 convolutional neural network with and without transfer learning, and hyperparameter optimization of the DCNN method was performed by random search. The best averaged validation accuracies of CADx were 55.9%, 68.0%, and 62.4% for the conventional method, the DCNN method with transfer learning, and the DCNN method without transfer learning, respectively. For image sizes of 56, 112, and 224 pixels, the best averaged validation accuracies for the DCNN with transfer learning were 60.7%, 64.7%, and 68.0%, respectively. The DCNN was better than the conventional method for CADx, and its accuracy improved with transfer learning. We also found that larger input image sizes improved the accuracy of lung nodule classification.
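Transfer learning in this setting keeps the pretrained convolutional layers fixed and trains only a classification head for the three classes. A toy numpy sketch of such a ternary softmax head, with synthetic 2-D features standing in for frozen VGG-16 activations (an illustrative analogy, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "frozen features": three well-separated clusters standing in for
# activations of benign / primary cancer / metastatic cancer nodules
centers = 4.0 * np.array([[1.0, 0.0], [-0.5, 0.866], [-0.5, -0.866]])
X = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])
y = np.repeat(np.arange(3), 50)

# trainable head: ternary softmax regression via batch gradient descent
W, b = np.zeros((2, 3)), np.zeros(3)
for _ in range(300):
    logits = X @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)             # softmax probabilities
    grad = (p - np.eye(3)[y]) / len(X)            # cross-entropy gradient
    W -= 0.05 * (X.T @ grad)
    b -= 0.05 * grad.sum(axis=0)

accuracy = float((np.argmax(X @ W + b, axis=1) == y).mean())
```

With a real network the same idea applies: the frozen backbone produces `X`, and only the head (`W`, `b`) is fitted to the nodule labels; fine-tuning would additionally unfreeze some backbone layers.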

    Automated prediction of emphysema visual score using homology-based quantification of low-attenuation lung region

    [Objective]: The purpose of this study was to investigate the relationship between the visual score of emphysema and homology-based emphysema quantification (HEQ), and to evaluate whether the visual score could be accurately predicted by machine learning and HEQ. [Materials and methods]: A total of 115 anonymized computed tomography images from 39 patients were obtained from a public database. Emphysema quantification of these images was performed by measuring the percentage of low-attenuation lung area (LAA%). The following values related to HEQ were obtained: nb0 and nb1. LAA% and HEQ were calculated at various threshold levels ranging from -1000 HU to -700 HU. Spearman's correlation coefficients between emphysema quantification and visual score were calculated at the various threshold levels. Visual score was predicted by machine learning and emphysema quantification (LAA% or HEQ). Random Forest was used as the machine learning algorithm, and accuracy of prediction was evaluated by leave-one-patient-out cross-validation. The difference in accuracy was assessed using McNemar's test. [Results]: The correlation coefficients between emphysema quantification and visual score were as follows: LAA% (-950 HU), 0.567; LAA% (-910 HU), 0.654; LAA% (-875 HU), 0.704; nb0 (-950 HU), 0.552; nb0 (-910 HU), 0.629; nb0 (-875 HU), 0.473; nb1 (-950 HU), 0.149; nb1 (-910 HU), 0.519; and nb1 (-875 HU), 0.716. The accuracies of prediction were as follows: LAA%, 55.7%; HEQ, 66.1%. The difference in accuracy was statistically significant (p = 0.0290). [Conclusion]: LAA% and HEQ at -875 HU showed a stronger correlation with visual score than those at -910 or -950 HU. HEQ was more useful than LAA% for predicting visual score.
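LAA% itself is straightforward to compute: the percentage of lung voxels at or below an attenuation threshold in Hounsfield units. A minimal numpy sketch (not the study's implementation):

```python
import numpy as np

def laa_percent(hu_volume, lung_mask, threshold_hu=-950):
    """Percentage of lung-mask voxels at or below threshold_hu (HU)."""
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung <= threshold_hu) / lung.size

# tiny example: four lung voxels, two of them at or below -950 HU
hu = np.array([[-980.0, -960.0], [-900.0, -500.0]])
mask = np.ones_like(hu, dtype=bool)
print(laa_percent(hu, mask))        # 50.0
print(laa_percent(hu, mask, -875))  # 75.0
```

Sweeping `threshold_hu` from -1000 to -700 HU reproduces the kind of threshold-level analysis described above; the homology-based nb0/nb1 values require a separate topological computation.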

    Summary of visual score in the 115 CT slices.

    Note: Visual score was based on the following criteria: 0, no emphysema; 1, minimal; 2, mild; 3, moderate; 4, severe; and 5, very severe emphysema.

    Development and Evaluation of a Low-Cost and High-Capacity DICOM Image Data Storage System for Research

    Thin-slice CT data, useful for clinical diagnosis and research, is now widely available but is typically discarded in many institutions after a short period of time due to data storage capacity limitations. We designed and built a low-cost, high-capacity Digital Imaging and Communications in Medicine (DICOM) storage system able to store thin-slice image data for years, using off-the-shelf consumer hardware components such as a Macintosh computer, a Windows PC, and network-attached storage units. “Ordinary” hierarchical file systems, instead of a centralized data management system such as a relational database, were adopted to manage patient DICOM files by arranging them in directories, enabling quick and easy access to the DICOM files of each study by following the directory trees with Windows Explorer via study date and patient ID. The software used for this system comprised the open-source OsiriX and additional programs we developed ourselves, both of which were freely available via the Internet. The initial cost of this system was about $3,600, with an incremental storage cost of about $900 per terabyte (TB). This system has been running since 7 February 2008, with the data stored increasing at a rate of about 1.3 TB per month. Total data stored was 21.3 TB on 23 June 2009. The maintenance workload was found to be about 30 to 60 min once every 2 weeks. In conclusion, this newly developed DICOM storage system is useful for research due to its cost-effectiveness, enormous capacity, high scalability, sufficient reliability, and easy data access.
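The hierarchical layout described, study date then patient ID, amounts to a simple path-building rule over DICOM header fields. The exact directory scheme below is hypothetical, in the spirit of the paper's design rather than its actual layout:

```python
import posixpath

def dicom_path(root, study_date, patient_id, study_uid, sop_uid):
    """Build <root>/<YYYY>/<MM>/<DD>/<PatientID>/<StudyInstanceUID>/<SOPInstanceUID>.dcm
    from DICOM header fields (study_date in 'YYYYMMDD' form).
    Hypothetical layout, not the paper's actual scheme."""
    year, month, day = study_date[:4], study_date[4:6], study_date[6:8]
    return posixpath.join(root, year, month, day, patient_id,
                          study_uid, sop_uid + ".dcm")

print(dicom_path("/dicom", "20080207", "PT0001", "1.2.840.1", "1.2.840.1.5"))
```

In practice the header fields would be read from each incoming file with a DICOM library such as pydicom; because the layout is derived purely from header values, any file can be located later by browsing the tree by date and patient ID, with no database required.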