6 research outputs found
Detecting and classifying lesions in mammograms with Deep Learning
In the last two decades, computer-aided diagnosis (CAD) systems have been
developed to help radiologists analyze screening mammograms. The reported
benefits of current CAD technologies are contradictory, and these systems must
be improved before they can be considered genuinely useful. Since 2012, deep
convolutional neural networks (CNNs) have achieved tremendous success in image
recognition, reaching human-level performance. These methods have greatly
surpassed traditional approaches, which are similar to the CAD solutions in
current use. Deep CNNs have the potential to revolutionize medical image
analysis. We propose a CAD system based on one of the most successful object
detection frameworks, Faster R-CNN. The system detects and classifies malignant
and benign lesions in a mammogram without any human intervention. The proposed
method achieves state-of-the-art classification performance on the public
INbreast database (AUC = 0.95). The approach described here placed 2nd in the
Digital Mammography DREAM Challenge (AUC = 0.85). When used as a detector, the
system reaches high sensitivity with very few false-positive marks per image on
the INbreast dataset. Source code, the trained model, and an OsiriX plugin are
available online at https://github.com/riblidezso/frcnn_cad
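A Faster R-CNN detector emits many overlapping, scored region proposals per image; reporting "very few false-positive marks per image" depends on post-processing such as greedy non-maximum suppression. A minimal NumPy sketch of that step, with illustrative boxes and an assumed IoU threshold (not the authors' code):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of the current box with every remaining box.
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Discard boxes that overlap the kept box too strongly.
        order = rest[iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # the two overlapping boxes collapse to one kept mark
```

The first two boxes overlap heavily (IoU ≈ 0.68), so only the higher-scoring one survives, while the distant third box is kept.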
Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists.
BACKGROUND: Artificial intelligence (AI) systems performing at radiologist-like levels in the evaluation of digital mammography (DM) would improve breast cancer screening accuracy and efficiency. We aimed to compare the stand-alone performance of an AI system with that of radiologists in detecting breast cancer in DM. METHODS: Nine multi-reader, multi-case study datasets previously used for different research purposes in seven countries were collected. Each dataset consisted of DM exams acquired with systems from four different vendors, multiple radiologists' assessments per exam, and ground truth verified by histopathological analysis or follow-up, yielding a total of 2652 exams (653 malignant) and interpretations by 101 radiologists (28-296 independent interpretations). An AI system analyzed these exams, yielding a level of suspicion of cancer present between 1 and 10. The detection performance of the radiologists and the AI system was compared using a noninferiority null hypothesis at a margin of 0.05. RESULTS: The performance of the AI system was statistically noninferior to that of the average of the 101 radiologists. The AI system had an area under the ROC curve of 0.840 (95% confidence interval [CI] = 0.820 to 0.860), and the average of the radiologists was 0.814 (95% CI = 0.787 to 0.841) (difference 95% CI = -0.003 to 0.055). The AI system had an AUC higher than that of 61.4% of the radiologists. CONCLUSIONS: The evaluated AI system achieved a cancer detection accuracy comparable to that of an average breast radiologist in this retrospective setting. Although promising, the performance and impact of such a system in a screening setting need further investigation.
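The comparison above rests on the area under the ROC curve, which for a 1-10 suspicion score equals the probability that a malignant exam outranks a benign one (the Mann-Whitney statistic). A minimal sketch with synthetic scores; the 0.814 reader AUC and the 0.05 margin are taken from the abstract, and the point-estimate noninferiority check below is a simplification of the study's formal test:

```python
import numpy as np

def auc_mann_whitney(labels, scores):
    """AUC as the probability a malignant exam outranks a benign one."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Count wins and half-credit ties over all malignant/benign pairs.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
labels = np.r_[np.ones(40), np.zeros(60)].astype(int)
# Synthetic suspicion-like scores: malignant exams score higher on average.
scores = np.r_[rng.normal(7, 2, 40), rng.normal(4, 2, 60)]

auc_ai = auc_mann_whitney(labels, scores)
auc_readers = 0.814   # average-reader AUC reported in the abstract
margin = 0.05
# Point-estimate version of noninferiority: AI must not trail by more than the margin.
noninferior = auc_ai - auc_readers > -margin
```

The formal test in the study additionally accounts for the confidence interval of the AUC difference, not just the point estimate.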
A Method of Using Information Entropy of an Image as an Effective Feature for Computer-Aided Diagnostic Applications
Abstract Computer-aided detection and diagnosis (CAD) systems are increasingly being used as an aid by clinicians for the detection and interpretation of diseases. In general, a CAD system employs a classifier to detect or distinguish between abnormal and normal tissues in images. In the classification phase, a set of image features and/or texture features extracted from the images is commonly used. In this article, we investigated the characteristics of the output entropy of an image and demonstrated its usefulness as a texture feature in CAD systems. In order to validate the effectiveness and superiority of the output-entropy-based texture feature, two well-known texture features, i.e., mean and standard deviation, were used for comparison. The database used in this study comprised 50 CT images obtained from 10 patients with pulmonary nodules and 50 CT images obtained from 5 normal subjects. We used a support vector machine for classification. A leave-one-out method was employed for training and classification. Three combinations of texture features, i.e., mean and entropy, standard deviation and entropy, and standard deviation and mean, were used as inputs to the classifier. Three different region-of-interest (ROI) sizes, i.e., 11 × 11, 9 × 9, and 7 × 7 pixels, were selected from the database for computation of the feature values. Our experimental results show that the combination of entropy and standard deviation is significantly better than both the combination of mean and entropy and that of standard deviation and mean for the ROI size of 11 × 11 pixels (p < 0.05). These results suggest that the information entropy of an image can be used as an effective feature for CAD applications.
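The entropy feature studied here can be computed directly from an ROI's gray-level histogram; a short sketch alongside the mean and standard deviation features it was compared against (synthetic 11 × 11 ROIs, not the study's CT data):

```python
import numpy as np

def roi_entropy(roi, bins=256):
    """Shannon entropy (bits) of the gray-level histogram of an ROI."""
    hist, _ = np.histogram(roi, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
textured_roi = rng.integers(0, 256, size=(11, 11))  # many gray levels
flat_roi = np.full((11, 11), 128)                   # a single gray level

# Feature vectors as in the article: entropy, standard deviation, mean.
f_textured = (roi_entropy(textured_roi), textured_roi.std(), textured_roi.mean())
f_flat = (roi_entropy(flat_roi), flat_roi.std(), flat_roi.mean())
# A homogeneous ROI has zero entropy; a textured ROI has high entropy,
# which is what makes entropy discriminative as a texture feature.
```

These per-ROI feature vectors would then feed a classifier such as the support vector machine used in the study.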
Use of normal tissue context in computer-aided detection of masses in mammograms.
When reading mammograms, radiologists do not only look at local properties of suspicious regions but also take into account more general contextual information. This suggests that context may be used to improve the performance of computer-aided detection (CAD) of malignant masses in mammograms. In this study, we developed a set of context features that represent the suspiciousness of normal tissue in the same case. For each candidate mass region, three normal reference areas were defined in the image at hand. Corresponding areas were also defined in the contralateral image and in different projections. Evaluation of the context features was done using 10-fold cross-validation and case-based bootstrapping. Free-response receiver operating characteristic (FROC) curves were computed for feature sets including context features and for a feature set without context. Results show that the mean sensitivity in the interval of 0.05-0.5 false positives/image increased by more than 6% when context features were added. This increase was significant (p < 0.0001). Context computed using multiple views yielded better performance than using a single view (mean sensitivity increase of 2.9%, p < 0.0001). Besides the importance of using multiple views, results show that the best CAD performance was obtained when multiple context features based on different reference areas in the mammogram were combined.
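An FROC curve as used above plots lesion sensitivity against false positives per image as the candidate-score threshold varies. A minimal sketch with toy candidate data; the simplification that each true lesion yields exactly one candidate is an assumption for illustration:

```python
import numpy as np

def froc_points(scores, is_tp, n_images, n_lesions, thresholds):
    """FROC operating points: (false positives per image, lesion sensitivity)."""
    points = []
    for t in thresholds:
        marked = scores >= t                             # candidates flagged at this threshold
        fp_per_image = np.sum(marked & ~is_tp) / n_images
        sensitivity = np.sum(marked & is_tp) / n_lesions
        points.append((fp_per_image, sensitivity))
    return points

# Toy candidates from a hypothetical 4-image set containing 3 true lesions.
scores = np.array([0.95, 0.90, 0.80, 0.60, 0.40, 0.30])
is_tp = np.array([True, False, True, False, True, False])
curve = froc_points(scores, is_tp, n_images=4, n_lesions=3,
                    thresholds=[0.85, 0.5, 0.2])
# Lowering the threshold raises sensitivity at the cost of more FPs/image.
```

The study's "mean sensitivity in the interval of 0.05-0.5 false positives/image" is an average of the sensitivity values of such a curve over that FP range.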
Study on CAD Systems for Detection of Colorectal Cancers and Survival Prediction of Patients with Lung Diseases from CT images based on Deep Learning
In recent years, advances in CT scanners have made it possible to acquire large volumes of high-resolution images in a short time. Image-based diagnosis has therefore become increasingly important in medicine: reading these images enables the diagnosis of various diseases and the detection of lesions. At the same time, the burden on the physicians who read them has grown, raising concerns about adverse effects on reading accuracy. Against this background, computer-aided diagnosis/detection (CAD: Computer Aided Diagnosis/Detection) systems that assist physicians' image reading are being actively developed. A CAD system diagnoses disease or detects lesions through computational image analysis and supports diagnosis by presenting its results to the physician as a second opinion. This thesis aims to develop CAD systems for colorectal polyp detection from CT images and for prognosis prediction of patients with lung disease. With the recent rise of deep learning, great interest has gathered around applying AI, as exemplified by deep learning, to medicine. Deep learning has solved numerous problems beyond the reach of existing methods and is considered effective for the principal tasks of CAD systems: lesion diagnosis and lesion detection. This thesis therefore proposes new deep-learning-based models to resolve problems in existing CAD systems. Regarding colorectal polyp detection, existing CAD systems achieve high sensitivity but tend to report many false-positive findings. This thesis proposes a new classification model that applies a two-stage classification to the detections of an existing CAD system in order to reduce false positives. The proposed model captures the spatial features of the target through 3-D convolution and gains robustness to size variation through an ensemble technique. Furthermore, because training deep models demands large-scale data, dataset shortage is a problem; a flow-based generative model enabling more effective synthetic-data generation is proposed, the dataset is expanded with it, and the expanded dataset is used to improve the performance of the classification model above. For prognosis prediction of lung-disease patients, various biomarkers have been proposed, but no effective image-analysis-based prognostic biomarker yet exists. This thesis uses "U-Radiomics", image features obtained from the deep-learning segmentation model U-Net, as a prognostic biomarker for lung-disease patients and conducts comparative experiments on its predictive performance. It also proposes a model, based on adversarial learning, that models a patient's survival-time distribution through image analysis, estimating survival time directly from images.
As the first image-analysis method, the classification models E3D-ResNet and E3D-DenseNet, based on 3-D deep convolutional neural networks, are proposed to reduce the false positives produced by an existing colorectal polyp detection system; comparing the proposed models with baseline models demonstrates their effectiveness. E3D-ResNet captures the target's spatial features more efficiently through 3-D convolution and acquires robustness to size variation through an ensemble technique, while E3D-DenseNet captures local intensity changes more effectively through its Dense Blocks. As a remedy for data shortage, a representative problem in medical imaging, 3D-Glow, a 3-D extension of the flow-based generative model Glow, is proposed together with a lesion-data augmentation method based on it. 3D-Glow can generate synthetic polyps from two reference polyps and, compared with existing methods, produces synthetic polyps that retain structures similar to real polyps while exhibiting more diverse shapes.
As the second image-analysis method, a CAD system for prognosis prediction of lung-disease patients was developed. Prognostic biomarkers have mainly been computed from a patient's sex, age, and pulmonary function test results; no image-analysis-based biomarker had been proposed. This thesis proposes "U-Radiomics", an image-analysis-based biomarker, and a prognosis prediction model that uses it. Comparison of U-Radiomics with other existing biomarkers shows that it can serve as a superior image-analysis-based biomarker for patients with interstitial lung disease. A new prediction model, "pix2surv", is also proposed that enables direct estimation of survival time through image analysis. "DATE", a method based on adversarial generative models, uses sex, age, and pulmonary function test results as the latent representation to model the target patient's survival-time distribution, whereas pix2surv uses features obtained from images as the latent representation and is therefore superior to DATE in requiring no information beyond the CT images. Comparative experiments indicate that pix2surv is a superior survival-time prediction model. In summary, the proposed image-analysis methods are shown to be effective for CAD systems intended to assist physicians in reading CT images.
Doctoral thesis, Kyushu Institute of Technology. Diploma number: Kohaku No. 514. Degree conferred: March 25, 2021 (Reiwa 3). Chapter 1: Introduction | Chapter 2: False-positive reduction in abdominal CT images using 2.5-/3-D convolutional neural networks | Chapter 3: Improving the classification performance of the false-positive classification model through data augmentation | Chapter 4: Prognosis prediction of interstitial lung disease patients by CT image analysis using U-Radiomics | Chapter 5: Survival-time prediction of interstitial lung disease patients using adversarial generative networks | Chapter 6: Discussion | Chapter 7: Conclusion. Kyushu Institute of Technology, academic year Reiwa 2 (2020).
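The E3D models described in this thesis rely on 3-D convolution to capture spatial structure in CT sub-volumes. A naive NumPy sketch of a single valid-padding 3-D convolution, using a hypothetical axial-gradient kernel rather than the thesis's trained filters:

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 3-D convolution (valid padding) over a CT sub-volume."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # Dot product of the kernel with the local 3-D neighborhood.
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

volume = np.random.default_rng(2).standard_normal((8, 8, 8))  # toy sub-volume
kernel = np.zeros((3, 3, 3))
kernel[0] = -1.0  # hypothetical axial-gradient kernel: responds to
kernel[2] = 1.0   # intensity change along the slice (z) direction
features = conv3d_valid(volume, kernel)
print(features.shape)  # (6, 6, 6): each output voxel summarizes a 3x3x3 neighborhood
```

A deep 3-D CNN stacks many such learned kernels with nonlinearities; unlike 2-D slice-wise filtering, each response already aggregates information across adjacent slices, which is what lets these models judge the spatial shape of a candidate polyp.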