3 research outputs found

    Classification of the Relationship Between Mandibular Third Molar and Inferior Alveolar Nerve Based on Generated Mask Images

    In recent dentistry research, deep learning techniques have been employed for various tasks, including detecting and segmenting third molars and inferior alveolar nerves, as well as classifying their positional relationships. Prior studies using convolutional neural networks (CNNs) have successfully detected the area adjacent to the third molar and automatically classified its relationship with the inferior alveolar nerve. However, deep learning models have limitations in learning the diverse patterns of teeth and nerves due to variations in their shape, angle, and size across individuals. Moreover, unlike object classification, relationship classification depends on the proximity of teeth and nerves, making it challenging to interpret the classified samples accurately. To address these challenges, we propose a mask-image-based classification system. The primary goal of this system is to improve the classification performance for the relationship between the third molar and the inferior alveolar nerve while providing diagnostic evidence to support the classification. The proposed system first detects the area adjacent to the third molar, including the inferior alveolar nerve, in panoramic radiographs (PR). It then generates masked images of the inferior alveolar nerve and the third molar within the extracted regions of interest. Finally, it classifies the relationship between the third molar and the inferior alveolar nerve using these masked images. The system achieved a mean average precision (mAP) of 0.885 in detecting the third-molar region of interest. For comparison, the existing CNN-based positional-relationship classification was evaluated with four classification models, yielding an average accuracy of 0.795. For the segmentation task, the third molar and the inferior alveolar nerve within the detected region of interest achieved Dice similarity coefficients (DSC) of 0.961 and 0.820, respectively. The proposed mask-image-based classification achieved an accuracy of 0.832, outperforming the existing method by approximately 3% and confirming the superiority of the proposed system.
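    A minimal sketch of the three-stage pipeline described in this abstract is given below, assuming hypothetical detect_roi, segment_masks, and classify_relationship components with placeholder logic; it illustrates the detect-segment-classify flow on mask images rather than the authors' actual implementation.

import numpy as np

def detect_roi(panoramic):
    """Stage 1 (hypothetical): locate the third-molar region in a panoramic
    radiograph. A real system would use a trained detector (reported mAP 0.885)."""
    h, w = panoramic.shape
    return w // 2, h // 2, w // 4, h // 4  # placeholder box: x, y, width, height

def segment_masks(roi):
    """Stage 2 (hypothetical): generate binary masks of the third molar and the
    inferior alveolar nerve inside the ROI (evaluated with the Dice coefficient)."""
    molar_mask = np.zeros(roi.shape, dtype=bool)
    nerve_mask = np.zeros(roi.shape, dtype=bool)
    return molar_mask, nerve_mask

def classify_relationship(molar_mask, nerve_mask):
    """Stage 3 (hypothetical): classify the positional relationship from the
    generated mask images rather than from raw pixels."""
    overlap = np.logical_and(molar_mask, nerve_mask).sum()
    return "contact" if overlap > 0 else "separated"  # illustrative two-class rule

if __name__ == "__main__":
    image = np.zeros((512, 1024), dtype=np.float32)  # dummy panoramic radiograph
    x, y, w, h = detect_roi(image)
    roi = image[y:y + h, x:x + w]
    molar, nerve = segment_masks(roi)
    print(classify_relationship(molar, nerve))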

    Classification of Liver Fibrosis From Heterogeneous Ultrasound Image

    With advances in deep learning, including convolutional neural networks (CNNs), automated diagnosis from medical images has received considerable attention in medical science. In ultrasound imaging in particular, a CNN learns the features of organs from a large amount of image data, so that expert-level automatic diagnosis is possible from patient images alone. However, CNN models also learn features that reflect the inherent bias of the imaging machine used for acquisition. In other words, when the domain of the training data differs from that of the data used for actual diagnosis, it is unclear whether consistent performance can be maintained in the presence of this domain bias. We therefore investigate the effect of domain bias using liver ultrasound imaging data obtained from multiple domains. We constructed a dataset that accounts for the manufacturer and year of manufacture of 8 ultrasound imaging machines. First, training and testing were performed by splitting the entire dataset, as is commonly done. Second, we built training sets according to the number of domains included and used them to train the models. We then measured and compared performance on internal-domain and external-domain data. Through these experiments, we analyzed how the domain of the data affects model performance. We show that performance scores evaluated on internal-domain data and on external-domain data do not match; in particular, performance measured on evaluation data that includes the internal domain was much higher than performance measured on evaluation data consisting only of the external domain. We also show that 3-level classification performs slightly better than 5-level classification when class imbalance is mitigated by merging similar classes. These results highlight the need for a new methodology that mitigates the machine-bias problem so that models work correctly even on external-domain data, as opposed to the usual approach of constructing evaluation data from the same domain as the training data.
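    As an illustration of the internal- versus external-domain evaluation protocol described above, the sketch below uses synthetic feature vectors and a simple scikit-learn classifier as stand-ins for the CNN and the real ultrasound data; the machine assignments, the 6/2 domain split, and the accuracy metric are assumptions for demonstration, not the study's setup.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_machines = 800, 8          # the study uses 8 ultrasound machines
X = rng.normal(size=(n_samples, 16))    # dummy per-image feature vectors
y = rng.integers(0, 5, size=n_samples)  # dummy 5-level fibrosis labels
machine = rng.integers(0, n_machines, size=n_samples)  # acquiring machine per image

# Internal domains: machines whose data is available at training time.
internal_idx = np.flatnonzero(machine < 6)
external_idx = np.flatnonzero(machine >= 6)  # held-out machines (external domain)

# Split the internal-domain data into a training half and an evaluation half.
rng.shuffle(internal_idx)
half = len(internal_idx) // 2
train_idx, internal_test_idx = internal_idx[:half], internal_idx[half:]

model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])

# The abstract reports internal-domain scores much higher than external-domain
# scores; this toy setup only demonstrates the evaluation bookkeeping.
print("internal-domain accuracy:",
      accuracy_score(y[internal_test_idx], model.predict(X[internal_test_idx])))
print("external-domain accuracy:",
      accuracy_score(y[external_idx], model.predict(X[external_idx])))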

    Automated classification of liver fibrosis stages using ultrasound imaging

    Background: Ultrasound imaging is the imaging examination most frequently performed for patients with chronic hepatitis or liver cirrhosis. However, ultrasound imaging is highly operator dependent and its interpretation is subjective, so a well-trained radiologist is required for evaluation. Automated classification of liver fibrosis could alleviate the shortage of skilled radiologists, especially in low-to-middle-income countries. The purpose of this study was to evaluate deep convolutional neural networks (DCNNs) for classifying the degree of liver fibrosis according to the METAVIR score using ultrasound (US) images.
    Methods: We used US images from two tertiary university hospitals. A total of 7920 US images from 933 patients were used for training/validation of the DCNNs. All patients underwent liver biopsy or hepatectomy, and liver fibrosis was categorized based on pathology results using the METAVIR score. Five well-established DCNNs (VGGNet, ResNet, DenseNet, EfficientNet, and ViT) were implemented to predict the METAVIR score. Performance for five-level (F0/F1/F2/F3/F4) classification was evaluated using the area under the receiver operating characteristic curve (AUC) with 95% confidence intervals, accuracy, sensitivity, specificity, and positive and negative likelihood ratios.
    Results: Similar mean AUC values were achieved by the five models: VGGNet (0.96), ResNet (0.96), DenseNet (0.95), EfficientNet (0.96), and ViT (0.95). All models yielded the same mean accuracy (0.94) and specificity (0.96). In terms of sensitivity, EfficientNet achieved the highest mean value (0.85), while the other models produced slightly lower values ranging from 0.82 to 0.84.
    Conclusion: We demonstrated that DCNNs can classify the stage of liver fibrosis according to the METAVIR score with high performance using conventional B-mode images. Among them, EfficientNet, which has fewer parameters and a lower computational cost, produced the highest performance. Based on these results, we believe that DCNN-based classification of liver fibrosis may allow fast and accurate diagnosis of liver fibrosis without the need for additional equipment or add-on tests, and may be a powerful tool for supporting radiologists in clinical practice.
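    A minimal sketch of five-level METAVIR classification with a DCNN is shown below, using torchvision's EfficientNet-B0 as a stand-in backbone and random tensors in place of B-mode ultrasound images; the head replacement, hyperparameters, and AUC computation are illustrative assumptions, not the study's code.

import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0
from sklearn.metrics import roc_auc_score

NUM_CLASSES = 5  # METAVIR stages F0 / F1 / F2 / F3 / F4

model = efficientnet_b0(weights=None)                      # pretrained weights omitted here
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)  # new five-way classification head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One dummy training step on random "images" and labels standing in for US data.
images = torch.randn(10, 3, 224, 224)
labels = torch.arange(10) % NUM_CLASSES  # ensures every stage appears at least once
model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Evaluation: macro-averaged one-vs-rest AUC, mirroring the per-model mean AUCs above.
model.eval()
with torch.no_grad():
    probs = torch.softmax(model(images), dim=1).numpy()
auc = roc_auc_score(labels.numpy(), probs, multi_class="ovr", average="macro")
print(f"macro one-vs-rest AUC: {auc:.3f}")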