Hard exudates segmentation based on learned initial seeds and iterative graph cut
© 2018 Elsevier B.V. (Background and Objective): The occurrence of hard exudates is one of the early signs of diabetic retinopathy, which is one of the leading causes of blindness. Many patients with diabetic retinopathy lose their vision because the disease is detected late. This paper therefore proposes a novel method for the automatic segmentation of hard exudates in retinal images. (Methods): Existing methods are based on either supervised or unsupervised learning techniques. Learned segmentation models often miss or falsely detect hard exudates because of their limited distinguishing characteristics, their intra-class variation, and their similarity to other components of the retinal image. In this paper, supervised learning based on a multilayer perceptron (MLP) is therefore used only to identify initial seeds that can be classified as hard exudates with high confidence. The segmentation is then finalized by unsupervised learning based on an iterative graph cut (GC) using clusters of these initial seeds. In addition, to reduce the color intra-variation of hard exudates across retinal images, color transfer (CT) is applied in the pre-processing step to normalize their color information. (Results): Experiments and comparisons with existing methods are based on two well-known datasets, e_ophtha EX and DIARETDB1. The proposed method outperforms the existing methods in the literature, with a pixel-level sensitivity of 0.891 on the DIARETDB1 dataset and 0.564 on the e_ophtha EX dataset. Cross-dataset validation, in which training is performed on one dataset and testing on another, is also evaluated to illustrate the robustness of the proposed method. (Conclusions): This newly proposed method integrates supervised and unsupervised learning techniques.
It achieves improved performance compared with the existing methods in the literature. Its robustness in the cross-dataset scenario could enhance its practical usage: the trained model could generalize better to unseen data in real-world situations, especially when the capturing environments of the training and testing images differ.
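The color-transfer normalization described in the pre-processing step can be illustrated with a minimal per-channel statistics-matching sketch. This is an assumption about the general technique, not the paper's exact implementation: real color-transfer methods typically operate in a perceptual color space (e.g. Lab) rather than raw RGB, and the function name and tolerance here are illustrative.

```python
import numpy as np

def color_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the per-channel mean/std of `source` to `reference`.

    A simplified stand-in for a color-transfer pre-processing step:
    each channel is shifted and scaled so its statistics match the
    reference image, reducing color variation across images.
    """
    src = source.astype(np.float64)
    ref = reference.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        # Shift/scale the channel so its mean and std match the reference.
        out[..., c] = (src[..., c] - s_mu) * (r_sd / (s_sd + 1e-8)) + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

After this normalization, hard exudates in different retinal images share a more consistent color distribution, which makes the downstream seed classification easier.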
Efficient and precise cell counting for RNAi screening of Orientia tsutsugamushi infection using deep learning techniques
Acquiring fluorescent scrub typhus images through RNA interference screening for the analysis of 60 different human genes and 18 control genes poses challenges due to nonuniform or clumped cells and variations in image quality, rendering image-processing (IP) counting inadequate. This study addresses three key questions concerning the application of deep learning methods to this dataset. First, it explores the potential for object detection (OD) models to replace instance segmentation (IS) models in cell counting, striking a balance between accuracy and computational efficiency. Object detection models, including Faster R-CNN, You Only Look Once (YOLO), and Adaptive Training Sample Selection (ATSS) with reduced backbone sizes, outperform the instance segmentation models (Mask Region-Based Convolutional Neural Network, i.e. Mask R-CNN, and Cascade Mask R-CNN) with both deep and shallow backbones. Notably, ATSS with ResNet-50 achieves an impressive mean average precision of 0.884 in just 33.1 milliseconds. Second, reducing the feature-extractor size enhances cell-counting efficiency, with OD models featuring reduced backbones demonstrating improved performance and faster processing. Finally, deep learning (DL), especially OD models with shallow backbones, outperforms IP methods in both absolute and relative cell counting. This study demonstrates the potential for OD models to replace IS models, the efficiency gains achieved by reducing the feature extractor, and the superiority of DL over IP in cell-counting tasks.
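The cell-counting comparison above can be sketched as follows. The input format (a list of per-detection confidence scores for one image) and the confidence threshold are illustrative assumptions about how a detector's output would be turned into a count; they are not the study's exact evaluation protocol.

```python
def count_cells(scores, score_thresh=0.5):
    """Count detections whose confidence meets the threshold.

    `scores` is a list of confidence scores for one image, as an
    object detector (e.g. ATSS or YOLO) would produce, one score
    per detected cell candidate.
    """
    return sum(s >= score_thresh for s in scores)

def relative_count_error(predicted: int, ground_truth: int) -> float:
    """Relative counting error: |predicted - ground truth| / ground truth."""
    return abs(predicted - ground_truth) / ground_truth
```

For example, `count_cells([0.9, 0.8, 0.3])` yields 2 with the default threshold, and `relative_count_error(18, 20)` yields 0.1, i.e. a 10% relative miscount.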
Speed meets accuracy: Advanced deep learning for efficient Orientia tsutsugamushi bacteria assessment in RNAi screening
This study investigates the use of advanced computer vision techniques for assessing the severity of Orientia tsutsugamushi bacterial infectivity. It uses fluorescent scrub typhus images obtained from molecular screening and addresses the challenges posed by a complex and extensive image dataset with limited computational resources. Our methodology integrates three key strategies within a deep learning framework: transitioning from instance segmentation (IS) models to an object detection model; reducing the model's backbone size; and employing lower-precision floating-point calculations. These approaches were systematically evaluated to strike an optimal balance between model accuracy and inference speed, which is crucial for effective bacterial-infectivity assessment. A significant outcome is that the Faster R-CNN architecture, with a shallow backbone and reduced precision, notably improves accuracy and reduces inference time in cell counting and infectivity assessment. This approach addresses the limitations of image-processing techniques and IS models, effectively bridging the gap between sophisticated computational methods and modern molecular biology applications. The findings underscore the potential of this integrated approach to enhance the accuracy and efficiency of bacterial-infectivity evaluations in molecular research.
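The lower-precision strategy can be illustrated with a small NumPy sketch comparing single-precision (float32) and half-precision (float16) versions of a toy linear layer. This is a generic illustration, not the study's pipeline: deep learning frameworks implement this as mixed-precision/FP16 inference, and the layer sizes and error tolerance below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
# A toy "layer": a weight matrix and an input batch, as in a detector backbone.
w32 = rng.standard_normal((256, 128)).astype(np.float32)
x32 = rng.standard_normal((8, 256)).astype(np.float32)

# Half-precision copies use 2 bytes per value instead of 4.
w16, x16 = w32.astype(np.float16), x32.astype(np.float16)

y32 = x32 @ w32
y16 = (x16 @ w16).astype(np.float32)  # compute in fp16, compare in fp32

mem_ratio = w16.nbytes / w32.nbytes   # 0.5: half the weight memory
max_err = np.abs(y32 - y16).max()     # small numerical drift vs. fp32
```

The memory footprint halves while the outputs stay close to the full-precision result, which is the trade-off that makes reduced-precision inference attractive when computational resources are limited.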