7 research outputs found

    Semi-Supervised Semantic Segmentation Methods for UW-OCTA Diabetic Retinopathy Grade Assessment

    People with diabetes are far more likely to develop diabetic retinopathy (DR) than healthy people, and DR is a leading cause of blindness. At present, the diagnosis of DR relies mainly on experienced clinicians recognizing fine features in color fundus images, which is time-consuming. Therefore, to promote the development of automatic UW-OCTA DR detection, we propose a novel semi-supervised semantic segmentation method for UW-OCTA DR image grade assessment. The method first uses the MAE algorithm to perform semi-supervised pre-training on the UW-OCTA DR grade assessment dataset, mining the supervisory information in the UW-OCTA images and thereby alleviating the need for labeled data. Secondly, to mine the lesion features of each region in the UW-OCTA image more fully, we construct a cross-algorithm ensemble DR tissue segmentation algorithm by deploying three sub-algorithms with different visual feature processing strategies: pre-trained MAE, ConvNeXt, and SegFormer. Based on the initials of these three sub-algorithms, the ensemble is named MCS-DRNet. Finally, we use MCS-DRNet as an inspector to check and revise the preliminary results of the DR grade evaluation algorithm. The experimental results show that the mean dice similarity coefficients of MCS-DRNet v1 and v2 are 0.5161 and 0.5544, respectively, and the quadratic weighted kappa of the DR grading evaluation is 0.7559. Our code will be released soon.
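The quadratic weighted kappa reported above is a standard agreement metric for ordinal grading tasks such as DR severity. A minimal self-contained sketch of how it is computed (the grade values and toy data below are illustrative, not from the paper):

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Quadratic weighted Cohen's kappa for ordinal grades 0..n_classes-1."""
    # Observed confusion matrix.
    O = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    # Quadratic disagreement weights: penalty grows with grade distance.
    i = np.arange(n_classes)
    W = (i[:, None] - i[None, :]) ** 2 / (n_classes - 1) ** 2
    # Expected matrix under chance agreement (outer product of marginals).
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# Toy example with three DR grades.
print(quadratic_weighted_kappa([0, 1, 2, 2, 1, 0], [0, 2, 2, 1, 1, 0], 3))
```

A kappa of 1 indicates perfect agreement with the reference grades, 0 indicates chance-level agreement, and off-by-one grading errors are penalized less than larger ones.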

    SwinUNeLCsT: Global–local spatial representation learning with hybrid CNN–transformer for efficient tuberculosis lung cavity weakly supervised semantic segmentation

    Radiological diagnosis of lung cavities (LCs) is key to identifying tuberculosis (TB). Conventional deep learning methods rely on a large amount of accurate pixel-level data to segment LCs, a process that is time-consuming and laborious, especially for subtle LCs. To address these challenges, we first introduce a novel 3D convolutional neural network (CNN)–transformer hybrid model for TB LC imaging (SwinUNeLCsT). The core idea of SwinUNeLCsT is to combine local details and global dependencies in TB CT scan feature representation to effectively improve the recognition of LCs. Secondly, to reduce the dependence on accurate pixel-level annotations, we design an end-to-end weakly supervised semantic segmentation (WSSS) framework for LCs. With this framework, radiologists need only annotate the number and approximate location (e.g., left lung, right lung, or both) of LCs in a CT scan to achieve efficient segmentation, eliminating the need to meticulously draw boundaries and greatly reducing annotation cost. Extensive experimental results show that SwinUNeLCsT outperforms currently popular medical 3D segmentation methods in the fully supervised paradigm, and our WSSS framework based on SwinUNeLCsT also performs best among existing state-of-the-art medical 3D WSSS methods.

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but it cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutive GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres. This approach produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also characterizes the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence-per-cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data.
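The core of the recommended protocol is a conversion factor from blank-corrected OD to particle count, fitted from a serial dilution of microspheres of known concentration. A minimal sketch under the assumption of a proportional (through-origin) fit within the linear range; the dilution values below are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical calibration data: a two-fold serial dilution of silica
# microspheres of known particle count per well, with blank-corrected OD600.
particles = np.array([3.0e8, 1.5e8, 7.5e7, 3.75e7, 1.875e7, 9.375e6])
od600     = np.array([0.60, 0.30, 0.15, 0.075, 0.0375, 0.01875])

# Within the instrument's linear range, count is proportional to OD.
# Least-squares fit of a single scale factor (particles per OD unit).
scale = (particles @ od600) / (od600 @ od600)

def od_to_count(od):
    """Convert a blank-corrected OD reading to an estimated cell count."""
    return scale * od

print(f"{od_to_count(0.2):.3e}")  # estimated particles at OD 0.2
```

In practice the fit should only be applied inside the instrument's effective linear range, which the same dilution series reveals as the region where doubling the particle count doubles the OD.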

    Segmentation of pulmonary cavity in lung CT scan for tuberculosis disease

    The complexity of pulmonary tuberculosis (TB) lung cavity lesion features significantly increases the cost of semantic segmentation labelling, and this high cost has limited the development of automatic TB recognition to some extent. To address this issue, we developed an algorithm that automatically generates a semantic segmentation mask of TB lesions from a TB object detection bounding box. Pulmonologists only need to identify and label the location of TB, and the algorithm automatically generates the semantic segmentation mask of TB lesions within the labelled area. The algorithm first calculates the optimal threshold for separating the lesion from the background region. Then, based on this threshold, it extracts the lesion tissue within the bounding box to form a mask usable for semantic segmentation tasks. Finally, we use the generated TB semantic segmentation masks to train Unet and Vnet models to verify the effectiveness of the algorithm. The experimental results demonstrate that Unet and Vnet achieve mean Dice coefficients of 0.612 and 0.637, respectively, in identifying TB lesion tissue.
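The box-to-mask idea can be sketched with a classic optimal-threshold criterion (Otsu's method, used here as a stand-in since the abstract does not name the thresholding rule) applied inside the bounding box, plus the Dice coefficient used for evaluation. All function names and the synthetic example are illustrative, not the paper's implementation:

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """Otsu's method: threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=n_bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = edges[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = p[:k].sum(), p[k:].sum()       # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, edges[k]
    return best_t

def box_to_mask(image, box):
    """Generate a lesion mask inside a (y0, y1, x0, x1) bounding box."""
    y0, y1, x0, x1 = box
    roi = image[y0:y1, x0:x1]
    t = otsu_threshold(roi.ravel())
    mask = np.zeros_like(image, dtype=bool)
    mask[y0:y1, x0:x1] = roi > t  # lesion assumed brighter than background
    return mask

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2 * (a & b).sum() / (a.sum() + b.sum())

# Synthetic 2D example: a bright square "lesion" inside a larger box.
img = np.zeros((10, 10))
img[3:6, 3:6] = 1.0
pred = box_to_mask(img, (2, 8, 2, 8))
print(dice(pred, img > 0.5))
```

The generated masks can then serve as pseudo-labels for training segmentation networks such as Unet or Vnet, as the abstract describes.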

    DeepPulmoTB: A benchmark dataset for multi-task learning of tuberculosis lesions in lung computerized tomography (CT)

    Tuberculosis (TB) remains a significant global health challenge, characterized by high incidence and mortality rates. With the rapid advancement of computer-aided diagnosis (CAD) tools in recent years, CAD has assumed an increasingly crucial role in supporting TB diagnosis. Nonetheless, the development of CAD for TB diagnosis relies heavily on well-annotated computerized tomography (CT) datasets, and the annotations currently available in TB CT datasets are still limited, which in turn restricts the development of CAD tools for TB diagnosis. To address this limitation, we introduce DeepPulmoTB, a CT multi-task learning dataset explicitly designed for TB diagnosis. To demonstrate its advantages, we propose a novel multi-task learning model, DeepPulmoTBNet (DPTBNet), for the joint segmentation and classification of lesion tissues in CT images. DPTBNet comprises two subnets: SwinUnetR for the segmentation task and a lightweight multi-scale network for the classification task. Furthermore, to enhance the model's capacity to capture TB lesion features, we introduce an improved iterative optimization algorithm that refines feature maps by integrating probability maps obtained in previous iterations. Extensive experiments validate the effectiveness of DPTBNet and the practicality of the DeepPulmoTB dataset.
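The iterative refinement step is described only at a high level; the following is a toy sketch of one plausible reading, in which feature maps are reweighted by the probability map from the previous iteration. The update rule, names, and shapes here are assumptions for illustration, not DPTBNet's actual algorithm:

```python
import numpy as np

def iterative_refine(features, n_iters=3):
    """Toy probability-guided refinement: each iteration reweights the
    per-pixel feature vectors by the previous iteration's lesion
    probability map, then re-predicts the map (illustrative only)."""
    h, w, _ = features.shape
    prob = np.full((h, w), 0.5)                # uninformative initial map
    for _ in range(n_iters):
        weighted = features * prob[..., None]  # emphasize likely-lesion pixels
        logits = weighted.sum(axis=-1)         # toy per-pixel score
        prob = 1.0 / (1.0 + np.exp(-logits))   # next probability map
    return prob

# Example: random "feature maps" for a 4x4 image with 8 channels.
feats = np.random.default_rng(0).normal(size=(4, 4, 8))
print(iterative_refine(feats).shape)
```

The point of such a loop is that pixels the model already believes are lesions contribute more strongly to the next round of feature aggregation, sharpening the map over iterations.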