11 research outputs found

    Fetal brain tissue annotation and segmentation challenge results

    Get PDF
    In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. We therefore organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms at an international level. The challenge used the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks; the main differences between the submissions lay in the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, based on an asymmetrical U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
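The ensemble learning approach mentioned above (used by four of the top five teams) can be illustrated with a minimal per-voxel majority-voting sketch. This is a generic illustration of label-map ensembling, not any participating team's actual implementation:

```python
import numpy as np

def ensemble_vote(predictions):
    """Majority vote over per-voxel label maps produced by several models."""
    stacked = np.stack(predictions)       # shape: (n_models, *volume_shape)
    n_labels = int(stacked.max()) + 1
    # Count, for each voxel, how many models predicted each label
    counts = np.stack([(stacked == lab).sum(axis=0) for lab in range(n_labels)])
    return counts.argmax(axis=0)          # most-voted label per voxel

# Toy 1-D "volumes" from three models (0 = background, 1/2 = tissue labels)
p1 = np.array([1, 1, 2, 0])
p2 = np.array([1, 2, 2, 0])
p3 = np.array([1, 2, 2, 1])
print(ensemble_vote([p1, p2, p3]))  # → [1 2 2 0]
```

In practice, ensembling in segmentation challenges is often done by averaging softmax probabilities rather than hard voting, but the hard-label version above conveys the idea.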

    EMTL-Net: Boosting segmentation quality in histopathology images of gland and nuclei by explainable multitask learning network as an optimized strategy

    No full text
    Despite achieving human-level performance in gland and nuclei segmentation, modern deep learning techniques still suffer from the loss of regional context information and often disregard long-range semantic information during optimization. Furthermore, a typical issue with CNN-based approaches is their lack of clinical explainability. In this context, we propose an explainable multitask learning network (EMTL-Net) to enhance the quality of gland and nuclei segmentation in histopathology images. First, to obtain better feature maps, aid network learning, and reduce the computational cost of the encoding path, we introduce the tweaked residual module (TRM), which ensures that diagnostic structural information in histopathology images is not overlooked. We then design two modules to address the loss of regional context information: the complementary feature detection module (CFDM) for the intermediate layers and the cascade feature module (CFM) for the deeper layers of the encoder block. We also design a competitive decoder built around an efficient pyramid split attention (EPSA) unit that preserves long-range semantic information, stimulates robust features, and dynamically recalibrates spatial and channel-wise features. Extensive experiments on real-world datasets, including GlaS, MoNuSeg, and TNBC, show that the proposed model achieves a Dice Coefficient and Intersection over Union of 0.923 and 0.869 on GlaS, 0.855 and 0.741 on TNBC, and 0.829 and 0.713 on MoNuSeg, when compared with SOTA approaches for gland and nuclei segmentation. In addition, the suggested EMTL-Net is shown to focus on the gland and nuclei regions, meeting the needs of clinical scenarios.
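The abstract reports both Dice Coefficient and Intersection over Union (IoU, also called the Jaccard index). For a single prediction/reference mask pair these two metrics are related by the identity D = 2J / (1 + J); a small sketch of the conversion (note that dataset-level averages of Dice and IoU are computed separately and need not satisfy the identity exactly):

```python
def dice_from_iou(iou):
    """Dice from Jaccard for one mask pair: D = 2J / (1 + J)."""
    return 2.0 * iou / (1.0 + iou)

def iou_from_dice(dice):
    """Inverse relation: J = D / (2 - D)."""
    return dice / (2.0 - dice)

# e.g. an IoU of 0.869 corresponds to a Dice of about 0.930 per mask pair
print(round(dice_from_iou(0.869), 3))
```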

    Clustering analysis for the evolutionary relationships of SARS-CoV-2 strains

    No full text
    To explore the differences and relationships between the available SARS-CoV-2 strains and predict their potential evolutionary direction, we employ hierarchical clustering analysis to investigate the evolutionary relationships between SARS-CoV-2 strains, using genomic sequences collected in China until January 7, 2023. We encode the sequences of the existing SARS-CoV-2 strains into numerical data using the k-mer algorithm, then propose four methods to select a representative sample from each type of strain to form the dataset for clustering analysis. Three hierarchical clustering algorithms, named Ward-Euclidean, Ward-Jaccard, and Average-Euclidean, are introduced by combining the Euclidean and Jaccard distances with the Ward and Average linkage clustering algorithms embedded in the OriginPro software. Experimental results reveal that the BF.28, BE.1.1.1, BA.5.3, and BA.5.6.4 strains exhibit distinct characteristics not observed in other types of SARS-CoV-2 strains, suggesting they are the most likely sources from which future SARS-CoV-2 strains may evolve. Moreover, the BA.2.75, CH.1.1, BA.2, BA.5.1.3, BF.7, and B.1.1.214 strains demonstrate enhanced immune evasion, transmissibility, and pathogenicity. Hence, closely monitoring the evolutionary trends of these strains is crucial to mitigating their impact on public health and society as far as possible.

    Head and neck tumor segmentation in PET/CT: The HECKTOR challenge

    No full text
    This paper presents the post-analysis of the first edition of the HEad and neCK TumOR (HECKTOR) challenge. This challenge was held as a satellite event of the 23rd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2020, and was the first of its kind focusing on lesion segmentation in combined FDG-PET and CT image modalities. The challenge task was the automatic segmentation of the Gross Tumor Volume (GTV) of head and neck (H&N) oropharyngeal primary tumors in FDG-PET/CT images. To this end, the participants were given a training set of 201 cases from four different centers, and their methods were tested on a held-out set of 53 cases from a fifth center. The methods were ranked according to the Dice Score Coefficient (DSC) averaged across all test cases. An additional inter-observer agreement study was organized to assess the difficulty of the task from a human perspective. 64 teams registered for the challenge, among which 10 provided a paper detailing their approach. The best method obtained an average DSC of 0.7591, a large improvement over our proposed baseline method and the inter-observer agreement, associated with DSCs of 0.6610 and 0.61, respectively. The automatic methods successfully leveraged the wealth of metabolic and structural properties of the combined PET and CT modalities, significantly outperforming the human inter-observer agreement level, semi-automatic thresholding based on PET images, and other single-modality methods. This promising performance is one step toward large-scale radiomics studies in H&N cancer, obviating the need for error-prone and time-consuming manual delineation of GTVs.
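The ranking metric described above, the DSC averaged across all test cases, can be sketched for binary GTV masks. A minimal illustration with toy flattened masks standing in for 3-D segmentations:

```python
import numpy as np

def dice(pred, gt):
    """Binary Dice (DSC) between a predicted and a reference GTV mask."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def mean_dsc(cases):
    """DSC averaged across all test cases, the HECKTOR ranking metric."""
    return float(np.mean([dice(p, g) for p, g in cases]))

# Toy flattened masks: prediction covers 2 of the 3 reference voxels
ref  = np.array([1, 1, 1, 0, 0], dtype=bool)
pred = np.array([1, 1, 0, 0, 0], dtype=bool)
print(mean_dsc([(pred, ref)]))  # 2*2 / (2+3) = 0.8
```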