
    Editorial: Weakly supervised deep learning-based methods for brain image analysis

    In recent years, deep learning-based methods have been widely used in the field of brain image analysis and have achieved excellent performance in many tasks, including image segmentation, image reconstruction, and disease classification (Shen et al., 2017). Most existing deep learning-based methods rely on large-scale datasets with high-quality full annotations. However, acquiring such data is usually time-consuming and requires extensive expert experience. Moreover, because of individual differences in observer experience and understanding, large-scale, fully annotated datasets may suffer from large intra- and inter-observer variability, which can hinder their application in brain image analysis. In contrast, weak yet low-cost annotations (such as coarse annotations, partial annotations, or small-sample annotations) are much easier to collect than high-quality, fully detailed annotations. As a result, there is strong demand for innovative deep learning-based methodologies that can efficiently learn from weakly annotated data and achieve performance competitive with models trained on fully annotated data (Campanella et al., 2019).
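
    As a concrete illustration of one weak-annotation setting mentioned above (partial annotations), the following is a minimal PyTorch sketch of a segmentation loss computed only on labeled pixels, so unannotated regions contribute no gradient. The ignore-label convention and all names are illustrative assumptions, not details from the editorial or the cited works.

```python
import torch
import torch.nn.functional as F

IGNORE_LABEL = 255  # hypothetical marker for unannotated pixels

def partial_annotation_loss(logits, labels):
    """Cross-entropy over labeled pixels only.

    logits: (N, C, H, W) raw network outputs
    labels: (N, H, W) integer class map; IGNORE_LABEL marks pixels
            without annotations, which contribute no gradient.
    """
    return F.cross_entropy(logits, labels, ignore_index=IGNORE_LABEL)

# Usage: only the sparse subset of pixels that annotators labeled
# drives training; everything else is ignored by the loss.
logits = torch.randn(2, 4, 64, 64, requires_grad=True)
labels = torch.full((2, 64, 64), IGNORE_LABEL, dtype=torch.long)
labels[:, ::4, ::4] = torch.randint(0, 4, (2, 16, 16))  # sparse labels
loss = partial_annotation_loss(logits, labels)
loss.backward()
```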

    Dilated Dense U-Net for Infant Hippocampus Subfield Segmentation

    Accurate and automatic segmentation of infant hippocampal subfields from magnetic resonance (MR) images is an important step in studying memory-related infant neurological diseases. However, existing hippocampal subfield segmentation methods were generally designed for adult subjects, and their performance degrades on infant subjects because of insufficient tissue contrast and the rapidly changing structural patterns of early hippocampal development. In this paper, we propose a new fully convolutional network (FCN) for infant hippocampal subfield segmentation that embeds a dilated dense network in the U-net, termed DUnet. The embedded dilated dense network generates multi-scale features while keeping high spatial resolution, which is useful for fusing the low-level features in the contracting path with the high-level features in the expanding path. To further improve performance, we equip every pair of convolutional layers in the DUnet with a residual connection, obtaining the Residual DUnet (ResDUnet). Experimental results show that, compared with the classic 3D U-net, our proposed DUnet and ResDUnet improve the average Dice coefficient by 2.1% and 2.5%, respectively, for infant hippocampal subfield segmentation. The results also demonstrate that our methods outperform other state-of-the-art methods.
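
    To make the idea concrete, below is a minimal PyTorch sketch of a dilated dense block of the kind the abstract describes: densely connected convolutions with increasing dilation rates, which produce multi-scale features while preserving spatial resolution. It is written in 2D for brevity (the paper segments 3D MR volumes), and the channel widths, growth rate, and dilation rates are illustrative assumptions rather than the paper's exact DUnet configuration.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Densely connected 3x3 convolutions with growing dilation.

    Each layer sees the concatenation of all previous feature maps
    (dense connectivity) and uses a larger dilation rate, so the block
    produces multi-scale features at full spatial resolution.
    """
    def __init__(self, in_ch, growth=16, dilations=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            ch += growth  # dense connectivity grows the input width

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)  # multi-scale features, same H x W

# Sanity check: spatial size is preserved while channels grow.
block = DilatedDenseBlock(in_ch=32)
out = block(torch.randn(1, 32, 48, 48))
print(out.shape)  # torch.Size([1, 80, 48, 48]) -> 32 + 3 * 16 channels
```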

    A Splitting Collocation Method for Elliptic Interface Problems

    Bloch-Type Spaces of Minimal Surfaces

    We study Bloch-type spaces of minimal surfaces from the unit disk D into R^n and characterize them in terms of weighted Lipschitz functions. In addition, the boundedness of a composition operator Cϕ acting between two Bloch-type spaces is discussed.
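
    For context, a generic Bloch-type space on the unit disk is defined through a weight function that controls the growth of the derivative; the LaTeX sketch below records this standard formulation together with the composition operator. The specific weights and the minimal-surface variant studied in the paper may differ from this textbook version.

```latex
% Generic Bloch-type space on the unit disk D with weight \mu:
% a suitably differentiable map f : D -> R^n belongs to B_\mu when
% its weighted derivative is uniformly bounded.
\[
  \|f\|_{\mathcal{B}_\mu} \;=\; \sup_{z \in \mathbb{D}} \mu(z)\,\|Df(z)\| \;<\; \infty,
  \qquad \text{e.g. } \mu(z) = \bigl(1 - |z|^2\bigr)^{\alpha},\ \alpha > 0.
\]
% The composition operator acts by pre-composition with a self-map
% \varphi of the disk:
\[
  C_\varphi f \;=\; f \circ \varphi, \qquad \varphi : \mathbb{D} \to \mathbb{D},
\]
% and its boundedness between two Bloch-type spaces is typically
% characterized by growth conditions involving \varphi and the weights.
```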

    Deep fusion of multi-modal features for brain tumor image segmentation

    Accurate segmentation of pathological regions in brain magnetic resonance images (MRI) is essential for the diagnosis and treatment of brain tumors. Multi-modality MRIs, which offer diverse feature information, are commonly utilized in brain tumor image segmentation. Deep neural networks have become prevalent in this field; however, many approaches simply concatenate the different modalities and feed them directly into a neural network for segmentation, disregarding the unique characteristics and complementarity of each modality. In this study, we propose a brain tumor image segmentation method that leverages deep residual learning with multi-modality image feature fusion. Our approach extracts and fuses distinct and complementary features from the various modalities, fully exploiting the multi-modality information within a deep convolutional neural network to enhance segmentation performance. We evaluate the proposed method on the BraTS2021 dataset and demonstrate that deep residual learning with multi-modality image feature fusion significantly improves segmentation accuracy. Our method achieves competitive segmentation results, with Dice scores of 83.3%, 89.07%, and 91.44% for the enhancing tumor, tumor core, and whole tumor, respectively. These findings highlight the potential of our method to improve brain tumor diagnosis and treatment through accurate segmentation of pathological regions in brain MRIs.
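
    As a rough sketch of the fusion idea (one encoder per MRI modality, with the extracted features concatenated and refined by residual blocks), here is a minimal PyTorch example. The encoder depth, concatenation-based fusion, and channel sizes are illustrative assumptions; the paper's actual architecture and BraTS2021 training pipeline are not reproduced here.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3 conv residual block (deep residual learning)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class MultiModalFusionNet(nn.Module):
    """One small encoder per MRI modality; the per-modality features
    are concatenated (fused) and refined by residual blocks before a
    final segmentation head."""
    def __init__(self, n_modalities=4, feat=16, n_classes=4):
        super().__init__()
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(inplace=True))
            for _ in range(n_modalities)
        )
        fused = feat * n_modalities
        self.fusion = nn.Sequential(ResBlock(fused), ResBlock(fused))
        self.head = nn.Conv2d(fused, n_classes, 1)

    def forward(self, x):  # x: (N, n_modalities, H, W), one channel per modality
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        return self.head(self.fusion(torch.cat(feats, dim=1)))

# Example with four input modalities (e.g., T1, T1ce, T2, FLAIR) as channels.
net = MultiModalFusionNet()
print(net(torch.randn(1, 4, 64, 64)).shape)  # torch.Size([1, 4, 64, 64])
```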

    Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract

    Accurate lesion segmentation from endoscopy images is a fundamental task in the automated diagnosis of gastrointestinal (GI) tract diseases. Previous studies usually use hand-crafted features to represent endoscopy images, treating feature definition and lesion segmentation as two standalone tasks. Due to the possible heterogeneity between the features and the segmentation models, these methods often yield suboptimal performance. Several fully convolutional networks have recently been developed to jointly perform feature learning and model training for GI tract disease diagnosis. However, they generally ignore the local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may cause an irreversible loss of spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of GI tract endoscopy images, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. We then design two cascaded local subnetworks based on the output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. The feature maps learned by the three subnetworks are fused for the subsequent task of lesion segmentation. We evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormality segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves 74% and 85% mean intersection over union (mIoU) on the two datasets, respectively, outperforming several state-of-the-art approaches to automated lesion segmentation with GI tract endoscopy images.
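
    The following minimal PyTorch sketch mirrors the described structure: a global subnetwork that downsamples to capture high-level semantic context, two cascaded local subnetworks operating on its output feature maps, and a fusion of the three streams for per-pixel prediction. All layer counts, channel widths, and the upsampling scheme are illustrative assumptions, not the paper's MCNet configuration.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout, stride=1):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=stride, padding=1),
        nn.ReLU(inplace=True),
    )

class MCNetSketch(nn.Module):
    """Global subnetwork + two cascaded local subnetworks; the three
    feature streams are fused for lesion segmentation."""
    def __init__(self, n_classes=2, feat=32):
        super().__init__()
        # Global subnetwork: strided convs capture high-level context.
        self.global_net = nn.Sequential(
            conv_block(3, feat, stride=2), conv_block(feat, feat, stride=2),
        )
        # Two cascaded local subnetworks refine the global feature maps,
        # capturing local appearance in a multi-scale manner.
        self.local1 = conv_block(feat, feat)
        self.local2 = conv_block(feat, feat)
        self.up = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False)
        self.head = nn.Conv2d(3 * feat, n_classes, 1)  # fuse three streams

    def forward(self, x):
        g = self.global_net(x)   # global semantic context (1/4 resolution)
        l1 = self.local1(g)      # first local refinement
        l2 = self.local2(l1)     # cascaded second refinement
        fused = torch.cat([self.up(g), self.up(l1), self.up(l2)], dim=1)
        return self.head(fused)  # per-pixel lesion logits

net = MCNetSketch()
print(net(torch.randn(1, 3, 128, 128)).shape)  # torch.Size([1, 2, 128, 128])
```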