
    CTransNet: Convolutional Neural Network Combined with Transformer for Medical Image Segmentation

    The Transformer has been widely used for many NLP tasks, but there is still much room to explore its application to the image domain. In this paper, we propose a simple and efficient hybrid Transformer framework, CTransNet, which combines self-attention and CNNs to improve medical image segmentation performance by capturing long-range dependencies at different scales. To this end, this paper proposes an effective self-attention mechanism incorporating relative position encoding, which reduces the time complexity of self-attention from O(n²) to O(n), and a new self-attention decoder that can recover fine-grained features from the encoder via skip connections. This paper aims to address the current dilemma of Transformer applications: the need to learn inductive bias from large amounts of training data. The hybrid layer in CTransNet allows the Transformer to be initialized as a CNN without pre-training. We have evaluated the performance of CTransNet on several medical segmentation datasets. CTransNet shows superior segmentation performance, robustness, and great promise for generalization to other medical image segmentation tasks.
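    The abstract does not specify how CTransNet reduces self-attention from O(n²) to O(n). As a generic illustration only (not the paper's mechanism), the sketch below shows one well-known way to make attention linear in sequence length: apply separate softmaxes to the queries and keys and re-associate the matrix product, so the n×n score matrix is never formed.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def quadratic_attention(Q, K, V):
    # Standard self-attention: builds an n x n score matrix, O(n^2) in n.
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)
    return scores @ V

def linear_attention(Q, K, V):
    # One common linearization ("efficient attention"-style factorization):
    # softmax Q over the feature axis, K over the sequence axis, then
    # compute Q' @ (K'^T @ V), which costs O(n * d^2) instead of O(n^2 * d).
    Qp = softmax(Q, axis=-1)   # rows sum to 1
    Kp = softmax(K, axis=0)    # columns sum to 1 over the sequence
    return Qp @ (Kp.T @ V)     # (n, d) @ ((d, n) @ (n, d)) -> (n, d)

rng = np.random.default_rng(0)
n, d = 16, 8
Q, K, V = rng.standard_normal((3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (16, 8)
```

    Because both softmaxes produce convex weights, every output row stays inside the convex hull of the value rows, just as in ordinary attention; only the order of multiplication changes.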

    RGN-Net: A Global Contextual and Multiscale Information Association Network for Medical Image Segmentation

    Segmentation of medical images is a necessity for the development of healthcare systems, particularly for illness diagnosis and treatment planning. Recently, convolutional neural networks (CNNs) have achieved remarkable success in automatically segmenting medical images to identify organs or lesions. However, the majority of these approaches cannot segment objects of varying sizes or train on small, skewed datasets, both of which are typical in biomedical applications. Existing solutions use multi-scale fusion strategies to handle the difficulties posed by varying sizes, but they often employ complicated models better suited to general semantic segmentation problems in computer vision. In this research, we present RGN-Net, an end-to-end dual-branch split architecture that takes greater account of the benefits of the two networks. Our technique can effectively establish long-range functional relationships and capture global contextual information. Experiments on the Lung, MoNuSeg, and DRIVE datasets show that RGN-Net reaches state-of-the-art performance.

    Robust Outlier Detection Method Based on Local Entropy and Global Density

    To date, most outlier-detection algorithms struggle to accurately detect both point anomalies and cluster anomalies simultaneously. Furthermore, several K-nearest-neighbor-based anomaly-detection methods exhibit excellent performance on many datasets, but their sensitivity to the value of K is a critical issue that needs to be addressed. To address these challenges, we propose a novel robust anomaly-detection method called Entropy Density Ratio Outlier Detection (EDROD). This method incorporates the probability density of each sample as the global feature and the local entropy around each sample as the local feature to obtain a comprehensive indicator of abnormality for each sample, called the Entropy Density Ratio (EDR) for short. By comparing several competing anomaly-detection methods on both synthetic and real-world datasets, we find that EDROD can detect both point anomalies and cluster anomalies simultaneously and accurately. In addition, EDROD exhibits strong robustness to the number of selected neighboring samples, the dimension of the samples in the dataset, and the size of the dataset. Therefore, the proposed EDROD method can be applied to a variety of real-world datasets to detect anomalies accurately and robustly.
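    The abstract names the two ingredients (a global density feature and a local entropy feature, combined into a ratio) but not their exact formulas. The sketch below is an illustrative stand-in only: it uses a k-nearest-neighbor density proxy and the entropy of normalized neighbor distances, which are plausible but assumed estimators, not EDROD's published definitions.

```python
import numpy as np

def edr_scores(X, k=5):
    """Illustrative entropy/density anomaly indicator (not EDROD's exact
    formulas): kNN-based density as the global feature, entropy of the
    normalized neighbor distances as the local feature."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                  # ignore self-distance
    knn = np.sort(D, axis=1)[:, :k]              # k nearest-neighbor distances
    density = 1.0 / (knn.mean(axis=1) + 1e-12)   # global density proxy
    p = knn / knn.sum(axis=1, keepdims=True)     # normalize distances
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)  # local entropy proxy
    return entropy / density                     # higher -> more anomalous

rng = np.random.default_rng(1)
cluster = rng.normal(0.0, 0.1, size=(50, 2))     # dense cluster at origin
outlier = np.array([[5.0, 5.0]])                 # one distant point anomaly
scores = edr_scores(np.vstack([cluster, outlier]))
print(scores.argmax())  # -> 50, the appended outlier scores highest
```

    Note how the ratio combines both views: the isolated point has low density and therefore dominates the score even though its local entropy is similar to the cluster points'.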

    Task Decomposition and Synchronization for Semantic Biomedical Image Segmentation

    Semantic segmentation is essential to biomedical image analysis. Many recent works focus mainly on integrating the Fully Convolutional Network (FCN) architecture with sophisticated convolution implementations and deep supervision. In this paper, we propose to decompose the single segmentation task into three sub-tasks: (1) pixel-wise image segmentation, (2) prediction of the class labels of the objects within the image, and (3) classification of the scene the image belongs to. While these three sub-tasks are trained to optimize their individual loss functions at different perceptual levels, we propose to let them interact through a task-task context ensemble. Moreover, we propose a novel sync-regularization to penalize the deviation between the outputs of the pixel-wise segmentation and class prediction tasks. These effective regularizations help the FCN utilize context information comprehensively and attain accurate semantic segmentation, even though the number of training images may be limited in many biomedical applications. We have successfully applied our framework to three diverse 2D/3D medical image datasets, including the Robotic Scene Segmentation Challenge 18 (ROBOT18), the Brain Tumor Segmentation Challenge 18 (BRATS18), and the Retinal Fundus Glaucoma Challenge (REFUGE18), and achieved top-tier performance in all three challenges. Comment: IEEE Transactions on Medical Imaging
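    The sync-regularization idea, penalizing disagreement between the pixel-wise segmentation output and the image-level class prediction, can be sketched as follows. The abstract does not give the aggregation or penalty used in the paper, so the max-pooling reduction and squared-error penalty below are assumed stand-ins for illustration only.

```python
import numpy as np

def sync_regularization(seg_probs, cls_probs):
    """Hedged sketch of a sync-regularization term.

    seg_probs: (C, H, W) per-pixel class probabilities from the
               segmentation sub-task.
    cls_probs: (C,) image-level class-presence probabilities from the
               classification sub-task.

    Reduce the segmentation map to image-level class presence via a max
    over pixels, then penalize squared deviation from the classifier's
    output (both choices are illustrative, not the paper's definitions).
    """
    presence = seg_probs.max(axis=(1, 2))          # (C,) strongest pixel evidence
    return float(((presence - cls_probs) ** 2).mean())

# Toy values: two classes on a 4x4 map; class 0 present at one pixel.
seg = np.zeros((2, 4, 4))
seg[0, 1, 1] = 0.9
seg[1] = 0.05
cls_agree = np.array([0.9, 0.05])
cls_conflict = np.array([0.1, 0.95])
print(sync_regularization(seg, cls_agree))     # 0.0: branches agree
print(sync_regularization(seg, cls_conflict))  # 0.725: branches disagree
```

    Added to the three per-task losses, such a term pushes the two branches toward consistent predictions, which is the stated purpose of the regularization.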

    Identification of novel pathways and immune profiles related to sarcopenia

    Introduction: Sarcopenia is a progressive deterioration of skeletal muscle mass, strength, and function.
    Methods: To uncover the underlying cellular and biological mechanisms, we studied the association between sarcopenia's three stages and patient ethnicity, identified a gene regulatory network based on motif enrichment in the upregulated gene set of sarcopenia, and compared the immunological landscape among sarcopenia stages.
    Results: We found that sarcopenia (S) was associated with the GnRH, neurotrophin, Rap1, Ras, and p53 signaling pathways. Low muscle mass (LMM) patients showed activated pathways of VEGF signaling, B-cell receptor signaling, ErbB signaling, and T-cell receptor signaling. Low muscle mass and physical performance (LMM_LP) patients showed lower enrichment scores in B-cell receptor signaling, apoptosis, HIF-1 signaling, and adaptive immune response pathways. Five genes common to the DEGs and the elastic net regression model, TTC39DP, SLURP1, LCE1C, PTCD2P1, and OR7E109P, were differentially expressed between S patients and healthy controls. SLURP1 and LCE1C showed the highest expression levels in sarcopenic patients of Chinese descent compared with Caucasians and Afro-Caribbeans. Gene regulatory analysis of the top upregulated genes in S patients yielded a top-scoring regulon containing GATA1, GATA2, and GATA3 as master regulators and nine predicted direct target genes. Two genes were associated with locomotion: POSTN and SLURP1. TTC39DP upregulation was associated with a better prognosis and a stronger immune profile in S patients, whereas upregulation of SLURP1 and LCE1C was associated with a worse prognosis and a weaker immune profile.
    Conclusion: This study provides new insight into the cellular and immunological aspects of sarcopenia and evaluates age- and sarcopenia-related modifications of skeletal muscle.

    Using Generalized Procrustes Analysis (GPA) for normalization of cDNA microarray data

    Background: Normalization is essential in dual-labelled microarray data analysis to remove non-biological variation and systematic biases. Many normalization methods have been used to remove such biases within slides (Global, Lowess) and across slides (Scale, Quantile, and VSN). However, all these popular approaches make critical assumptions about the data distribution that are often not valid in practice.
    Results: In this study, we propose a novel assumption-free normalization method based on the Generalized Procrustes Analysis (GPA) algorithm. Using experimental and simulated normal microarray data and boutique array data, we systematically evaluate the GPA method against six other popular normalization methods: Global, Lowess, Scale, Quantile, VSN, and a boutique-array-specific housekeeping-gene method. The assessment is based on three empirical criteria: across-slide variability, the Kolmogorov-Smirnov (K-S) statistic, and the mean square error (MSE). Compared with the other methods, the GPA method performs effectively and consistently better in reducing across-slide variability and removing systematic bias.
    Conclusion: The GPA method is an effective normalization approach for microarray data analysis. In particular, it is free from the statistical and biological assumptions inherent in other normalization methods, which are often difficult to validate. Therefore, the GPA method has a major advantage in that it can be applied to diverse types of array sets, especially boutique arrays, where the majority of genes may be differentially expressed.
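    Generalized Procrustes Analysis itself is a standard algorithm: each configuration is superimposed onto the current consensus by an optimal translation, rotation, and scaling, and the consensus is re-estimated until convergence. The minimal sketch below implements that generic iteration on small 2-D point sets for clarity; the paper applies the same idea to microarray slides, and the demo data here are purely illustrative.

```python
import numpy as np

def procrustes_align(X, target):
    """Superimpose X onto target with the optimal translation, rotation
    (via SVD of the cross-covariance), and scale."""
    Xc = X - X.mean(axis=0)
    Tc = target - target.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc.T @ Tc)
    Omega = U @ Vt                        # optimal orthogonal map
    s = S.sum() / (Xc ** 2).sum()         # optimal scale factor
    return s * Xc @ Omega + target.mean(axis=0)

def gpa(arrays, iters=10):
    """Generalized Procrustes Analysis: iteratively align every array to
    the mean configuration, then update the mean."""
    aligned = [a - a.mean(axis=0) for a in arrays]
    for _ in range(iters):
        mean_shape = np.mean(aligned, axis=0)
        aligned = [procrustes_align(a, mean_shape) for a in aligned]
    return aligned

# Demo: a "slide" and a rotated, scaled, shifted copy of it.
rng = np.random.default_rng(2)
base = rng.standard_normal((10, 2))
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
copy = 1.5 * base @ R + 2.0
aligned = gpa([base, copy])
print(np.allclose(aligned[0], aligned[1]))  # True: mapped to a common consensus
```

    Because the superimposition removes only location, orientation, and scale, differences that remain after GPA reflect genuine disagreement between configurations, which is why the method needs no distributional assumptions about the data.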