
    Rethinking Data Augmentation for Single-source Domain Generalization in Medical Image Segmentation

    Single-source domain generalization (SDG) in medical image segmentation is a challenging yet essential task, as domain shifts are common among clinical image datasets. Previous attempts mostly conduct global-only or random augmentation; the augmented samples are usually insufficient in diversity and informativeness and thus fail to cover the possible target-domain distribution. In this paper, we rethink the data augmentation strategy for SDG in medical image segmentation. Motivated by the class-level representation invariance and style mutability of medical images, we hypothesize that unseen target data can be sampled from a linear combination of C (the class number) random variables, where each variable follows a location-scale distribution at the class level. Accordingly, augmented data can be readily generated by sampling these random variables through a general form. On the empirical front, we implement this strategy with a constrained Bézier transformation on both global and local (i.e., class-level) regions, which greatly increases augmentation diversity. A Saliency-balancing Fusion mechanism is further proposed to enrich informativeness by engaging gradient information, guiding augmentation with proper orientation and magnitude. As an important contribution, we prove theoretically that the proposed augmentation yields an upper bound on the generalization risk on the unseen target domain, thus confirming our hypothesis. Combining the two strategies, our Saliency-balancing Location-scale Augmentation (SLAug) exceeds state-of-the-art works by a large margin on two challenging SDG tasks. Code is available at https://github.com/Kaiseem/SLAug.
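    The class-level location-scale idea can be illustrated with a minimal sketch: each class region of an image receives its own random shift (location) and gain (scale). The sketch below uses a simple affine intensity perturbation in place of the paper's constrained Bézier transformation, and all names and parameter values are assumptions for illustration, not the SLAug implementation.

```python
import numpy as np

def location_scale_augment(image, mask, num_classes, loc_std=0.1, scale_std=0.1, rng=None):
    """Per-class location-scale intensity perturbation (illustrative sketch).

    Each class region gets its own random shift (location) and gain (scale),
    approximating sampling from a class-level location-scale distribution.
    """
    rng = np.random.default_rng() if rng is None else rng
    augmented = image.copy()
    for c in range(num_classes):
        region = mask == c
        if not region.any():
            continue
        loc = rng.normal(0.0, loc_std)       # random location (shift)
        scale = rng.normal(1.0, scale_std)   # random scale (gain)
        augmented[region] = image[region] * scale + loc
    return np.clip(augmented, 0.0, 1.0)

# Toy example: a two-class image with intensities in [0, 1]
img = np.random.rand(64, 64).astype(np.float32)
seg = (img > 0.5).astype(np.int32)
aug = location_scale_augment(img, seg, num_classes=2)
```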

    Optogenetic Control of Non-Apoptotic Cell Death

    Herein, a set of optogenetic tools (designated LiPOP) that enables photoswitchable necroptosis and pyroptosis in live cells with varying kinetics is introduced. The LiPOP tools allow reconstruction of the key molecular steps involved in these two non-apoptotic cell death pathways by harnessing the power of light. Further, the use of LiPOPs coupled with upconversion nanoparticles or bioluminescence is demonstrated to achieve wireless optogenetic or chemo-optogenetic killing of cancer cells in multiple mouse tumor models. LiPOPs can trigger necroptotic and pyroptotic cell death in cultured prokaryotic or eukaryotic cells and in living animals, and they set the stage for studying the role of non-apoptotic cell death pathways during microbial infection and anti-tumor immunity.

    Identification of WRKY gene family members in amaranth based on a transcriptome database and functional analysis of AtrWRKY42-2 in betalain metabolism

    Introduction: WRKY transcription factors (TFs) contribute to the synthesis of secondary metabolites in plants. Betalains are natural pigments that do not coexist with anthocyanins within the same plant. Amaranthus tricolor (‘Suxian No.1’) is an important leaf vegetable rich in betalains. However, the WRKY family members in amaranth and their roles in betalain synthesis and metabolism remain unclear. Methods: To elucidate the molecular characteristics of the amaranth WRKY gene family and its role in betalain synthesis, WRKY gene family members were screened and identified using amaranth transcriptome data, and their physicochemical properties, conserved domains, phylogenetic relationships, and conserved motifs were analyzed using bioinformatics methods. Results: In total, 72 WRKY family members were identified from the amaranth transcriptome. Three WRKY genes involved in betalain synthesis were screened in the phylogenetic analysis of WRKY TFs. RT-qPCR showed that the expression levels of these three genes in red amaranth ‘Suxian No.1’ were higher than those in green amaranth ‘Suxian No.2’. It also showed that, in amaranth ‘Suxian No.1’, the expression level of the short splice variant AtrWRKY42-2 of the AtrWRKY42 gene was higher than that of the full-length transcript AtrWRKY42-1; the short splice variant AtrWRKY42-2 was also the mainly expressed form in ‘Suxian No.2’ amaranth. Moreover, the total expression levels of AtrWRKY42-1 and AtrWRKY42-2 were down-regulated after GA3 treatment, so AtrWRKY42-2 was identified as a candidate gene. Therefore, the cDNA sequence and gDNA sequence of the short splice variant AtrWRKY42-2 and the promoter sequence of AtrWRKY42 were cloned, and the pRI 101-AN-AtrWRKY42-2-EGFP vector was constructed to evaluate subcellular localization, revealing that AtrWRKY42-2 is located in the nucleus. The overexpression vector pRI 101-AN-AtrWRKY42-2-EGFP and the VIGS (virus-induced gene silencing) vector pTRV2-AtrWRKY42-2 were transferred into leaves of ‘Suxian No.1’ by an Agrobacterium-mediated method. The results showed that AtrWRKY42-2 overexpression could promote the expression of AtrCYP76AD1 and increase betalain synthesis. A yeast one-hybrid assay demonstrated that AtrWRKY42-2 binds to the AtrCYP76AD1 promoter to regulate betalain synthesis. Discussion: This study lays a foundation for further exploring the function of AtrWRKY42-2 in betalain metabolism.

    WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmark for Autonomous Driving on Water Surfaces

    Autonomous driving on water surfaces plays an essential role in executing hazardous and time-consuming missions, such as maritime surveillance, survivor rescue, environmental monitoring, hydrographic mapping, and waste cleaning. This work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces. Equipped with a 4D radar and a monocular camera, our Unmanned Surface Vehicle (USV) proffers all-weather solutions for discerning object-related information, including color, shape, texture, range, velocity, azimuth, and elevation. Focusing on typical static and dynamic objects on water surfaces, we label the camera images and radar point clouds at the pixel level and point level, respectively. In addition to basic perception tasks such as object detection, instance segmentation, and semantic segmentation, we also provide annotations for free-space segmentation and waterline segmentation. Leveraging the multi-task and multi-modal data, we conduct numerous experiments on the single modalities of radar and camera, as well as on the fused modalities. Results demonstrate that 4D radar-camera fusion can considerably enhance the robustness of perception on water surfaces, especially in adverse lighting and weather conditions. The WaterScenes dataset is publicly available at https://waterscenes.github.io.
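    To make the radar-camera fusion idea concrete, below is a minimal sketch of one common fusion step: projecting radar points into the camera image plane using calibration matrices. The function name, matrices, and toy values are assumptions for illustration and do not reflect the WaterScenes file format or toolkit.

```python
import numpy as np

def project_radar_to_image(points_xyz, K, T_radar_to_cam):
    """Project 3D radar points into the camera image plane.

    points_xyz: (N, 3) radar points in the radar frame.
    K: (3, 3) camera intrinsic matrix.
    T_radar_to_cam: (4, 4) extrinsic transform from the radar to the camera frame.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4) homogeneous points
    cam = (T_radar_to_cam @ homo.T).T[:, :3]                           # points in the camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                        # perspective division
    return uv, in_front

# Toy example with an identity extrinsic and a simple pinhole intrinsic
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)
pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 5.0]])
uv, valid = project_radar_to_image(pts, K, T)
```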

    A novel 3D unsupervised domain adaptation framework for cross-modality medical image segmentation

    We consider the problem of volumetric (3D) unsupervised domain adaptation (UDA) in cross-modality medical image segmentation, aiming to perform segmentation on an unannotated target domain (e.g., MRI) with the help of a labeled source domain (e.g., CT). Previous UDA methods in medical image analysis usually suffer from two challenges: 1) they process and analyze data at the 2D level only, thus missing semantic information along the depth dimension; 2) one-to-one mapping is adopted during the style-transfer process, leading to insufficient alignment in the target domain. Different from existing methods, our work conducts a first-of-its-kind investigation of multi-style image translation for complete image alignment to alleviate the domain-shift problem, and also introduces 3D segmentation into domain adaptation tasks to maintain semantic consistency along the depth dimension. In particular, we develop an unsupervised domain adaptation framework incorporating a novel quartet self-attention module to efficiently enhance relationships between widely separated features across spatial regions in a higher dimension, leading to a substantial improvement in segmentation accuracy on the unlabeled target domain. On two challenging cross-modality tasks, namely brain-structure and multi-organ abdominal segmentation, our model outperforms current state-of-the-art methods by a significant margin, demonstrating its potential as a benchmark resource for the biomedical and health informatics research community.
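    The abstract does not detail the quartet self-attention module, but the mechanism it builds on can be illustrated with a plain self-attention pass over a flattened 3D feature volume, which lets every voxel attend to every other voxel. This is a generic sketch under assumed shapes and projection matrices, not the paper's module.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def volumetric_self_attention(feat, Wq, Wk, Wv):
    """Plain self-attention over a flattened 3D feature map.

    feat: (C, D, H, W) feature volume; Wq, Wk, Wv: (C, C) projection matrices.
    Returns a volume of the same shape in which every voxel aggregates
    information from all other voxels, linking widely separated positions.
    """
    C, D, H, W = feat.shape
    tokens = feat.reshape(C, -1).T            # (N, C) with N = D*H*W voxels
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(C))      # (N, N) voxel-to-voxel weights
    out = attn @ v                            # aggregate features from all voxels
    return out.T.reshape(C, D, H, W)

# Toy example on a tiny feature volume
rng = np.random.default_rng(0)
C, D, H, W = 8, 4, 4, 4
x = rng.normal(size=(C, D, H, W))
Wq, Wk, Wv = (rng.normal(size=(C, C)) for _ in range(3))
y = volumetric_self_attention(x, Wq, Wk, Wv)
```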

    Evaluation of a computer-aided diagnostic model for corneal diseases by analyzing in vivo confocal microscopy images

    Objective: To automatically and rapidly recognize the layers of corneal images acquired with in vivo confocal microscopy (IVCM) and classify them as normal or abnormal, a computer-aided diagnostic model based on deep learning was developed and tested to reduce physicians’ workload. Methods: A total of 19,612 corneal images were retrospectively collected from 423 patients who underwent IVCM between January 2021 and August 2022 at Renmin Hospital of Wuhan University (Wuhan, China) and Zhongnan Hospital of Wuhan University (Wuhan, China). Images were reviewed and categorized by three corneal specialists before training and testing the models, including the layer recognition model (epithelium, Bowman’s membrane, stroma, and endothelium) and the diagnostic model, to identify the layers of corneal images and distinguish normal images from abnormal images. In total, 580 database-independent IVCM images were used in a human-machine competition to assess the speed and accuracy of image recognition by 4 ophthalmologists and artificial intelligence (AI). To evaluate the efficacy of the model, 8 trainees were asked to recognize these 580 images both with and without model assistance, and the results of the two evaluations were analyzed to explore the effects of model assistance. Results: In the internal test dataset, the accuracy of the model reached 0.914, 0.957, 0.967, and 0.950 for recognition of the epithelium, Bowman’s membrane, stroma, and endothelium layers, respectively, and 0.961, 0.932, 0.945, and 0.959 for recognition of normal/abnormal images at each of these layers, respectively. In the external test dataset, the accuracy of corneal layer recognition was 0.960, 0.965, 0.966, and 0.964, respectively, and the accuracy of normal/abnormal image recognition was 0.983, 0.972, 0.940, and 0.982, respectively. In the human-machine competition, the model achieved an accuracy of 0.929, which was similar to that of specialists and higher than that of senior physicians, and its recognition speed was 237 times faster than that of specialists. With model assistance, the accuracy of trainees increased from 0.712 to 0.886. Conclusion: A computer-aided diagnostic model based on deep learning was developed for IVCM images, which rapidly recognizes the layers of corneal images and classifies them as normal or abnormal. This model can increase the efficacy of clinical diagnosis and assist physicians in training and learning for clinical purposes.
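    The abstract describes a two-stage pipeline: layer recognition followed by per-layer normal/abnormal screening. Below is a minimal sketch of how such a pipeline could be wired together; the stand-in classifiers, class names, and decision threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal two-stage inference sketch: first identify the corneal layer,
# then run the layer-specific normal/abnormal classifier. The classifiers
# here are stand-in callables; in practice each would be a trained
# deep-learning model (an assumption, not the authors' code).
import numpy as np

LAYERS = ["epithelium", "bowmans_membrane", "stroma", "endothelium"]

def classify_ivcm_image(image, layer_model, abnormality_models):
    """Return (layer, is_abnormal) for a single IVCM image."""
    layer_idx = int(np.argmax(layer_model(image)))        # stage 1: layer recognition
    layer = LAYERS[layer_idx]
    p_abnormal = float(abnormality_models[layer](image))  # stage 2: per-layer screening
    return layer, p_abnormal >= 0.5

# Stand-in models that mimic the interface of trained classifiers
rng = np.random.default_rng(0)
layer_model = lambda img: rng.random(len(LAYERS))
abnormality_models = {name: (lambda img: rng.random()) for name in LAYERS}

image = rng.random((384, 384))  # placeholder IVCM image
print(classify_ivcm_image(image, layer_model, abnormality_models))
```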

    CD79A works as a potential target for the prognosis of patients with OSCC: analysis of immune cell infiltration in oral squamous cell carcinoma based on the CIBERSORTx deconvolution algorithm

    Objective: To analyze the abundance of infiltrating tumor immune cells in patients with oral squamous cell carcinoma (OSCC) and to search for potential targets that can predict patient prognosis. Methods: A total of 400 samples from 210 patients with OSCC were collected from The Cancer Genome Atlas (TCGA) database. CIBERSORTx was used to evaluate the infiltration abundance of tumor immune cells. Potential target genes for predicting patient prognosis were identified through case grouping, differential expression analysis, and enrichment analysis. Surgically excised tissue sections from patients with oral squamous cell carcinoma admitted to the Department of Oral and Maxillofacial Surgery, Second Affiliated Hospital of Shantou University Medical College, from 2015 to 2018 were collected and followed up. Results: The CIBERSORTx deconvolution algorithm was used to analyze the infiltration abundance of immune cells in the samples. A high infiltration abundance of naive and memory B lymphocytes was associated with an improved prognosis in OSCC patients. The prognosis of patients with low CD79A expression was significantly better than that of patients with high CD79A expression. Conclusion: CD79A can predict the infiltration abundance of B lymphocytes in the tumor microenvironment of patients with OSCC and is a potential target for predicting the prognosis of patients with OSCC. This study provides novel ideas for the treatment of OSCC and for predicting patient prognosis.
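    The prognostic claim rests on stratifying patients by CD79A expression and comparing survival between the groups. Below is a hedged sketch of that kind of analysis on toy data; the median split, the use of the lifelines library, and the column names are assumptions, not the authors' actual pipeline.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy cohort: expression values, follow-up time, and event indicator (1 = death observed)
df = pd.DataFrame({
    "CD79A_expr": [2.1, 5.4, 0.8, 6.3, 1.5, 4.9],
    "time_months": [34, 60, 12, 58, 20, 44],
    "event": [1, 0, 1, 0, 1, 0],
})

# Split patients into high/low CD79A expression groups at the median
high = df["CD79A_expr"] >= df["CD79A_expr"].median()

# Compare survival between groups with a log-rank test
result = logrank_test(df.loc[high, "time_months"], df.loc[~high, "time_months"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
print("log-rank p-value:", result.p_value)

# Kaplan-Meier estimate for one group
km = KaplanMeierFitter()
km.fit(df.loc[~high, "time_months"], df.loc[~high, "event"], label="CD79A low")
print(km.survival_function_)
```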