
    The Additional Symmetries for the BTL and CTL Hierarchies

    The Toda lattice (TL) hierarchy was first introduced by K. Ueno and K. Takasaki in \cite{uenotaksasai} to generalize the Toda lattice equations \cite{toda}. Following the work of E. Date, M. Jimbo, M. Kashiwara and T. Miwa \cite{DJKM} on the KP hierarchy, K. Ueno and K. Takasaki developed in \cite{uenotaksasai} the theory of the TL hierarchy: its algebraic structure, linearization, bilinear identity, $\tau$-function, and so on. The analogues of the B and C types for the TL hierarchy, i.e. the BTL and CTL hierarchies, are also considered in \cite{uenotaksasai}; they correspond to the infinite-dimensional Lie algebras $\textmd{o}(\infty)$ and $\textmd{sp}(\infty)$, respectively. In this paper, we focus on the study of the additional symmetries for the BTL and CTL hierarchies.
    Comment: 13 pages
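
    As a hedged reminder of the setting (standard Ueno--Takasaki conventions, not formulas quoted from this particular paper): the TL hierarchy is usually written in Lax form with two difference operators in the shift operator $\Lambda$, the BTL/CTL reductions impose extra algebraic constraints on them, and additional symmetries are extra commuting flows built from Orlov--Schulman-type operators. A minimal sketch:

```latex
% Minimal sketch of the TL hierarchy in Lax form (standard Ueno--Takasaki
% conventions; a reminder under stated assumptions, not quoted from this paper).
% \Lambda is the shift operator: (\Lambda f)(s) = f(s+1).
\[
  L = \Lambda + \sum_{j \le 0} u_j(s)\,\Lambda^{j}, \qquad
  \bar{L} = \bar{u}_{-1}(s)\,\Lambda^{-1} + \sum_{j \ge 0} \bar{u}_j(s)\,\Lambda^{j},
\]
\[
  \frac{\partial L}{\partial t_k} = \bigl[(L^{k})_{\ge 0},\, L\bigr], \qquad
  \frac{\partial L}{\partial \bar{t}_k} = \bigl[(\bar{L}^{k})_{< 0},\, L\bigr],
\]
% with analogous flows acting on \bar{L}.
```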

    Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis

    Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large amount of paired multi-modal data to train an effective synthesis model. However, sufficient paired data are often hard to obtain for supervised training; in practice, we usually have a small amount of paired data and a large amount of unpaired data. To take advantage of both paired and unpaired data, in this paper we propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis. Specifically, an Edge-preserving Masked AutoEncoder (Edge-MAE) is first pre-trained in a self-supervised manner to simultaneously perform 1) image imputation for randomly masked patches in each image and 2) whole edge map estimation, which effectively learns both contextual and structural information. Besides, a novel patch-wise loss is proposed to enhance the performance of Edge-MAE by treating different masked patches differently, according to the difficulty of their respective imputations. Based on this pre-training, in the subsequent fine-tuning stage, a Dual-scale Selective Fusion (DSF) module is designed (in our MT-Net) to synthesize missing-modality images by integrating multi-scale features extracted from the encoder of the pre-trained Edge-MAE. Further, this pre-trained encoder is also employed to extract high-level features from the synthesized image and the corresponding ground-truth image, which are required to be similar (consistent) during training. Experimental results show that our MT-Net achieves performance comparable to the competing methods even when using only 70% of all available paired data. Our code will be publicly available at https://github.com/lyhkevin/MT-Net.
    Comment: 13 pages, 15 figures
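
    A minimal PyTorch sketch of the patch-wise weighted pre-training loss described above. The function name, the error-proportional weighting rule, and edge_weight are illustrative assumptions, not the authors' implementation (see the linked repository for the real code):

```python
# Hypothetical sketch of an Edge-MAE-style pre-training loss: per-patch
# weighted reconstruction of masked patches plus whole-image edge-map
# regression. Names and the difficulty-based weighting rule are illustrative
# assumptions, not the paper's released code.
import torch
import torch.nn.functional as F

def edge_mae_loss(pred_patches, true_patches, mask, pred_edge, true_edge,
                  edge_weight=0.5):
    """pred_patches, true_patches: (B, N, P) flattened image patches;
    mask: (B, N), 1 marks masked (to-be-imputed) patches;
    pred_edge, true_edge: (B, 1, H, W) edge maps."""
    # Per-patch reconstruction error, averaged over the pixels of each patch.
    per_patch = ((pred_patches - true_patches) ** 2).mean(dim=-1)  # (B, N)
    # Patch-wise weighting: patches that are currently harder to impute
    # (larger error) get larger weights; computed without gradients so the
    # difficulty estimate does not feed back into itself.
    with torch.no_grad():
        weights = per_patch / (per_patch.mean(dim=1, keepdim=True) + 1e-8)
    recon = (weights * per_patch * mask).sum() / mask.sum().clamp(min=1.0)
    # Whole edge-map estimation over the full image.
    edge = F.mse_loss(pred_edge, true_edge)
    return recon + edge_weight * edge

# Usage with random tensors standing in for model outputs:
B, N, P = 2, 196, 256
loss = edge_mae_loss(torch.randn(B, N, P, requires_grad=True),
                     torch.randn(B, N, P),
                     (torch.rand(B, N) < 0.75).float(),  # ~75% patches masked
                     torch.randn(B, 1, 224, 224, requires_grad=True),
                     torch.randn(B, 1, 224, 224))
loss.backward()
```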

    Automatic Data Augmentation via Deep Reinforcement Learning for Effective Kidney Tumor Segmentation

    Conventional data augmentation, realized by performing simple pre-processing operations (e.g., rotation, crop, etc.), has been validated for its advantage in enhancing performance in medical image segmentation. However, the data generated by these conventional augmentation methods are random and sometimes harmful to the subsequent segmentation. In this paper, we develop a novel automatic, learning-based data augmentation method for medical image segmentation that models the augmentation task as a trial-and-error procedure using deep reinforcement learning (DRL). In our method, we combine the data augmentation module and the subsequent segmentation module in an end-to-end training manner with a consistent loss. Specifically, the best sequential combination of different basic operations is learned automatically by directly maximizing the performance improvement (i.e., Dice ratio) on the available validation set, as sketched below. We extensively evaluated our method on CT kidney tumor segmentation, and the results validate its promise.
    Comment: 5 pages, 3 figures
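
    To make the trial-and-error idea concrete, here is a bandit-style simplification of the reinforcement-learning search (illustrative only: the operation set, the epsilon-greedy policy, and the stubbed training routine are assumptions, not the paper's implementation); the reward is the Dice improvement on the validation set:

```python
# Bandit-style simplification of the trial-and-error augmentation search
# (illustrative only; the paper uses deep reinforcement learning, and its
# operation set, policy, and reward shaping may differ). Reward is the Dice
# improvement on the validation set over a no-augmentation baseline.
import random

OPS = ["rotate", "crop", "flip", "gamma", "noise"]  # assumed basic operations

def train_and_validate(aug_sequence):
    """Placeholder: train the segmenter with this augmentation sequence and
    return validation Dice. Replace with real training and evaluation."""
    return 0.80 + 0.01 * len(set(aug_sequence)) + random.uniform(-0.01, 0.01)

def search_policy(episodes=50, seq_len=3, eps=0.2):
    baseline = train_and_validate([])        # Dice without augmentation
    value = {op: 0.0 for op in OPS}          # running value of each operation
    count = {op: 0 for op in OPS}
    best_seq, best_dice = [], baseline
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the operations with the highest value.
        seq = [random.choice(OPS) if random.random() < eps
               else max(value, key=value.get) for _ in range(seq_len)]
        dice = train_and_validate(seq)
        reward = dice - baseline             # Dice improvement as the reward
        for op in seq:                       # credit every operation used
            count[op] += 1
            value[op] += (reward - value[op]) / count[op]
        if dice > best_dice:
            best_seq, best_dice = seq, dice
    return best_seq, best_dice

if __name__ == "__main__":
    print(search_policy())
```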