123 research outputs found

    Multimodal Optimal Transport-based Co-Attention Transformer with Global Structure Consistency for Survival Prediction

    Survival prediction is a complicated ordinal regression task that aims to predict the relative ranking of death risk, and it generally benefits from integrating histology and genomic data. Despite progress in joint learning from pathology and genomics, existing methods still face two challenging issues: 1) because of the large size of pathological images, it is difficult to effectively represent gigapixel whole slide images (WSIs); 2) interactions within the tumor microenvironment (TME) in histology are essential for survival analysis. Although current approaches attempt to model these interactions via co-attention between histology and genomic data, they focus only on dense local similarity across modalities and fail to capture global consistency between potential structures, i.e., TME-related interactions in histology and co-expression of genomic data. To address these challenges, we propose a Multimodal Optimal Transport-based Co-Attention Transformer framework with global structure consistency, in which optimal transport (OT) is applied to match WSI patches and gene embeddings, selecting informative patches to represent the gigapixel WSI. More importantly, OT-based co-attention provides global awareness that effectively captures structural interactions within the TME for survival prediction. To overcome the high computational complexity of OT, we propose a robust and efficient implementation over micro-batches of WSI patches that approximates the original OT with unbalanced mini-batch OT. Extensive experiments show the superiority of our method over state-of-the-art methods on five benchmark datasets. The code is released. Comment: 11 pages, 4 figures, accepted by ICCV 202
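
    A minimal sketch of the unbalanced mini-batch OT idea the abstract describes, written with standard entropic scaling iterations in NumPy. The embedding sizes, the cosine ground cost, and all hyperparameters here are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def unbalanced_sinkhorn(cost, a, b, eps=0.1, tau=1.0, n_iters=100):
    """Entropic unbalanced OT via Sinkhorn-like scaling iterations."""
    K = np.exp(-cost / eps)                 # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    fi = tau / (tau + eps)                  # exponent from the KL relaxation of the marginals
    for _ in range(n_iters):
        u = (a / (K @ v + 1e-16)) ** fi
        v = (b / (K.T @ u + 1e-16)) ** fi
    return u[:, None] * K * v[None, :]      # transport plan

rng = np.random.default_rng(0)
patch_emb = rng.normal(size=(256, 64))      # micro-batch of WSI patch embeddings (illustrative sizes)
gene_emb = rng.normal(size=(6, 64))         # genomic embeddings, e.g. one per gene group

# cosine distance as the ground cost between patches and gene embeddings
pn = patch_emb / np.linalg.norm(patch_emb, axis=1, keepdims=True)
gn = gene_emb / np.linalg.norm(gene_emb, axis=1, keepdims=True)
cost = 1.0 - pn @ gn.T

plan = unbalanced_sinkhorn(cost, np.full(256, 1 / 256), np.full(6, 1 / 6))

# The column-normalised plan acts as an OT-based co-attention map: each genomic
# embedding aggregates the WSI patches it transports the most mass to.
attn = plan / (plan.sum(axis=0, keepdims=True) + 1e-16)
context = attn.T @ patch_emb                # gene-guided summary of informative patches
```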

    Unsupervised Domain Adaptation via Deep Hierarchical Optimal Transport

    Unsupervised domain adaptation is a challenging task that aims to estimate a transferable model for an unlabeled target domain by exploiting labeled source data. Optimal transport (OT)-based methods have recently proven to be a promising direction for domain adaptation due to their competitive performance. However, most of these methods align the source and target distributions only coarsely, leading to an over-alignment problem in which category-discriminative information is mixed up even though domain-invariant representations can be learned. In this paper, we propose a Deep Hierarchical Optimal Transport method (DeepHOT) for unsupervised domain adaptation. The main idea is to use hierarchical optimal transport to learn both domain-invariant and category-discriminative representations by mining the rich structural correlations among domain data. The DeepHOT framework consists of a domain-level OT and an image-level OT, where the latter serves as the ground distance metric for the former. The image-level OT captures structural associations among local image regions that benefit image classification, while the domain-level OT learns domain-invariant representations by leveraging the underlying geometry of domains. Because of their high computational complexity, however, optimal transport-based models are limited in some scenarios. To this end, we propose a robust and efficient implementation of the DeepHOT framework that approximates the original OT with the sliced Wasserstein distance in the image-level OT and uses mini-batch unbalanced optimal transport for the domain-level OT. Extensive experiments show that DeepHOT surpasses state-of-the-art methods on four benchmark datasets. Code will be released on GitHub. Comment: 9 pages, 3 figures
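
    A small sketch of the two approximations this abstract mentions: a Monte-Carlo sliced Wasserstein distance as the image-level ground metric, fed into an entropic OT solver at the domain level. The shapes, batch sizes, and regularization values are illustrative assumptions rather than the DeepHOT configuration.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, rng=None):
    """Monte-Carlo sliced 1-Wasserstein distance between two point clouds
    (assumes both clouds have the same number of points)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.normal(size=(x.shape[1], n_proj))
    theta /= np.linalg.norm(theta, axis=0, keepdims=True)  # random unit directions
    xp = np.sort(x @ theta, axis=0)                        # 1-D OT reduces to sorting
    yp = np.sort(y @ theta, axis=0)
    return np.abs(xp - yp).mean()

def sinkhorn(cost, eps=0.05, n_iters=200):
    """Entropic (balanced) OT with uniform marginals."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v + 1e-16)
        v = b / (K.T @ u + 1e-16)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
src = rng.normal(size=(8, 16, 32))   # 8 source images x 16 local-region features x 32 dims
tgt = rng.normal(size=(8, 16, 32))   # 8 target images with the same layout

# Image-level OT approximated by sliced Wasserstein -> ground cost for the domain-level OT.
cost = np.array([[sliced_wasserstein(s, t, rng=rng) for t in tgt] for s in src])
plan = sinkhorn(cost)                # domain-level coupling between source and target images
```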

    Dynamic Embedding Size Search with Minimum Regret for Streaming Recommender System

    With the continuous increase in users and items, conventional recommender systems trained on static datasets can hardly adapt to changing environments. High-throughput data requires the model to be updated in a timely manner to capture user interest dynamics, which has led to the emergence of streaming recommender systems. Given the prevalence of deep learning-based recommender systems, the embedding layer is widely adopted to represent the characteristics of users, items, and other features in low-dimensional vectors. However, it has been shown that setting an identical and static embedding size is sub-optimal in terms of recommendation performance and memory cost, especially for streaming recommendation. To tackle this problem, we first rethink the streaming model update process and model dynamic embedding size search as a bandit problem. Then, we analyze and quantify the factors that influence the optimal embedding sizes from a statistical perspective. Based on this, we propose the Dynamic Embedding Size Search (DESS) method to minimize the embedding size selection regret on both the user and item sides in a non-stationary setting. Theoretically, we obtain a sublinear regret upper bound superior to previous methods. Empirical results across two recommendation tasks on four public datasets also demonstrate that our approach achieves better streaming recommendation performance with lower memory cost and higher time efficiency. Comment: Accepted for publication at CIKM202
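
    A toy sketch of treating candidate embedding sizes as bandit arms. It uses a sliding-window UCB rule as a simplified, hypothetical stand-in for the regret-minimizing search described above; the candidate sizes and the reward signal are made up for illustration and are not the DESS algorithm.

```python
import math
import random
from collections import deque

class SlidingWindowUCB:
    """Sliding-window UCB over candidate embedding sizes: a simplified,
    hypothetical non-stationary bandit, not the DESS method itself."""

    def __init__(self, sizes, window=500, c=1.0):
        self.sizes = sizes            # candidate embedding sizes (the arms)
        self.window = window          # only recent feedback counts, for non-stationarity
        self.c = c                    # exploration strength
        self.history = deque()        # recent (arm_index, reward) pairs

    def select(self, t):
        counts = [0] * len(self.sizes)
        sums = [0.0] * len(self.sizes)
        for arm, reward in self.history:
            counts[arm] += 1
            sums[arm] += reward
        best, best_score = 0, float("-inf")
        for i in range(len(self.sizes)):
            if counts[i] == 0:
                return i              # try every size at least once
            score = sums[i] / counts[i] + self.c * math.sqrt(
                math.log(min(t, self.window) + 1) / counts[i]
            )
            if score > best_score:
                best, best_score = i, score
        return best

    def update(self, arm, reward):
        self.history.append((arm, reward))
        if len(self.history) > self.window:
            self.history.popleft()    # forget stale feedback

random.seed(0)
bandit = SlidingWindowUCB(sizes=[8, 16, 32, 64, 128])
for t in range(1, 2001):
    arm = bandit.select(t)
    dim = bandit.sizes[arm]
    # Placeholder reward: pretend recommendation quality peaks at dim=32 and
    # subtract a small memory penalty proportional to the embedding size.
    reward = random.gauss(1.0 - abs(dim - 32) / 128, 0.1) - 0.002 * dim
    bandit.update(arm, reward)
print("selected size:", bandit.sizes[bandit.select(2001)])
```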

    Pyrimido[4,5‐d]pyrimidin‐4(1H)‐one Derivatives as Selective Inhibitors of EGFR Threonine 790 to Methionine 790 (T790M) Mutants

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/99681/1/8387_ftp.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/99681/2/anie_201302313_sm_miscellaneous_information.pd

    Pyrimido[4,5‐d]pyrimidin‐4(1H)‐one Derivatives as Selective Inhibitors of EGFR Threonine 790 to Methionine 790 (T790M) Mutants

    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/99673/1/8545_ftp.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/99673/2/ange_201302313_sm_miscellaneous_information.pd