275 research outputs found

    Statistical Methods for Analyzing Rare Variant Complex Trait Associations via Sequence Data

    There is solid evidence that complex human diseases can be caused by rare variants. Next-generation sequencing technology has revolutionized the study of complex human diseases and made it possible to detect associations with rare variants. Traditional statistical methods can be inefficient and underpowered for analyzing sequence data. In addition, due to the high cost of sequencing, it is necessary to explore novel cost-effective study designs in order to maximize power and reduce sequencing cost. In this thesis, three important problems in analyzing sequence data and detecting associations with rare variants are presented. In the first chapter, we present a new method for detecting rare variant/binary trait associations in the presence of gene interactions. In the second chapter, we explore cost-effective study designs for replicating sequence-based association studies, combining both sequencing and customized genotyping. In the third chapter, we present a method for analyzing multiple phenotypes in selected samples, such that phenotypes that are commonly measured in different studies can be jointly analyzed to improve power. The methods and study designs presented are important for dissecting complex trait etiologies using sequence data.
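
    To make the rare-variant setting concrete, here is a minimal sketch of a classic collapsing/burden test (in the spirit of CMC-style methods), not the thesis's own approach: rare alleles within a gene are collapsed into a single carrier indicator and tested against a binary trait. The function and variable names and the MAF threshold are illustrative assumptions.

    ```python
    # Minimal burden-test sketch: collapse rare variants in one gene into a
    # carrier indicator, then test carrier status against a binary trait.
    import numpy as np
    from scipy.stats import fisher_exact

    def burden_test(genotypes: np.ndarray, phenotype: np.ndarray,
                    maf_threshold: float = 0.01) -> float:
        """genotypes: (n_samples, n_variants) allele counts in {0, 1, 2};
        phenotype: (n_samples,) binary trait. Returns a Fisher p-value."""
        maf = genotypes.mean(axis=0) / 2.0
        rare = maf < maf_threshold                      # keep rare variants only
        carrier = (genotypes[:, rare] > 0).any(axis=1)  # collapse across the gene
        table = [[np.sum(carrier & (phenotype == 1)),
                  np.sum(~carrier & (phenotype == 1))],
                 [np.sum(carrier & (phenotype == 0)),
                  np.sum(~carrier & (phenotype == 0))]]
        return fisher_exact(table)[1]
    ```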

    Robust Core-Periphery Constrained Transformer for Domain Adaptation

    Unsupervised domain adaptation (UDA) aims to learn transferable representations across domains. Recently, a few UDA works have successfully applied Transformer-based methods and achieved state-of-the-art (SOTA) results. However, UDA remains challenging when there is a large domain gap between the source and target domains. Inspired by humans' exceptional ability to transfer knowledge from familiar to uncharted domains, we apply an organizational structure found universally in human functional brain networks, the core-periphery principle, to redesign the Transformer and improve its UDA performance. In this paper, we propose a novel brain-inspired robust core-periphery constrained transformer (RCCT) for unsupervised domain adaptation, which brings a large margin of performance improvement on various datasets. Specifically, in RCCT, the self-attention operation across image patches is rescheduled by an adaptively learned weighted graph with a Core-Periphery structure (CP graph), where information communication and exchange between image patches are controlled by the connection strength, i.e., the edge weight of the learned weighted CP graph. In addition, since the data in domain adaptation tasks can be noisy, we intentionally add perturbations to the patches in the latent space to ensure that the learned weighted core-periphery graphs are robust, improving overall model robustness. Extensive evaluations are conducted on several widely tested UDA benchmarks. Our proposed RCCT consistently performs best compared to existing works, reaching 88.3% on Office-Home, 95.0% on Office-31, 90.7% on VisDA-2017, and 46.0% on DomainNet.
    Comment: Core-Periphery, ViT, Unsupervised domain adaptation
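
    As a rough illustration of the mechanism described above, the sketch below rescales self-attention scores between image patches by the edge weights of a learned Core-Periphery graph. This is one plausible reading of the abstract, not the paper's released code; `cp_graph` is assumed to be a learnable (n_patches x n_patches) parameter.

    ```python
    # Sketch: attention between patches gated by learned CP-graph edge weights.
    import torch
    import torch.nn.functional as F

    def cp_weighted_attention(q, k, v, cp_graph):
        """q, k, v: (batch, n_patches, dim); cp_graph: (n_patches, n_patches)."""
        scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
        scores = scores * torch.sigmoid(cp_graph)  # edge weight gates each patch pair
        attn = F.softmax(scores, dim=-1)
        return attn @ v
    ```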

    Exploring the Influence of Information Entropy Change in Learning Systems

    In this work, we explore the influence of entropy change in deep learning systems by adding noise to the inputs/latent features. The applications in this paper focus on deep learning tasks in computer vision, but the proposed theory can be applied to other fields as well. Noise is conventionally viewed as a harmful perturbation in various deep learning architectures, such as convolutional neural networks (CNNs) and vision transformers (ViTs), as well as in different learning tasks such as image classification and transfer learning. However, this paper aims to rethink whether that conventional proposition always holds. We demonstrate that specific noise can boost the performance of various deep architectures under certain conditions. Using information entropy to define the complexity of a task, we categorize noise into two types, positive noise (PN) and harmful noise (HN), based on whether the noise can help reduce task complexity. We theoretically prove the enhancement gained from positive noise through its reduction of task complexity, and experimentally show significant performance gains on large image datasets such as ImageNet. Extensive experiments with CNNs and ViTs have shown performance improvements from proactively injecting positive noise, where we achieved an unprecedented top-1 accuracy of over 95% on ImageNet. Both theoretical analysis and empirical evidence confirm that the presence of positive noise can benefit the learning process, while the traditionally perceived harmful noise indeed impairs deep learning models. The different roles of noise offer new explanations for deep models on specific tasks and provide a new paradigm for improving model performance. Moreover, they remind us that we can influence the performance of learning systems via information entropy change.
    Comment: Information Entropy, CNN, Transformer
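
    The following sketch illustrates one way the positive/harmful split could be operationalized: noise is injected into latent features and kept only if it lowers a proxy for task complexity, here the entropy of the model's predictive distribution. Both the proxy and the acceptance rule are assumptions made for illustration, not the paper's exact criterion.

    ```python
    # Sketch: accept an injected perturbation only when it reduces a
    # predictive-entropy proxy for task complexity ("positive noise").
    import torch

    def predictive_entropy(logits: torch.Tensor) -> torch.Tensor:
        p = torch.softmax(logits, dim=-1)
        return -(p * p.clamp_min(1e-12).log()).sum(dim=-1).mean()

    def inject_if_positive(head, features: torch.Tensor, std: float = 0.1):
        """head: callable mapping features to logits (e.g., a linear classifier).
        Returns noisy features only when the noise lowers predictive entropy."""
        noisy = features + std * torch.randn_like(features)
        if predictive_entropy(head(noisy)) < predictive_entropy(head(features)):
            return noisy  # positive noise: the task looks simpler to the model
        return features   # otherwise treat the perturbation as harmful noise
    ```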

    Core-Periphery Principle Guided Redesign of Self-Attention in Transformers

    Designing more efficient, reliable, and explainable neural network architectures is critical to studies based on artificial intelligence (AI) techniques. Previous studies have found, by post-hoc analysis, that the best-performing ANNs surprisingly resemble biological neural networks (BNNs), which indicates that ANNs and BNNs may share common principles for achieving optimal performance in either machine learning or cognitive/behavioral tasks. Inspired by this phenomenon, we proactively instill organizational principles of BNNs to guide the redesign of ANNs. We leverage the Core-Periphery (CP) organization, which is widely found in human brain networks, to guide the information communication mechanism in the self-attention of the vision transformer (ViT), and name this novel framework CP-ViT. In CP-ViT, the attention operation between nodes is defined by a sparse graph with a Core-Periphery structure (CP graph), where the core nodes are redesigned and reorganized to play an integrative role and serve as a center through which periphery nodes exchange information. We evaluated the proposed CP-ViT on multiple public datasets, including medical image datasets (INbreast) and natural image datasets. Notably, by incorporating the BNN-derived principle (CP structure) into the redesign of ViT, our CP-ViT outperforms other state-of-the-art ANNs. In general, our work advances the state of the art in three aspects: 1) it provides novel insights for brain-inspired AI, showing that principles found in BNNs can guide and improve ANN architecture design; 2) it shows that there exist sweet spots of CP graphs that lead to CP-ViTs with significantly improved performance; and 3) the core nodes in CP-ViT correspond to task-related, meaningful, and important image patches, which can significantly enhance the interpretability of the trained deep model.
    Comment: Core-periphery, functional brain networks, ViT
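
    A minimal sketch of the sparse Core-Periphery attention mask described above: core patches may attend to everything, while periphery patches attend only to core patches. Treating the first `n_core` indices as the core is an assumption made for illustration; the paper searches over CP graphs rather than fixing one.

    ```python
    # Sketch: build a boolean CP attention mask where an edge is allowed
    # iff at least one endpoint is a core node.
    import torch

    def cp_attention_mask(n_patches: int, n_core: int) -> torch.Tensor:
        """Returns a boolean (n_patches, n_patches) mask; True = attention allowed."""
        is_core = torch.zeros(n_patches, dtype=torch.bool)
        is_core[:n_core] = True
        # core-core and core-periphery edges allowed; periphery-periphery blocked
        return is_core.unsqueeze(0) | is_core.unsqueeze(1)
    ```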

    Hierarchical Semantic Tree Concept Whitening for Interpretable Image Classification

    With the popularity of deep neural networks (DNNs), model interpretability is becoming a critical concern. Many approaches have been developed to tackle the problem through post-hoc analysis, such as explaining how predictions are made or understanding the meaning of neurons in middle layers. Nevertheless, these methods can only discover the patterns or rules that naturally exist in models. In this work, rather than relying on post-hoc schemes, we proactively instill knowledge to alter the representation of human-understandable concepts in hidden layers. Specifically, we use a hierarchical tree of semantic concepts to store the knowledge, which is leveraged to regularize the representations of image data instances while training deep models. The axes of the latent space are aligned with the semantic concepts, and the hierarchical relations between concepts are also preserved. Experiments on real-world image datasets show that our method improves model interpretability, achieving better disentanglement of semantic concepts, without negatively affecting model classification performance.
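
    A hedged sketch in the spirit of this method: a regularizer that pushes each instance's latent representation toward the axis assigned to its concept. The whitening transform and the handling of the concept hierarchy are omitted, and all names are illustrative.

    ```python
    # Sketch: axis-concept alignment regularizer over whitened latent codes.
    import torch

    def concept_alignment_loss(z: torch.Tensor, concept_ids: torch.Tensor):
        """z: (batch, d) latent codes; concept_ids: (batch,) int64 axis index
        per instance. Rewards high normalized activation on the assigned axis."""
        z = torch.nn.functional.normalize(z, dim=-1)
        aligned = z.gather(1, concept_ids.unsqueeze(1)).squeeze(1)
        return -aligned.mean()  # higher activation on the right axis = lower loss
    ```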

    Artificial General Intelligence for Medical Imaging

    In this review, we explore the potential applications of Artificial General Intelligence (AGI) models in healthcare, focusing on foundational Large Language Models (LLMs), Large Vision Models, and Large Multimodal Models. We emphasize the importance of integrating clinical expertise, domain knowledge, and multimodal capabilities into AGI models. In addition, we lay out key roadmaps to guide the development and deployment of healthcare AGI models. Throughout the review, we provide critical perspectives on the potential challenges and pitfalls associated with deploying large-scale AGI models in the medical field. This comprehensive review aims to offer insights into the future implications of AGI in medical imaging, healthcare, and beyond.

    Segment Anything Model (SAM) for Radiation Oncology

    In this study, we evaluate the performance of the Segment Anything Model (SAM) in clinical radiotherapy. We collected real clinical cases from four regions at the Mayo Clinic: prostate, lung, gastrointestinal, and head & neck, which are typical treatment sites in radiation oncology. For each case, we selected the organs at risk (OARs) of concern in radiotherapy planning and compared the Dice and Jaccard scores between clinical manual delineation, automatic segmentation using SAM's "segment anything" mode, and automatic segmentation using SAM with a box prompt. Our results indicate that SAM performs better in automatic segmentation for the prostate and lung regions, while its performance in the gastrointestinal and head & neck regions is relatively inferior. When considering the size of an organ and the clarity of its boundary, SAM displays better performance for larger organs with clear boundaries, such as the lung and liver, and worse for smaller organs with unclear boundaries, such as the parotid and cochlea. These findings align with the generally accepted variation in the difficulty of manually delineating different organs at different sites in clinical radiotherapy. Given that SAM, a single trained model, could handle the delineation of OARs in four regions, these results also demonstrate SAM's robust generalization capability in automatic segmentation for radiotherapy, i.e., achieving delineation of different radiotherapy OARs with a generic automatic segmentation model. SAM's generalization across different regions makes it technically feasible to develop a generic model for automatic segmentation in radiotherapy.
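
    For reference, the two overlap metrics used in this comparison, computed on binary segmentation masks; these are the standard definitions, shown as a minimal sketch that assumes non-empty masks.

    ```python
    # Standard overlap metrics for binary segmentation masks (0/1 arrays).
    import numpy as np

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())  # assumes at least one mask non-empty

    def jaccard(a: np.ndarray, b: np.ndarray) -> float:
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union                      # assumes the union is non-empty
    ```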

    RadOnc-GPT: A Large Language Model for Radiation Oncology

    This paper presents RadOnc-GPT, a large language model specialized for radiation oncology through advanced tuning methods. RadOnc-GPT was fine-tuned on a large dataset of radiation oncology patient records and clinical notes from the Mayo Clinic in Arizona. The model employs instruction tuning on three key tasks: generating radiotherapy treatment regimens, determining optimal radiation modalities, and providing diagnostic descriptions/ICD codes based on patient diagnostic details. Evaluations conducted by comparing RadOnc-GPT outputs to those of general large language models showed that RadOnc-GPT generated outputs with significantly improved clarity, specificity, and clinical relevance. The study demonstrates the potential of large language models fine-tuned with domain-specific knowledge, such as RadOnc-GPT, to achieve transformational capabilities in highly specialized healthcare fields such as radiation oncology.
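
    To make "instruction tuning on three key tasks" concrete, the sketch below shows one plausible shape for such tuning records; the field names and wording are assumptions, not the study's actual data schema, and the clinical content is elided.

    ```python
    # Illustrative instruction-tuning records for the three tasks named above.
    records = [
        {"instruction": "Recommend a radiotherapy treatment regimen.",
         "input": "Diagnosis: ... (patient details elided)",
         "output": "..."},
        {"instruction": "Determine the optimal radiation modality.",
         "input": "Diagnosis: ...",
         "output": "..."},
        {"instruction": "Provide a diagnostic description and ICD code.",
         "input": "Patient diagnostic details: ...",
         "output": "..."},
    ]
    ```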