449 research outputs found

    Deep generative models for biology: represent, predict, design

    Deep generative models have revolutionized the field of artificial intelligence, fundamentally changing how we generate novel objects that imitate or extrapolate from training data, and transforming how we access and consume information such as text, images, speech, and computer programs. They have the potential to radically transform other scientific disciplines, from mathematical problem solving to fast and accurate simulations in high-energy physics and rapid weather forecasting. In computational biology, generative models hold immense promise for improving our understanding of complex biological processes, designing new drugs and therapies, and forecasting viral evolution during pandemics, among many other applications. However, biological objects pose unique challenges due to their inherent complexity: massive search spaces, multiple complementary data modalities, and a unique interplay between highly structured and relatively unstructured components. In this thesis, we develop several deep generative modeling frameworks motivated by key questions in computational biology. Given the interdisciplinary nature of this endeavor, we first provide a comprehensive background in generative modeling, uncertainty quantification, and sequential decision making, as well as the concepts in biology and chemistry needed for a thorough understanding of our work. We then turn to the core of our contributions, which are structured around three chapters. The first chapter introduces methods for learning representations of biological sequences, laying the foundation for subsequent analyses. The second chapter shows how these representations can be leveraged to predict complex properties of biomolecules, focusing on three applications: protein fitness prediction, the effects of genetic variation on human disease risk, and viral immune escape.
    Finally, the third chapter is dedicated to methods for designing novel biomolecules, including drug target identification, de novo molecular optimization, and protein engineering. This thesis also makes methodological contributions to broader machine learning challenges, such as uncertainty quantification in high-dimensional spaces and efficient transformer architectures, which hold potential value in other application domains. We conclude by summarizing our key findings, highlighting shortcomings of current approaches, proposing avenues for future research, and discussing emerging trends within the field.
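As a deliberately simple illustration of the fitness-prediction task mentioned above: a site-independent model fit to an alignment of related sequences can score a variant by its log-likelihood ratio against the wild type. The sketch below is a generic baseline, not the thesis' models; the toy alignment and all function names are our own assumptions.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def pwm_from_alignment(sequences, pseudocount=1.0):
    """Estimate per-position amino-acid log-probabilities from aligned sequences."""
    length = len(sequences[0])
    counts = np.full((length, len(AMINO_ACIDS)), pseudocount)
    for seq in sequences:
        for pos, aa in enumerate(seq):
            counts[pos, AA_INDEX[aa]] += 1.0
    return np.log(counts / counts.sum(axis=1, keepdims=True))

def fitness_score(log_pwm, wild_type, variant):
    """Log-likelihood ratio of variant vs. wild type under the site-independent model."""
    score = 0.0
    for pos, (wt, mt) in enumerate(zip(wild_type, variant)):
        if wt != mt:
            score += log_pwm[pos, AA_INDEX[mt]] - log_pwm[pos, AA_INDEX[wt]]
    return score
```

A substitution rarely seen at a position in the alignment scores negative, i.e. predicted deleterious; the deep models in the thesis aim to go beyond this independence assumption.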

    Learning with Attributed Networks: Algorithms and Applications

    Attributes, which delineate the properties of data, and connections, which describe the dependencies of data, are two essential components for characterizing most real-world phenomena. The synergy between these two elements yields a unique data representation: the attributed network. In many cases, people are inundated with vast amounts of data that can be structured into attributed networks, and their use has attracted researchers and practitioners across disciplines. For example, in social media, users interact with each other and also post personalized content; in scientific collaboration, researchers cooperate yet are distinguished from peers by their unique research interests; in complex disease studies, rich gene expression data complements gene-regulatory networks. Clearly, attributed networks are ubiquitous and form a critical component of modern information infrastructure. Gaining deep insights from such networks requires a fundamental understanding of their unique characteristics and an awareness of the related computational challenges. My dissertation research aims to develop a suite of novel learning algorithms to understand, characterize, and gain actionable insights from attributed networks, to benefit high-impact real-world applications. In the first part of this dissertation, I focus on developing learning algorithms for attributed networks in a static environment at two levels: (i) the attribute level, by designing feature selection algorithms to find high-quality features that are tightly correlated with the network topology; and (ii) the node level, by presenting network embedding algorithms that learn discriminative node embeddings by preserving node proximity with respect to both network topology and node attribute similarity.
    As change is an essential component of attributed networks and the results of learning algorithms become stale over time, in the second part of this dissertation I propose a family of online algorithms for attributed networks in a dynamic environment that continuously update the learning results on the fly. Moreover, developing application-aware learning algorithms is desirable when there is a clear understanding of the application domains and their unique intents. As such, in the third part of this dissertation, I advance real-world applications on attributed networks by incorporating the objectives of external tasks into the learning process.
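To make the node-level goal concrete, here is a minimal sketch of one way an embedding can preserve both topological proximity and attribute similarity: factorize a convex combination of the normalized adjacency matrix and an attribute cosine-similarity matrix. This is an illustrative construction, not the dissertation's algorithms; the function name and the mixing weight `alpha` are our assumptions.

```python
import numpy as np

def embed_attributed_network(adj, attrs, alpha=0.5, dim=2):
    """Embed nodes by factorizing a convex combination of network proximity
    and node-attribute similarity, so embeddings preserve both signals."""
    # Cosine similarity between attribute vectors.
    unit = attrs / np.linalg.norm(attrs, axis=1, keepdims=True)
    attr_sim = unit @ unit.T
    # Symmetrically normalized adjacency as topological proximity.
    deg = adj.sum(axis=1)
    topo = adj / np.sqrt(np.outer(deg, deg))
    combined = alpha * topo + (1 - alpha) * attr_sim
    # Truncated eigendecomposition of the symmetric matrix gives the embedding.
    vals, vecs = np.linalg.eigh(combined)
    top = np.argsort(vals)[::-1][:dim]
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))
```

Nodes that are both linked and attribute-similar end up close in the embedding space; real embedding algorithms replace the dense factorization with scalable, often online, objectives.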

    Fast Adaptation of Deep Learning Vision Applications with Limited Data for Edge Devices

    Doctoral dissertation (Ph.D.), Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, 2022.2. Advisor: Sungjoo Yoo. The remarkable success of deep learning-based methods is mainly enabled by large amounts of labeled data. Compared to conventional machine learning methods, deep learning-based methods can learn high-quality models from very large datasets. However, high-quality labeled data is expensive to obtain, and sometimes preparing a large dataset is impossible due to privacy concerns. Furthermore, humans show outstanding generalization without huge amounts of labeled data. Edge devices have limited computational capability compared to servers; in particular, implementing training on edge devices is challenging. Nevertheless, training on edge devices is desirable when considering domain shift and privacy concerns. In this dissertation, I consider an adaptation process as a counterpart to conventional training for edge devices with low computational capability. Conventional classification assumes that training and test data are drawn from the same distribution and that the training dataset is large. Unsupervised domain adaptation addresses the setting where training and test data are drawn from different distributions; the task is to label target-domain data using existing labeled data and models. Few-shot learning assumes a small training dataset; the task is to classify new data based on only a few labeled examples. I present (1) co-optimization of backbone network and parameter selection in unsupervised domain adaptation for edge devices and (2) augmenting few-shot learning with supervised contrastive learning. Both methods target the low-labeled-data regime, but under different scenarios. The first method boosts unsupervised domain adaptation by co-optimizing the backbone network and parameter selection for edge devices.
    Pre-trained ImageNet models are crucial when dealing with small datasets such as the Office datasets. By using an unsupervised domain adaptation algorithm that does not update the feature extractor, large and powerful pre-trained ImageNet models can be used to boost accuracy, and we report state-of-the-art accuracy with this method. Moreover, we experiment with small and lightweight pre-trained ImageNet models for edge devices. Co-optimization with predictor-guided evolutionary search reduces the total latency, and we also consider pre-extraction of source features. We evaluate more realistic edge-device scenarios, such as smaller target-domain data and object detection. Lastly, we experiment with utilizing intermediate-domain data to further reduce the algorithm latency. We achieve 5.99x and 9.06x latency reductions on the Office31 and Office-Home datasets, respectively. The second method augments few-shot learning with supervised contrastive learning. Pre-trained ImageNet models cannot be used in the few-shot learning benchmark scenario, since the benchmarks provide a base dataset for training the feature extractor from scratch. Instead, we augment the feature extractor with supervised contrastive learning. Combining supervised contrastive learning with information maximization and a prototype estimation technique, we report state-of-the-art accuracy. We then translate the accuracy gain into total runtime reduction by changing the feature extractor and applying early stopping, achieving a 3.87x latency reduction in the transductive 5-way 5-shot learning scenario. Our approach can be summarized as boosting accuracy followed by reducing latency: we first upgrade the feature extractor, either with a more advanced pre-trained ImageNet model or with supervised contrastive learning, to achieve state-of-the-art accuracy; we then optimize the method end-to-end with evolutionary search or early stopping to reduce latency.
    Our two-stage approach, consisting of accuracy boosting followed by latency reduction, is sufficient to achieve fast adaptation of deep learning vision applications with limited data on edge devices.
    1. Introduction
    2. Background
       2.1 Dataset Size for Vision Applications
       2.2 ImageNet Pre-trained Models
       2.3 Augmentation Methods for ImageNet
       2.4 Contrastive Learning
    3. Problem Definitions and Solutions Overview
       3.1 Problem Definitions
           3.1.1 Unsupervised Domain Adaptation
           3.1.2 Few-shot Learning
       3.2 Solutions Overview
           3.2.1 Co-optimization of Backbone Network and Parameter Selection in Unsupervised Domain Adaptation for Edge Device
           3.2.2 Augmenting Few-Shot Learning with Supervised Contrastive Learning
    4. Co-optimization of Backbone Network and Parameter Selection in Unsupervised Domain Adaptation for Edge Device
       4.1 Introduction
       4.2 Related Works
       4.3 Methodology
           4.3.1 Examining an Unsupervised Domain Adaptation Method
           4.3.2 Boosting Accuracy with Pre-Trained ImageNet Models
           4.3.3 Boosting Accuracy for Edge Device
           4.3.4 Co-optimization of Backbone Network and Parameter Selection
       4.4 Experiments
           4.4.1 ImageNet and Unsupervised Domain Adaptation Accuracy
           4.4.2 Accuracy with Once-For-All Network
           4.4.3 Comparison with State-of-the-Art Results
           4.4.4 Co-optimization for Edge Device
           4.4.5 Pre-extraction of Source Feature
           4.4.6 Results for Small Target Data Scenario
           4.4.7 Results for Object Detection
           4.4.8 Results for Classifier Fitting Using Intermediate Domain
           4.4.9 Summary
       4.5 Conclusion
    5. Augmenting Few-Shot Learning with Supervised Contrastive Learning
       5.1 Introduction
       5.2 Related Works
       5.3 Methodology
           5.3.1 Examining a Few-shot Learning Method
           5.3.2 Augmenting Few-shot Learning with Supervised Contrastive Learning
       5.4 Experiments
           5.4.1 Comparison to the State-of-the-Art
           5.4.2 Ablation Study
           5.4.3 Domain-Shift
           5.4.4 Increasing the Number of Ways
           5.4.5 Runtime Analysis
           5.4.6 Limitations
       5.5 Conclusion
    6. Conclusion
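The supervised contrastive objective used to strengthen the feature extractor above is, in its standard form (Khosla et al., 2020), a log-softmax over pairwise embedding similarities in which all same-label samples act as positives. Below is a minimal numpy sketch of that standard loss, not necessarily the thesis' exact formulation; the temperature value is illustrative.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, pull together embeddings
    sharing its label and push apart all others."""
    # L2-normalize embeddings, then scale pairwise cosine similarities.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # Exclude self-similarity from the softmax denominator.
    sim_masked = np.where(eye, -np.inf, sim)
    row_max = sim_masked.max(axis=1, keepdims=True)  # numerical stability
    log_prob = (sim_masked - row_max
                - np.log(np.exp(sim_masked - row_max).sum(axis=1, keepdims=True)))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~eye
    # Mean negative log-probability of positives, averaged over anchors.
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Well-separated same-label clusters yield a lower loss than mixed-up labels, which is exactly the pressure that sharpens the feature extractor for few-shot classification.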

    Learning-Based Approaches for Graph Problems: A Survey

    Over the years, many graph problems, particularly NP-complete ones, have been studied by a wide range of researchers. Famous examples include graph colouring, the travelling salesman problem, and subgraph isomorphism. Most of these problems are typically addressed by exact algorithms, approximation algorithms, and heuristics, each of which has drawbacks. Recent studies have employed learning-based frameworks, such as machine learning techniques, to solve these problems, given their usefulness in discovering patterns in structured data that can be represented as graphs. This research direction has attracted considerable attention. In this survey, we provide a systematic review, mainly of classic graph problems for which learning-based approaches have been proposed. We give an overview of each framework and provide analyses based on its design and performance. Some potential research questions are also suggested. Ultimately, this survey gives clearer insight and can serve as a stepping stone for the research community studying problems in this field.
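For context on the heuristics that learning-based methods compete with: graph colouring, one of the examples above, has a classic greedy heuristic that many learned approaches try to improve, for instance by learning a better vertex ordering. A minimal sketch of that baseline (illustrative, not taken from the survey):

```python
from collections import defaultdict

def greedy_colouring(edges):
    """Classic greedy heuristic: visit vertices in a fixed order and assign
    each the smallest colour unused by its already-coloured neighbours."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colour = {}
    for v in sorted(adj):
        used = {colour[n] for n in adj[v] if n in colour}
        c = 0
        while c in used:  # smallest free colour
            c += 1
        colour[v] = c
    return colour
```

The heuristic always produces a proper colouring but its colour count depends heavily on the visit order, which is precisely the kind of decision a learned policy can target.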