29,983 research outputs found

    Optimal Bayesian Transfer Learning for Classification and Regression

    Get PDF
    Machine learning methods and algorithms that work under the assumption of independent and identically distributed (i.i.d.) data are not applicable when dealing with massive data collected from different sources or by various technologies, where heterogeneity of the data is inevitable. In such scenarios, far from simple homogeneous and unimodal distributions, we must address the data heterogeneity intelligently in order to take full advantage of data coming from different sources. In this dissertation we study two main sources of data heterogeneity: time and domain. We address time by modeling the dynamics of the data, and domain differences by transfer learning.

    Gene expression data have been used for many years for phenotype classification, for instance, classification of healthy versus cancerous tissues or classification of various types of diseases. Traditional methods use static gene expression data measured at a single time point. We propose to take into account the dynamics of gene interactions through time, which can be governed by gene regulatory networks (GRN), and to design classifiers using gene expression trajectories instead of static data. Thanks to recent sequencing technologies such as single-cell RNA-Seq, we can now look inside a single cell and capture the dynamics of gene expression. Accordingly, we design optimal classifiers using single-cell gene expression trajectories, whose dynamics are modeled via Boolean networks with perturbation (BNp). We solve this problem using both expectation maximization (EM) and a Bayesian framework, and show that these methods are substantially more effective than classification via bulk RNA-Seq data.

    Transfer learning (TL) has recently attracted significant research attention, as it simultaneously learns from different source domains, which have plenty of labeled data, and transfers the relevant knowledge to a target domain with limited labeled data to improve prediction performance. We study transfer learning from a novel Bayesian viewpoint. Transfer learning applies when we do not have enough data in the target domain to train machine learning algorithms well but have ample data in relevant source domains. The probability distributions of the source and target domains may be quite different, yet the domains share knowledge underlying similar tasks and are related in some sense. The ultimate goal of transfer learning is to quantify the relatedness between the domains and then transfer the corresponding amount of knowledge to the target domain, improving classification in the data-poor target domain. Negative transfer is the most critical issue in transfer learning; it occurs when the TL algorithm fails to detect that the source domain is unrelated to the target domain for a specific task. To address these issues with a solid theoretical backbone, we propose a Bayesian transfer learning framework in which the source and target domains are related through the joint prior distribution of the model parameters. Modeling the joint prior densities enables a better understanding of the transferability between domains. Using this idea, we propose optimal Bayesian transfer learning (OBTL) for both continuous and count data, as well as optimal Bayesian transfer regression (OBTR), which optimally transfer the relevant knowledge from a data-rich source domain to a data-poor target domain, thereby improving classification accuracy in the target domain with limited data.
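
    Below is a minimal toy sketch, not the dissertation's actual OBTL derivation, of the joint-prior mechanism the abstract describes: the source and target parameters are coupled through a joint Gaussian prior, so abundant source data tightens the posterior on the data-poor target parameter. The scalar-mean model and all names and values (rho, theta_true, the noise level) are illustrative assumptions.

```python
# Toy illustration of Bayesian transfer via a joint prior (assumed model,
# not the OBTL method itself): source and target means share a Gaussian
# prior whose off-diagonal term rho encodes assumed domain relatedness.
import numpy as np

rng = np.random.default_rng(0)

rho = 0.9                                  # prior correlation between domains
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])             # joint prior covariance of (theta_s, theta_t)

theta_true = np.array([1.0, 1.2])          # related but not identical domains
noise = 0.5
x_s = theta_true[0] + noise * rng.standard_normal(200)   # data-rich source
x_t = theta_true[1] + noise * rng.standard_normal(5)     # data-poor target

# Gaussian conjugacy: posterior precision = prior precision + data precision.
Lambda = np.linalg.inv(Sigma)
Lambda[0, 0] += len(x_s) / noise**2
Lambda[1, 1] += len(x_t) / noise**2
eta = np.array([x_s.sum(), x_t.sum()]) / noise**2        # natural parameter
post_cov = np.linalg.inv(Lambda)
post_mean = post_cov @ eta

print("posterior mean (source, target):", post_mean)
# As rho -> 0 the target posterior falls back on its own 5 samples alone;
# learning how large rho should be is what guards against negative transfer.
```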

    Hyperparameter Learning via Distributional Transfer

    Full text link
    Bayesian optimisation is a popular technique for hyperparameter learning, but it typically requires initial exploration even when similar prior tasks have already been solved. We propose to transfer information across tasks using learnt representations of the training datasets used in those tasks. This results in a joint Gaussian process model over hyperparameters and data representations. The representations make use of the framework of distribution embeddings into reproducing kernel Hilbert spaces. The developed method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.
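
    A minimal sketch, assuming nothing beyond NumPy, of the paper's core ingredient: representing a training dataset by its kernel mean embedding in an RKHS, with the squared RKHS distance between embeddings (the MMD) serving as a task-similarity signal. The paper builds a joint Gaussian process over hyperparameters and these embeddings; purely for illustration, this sketch only warm-starts a hyperparameter search at the best value from the most similar previous task. All data and values are made up.

```python
# Dataset similarity via kernel mean embeddings (illustrative sketch only).
import numpy as np

def rbf(a, b, gamma=1.0):
    """RBF kernel matrix between rows of a and rows of b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Squared MMD = ||mu_X - mu_Y||^2 in the RKHS (biased estimator)."""
    return rbf(X, X, gamma).mean() - 2 * rbf(X, Y, gamma).mean() + rbf(Y, Y, gamma).mean()

rng = np.random.default_rng(0)
# Previous tasks: (training data, best hyperparameter found by BO there).
tasks = [(rng.normal(0, 1, (50, 3)), 0.1),
         (rng.normal(2, 1, (50, 3)), 1.0)]
X_new = rng.normal(1.9, 1, (40, 3))        # new task's training data

# Warm-start at the best hyperparameter of the most similar previous task.
best = min(tasks, key=lambda t: mmd2(X_new, t[0]))
print("warm-start hyperparameter:", best[1])
```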

    A PAC-Bayesian bound for Lifelong Learning

    Full text link
    Transfer learning has received a lot of attention in the machine learning community in recent years, and several effective algorithms have been developed. However, relatively little is known about their theoretical properties, especially in the setting of lifelong learning, where the goal is to transfer information to tasks for which no data have been observed so far. In this work we study lifelong learning from a theoretical perspective. Our main result is a PAC-Bayesian generalization bound that offers a unified view on existing paradigms for transfer learning, such as the transfer of parameters or the transfer of low-dimensional representations. We also use the bound to derive two principled lifelong learning algorithms, and we show that these yield results comparable with existing methods. Comment: to appear at ICML 2014
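
    For orientation, here is one standard single-task PAC-Bayesian bound of the family this work generalizes (Maurer's refinement of McAllester's bound, not the paper's own lifelong-learning result, which adds an environment-level KL term over tasks on top of this structure). With probability at least 1 − δ over an i.i.d. sample of size n, simultaneously for all posteriors Q over hypotheses, given a prior P fixed before seeing the data:

```latex
\mathbb{E}_{h \sim Q}\big[\mathrm{er}(h)\big]
  \;\le\;
\mathbb{E}_{h \sim Q}\big[\widehat{\mathrm{er}}(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```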