
    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., the genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
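    Since the review itself is survey-style, the following is a minimal, hypothetical sketch (not from the paper) of how three of the listed challenges are commonly handled in an early-integration pipeline: median imputation for missing data, PCA against the curse of dimensionality, and class weighting for imbalance. All data shapes and parameter values are invented for illustration.

    ```python
    # Hypothetical early-integration pipeline over two omics modalities.
    import numpy as np
    from sklearn.impute import SimpleImputer
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 100
    genome = rng.normal(size=(n, 5000))    # e.g., expression features (invented)
    proteome = rng.normal(size=(n, 800))   # e.g., protein abundances (invented)
    proteome[rng.random(proteome.shape) < 0.1] = np.nan  # simulated missingness
    y = rng.integers(0, 2, size=n)         # class labels, imbalanced in practice

    # Early integration: impute each modality separately, then concatenate.
    X = np.hstack([
        SimpleImputer(strategy="median").fit_transform(genome),
        SimpleImputer(strategy="median").fit_transform(proteome),
    ])

    # Reduce dimensionality, then fit a classifier that reweights rare classes.
    model = make_pipeline(
        PCA(n_components=20),
        LogisticRegression(class_weight="balanced", max_iter=1000),
    )
    model.fit(X, y)
    ```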

    Deep Learning Based Reliability Models For High Dimensional Data

    The reliability estimation of products has crucial applications in various industries, particularly in today's competitive markets, as it has high economic impact. Hence, reliability analysis and failure prediction are receiving increasing attention. Reliability models based on lifetime data have been developed for different modern applications. These models predict failure by incorporating the influence of covariates on time-to-failure, where the covariates are factors that affect the subjects' lifetime. Modern technologies generate covariates that can be utilized to improve failure time prediction. However, incorporating these covariates into reliability models raises several challenges. First, the covariates are generally high-dimensional and topologically complex. Second, existing reliability models cannot efficiently model the effect of such complex covariates on failure time. Third, failure time information may not be available for all covariates, as collecting such information is a costly and time-consuming process. To overcome the first challenge, we propose a statistical approach that models the complex data by generalizing penalized logistic regression to capture its spatial properties; an efficient parameter estimation method makes the model practical for large sample sizes. To tackle the second challenge, we propose a deep learning-based reliability model that captures the complex effect of the data on failure time, trained with a novel loss function based on the partial likelihood function. To overcome the third difficulty, we propose a transfer learning-based reliability model that estimates failure time from the failure times of similar covariates; it is built on a two-level autoencoder that minimizes the distribution distance between covariates, with a new method for estimating the autoencoder's parameters. Various simulation studies demonstrate that the proposed models outperform traditional statistical and reliability models. Moreover, physical experiments on advanced high-strength steel are designed to validate the proposed models: since the microstructure images of the steels affect their failure time, the images are treated as covariates. The results show that the proposed models predict the failure time and hazard function of the materials more accurately than existing reliability models.
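    As a point of reference only (the thesis's novel loss is not reproduced here), a standard negative Cox partial-likelihood loss for censored failure-time data, the kind of objective the abstract describes for training a deep reliability model, can be sketched as follows; the tensor shapes and PyTorch framing are assumptions.

    ```python
    import torch

    def neg_partial_likelihood(risk: torch.Tensor,
                               time: torch.Tensor,
                               event: torch.Tensor) -> torch.Tensor:
        """risk: (n,) network outputs; time: (n,) failure/censoring times;
        event: (n,) float, 1.0 if failure observed, 0.0 if censored."""
        order = torch.argsort(time, descending=True)
        risk, event = risk[order], event[order]
        # After sorting by descending time, the risk set of subject i is
        # subjects 0..i, so a cumulative logsumexp over that axis gives
        # log sum_{j: t_j >= t_i} exp(risk_j)  (ties handled approximately).
        log_cum_hazard = torch.logcumsumexp(risk, dim=0)
        # The partial likelihood sums only over observed failures.
        ll = ((risk - log_cum_hazard) * event).sum()
        return -ll / event.sum().clamp(min=1)
    ```

    Minimizing this loss with backpropagation lets the network's output act as a log-hazard score, which is one common way to connect deep models to time-to-failure data.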

    Bayesian Approaches For Modeling Variation

    A core focus of statistics is determining how much of the variation in data may be attributed to the signal of interest, and how much to noise. When the sources of variation are many and complex, a Bayesian approach to data analysis offers a number of advantages. In this thesis, we propose and implement new Bayesian methods for modeling variation in two general settings. The first setting is high-dimensional linear regression where the unknown error variance is also of interest. Here, we show that a commonly used class of conjugate shrinkage priors can lead to underestimation of the error variance. We then extend the Spike-and-Slab Lasso (SSL; Rockova and George, 2018) to the unknown variance case, using an alternative, independent prior framework. This extended procedure outperforms both the fixed-variance approach and alternative penalized likelihood methods on simulated and real data. For the second setting, we move from univariate response data where the predictors are known to multivariate response data in which potential predictors are unobserved. In this setting, we first consider the problem of biclustering, where a motivating example is to find subsets of genes which have similar expression in a subset of patients. For this task, we propose a new biclustering method called Spike-and-Slab Lasso Biclustering (SSLB). SSLB utilizes the SSL prior to find a doubly-sparse factorization of the data matrix via a fast EM algorithm. Applied to both a microarray dataset and a single-cell RNA-sequencing dataset, SSLB recovers biologically meaningful signal in the data. The second problem we consider in this setting is nonlinear factor analysis. The goal here is to find low-dimensional, unobserved "factors" which drive the variation in the high-dimensional observed data in a potentially nonlinear fashion. For this purpose, we develop factor analysis BART (faBART), an MCMC algorithm which alternates sampling from the posterior of (a) the factors and (b) a functional approximation to the mapping from the factors to the data. The latter step utilizes Bayesian Additive Regression Trees (BART; Chipman et al., 2010). On a variety of simulation settings, we demonstrate that, with only the observed data as input, faBART recovers both the unobserved factors and the nonlinear mapping.
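    For concreteness, here is a minimal sketch of the SSL penalty at the core of the methods above: the negative log of a two-point Laplace mixture, whose effective L1 weight adapts to the coefficient's magnitude. The hyperparameter values are illustrative, not those used in the thesis.

    ```python
    import numpy as np

    def ssl_penalty(beta, theta=0.5, lam0=50.0, lam1=1.0):
        """Negative log of the spike-and-slab Laplace mixture prior:
        sharp 'spike' (lam0, large) mixed with diffuse 'slab' (lam1, small)."""
        spike = (lam0 / 2) * np.exp(-lam0 * np.abs(beta))
        slab = (lam1 / 2) * np.exp(-lam1 * np.abs(beta))
        return -np.log(theta * slab + (1 - theta) * spike)

    def adaptive_shrinkage(beta, theta=0.5, lam0=50.0, lam1=1.0):
        """Effective L1 weight lambda*(beta): near lam1 for large |beta|
        (weak shrinkage), near lam0 close to zero (strong shrinkage)."""
        spike = (lam0 / 2) * np.exp(-lam0 * np.abs(beta))
        slab = (lam1 / 2) * np.exp(-lam1 * np.abs(beta))
        p_slab = theta * slab / (theta * slab + (1 - theta) * spike)
        return lam1 * p_slab + lam0 * (1 - p_slab)
    ```

    This self-adaptive shrinkage, strong near zero and weak on large signals, is what makes the SSL prior attractive for the doubly-sparse factorizations used in SSLB.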

    Learning multiple views with orthogonal denoising autoencoders

    Multi-view learning techniques are necessary when data are described by multiple distinct feature sets, because single-view learning algorithms tend to overfit on such high-dimensional data. Prior successful approaches followed either the consensus or the complementary principle. Recent work has focused on learning both the shared and private latent spaces of views in order to take advantage of both principles. However, these methods cannot ensure that the latent spaces are strictly independent merely by encouraging orthogonality in their objective functions. Also, little work has explored representation learning techniques for multi-view learning. In this paper, we use the denoising autoencoder to learn shared and private latent spaces, with orthogonal constraints disconnecting every private latent space from the remaining views. Instead of computationally expensive optimization, we adapt the backpropagation algorithm to train our model.
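    A minimal, hypothetical sketch of the idea (not the authors' implementation): a two-view denoising autoencoder whose loss adds an orthogonality penalty between the shared code and each private code, trainable end-to-end with backpropagation as the abstract describes. The architecture sizes and weighting factor are invented.

    ```python
    import torch
    import torch.nn as nn

    class TwoViewDAE(nn.Module):
        def __init__(self, d1, d2, k_shared=10, k_private=5):
            super().__init__()
            self.enc_shared = nn.Linear(d1 + d2, k_shared)   # shared code
            self.enc_priv1 = nn.Linear(d1, k_private)        # private, view 1
            self.enc_priv2 = nn.Linear(d2, k_private)        # private, view 2
            self.dec1 = nn.Linear(k_shared + k_private, d1)
            self.dec2 = nn.Linear(k_shared + k_private, d2)

        def forward(self, x1, x2, noise=0.1):
            # Denoising: corrupt the inputs, reconstruct the clean targets.
            x1n = x1 + noise * torch.randn_like(x1)
            x2n = x2 + noise * torch.randn_like(x2)
            s = torch.tanh(self.enc_shared(torch.cat([x1n, x2n], dim=1)))
            p1 = torch.tanh(self.enc_priv1(x1n))
            p2 = torch.tanh(self.enc_priv2(x2n))
            r1 = self.dec1(torch.cat([s, p1], dim=1))
            r2 = self.dec2(torch.cat([s, p2], dim=1))
            recon = ((r1 - x1) ** 2).mean() + ((r2 - x2) ** 2).mean()
            # Orthogonality penalty: squared cross-correlation between the
            # shared and private codes over the batch, pushed toward zero.
            ortho = (s.T @ p1).pow(2).mean() + (s.T @ p2).pow(2).mean()
            return recon + 0.1 * ortho
    ```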

    A Unifying review of linear gaussian models

    Factor analysis, principal component analysis, mixtures of Gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of Gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
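    For reference, the single generative model underlying this unification can be stated compactly (notation follows the standard linear-Gaussian state-space form; the mapping to each special case is a summary of the published review):

    ```latex
    % Basic linear-Gaussian generative model:
    \begin{align*}
      x_{t+1} &= A x_t + w_t, & w_t &\sim \mathcal{N}(0, Q) \\
      y_t     &= C x_t + v_t, & v_t &\sim \mathcal{N}(0, R)
    \end{align*}
    % Static models set A = 0: factor analysis takes R diagonal, sensible
    % PCA takes R = \epsilon I, and standard PCA is the zero-noise limit
    % \epsilon \to 0. Dynamic models keep A: the Kalman filter for
    % continuous states, and hidden Markov models when the state is pushed
    % through a winner-take-all nonlinearity.
    ```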