
    Probabilistic analysis of the human transcriptome with side information

    Understanding the functional organization of genetic information is a major challenge in modern biology. Following the initial publication of the human genome sequence in 2001, advances in high-throughput measurement technologies and the efficient sharing of research material through community databases have opened new perspectives on the study of living organisms and the structure of life. In this thesis, novel computational strategies have been developed to investigate a key functional layer of genetic information, the human transcriptome, which regulates the function of living cells through protein synthesis. The key contributions of the thesis are general exploratory tools for high-throughput data analysis that have provided new insights into cell-biological networks, cancer mechanisms and other aspects of genome function. A central challenge in functional genomics is that high-dimensional genomic observations are associated with high levels of complex and largely unknown sources of variation. By combining statistical evidence across multiple measurement sources with the wealth of background information in genomic data repositories, it has been possible to resolve some of the uncertainties associated with individual observations and to identify functional mechanisms that could not be detected from any individual measurement source alone. Statistical learning and probabilistic models provide a natural framework for such modeling tasks. Open-source implementations of the key methodological contributions have been released to facilitate adoption of the developed methods by the research community. (Doctoral thesis; 103 pages, 11 figures.)
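
    The abstract's central idea, combining statistical evidence across multiple measurement sources, can be illustrated with a small sketch. The following Python snippet is invented for illustration (the function name and data are not from the thesis): it shows the classical precision-weighted combination of independent noisy measurements, in which more reliable sources contribute more to the pooled estimate.

        # Minimal sketch, assuming independent Gaussian measurement noise.
        import numpy as np

        def combine_measurements(values, variances):
            """Precision-weighted average of independent noisy measurements.

            Returns the pooled estimate and its variance; sources with
            smaller variance (higher precision) receive larger weights.
            """
            values = np.asarray(values, dtype=float)
            precisions = 1.0 / np.asarray(variances, dtype=float)
            pooled_var = 1.0 / precisions.sum()
            pooled_mean = pooled_var * (precisions * values).sum()
            return pooled_mean, pooled_var

        # Three hypothetical platforms measure the same gene's expression
        # with different noise levels; the noisiest contributes least.
        est, var = combine_measurements([2.1, 1.8, 2.6], [0.04, 0.25, 1.0])
        print(f"pooled estimate: {est:.3f} (variance {var:.3f})")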

    Latent Representation and Sampling in Network: Application in Text Mining and Biology.

    In classical machine learning, hand-designed features are used to learn a mapping from raw data. However, human involvement in feature design makes the process expensive. Representation learning aims to learn abstract features directly from data without direct human involvement. Raw data can take various forms. Networks are one form of data that encodes relational structure in many real-world domains, so learning abstract features for network units is an important task. In this dissertation, we propose models for incorporating temporal information given as a collection of networks from subsequent time-stamps. The primary objective of our models is to learn a better abstract feature representation of nodes and edges in an evolving network. We show that the temporal information in the abstract features substantially improves the performance of the link prediction task. Beyond network data, we also employ our models to incorporate extra-sentential information in the text domain for learning better representations of sentences. We build a context network of sentences to capture extra-sentential information; including this information in the abstract feature representation of sentences substantially improves various text-mining tasks over a set of baseline methods. A problem with the abstract features that we learn is that they lack interpretability. In real-life applications on network data, for some tasks it is crucial to learn interpretable features in the form of graphical structures, which requires mining important graphical structures along with their frequency statistics from the input dataset. However, exact algorithms for these tasks are computationally expensive, so scalable algorithms are urgently needed. To overcome this challenge, we provide efficient sampling algorithms for mining higher-order structures from networks. We show that our sampling-based algorithms are scalable, and that they are superior to a set of baseline algorithms in retrieving important graphical sub-structures and collecting their frequency statistics. Finally, we show that these frequent subgraph statistics and structures can be used as features in various real-life applications, one in biology and another in security. In both cases, the structures and their statistics significantly improve the performance of knowledge discovery tasks in these domains.
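
    The sampling contribution can be made concrete with a standard example of estimating the frequency of a higher-order structure. The sketch below is a textbook wedge-sampling estimator, not the dissertation's algorithm (names and data are invented): it estimates a graph's triangle count by sampling wedges, i.e. paths of length two, and measuring how often they close.

        # Hedged sketch: wedge sampling for approximate triangle counting.
        import random
        from math import comb

        def estimate_triangles(adj, n_samples=100_000, seed=0):
            """adj: dict mapping node -> set of neighbours (undirected)."""
            rng = random.Random(seed)
            nodes = [v for v in adj if len(adj[v]) >= 2]
            # Number of wedges centred at each node: C(deg, 2).
            weights = [comb(len(adj[v]), 2) for v in nodes]
            total_wedges = sum(weights)
            closed = 0
            for _ in range(n_samples):
                centre = rng.choices(nodes, weights=weights)[0]
                u, w = rng.sample(sorted(adj[centre]), 2)
                if w in adj[u]:  # the sampled wedge closes into a triangle
                    closed += 1
            # Each triangle contains exactly three wedges, hence the /3.
            return (closed / n_samples) * total_wedges / 3

        # Tiny check: a 4-clique contains exactly 4 triangles.
        clique = {i: {j for j in range(4) if j != i} for i in range(4)}
        print(round(estimate_triangles(clique)))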

    Learning from Structured Data with High Dimensional Structured Input and Output Domain

    Structured data is accumulating rapidly in many applications, e.g. bioinformatics, cheminformatics, social network analysis, natural language processing and text mining. Designing and analyzing algorithms for handling these large collections of structured data has received significant interest in the data mining and machine learning communities, in both the input and output domains. However, it is nontrivial to adapt traditional machine learning algorithms, e.g. SVMs or linear regression, to structured data. First, the structural information in the input and output domains is ignored if standard algorithms are applied directly to structured data. Second, a major challenge in learning from high-dimensional structured data is that the input/output domains can contain tens of thousands or more features and labels. With a high-dimensional structured input space and/or structured output space, learning a low-dimensional and consistent structured predictive function is important for both the robustness and the interpretability of the model. In this dissertation, we present several machine learning models that learn from data with structured input features and structured output tasks. For learning from data with structured input features, I have developed structured sparse boosting for graph classification and structured joint sparse PCA for anomaly detection and localization. Besides learning from structured input, I also investigated the interplay between structured input and output in the context of multi-task learning. In particular, I designed a multi-task learning algorithm that performs structured feature selection and task relationship inference. We demonstrate the applications of these structured models on subgraph-based graph classification, networked data stream anomaly detection/localization, multiple cancer type prediction, neuron activity prediction and social behavior prediction. Finally, drawing on my internship work at IBM T.J. Watson Research, I demonstrate how to leverage structural information from mobile data (e.g. call detail records and GPS data) to derive important places from people's daily lives for transit optimization and urban planning.
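
    As a concrete instance of structured sparsity shared across tasks, the sketch below shows a generic proximal-gradient step for l2,1-regularised multi-task regression. It is an assumption for illustration, not the dissertation's algorithm: each row of the weight matrix W corresponds to one feature shared by all tasks and is kept or zeroed out as a group.

        # Hedged sketch: joint feature selection via the l2,1 norm.
        import numpy as np

        def prox_l21(W, tau):
            """Proximal operator of tau * ||W||_{2,1} (sum of row l2 norms)."""
            row_norms = np.linalg.norm(W, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - tau / np.maximum(row_norms, 1e-12))
            return scale * W

        def multitask_step(W, X, Y, tau, lr=0.005):
            """One proximal-gradient step for min ||XW-Y||^2/2 + tau*||W||_{2,1}."""
            grad = X.T @ (X @ W - Y)
            return prox_l21(W - lr * grad, lr * tau)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((50, 20))            # 50 samples, 20 features
        Y = X[:, :3] @ rng.standard_normal((3, 4))   # 4 tasks, 3 shared features
        W = np.zeros((20, 4))
        for _ in range(500):
            W = multitask_step(W, X, Y, tau=5.0)
        # Should (approximately) recover the three shared feature rows.
        print("rows kept:", np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8))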

    Topics in Network Analysis with Applications to Brain Connectomics

    Large complex network data have become common in many scientific domains, and require new statistical tools for discovering the underlying structures and features of interest. This thesis presents new methodology for network data analysis, with a focus on problems arising in the field of brain connectomics. Our overall goal is to learn parsimonious and interpretable network features, with computationally efficient and theoretically justified methods. The first project in the thesis focuses on prediction with network covariates. This setting is motivated by neuroimaging applications, in which each subject has an associated brain network constructed from fMRI data, and the goal is to derive interpretable prediction rules for a phenotype of interest or a clinical outcome. Existing approaches to this problem typically either reduce the data to a small set of global network summaries, losing a lot of local information, or treat network edges as a "bag of features" and use standard statistical tools without accounting for the network nature of the data. We develop a method that uses all edge weights, while still effectively incorporating network structure by using a penalty that encourages sparsity in both the number of edges and the number of nodes used. We develop efficient optimization algorithms for implementing this method and show it achieves state-of-the-art accuracy on a dataset of schizophrenic patients and healthy controls while using a smaller and more readily interpretable set of features than methods that ignore network structure. We also establish theoretical performance guarantees. Communities in networks are observed in many different domains, and in brain networks they typically correspond to regions of the brain responsible for different functions. In connectomic analyses, there are standard parcellations of the brain into such regions, typically obtained by applying clustering methods to brain connectomes of healthy subjects. However, there is now increasing evidence that these communities are dynamic, and when the goal is predicting a phenotype or distinguishing between different conditions, these static communities from an unrelated set of healthy subjects may not be the most useful for prediction. We present a method for supervised community detection, that is, a method that finds a partition of the network into communities that is most useful for predicting a particular response. We use a block-structured regularization and compute the solution with a combination of a spectral method and an ADMM optimization algorithm. The method performs well on both simulated and real brain networks, providing support for the idea of task-dependent brain regions. The last part of the thesis focuses on the problem of community detection in the general network setting. Unlike in neuroimaging, statistical network analysis is typically applied to a single network, motivated by datasets from the social sciences. While community detection has been well studied, in practice nodes in a network often belong to more than one community, leading to the much harder problem of overlapping community detection. We propose a new approach for overlapping community detection based on sparse principal component analysis, and develop efficient algorithms that are able to accurately recover community memberships, provided each node does not belong to too many communities at once. The method has a very low computational cost relative to other methods available for this problem.
We show asymptotic consistency of recovering community memberships by the new method, and good empirical performance on both simulated and real-world networks. (PhD thesis, Statistics, University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/145883/1/jarroyor_1.pd)
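
    The spectral step that underlies community detection methods like these can be sketched in a few lines. The example below is plain spectral clustering on a planted two-block network, not the thesis's supervised or overlapping variants: embed nodes using the leading eigenvectors of the normalised adjacency matrix, then cluster the embedded rows.

        # Hedged sketch: spectral clustering on a two-block planted model.
        import numpy as np
        from scipy.sparse.linalg import eigsh
        from sklearn.cluster import KMeans

        def spectral_communities(A, k, seed=0):
            """A: symmetric adjacency matrix; k: number of communities."""
            deg = A.sum(axis=1)
            d = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
            L = d[:, None] * A * d[None, :]       # normalised adjacency
            _, vecs = eigsh(L, k=k, which="LA")   # k leading eigenvectors
            rows = vecs / np.maximum(
                np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
            return KMeans(n_clusters=k, n_init=10,
                          random_state=seed).fit_predict(rows)

        # Two planted communities: dense within blocks, sparse between.
        rng = np.random.default_rng(0)
        n = 60
        truth = np.repeat([0, 1], n // 2)
        P = np.where(truth[:, None] == truth[None, :], 0.3, 0.05)
        A = (rng.random((n, n)) < P).astype(float)
        A = np.triu(A, 1); A = A + A.T
        print(spectral_communities(A, k=2))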

    Machine Learning Methods with Noisy, Incomplete or Small Datasets

    In many machine learning applications, the available datasets are incomplete, noisy or affected by artifacts. In supervised scenarios, the label information may be of low quality, owing to unbalanced training sets, noisy labels and other problems. Moreover, in practice it is very common that the available data samples are not sufficient to derive useful supervised or unsupervised classifiers. All these issues are commonly referred to as the low-quality data problem. This book collects novel contributions on machine learning methods for low-quality datasets, aiming to disseminate new ideas for solving this challenging problem and to provide clear examples of application in real scenarios.
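
    One of the simplest remedies discussed in this literature for unbalanced training sets can be shown with standard scikit-learn (an illustrative example with synthetic data, not taken from the book): reweight classes inversely to their frequency so the minority class is not ignored.

        # Hedged sketch: class reweighting for an unbalanced training set.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Imbalanced toy data: 950 negatives, 50 positives.
        X = np.vstack([rng.normal(0.0, 1, (950, 2)),
                       rng.normal(1.5, 1, (50, 2))])
        y = np.array([0] * 950 + [1] * 50)

        plain = LogisticRegression().fit(X, y)
        balanced = LogisticRegression(class_weight="balanced").fit(X, y)
        for name, clf in [("plain", plain), ("balanced", balanced)]:
            recall = clf.predict(X[y == 1]).mean()  # minority-class recall
            print(f"{name}: minority-class recall = {recall:.2f}")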

    Advances in knowledge discovery and data mining Part II

    19th Pacific-Asia Conference, PAKDD 2015, Ho Chi Minh City, Vietnam, May 19-22, 2015, Proceedings, Part II.

    Dimensionality reduction methods for microarray cancer data using prior knowledge

    Microarray studies are currently a very popular source of biological information. They allow the simultaneous measurement of hundreds of thousands of genes, drastically increasing the amount of data that can be gathered in a small amount of time and also decreasing the cost of producing such results. Large numbers of high-dimensional data sets are currently being generated, and there is an ongoing need to find ways to analyse them to obtain meaningful interpretations. Many microarray experiments are concerned with answering specific biological or medical questions regarding diseases and treatments. Cancer is one of the most popular research areas, and there is a plethora of data available requiring in-depth analysis. Although the analysis of microarray data has been thoroughly researched over the past ten years, new approaches still appear regularly and may lead to a better understanding of the available information. The size of modern data sets presents considerable difficulties for traditional methodologies based on hypothesis testing, and there is a move towards the use of machine learning in microarray data analysis. Two new methods of using prior genetic knowledge in machine learning algorithms have been developed and their results compared with existing methods. The prior knowledge consists of biological pathway data that can be found in on-line databases, and gene ontology terms. The first method, called "a priori manifold learning", uses the prior knowledge when constructing a manifold for non-linear feature extraction. It was found to perform better than both linear principal components analysis (PCA) and the non-linear Isomap algorithm (without prior knowledge) in both classification accuracy and quality of the clusters. Both pathway and GO terms were used as prior knowledge, and results showed that using GO terms can make the models over-fit the data. In the cases where the use of GO terms does not over-fit, the results are better than PCA, Isomap and a priori manifold learning using pathways. The second method, called "the feature selection over pathway segmentation algorithm", uses the pathway information to split a big dataset into smaller ones. Then, using AdaBoost, decision trees are constructed for each of the smaller sets, and the sets that achieve higher classification accuracy are identified. The individual genes in these subsets are assessed to determine their role in the classification process. Using data sets concerning chronic myeloid leukaemia (CML), two subsets based on pathways were found to be strongly associated with the response to treatment. Using a different data set from measurements on lower grade glioma (LGG) tumours, four informative gene sets were discovered. Further analysis based on the Gini importance measure identified a set of genes for each cancer type (CML, LGG) that could predict the response to treatment very accurately (> 90%). Moreover, a single gene that can predict the response to CML treatment accurately was identified.
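
    A rough sketch of the second method's pipeline, under assumed details (the gene sets, data and function name are invented placeholders): partition the expression matrix by pathway-defined gene sets, score each subset with a cross-validated AdaBoost classifier, and keep the highest-scoring pathways.

        # Hedged sketch: feature selection over pathway segmentation.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        def rank_pathways(X, y, pathways, cv=5, seed=0):
            """X: samples x genes; pathways: name -> gene column indices."""
            scores = {}
            for name, genes in pathways.items():
                clf = AdaBoostClassifier(n_estimators=50, random_state=seed)
                scores[name] = cross_val_score(clf, X[:, genes], y, cv=cv).mean()
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 100))  # 80 samples, 100 genes
        y = (X[:, 10:15].mean(axis=1) > 0).astype(int)  # signal in genes 10..14
        pathways = {"pathway_A": list(range(10, 20)),   # contains the signal
                    "pathway_B": list(range(40, 60))}   # uninformative
        for name, score in rank_pathways(X, y, pathways):
            print(f"{name}: CV accuracy = {score:.2f}")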

    Quantitative diffusion MRI with application to multiple sclerosis

    Diffusion MRI (dMRI) is a uniquely non-invasive probe of biological tissue properties, increasingly able to provide access to ever more intricate structural and microstructural tissue information. Imaging biomarkers that reveal pathological alterations can help advance our knowledge of complex neurological disorders such as multiple sclerosis (MS), but depend on both high-quality image data and robust post-processing pipelines. The overarching aim of this thesis was to develop methods to improve the characterisation of brain tissue structure and microstructure using dMRI. Two distinct avenues were explored. In the first approach, network science and graph theory were used to identify core human brain networks with improved sensitivity to subtle pathological damage. A novel consensus subnetwork was derived using graph partitioning techniques to select nodes based on independent measures of centrality, and it was better able to explain cognitive impairment in relapsing-remitting MS patients than either full-brain or default mode networks. The influence of the edge weighting scheme on graph characteristics was explored in a separate study, which contributes to the connectomics field by demonstrating how study outcomes can be affected by an often-overlooked aspect of network design. The second avenue investigated the influence of image artefacts and noise on the accuracy and precision of microstructural tissue parameters. Correction methods for the echo planar imaging (EPI) Nyquist ghost artefact were systematically evaluated for the first time in high b-value dMRI, and the outcomes were used to develop a new 2D phase-corrected reconstruction framework with simultaneous channel-wise noise reduction appropriate for dMRI. The technique was demonstrated to alleviate biases associated with Nyquist ghosting and image noise in dMRI biomarkers, but has broader applications in other imaging protocols that utilise the EPI readout. I truly hope the research in this thesis will influence and inspire future work in the wider MR community.
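
    The consensus subnetwork idea from the first avenue can be sketched as follows (the selection rule, measures and threshold are assumptions for illustration, not the thesis's exact graph partitioning procedure): retain only the nodes that rank highly under several independent centrality measures.

        # Hedged sketch: consensus node selection across centrality measures.
        import networkx as nx

        def consensus_nodes(G, top_k=10):
            """Nodes in the top-k of every centrality measure considered."""
            measures = [
                nx.degree_centrality(G),
                nx.betweenness_centrality(G),
                nx.eigenvector_centrality(G, max_iter=1000),
            ]
            top_sets = [set(sorted(m, key=m.get, reverse=True)[:top_k])
                        for m in measures]
            return set.intersection(*top_sets)

        G = nx.karate_club_graph()  # stand-in for a brain connectome
        print(sorted(consensus_nodes(G, top_k=10)))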