7,346 research outputs found

    Multivariate Correlation Analysis for Supervised Feature Selection in High-Dimensional Data

    Get PDF
    The main theme of this dissertation is multivariate correlation analysis across different data types, for which we identify and define several research gaps. To address these gaps, we develop novel techniques that account for both the relevance of features to the target and the redundancy of features among themselves. Our techniques are designed to handle homogeneous data, i.e., only continuous or only categorical features, mixed data, i.e., both continuous and categorical features, and time series data

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Get PDF
    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges as well as exacerbates the ones associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: curse of dimensionality, data heterogeneity, missing data, class imbalance and scalability issues
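As an illustration of one of the challenges listed above, class imbalance is commonly mitigated by resampling before model fitting. The sketch below is a generic technique, not a method from the review, and the function name is ours: it oversamples the minority class of a binary-labeled dataset with replacement until the two classes are balanced.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance a binary-labeled dataset by resampling the minority
    class with replacement until both classes have equal counts."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    labels, counts = np.unique(y, return_counts=True)
    minority = labels[np.argmin(counts)]          # under-represented class
    deficit = counts.max() - counts.min()         # how many samples to add
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=deficit, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]
```

Random oversampling is only the simplest option; the review surveys more elaborate ML-based strategies for the same problem.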

    Relevance, Redundancy and Complementarity Trade-off (RRCT): a Principled, Generic, Robust Feature Selection Tool

    Get PDF
    We present a new heuristic feature-selection (FS) algorithm that integrates, in a principled algorithmic framework, the three key FS components: relevance, redundancy, and complementarity. We therefore call it the relevance, redundancy, and complementarity trade-off (RRCT). The association strength between each feature and the response, and between feature pairs, is quantified via an information-theoretic transformation of rank correlation coefficients, and feature complementarity is quantified using partial correlation coefficients. We empirically benchmark RRCT against 19 FS algorithms across four synthetic and eight real-world datasets in indicative, challenging settings, evaluating: (1) recovery of the true feature set and (2) out-of-sample performance in binary and multi-class classification problems when the selected features are fed into a random forest. RRCT is very competitive in both tasks, and we tentatively make suggestions on the generalizability and application of the best-performing FS algorithms across settings where they may operate effectively
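The relevance-redundancy part of such a trade-off can be sketched as a greedy search over Spearman rank correlations, in the spirit of mRMR-style selectors. The snippet below is a simplified illustration, not the RRCT algorithm itself: it omits the information-theoretic transformation and the partial-correlation complementarity term, and all function names are ours.

```python
import numpy as np

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks
    # (assumes no ties, which holds for continuous data).
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def greedy_relevance_redundancy(X, y, k):
    """Greedily pick k features, rewarding correlation with the target
    (relevance) and penalizing mean correlation with features already
    selected (redundancy). Complementarity is deliberately omitted."""
    n_feat = X.shape[1]
    relevance = np.array([abs(spearman(X[:, j], y)) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(spearman(X[:, j], X[:, s]))
                                  for s in selected])
            score = relevance[j] - redundancy  # relevance-redundancy trade-off
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

On data containing an exact duplicate of an already-selected feature, the redundancy penalty of 1.0 makes the duplicate's score negative, so a weaker but non-redundant feature is preferred.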

    Effective Discriminative Feature Selection with Non-trivial Solutions

    Full text link
    Feature selection and feature transformation, the two main ways to reduce dimensionality, are often presented separately. In this paper, a feature selection method is proposed by combining the popular transformation-based dimensionality reduction method Linear Discriminant Analysis (LDA) with sparsity regularization. We impose row sparsity on the transformation matrix of LDA through ℓ2,1-norm regularization to achieve feature selection, and the resulting formulation simultaneously selects the most discriminative features and removes the redundant ones. The formulation is extended to the ℓ2,p-norm regularized case, which is more likely to offer better sparsity when 0 < p < 1 and is thus a better approximation to the feature selection problem. An efficient algorithm is developed to solve the ℓ2,p-norm based optimization problem, and it is proved that the algorithm converges when 0 < p ≤ 2. Systematic experiments are conducted to analyze the behaviour of the proposed method. Promising experimental results on various types of real-world data sets demonstrate the effectiveness of our algorithm
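The row-sparsity mechanism can be illustrated directly: the ℓ2,1 norm sums the ℓ2 norms of the rows of the transformation matrix, so penalizing it drives entire rows toward zero, and the surviving large-norm rows mark the selected features. The sketch below shows only this scoring step (illustrative names, not the paper's optimization solver):

```python
import numpy as np

def l21_norm(W):
    # ℓ2,1 norm: sum of the ℓ2 norms of the rows of W.
    return float(np.sum(np.linalg.norm(W, axis=1)))

def select_features(W, k):
    # Rows of W driven to (near) zero by ℓ2,1 regularization correspond
    # to discarded features; keep the k rows with the largest ℓ2 norms.
    row_norms = np.linalg.norm(W, axis=1)
    return np.argsort(row_norms)[::-1][:k]
```

Here `W` stands for the learned LDA transformation matrix (features × projection dimensions); in the paper it is the output of the regularized optimization, not an input.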

    Bioinformatics applied to human genomics and proteomics: development of algorithms and methods for the discovery of molecular signatures derived from omic data and for the construction of co-expression and interaction networks

    Get PDF
    [EN] This PhD dissertation develops and applies bioinformatic methods and tools to address key current problems in the analysis of human omic data. The work is organised by main objective into four chapters focused on: (i) development of an algorithm for the analysis of changes and heterogeneity in large-scale omic data; (ii) development of a method for non-parametric feature selection; (iii) integration and analysis of human protein-protein interaction networks; and (iv) integration and analysis of human co-expression networks derived from tissue expression data and evolutionary profiles of proteins. In the first chapter, we developed and tested a new robust algorithm in R, called DECO, for the discovery of subgroups of features and samples within large-scale omic datasets, exploring all possible heterogeneity in feature differences through the integration of both data dispersion and predictor-response information in a new statistical parameter called h (heterogeneity score). In the second chapter, we present a simple non-parametric statistic to measure the cohesiveness of categorical variables along any quantitative variable, applicable to feature selection in all types of big data sets. In the third chapter, we describe an analysis of the human interactome integrating two global datasets from high-quality proteomics technologies: HuRI (a human protein-protein interaction network generated by a systematic experimental screening based on yeast two-hybrid technology) and Cell-Atlas (a comprehensive map of the subcellular localization of human proteins generated by antibody imaging). This analysis aims to create a framework for characterizing subcellular localization supported by the human protein-protein interactome.
In the fourth chapter, we developed a full integration of three high-quality proteome-wide resources (Human Protein Atlas, OMA and TimeTree) to generate a robust human co-expression network across tissues, placing each human protein along the evolutionary timeline. In this way, we investigate how evolutionarily old the different human proteins are and how strongly they are correlated, and we place them all in a common interaction network. As a general remark, all the work presented in this PhD uses and develops a wide variety of bioinformatic and statistical tools for the analysis, integration and elucidation of molecular signatures and biological networks using human omic data. Most of these data correspond to sample cohorts generated in recent biomedical studies on specific human diseases
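The abstract does not spell out the non-parametric statistic of the second chapter, but a standard rank-based measure of how cohesively a categorical variable groups along a quantitative one is the Kruskal-Wallis H statistic. The sketch below is our own illustration of that idea (without tie correction), not the thesis's statistic:

```python
import numpy as np

def kruskal_wallis_h(values, groups):
    """Rank-based cohesiveness of a categorical variable along a
    quantitative one: larger H means the groups occupy more clearly
    separated rank ranges (Kruskal-Wallis H, no tie correction)."""
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    n = len(values)
    ranks = np.empty(n)
    ranks[np.argsort(values)] = np.arange(1, n + 1)  # ranks 1..n
    h = 0.0
    for g in np.unique(groups):
        r = ranks[groups == g]
        # deviation of the group's mean rank from the overall mean rank
        h += len(r) * (r.mean() - (n + 1) / 2.0) ** 2
    return 12.0 / (n * (n + 1)) * h
```

Well-separated groups score high, interleaved groups score near zero, which is exactly the property a cohesiveness-based feature selector would rank features by.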

    Type-2 fuzzy logic system applications for power systems

    Get PDF
    PhD Thesis. In the move towards ubiquitous information and communications technology, an opportunity has arisen for further optimisation of the power system as a whole. Nonetheless, the fast growth of intermittent generation, concurrent with market deregulation, is driving a need for timely algorithms that can derive value from these new data sources. Type-2 fuzzy logic systems can offer approximate solutions to these computationally hard tasks by expressing non-linear relationships in a more flexible fashion. This thesis explores how type-2 fuzzy logic systems can provide solutions to two of these challenging power system problems: short-term load forecasting and voltage control in distribution networks. On one hand, time-series forecasting is a key input for economic, secure power systems: many tasks require a precise determination of the future short-term load (e.g. unit commitment or security assessment, among others), as does trading electricity as a commodity. As a consequence, short-term load forecasting is essential for energy stakeholders, and any inaccuracy translates directly into their financial performance. This is reflected in current power-systems literature, where a significant number of papers cover the subject. Extending the existing literature, this work focuses on how these systems should be implemented end to end to bring their predictive performance to light. Following this research direction, this thesis introduces a novel framework to automatically design type-2 fuzzy logic systems. On the other hand, the low-carbon economy is pushing the grid ever closer to its operational limits. Distribution networks are becoming active systems, with power flows and voltages defined not only by load but also by generation.
As a consequence, even if it is not yet absolutely clear how power systems will evolve in the long term, all plausible future scenarios call for real-time algorithms that can provide near-optimal solutions to this challenging mixed-integer non-linear problem. Aligned with research and industry efforts, this thesis introduces a scalable implementation that tackles this task in a divide-and-conquer fashion
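A core building block of such systems is the interval type-2 membership function, whose lower and upper grades bound a "footprint of uncertainty". The sketch below is a generic textbook construction, not the thesis's framework: a Gaussian primary membership function with an uncertain mean in [m1, m2].

```python
import math

def it2_gaussian_membership(x, m1, m2, sigma):
    """Interval type-2 Gaussian membership with uncertain mean in
    [m1, m2]: returns (lower, upper) membership grades, whose gap is
    the footprint of uncertainty at x."""
    def gauss(x, m):
        return math.exp(-0.5 * ((x - m) / sigma) ** 2)
    # Upper bound: 1 on the uncertain-mean interval, nearest Gaussian outside.
    if x < m1:
        upper = gauss(x, m1)
    elif x > m2:
        upper = gauss(x, m2)
    else:
        upper = 1.0
    # Lower bound: the smaller of the two extreme-mean Gaussians.
    lower = min(gauss(x, m1), gauss(x, m2))
    return lower, upper
```

Type-reduction (e.g. averaging the lower and upper firing intervals, or Karnik-Mendel iterations) then collapses these bounds to a crisp output, which is where the extra modelling flexibility over type-1 systems comes from.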