
    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., the genome) are analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advances in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to perform integrative analysis of biomedical data acquired from diverse modalities effectively and efficiently. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
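
    The review itself is method-agnostic, but the five challenges it lists can be made concrete with a toy pipeline. The sketch below is an illustrative assumption of ours, not a method from the paper: it concatenates two synthetic omics blocks and touches three of the challenges, namely missing data (imputation), the curse of dimensionality (PCA), and class imbalance (a class-weighted classifier). All array sizes and scikit-learn components are arbitrary choices.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_samples = 200
    transcriptome = rng.normal(size=(n_samples, 500))   # synthetic expression block
    methylome = rng.normal(size=(n_samples, 300))       # synthetic methylation block
    X = np.hstack([transcriptome, methylome])           # naive early (concatenation) integration
    X[rng.random(X.shape) < 0.05] = np.nan              # simulate missing measurements
    y = (rng.random(n_samples) < 0.1).astype(int)       # rare phenotype -> class imbalance

    pipeline = make_pipeline(
        SimpleImputer(strategy="median"),                            # missing data
        StandardScaler(),
        PCA(n_components=20),                                        # curse of dimensionality
        LogisticRegression(class_weight="balanced", max_iter=1000),  # class imbalance
    )
    print(cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc").mean())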

    Parameter-Free Extreme Learning Machine for Imbalanced Classification


    Financial Soundness Prediction Using a Multi-classification Model: Evidence from Current Financial Crisis in OECD Banks

    The paper aims to develop an early warning model that separates previously rated banks (337 Fitch-rated banks from the OECD) into three classes based on their financial health, using a one-year window. The early warning system is based on a classification model that estimates the Fitch ratings using Bankscope bank-specific data, regulatory data, and macroeconomic data as input variables. The authors propose a “hybridization technique” that combines the Extreme Learning Machine with the Synthetic Minority Over-sampling Technique (SMOTE). Given the imbalanced nature of the problem, the authors apply an oversampling technique to the data, aiming to improve the classification results on the minority groups. The proposed methodology outperforms other existing classification techniques used to predict bank solvency, proving essential in improving average accuracy and, especially, the performance of the minority groups.
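
    As a rough illustration of the hybridization idea described above (SMOTE oversampling followed by an Extreme Learning Machine), the sketch below trains a minimal ELM on SMOTE-balanced data. The synthetic three-class dataset, the hidden-layer size, and the use of scikit-learn and imbalanced-learn are our own assumptions; the authors' actual inputs are Bankscope, regulatory, and macroeconomic variables.

    import numpy as np
    from imblearn.over_sampling import SMOTE          # requires imbalanced-learn
    from sklearn.datasets import make_classification
    from sklearn.metrics import balanced_accuracy_score
    from sklearn.model_selection import train_test_split

    class SimpleELM:
        """Single-hidden-layer ELM: random input weights, least-squares output weights."""
        def __init__(self, n_hidden=200, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
            self.b = self.rng.normal(size=self.n_hidden)
            T = (y[:, None] == self.classes_[None, :]).astype(float)   # one-hot targets
            self.beta = np.linalg.pinv(self._hidden(X)) @ T            # closed-form solve
            return self
        def predict(self, X):
            return self.classes_[np.argmax(self._hidden(X) @ self.beta, axis=1)]

    # Synthetic stand-in for three imbalanced rating classes.
    X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                               n_classes=3, weights=[0.8, 0.15, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)       # oversample minorities
    elm = SimpleELM().fit(X_bal, y_bal)
    print("balanced accuracy:", balanced_accuracy_score(y_te, elm.predict(X_te)))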

    Diversified Ensemble Classifiers for Highly Imbalanced Data Learning and their Application in Bioinformatics

    In this dissertation, the problem of learning from highly imbalanced data is studied. Imbalanced data learning is of great importance and challenge in many real applications. Dealing with a minority class normally requires new concepts, observations, and solutions in order to fully understand the underlying complicated models. We try to systematically review and solve this special learning task in this dissertation. We propose a new ensemble learning framework, Diversified Ensemble Classifiers for Imbalanced Data Learning (DECIDL), based on the advantages of existing ensemble imbalanced learning strategies. Our framework combines three learning techniques: (a) ensemble learning, (b) artificial example generation, and (c) diversity construction by reverse data re-labeling. As a meta-learner, DECIDL utilizes general supervised learning algorithms as base learners to build an ensemble committee. We create a standard benchmark data pool, which contains 30 highly skewed sets with diverse characteristics from different domains, in order to facilitate future research on imbalanced data learning. We use this benchmark pool to evaluate and compare our DECIDL framework with several ensemble learning methods, namely under-bagging, over-bagging, SMOTE-bagging, and AdaBoost. Extensive experiments suggest that our DECIDL framework is comparable with other methods. The data sets, experiments, and results provide a valuable knowledge base for future research on imbalanced learning. We develop a simple but effective artificial example generation method for data balancing. Two new methods, DBEG-ensemble and DECIDL-DBEG, are then designed to improve the power of imbalanced learning. Experiments show that these two methods are comparable to the state-of-the-art methods, e.g., GSVM-RU and SMOTE-bagging. Furthermore, we investigate learning on imbalanced data from a new angle: active learning. By combining active learning with the DECIDL framework, we show that the newly designed Active-DECIDL method is very effective for imbalanced learning, suggesting that the DECIDL framework is very robust and flexible. Lastly, we apply the proposed learning methods to a real-world bioinformatics problem: protein methylation prediction. Extensive computational results show that the DECIDL method performs very well on this imbalanced data mining task. Importantly, the experimental results confirm our new contributions to this particular data learning problem.
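
    The abstract does not spell out DECIDL's internals, so the sketch below only illustrates under-bagging, one of the baseline ensemble strategies it is compared against: every committee member is trained on all minority examples plus a fresh random subsample of the majority class, and predictions are taken by majority vote. The base learner, committee size, and binary integer-label assumption are our own choices.

    import numpy as np
    from sklearn.base import clone
    from sklearn.tree import DecisionTreeClassifier

    def under_bagging_fit(X, y, n_estimators=15, base=None, seed=0):
        """Train an under-bagging committee; y is assumed to hold non-negative integer labels."""
        rng = np.random.default_rng(seed)
        base = base if base is not None else DecisionTreeClassifier(max_depth=5)
        minority = np.bincount(y).argmin()
        idx_min = np.where(y == minority)[0]
        idx_maj = np.where(y != minority)[0]
        committee = []
        for _ in range(n_estimators):
            sub = rng.choice(idx_maj, size=len(idx_min), replace=False)  # balance each bag
            idx = np.concatenate([idx_min, sub])
            committee.append(clone(base).fit(X[idx], y[idx]))
        return committee

    def under_bagging_predict(committee, X):
        """Majority vote over the committee's per-sample predictions."""
        votes = np.stack([member.predict(X) for member in committee])    # (n_estimators, n_samples)
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)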

    Fault Detection and Diagnosis with Imbalanced and Noisy Data: A Hybrid Framework for Rotating Machinery

    Fault diagnosis plays an essential role in reducing the maintenance costs of rotating machinery manufacturing systems. In many real applications of fault detection and diagnosis, data tend to be imbalanced, meaning that the number of samples for some fault classes is much smaller than the number of normal data samples. At the same time, in industrial conditions, accelerometers encounter high levels of disruptive signals, so the collected samples turn out to be heavily noisy. As a consequence, many traditional Fault Detection and Diagnosis (FDD) frameworks yield poor classification performance when dealing with real-world circumstances. Three main solutions have been proposed in the literature to cope with this problem: (1) the implementation of generative algorithms to increase the amount of under-represented input samples, (2) the employment of a classifier powerful enough to learn from imbalanced and noisy data, and (3) the development of efficient data pre-processing, including feature extraction and data augmentation. This paper proposes a hybrid framework which uses the three aforementioned components to achieve an effective signal-based FDD system for imbalanced conditions. Specifically, it first extracts fault features using Fourier and wavelet transforms to make full use of the signals. Then, it employs Wasserstein Generative Adversarial Networks (WGAN) to generate synthetic samples to populate the rare fault class and enhance the training set. Moreover, to achieve higher performance, a novel combination of Convolutional Long Short-term Memory (CLSTM) and Weighted Extreme Learning Machine (WELM) is proposed. To verify the effectiveness of the developed framework, different dataset settings with different imbalance severities and noise degrees were used. The comparative results demonstrate that, in different scenarios, GAN-CLSTM-ELM outperforms other state-of-the-art FDD frameworks. Comment: 23 pages, 11 figures
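
    The feature-extraction stage mentioned above (Fourier plus wavelet transforms of accelerometer signals) can be sketched as follows. The wavelet family ('db4'), decomposition level, sampling rate, and the particular per-band statistics are assumptions on our part, not the paper's exact recipe; the snippet needs only NumPy and PyWavelets.

    import numpy as np
    import pywt

    def fdd_features(signal, fs=12_000, wavelet="db4", level=4):
        """Return a flat feature vector for one accelerometer segment."""
        # Fourier side: spectral magnitude statistics and the dominant frequency.
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        fft_feats = [spectrum.mean(), spectrum.std(), freqs[np.argmax(spectrum)]]
        # Wavelet side: per-band energy and peak amplitude.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        wav_feats = []
        for band in coeffs:
            wav_feats += [np.sum(band ** 2), np.max(np.abs(band))]
        return np.array(fft_feats + wav_feats)

    # Example on a synthetic noisy vibration segment.
    t = np.arange(0, 1, 1 / 12_000)
    segment = np.sin(2 * np.pi * 157 * t) + 0.5 * np.random.randn(len(t))
    print(fdd_features(segment).shape)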

    Efficient Fraud Detection in Ethereum Blockchain through Machine Learning and Deep Learning Approaches

    Background: This paper tackles the critical challenge of detecting fraudulent transactions within the Ethereum blockchain using machine learning techniques. With the burgeoning importance of blockchain, ensuring its security against fraudulent activities is crucial to prevent significant monetary losses. We utilized a public dataset comprising 9,841 Ethereum transactions, characterized by attributes such as gas price, transaction fee, and timestamp. Methods: Our approach is bifurcated into two core phases: data preprocessing and predictive modeling. In the data preprocessing phase, we meticulously process the dataset and extract pivotal features from transactions, setting the stage for efficient predictive modeling. Findings: For predictive modeling, we employed several machine learning algorithms to discern between fraudulent and legitimate transactions. Our evaluation encompassed algorithms like decision trees, logistic regression, gradient boosting, XGBoost, and an innovative hybrid model that melds random forests with deep neural networks (DNN). Novelty: Our findings underscore that the proposed model boasts a precision rate of 97.16%, marking a substantial leap in fraudulent transaction detection on the Ethereum blockchain in comparison to prevailing methodologies. This paper augments the current efforts aimed at bolstering the security of blockchain transactions using sophisticated analytical strategies.
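
    A minimal sketch of the kind of pipeline described: tabular transaction features fed to one of the evaluated algorithms (gradient boosting here). The CSV path, column names, and label name below are placeholders we invented for illustration; they are not the schema of the public dataset the authors used.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("ethereum_transactions.csv")                         # hypothetical file
    feature_cols = ["gas_price", "transaction_fee", "value", "tx_count"]  # assumed column names
    X = df[feature_cols].fillna(0.0)
    y = df["is_fraud"].astype(int)                                        # assumed label column

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=42)
    model = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)
    print(classification_report(y_te, model.predict(X_te), digits=4))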

    Big Data Supervised Pairwise Ortholog Detection in Yeasts

    Orthologs are genes in different species that evolved from a common ancestor. Ortholog detection is essential to study phylogenies and to predict the function of unknown genes. The scalability of gene (or protein) pairwise comparisons and of the classification process constitutes a challenge due to the ever-increasing number of sequenced genomes. Ortholog detection algorithms based solely on sequence similarity tend to fail in classification, specifically in Saccharomycete yeasts with rampant paralogies and gene losses. In this book chapter, a new classification approach is proposed based on the combination of pairwise similarity measures in a decision system that considers the extreme imbalance between ortholog and non-ortholog pairs. New gene-pair similarity measures are defined based on protein physicochemical profiles, gene-pair membership in conserved regions of related genomes, and protein lengths. The efficiency and scalability of calculating these measures are analyzed in order to propose their implementation for big data. In conclusion, the evaluated supervised algorithms that manage big and imbalanced data showed high effectiveness on Saccharomycete yeast genomes.
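
    To make the idea of combining pairwise similarity measures more concrete, the sketch below defines two toy gene-pair features: a protein-length similarity and a crude physicochemical-profile similarity based only on the Kyte-Doolittle hydropathy scale. The exact measures, scales, and their combination in the decision system are not given in the abstract, so these definitions are illustrative assumptions; in practice such features would feed a classifier that handles the ortholog/non-ortholog imbalance.

    import numpy as np

    KD_HYDROPATHY = {  # Kyte-Doolittle hydropathy index per amino acid
        "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
        "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
        "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
    }

    def length_similarity(seq_a, seq_b):
        """Ratio of the shorter to the longer protein length, in [0, 1]."""
        return min(len(seq_a), len(seq_b)) / max(len(seq_a), len(seq_b))

    def hydropathy_profile_similarity(seq_a, seq_b):
        """1 minus the normalized difference of mean hydropathy, in [0, 1]."""
        mean_a = np.mean([KD_HYDROPATHY.get(aa, 0.0) for aa in seq_a])
        mean_b = np.mean([KD_HYDROPATHY.get(aa, 0.0) for aa in seq_b])
        return 1.0 - abs(mean_a - mean_b) / 9.0   # 9.0 = scale range (-4.5 to 4.5)

    # Toy sequences standing in for proteins from two yeast genomes.
    seq1, seq2 = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MKVAYLAKQRQISFVKAHFSRQLEERLGLIEVQ"
    print(length_similarity(seq1, seq2), hydropathy_profile_similarity(seq1, seq2))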