142 research outputs found

    Machine Learning and Integrative Analysis of Biomedical Big Data.

    Get PDF
    Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.
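    As a hedged illustration of one integration strategy surveyed in such reviews (not a method taken from the paper itself), the sketch below shows "early integration": per-omics feature blocks are concatenated and a supervised feature selector curbs the curse of dimensionality before classification. The data, dimensions, and scikit-learn pipeline are illustrative assumptions.

# Hypothetical sketch of "early integration": concatenate per-omics feature
# blocks, then reduce dimensionality before classification (illustrative only).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 100
genome = rng.normal(size=(n_samples, 5000))         # e.g., SNP-derived features
transcriptome = rng.normal(size=(n_samples, 2000))  # e.g., gene expression
clinical = rng.normal(size=(n_samples, 20))         # e.g., age, lab values
y = rng.integers(0, 2, size=n_samples)              # binary phenotype

# Early integration: one wide matrix with far more features than samples.
X = np.hstack([genome, transcriptome, clinical])

# Univariate feature selection tames the curse of dimensionality before the
# classifier sees the data; the selector is fit inside each CV fold.
model = make_pipeline(SelectKBest(f_classif, k=100),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=5)
print("CV accuracy:", scores.mean())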

    Prediction of the transaction confirmation time in Ethereum Blockchain

    Full text link
    Blockchain offers a decentralized, immutable, and transparent system of records. It provides a peer-to-peer network of nodes with no centralized governing entity, making it 'unhackable' and therefore more secure than traditional paper-based or centralized systems of record such as banks. While the paper-based recording approach has certain advantages, it does not work well with digital relationships where the data is in constant flux. Unlike traditional channels governed by centralized entities, blockchain offers its users a degree of anonymity by allowing them to interact without disclosing their personal identities and to build trust without a third-party governing entity. Because of these characteristics, more and more users around the globe are inclined to make digital transactions via blockchain rather than through conventional channels. There is therefore a pressing need to understand how these transactions are processed by the blockchain and how much time it may take for a peer to confirm a transaction and add it to the blockchain. In this thesis, we introduce a novel approach for estimating the time (in block time or otherwise) it would take the Ethereum blockchain to accept and confirm a transaction into a block, using machine learning. We explore the two most fundamental machine learning approaches, classification and regression, to determine which of the two yields more accurate confirmation-time predictions for the Ethereum blockchain. More specifically, for the classification approach we explore the Naïve Bayes, Random Forest, and Multilayer Perceptron classifiers. Since most transactions in the network are confirmed well within the average confirmation time of two block confirmations, or 15 seconds, we also discuss ways to tackle the skewed-dataset problem encountered with the classification approach. We also compare the predictive accuracy of two machine learning regression models, a Random Forest Regressor and a Multilayer Perceptron, against previously proposed statistical regression models under a set evaluation criterion; the objective is to determine whether machine learning offers a more accurate predictive model than conventional statistical models.
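    As a hedged sketch of the regression side of the comparison described above (not the thesis's actual features, data, or evaluation protocol), the following compares a Random Forest Regressor and a Multilayer Perceptron on synthetic transaction features; the feature names (gas_price, gas_limit, pending_txs) and the target construction are assumptions for illustration only.

# Illustrative sketch: compare two regressors for confirmation-time prediction.
# Feature names (gas_price, gas_limit, pending_txs) are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 5000
gas_price = rng.lognormal(mean=3.0, sigma=0.5, size=n)   # gwei (synthetic)
gas_limit = rng.uniform(21_000, 500_000, size=n)
pending_txs = rng.integers(1_000, 50_000, size=n)
X = np.column_stack([gas_price, gas_limit, pending_txs])
# Synthetic target: lower gas price and a fuller pending pool -> longer wait.
y = 15 + 200 / gas_price + pending_txs / 5_000 + rng.normal(0, 2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RandomForestRegressor": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLPRegressor": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                                 random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.2f} seconds")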

    Click Fraud Detection in Online and In-app Advertisements: A Learning Based Approach

    Get PDF
    Click fraud is the fraudulent act of clicking on pay-per-click advertisements to increase a site's revenue, to drain revenue from the advertiser, or to inflate the popularity of content on social media platforms. In-app advertisements on mobile platforms are among the most common targets for click fraud, which makes companies hesitant to advertise their products. Fraudulent clicks are supposed to be caught by ad providers as part of their service to advertisers, which is commonly done using machine learning methods. However: (1) there is a lack of research in the current literature addressing and evaluating the different techniques of click fraud detection and prevention; (2) threat models composed of active learning systems (smart attackers) can mislead the training process of the fraud detection model by polluting the training data; (3) current deep learning models have significant computational overhead; (4) training data is often imbalanced, and balancing it still results in noisy data that can train the classifier incorrectly; and (5) datasets with high dimensionality cause increased computational overhead and decreased classifier accuracy, and while existing feature selection techniques address this issue, they have their own performance limitations. By extending state-of-the-art techniques in the field of machine learning, this dissertation provides the following solutions: (i) to address (1) and (2), we propose a hybrid deep-learning-based model consisting of an artificial neural network, an auto-encoder, and a semi-supervised generative adversarial network; (ii) as a solution for (3), we present Cascaded Forest and Extreme Gradient Boosting with less hyperparameter tuning; (iii) to overcome (4), we propose a row-wise data reduction method, KSMOTE, which filters out noisy samples both in the raw data and in the synthetically generated samples; and (iv) for (5), we propose different column-reduction methods, such as multi-time-scale time series analysis for fraud forecasting using binary-labeled imbalanced datasets, and hybrid filter-wrapper feature selection approaches.
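    The following is a hedged sketch of the general idea behind row-wise reduction of noisy oversampled data (it is not the dissertation's KSMOTE algorithm): SMOTE oversamples the minority class, then a simple nearest-neighbour agreement filter drops samples that look like noise before training a boosted classifier. The dataset and the filtering rule are illustrative assumptions.

# Illustrative sketch (not the dissertation's KSMOTE): oversample the minority
# class with SMOTE, then drop rows whose nearest neighbours mostly belong to
# the opposite class, treating them as noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Noise filter: keep a sample only if at least half of its 5 nearest
# neighbours (excluding itself) share its label.
nn = NearestNeighbors(n_neighbors=6).fit(X_res)
_, idx = nn.kneighbors(X_res)
agreement = (y_res[idx[:, 1:]] == y_res[:, None]).mean(axis=1)
keep = agreement >= 0.5

clf = GradientBoostingClassifier().fit(X_res[keep], y_res[keep])
print(classification_report(y_te, clf.predict(X_te), digits=3))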

    SLA violation prediction : a machine learning perspective

    Get PDF
    Cloud computing reduces the maintenance costs of services and allows users to access services on demand without being involved in technical implementation details. The relationship between a cloud provider and a customer is governed by a Service Level Agreement (SLA), which defines the level of the service and its associated costs. An SLA usually contains specific parameters and a minimum level of quality for each element of the service, negotiated between the cloud provider and the customer. However, one or more of the agreed terms in an SLA might be violated due to issues such as occasional technical problems. Violations do happen in the real world: in terms of availability, Amazon Elastic Cloud faced an outage in 2011 when it crashed and many large customers, such as Reddit and Quora, were down for more than a day. As SLA violation prediction benefits both the user and the cloud provider, cloud researchers have in recent years started investigating models capable of predicting future violations. From a machine learning point of view, the problem of SLA violation prediction amounts to a binary classification problem. In this thesis, we explore two machine learning classification models, Naive Bayes and Random Forest, to predict future violations using the features of a submitted task. Unlike previous work on SLA violation prediction or avoidance, our models are trained on a real-world dataset, which introduces new challenges. We validate our models using the Google Cloud Cluster trace as the dataset. Since SLA violations are rare events in the real world (2.2%), the classification task is more challenging because the classifier will always tend to predict the dominant class. To overcome this issue, we use several re-sampling methods, such as Random Over-Sampling, Under-Sampling, SMOTE, NearMiss, One-Sided Selection, and Neighborhood Cleaning Rule, as well as an ensemble of them, to re-balance the dataset.
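    A minimal sketch of the kind of re-balancing experiment described above, using imbalanced-learn implementations of several of the named re-sampling methods with a Random Forest classifier; the synthetic dataset and its roughly 2.2% positive rate stand in for the Google Cloud Cluster trace and are assumptions for illustration.

# Illustrative sketch: compare resampling strategies on a synthetic dataset
# with a ~2% positive (violation) rate, using imbalanced-learn and scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import (RandomUnderSampler, NearMiss,
                                     OneSidedSelection, NeighbourhoodCleaningRule)

X, y = make_classification(n_samples=20000, n_features=15,
                           weights=[0.978, 0.022], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

samplers = {
    "none": None,
    "RandomOverSampler": RandomOverSampler(random_state=0),
    "SMOTE": SMOTE(random_state=0),
    "RandomUnderSampler": RandomUnderSampler(random_state=0),
    "NearMiss": NearMiss(),
    "OneSidedSelection": OneSidedSelection(random_state=0),
    "NeighbourhoodCleaningRule": NeighbourhoodCleaningRule(),
}
for name, sampler in samplers.items():
    X_bal, y_bal = (X_tr, y_tr) if sampler is None else sampler.fit_resample(X_tr, y_tr)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
    print(f"{name:>26}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")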

    Autoencoder-based techniques for improved classification in settings with high dimensional and small sized data

    Get PDF
    Neural network models have been widely tested and analysed using large sized high dimensional datasets. In real world application problems, the available datasets are often limited in size due to reasons related to the cost or difficulties encountered while collecting the data. This limitation in the number of examples may challenge the classification algorithms and degrade their performance. A motivating example for this kind of problem is predicting the health status of a tissue given its gene expression, when the number of samples available to learn from is very small. Gene expression data has distinguishing characteristics attracting the machine learning research community. The high dimensionality of the data is one of the integral features that has to be considered when building predictive models. A single sample of the data is expressed by thousands of gene expressions, compared with the benchmark images and texts commonly used for analysing existing models, which only have a few hundred features. Gene expression data samples are also distributed unequally among the classes; in addition, they include noisy features which degrade the prediction accuracy of the models. These characteristics give rise to the need for effective dimensionality reduction methods that are able to discover the complex relationships between the features, such as autoencoders. This thesis investigates the problem of predicting from small sized high dimensional datasets by introducing novel autoencoder-based techniques to increase the classification accuracy of the data. Two autoencoder-based methods, for generating synthetic data examples and synthetic representations of the data respectively, were introduced in the first stage of the study. Both methods are applicable to the testing phase of the autoencoder and proved successful in increasing the predictability of the data. Enhancing the autoencoder's ability to learn from small sized imbalanced data was investigated in the second stage of the project, with the aim of improving the autoencoder's generated representations. Employing the radial basis activation mechanism used in radial-basis function networks, which learn in a supervised manner, is a solution provided by this thesis to enhance the representations learned by unsupervised algorithms. This technique was later applied to stochastic variational autoencoders and showed promising results in learning discriminating representations from the gene expression data. The contributions of this thesis comprise a number of different methods, applicable to different stages (training and testing) and different autoencoder models (deterministic and stochastic), which individually allow for enhancing the predictability of small sized high dimensional datasets compared with well known baseline methods.
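    As a hedged illustration of the general setting (not the thesis's specific techniques), the sketch below trains a plain autoencoder on a small, high-dimensional synthetic dataset and then classifies in the learned latent space; the architecture, dimensions, and data are assumptions.

# Illustrative sketch: compress a small, high-dimensional dataset with a plain
# autoencoder, then classify in the learned latent space (synthetic data).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

torch.manual_seed(0)
n_samples, n_features, latent_dim = 80, 2000, 16   # few samples, many features
X = torch.randn(n_samples, n_features)
y = torch.randint(0, 2, (n_samples,)).numpy()

encoder = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                        nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, n_features))
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()),
                             lr=1e-3)
loss_fn = nn.MSELoss()

# Unsupervised reconstruction training.
for epoch in range(200):
    optimizer.zero_grad()
    recon = decoder(encoder(X))
    loss = loss_fn(recon, X)
    loss.backward()
    optimizer.step()

# Classify in the low-dimensional representation.
with torch.no_grad():
    Z = encoder(X).numpy()
scores = cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=5)
print("Latent-space CV accuracy:", scores.mean())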

    BagStack Classification for Data Imbalance Problems with Application to Defect Detection and Labeling in Semiconductor Units

    Get PDF
    Despite the fact that machine learning supports the development of computer vision applications by shortening the development cycle, finding a general learning algorithm that solves a wide range of applications is still bounded by the "no free lunch" theorem. The search for the right algorithm to solve a specific problem is driven by the problem itself, the availability of data, and many other requirements. Automated visual inspection (AVI) systems represent a major part of these challenging computer vision applications. They are gaining growing interest in the manufacturing industry as a way to detect defective products and keep them from reaching customers. The process of defect detection and classification in semiconductor units is challenging due to the acceptable variations that the manufacturing process introduces. Further variations are typically introduced by optical inspection systems, due to changes in lighting conditions and misalignment of the imaged units, which makes the defect detection process more challenging still. In this thesis, a BagStack classification framework is proposed, which makes use of stacking and bagging concepts to handle both variance and bias errors. The classifier is designed to handle the data imbalance and overfitting problems by adaptively transforming the multi-class classification problem into multiple binary classification problems; applying a bagging approach to train a set of base learners for each specific problem; adaptively specifying the number of base learners assigned to each problem and the number of samples to use from each class; applying a novel data-imbalance-aware cross-validation technique to generate the meta-data while taking the imbalance into account at the meta-data level; and, finally, using a multi-response random forest regression classifier as the meta-classifier. The BagStack classifier makes use of multiple features to solve the defect classification problem. To detect defects, a locally adaptive statistical background modeling approach is proposed. The proposed BagStack classifier outperforms state-of-the-art image classification techniques on our dataset in terms of overall classification accuracy and average per-class classification accuracy, and the proposed detection method achieves high performance on the considered dataset in terms of recall and precision.
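    The following hedged sketch shows the generic combination of bagging and stacking that the BagStack name alludes to, not the exact framework proposed in the thesis: bagged base learners feed a meta-learner through internal cross-validated predictions, evaluated with balanced accuracy on a synthetic, imbalanced multi-class dataset (all assumptions).

# Illustrative sketch: several bagged base learners feed a meta-learner via
# internal cross-validated predictions. Data here is synthetic and imbalanced.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, StackingClassifier,
                              RandomForestClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6,
                           weights=[0.8, 0.15, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [
    ("bagged_trees", BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                       random_state=0)),
    ("bagged_knn", BaggingClassifier(KNeighborsClassifier(), n_estimators=20,
                                     random_state=0)),
]
# The meta-learner is trained on out-of-fold predictions of the base learners.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=RandomForestClassifier(random_state=0),
                           cv=5)
stack.fit(X_tr, y_tr)
print("Balanced accuracy:", balanced_accuracy_score(y_te, stack.predict(X_te)))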

    Differential evolution technique on weighted voting stacking ensemble method for credit card fraud detection

    Get PDF
    Differential Evolution is a stochastic, population-based search technique that is powerful and efficient over continuous spaces for solving non-linear, non-differentiable optimization problems. The weighted voting stacking ensemble method is an important technique that combines various classifier models; however, selecting appropriate weights for the classifier models so that transactions are classified correctly is a problem. This research study therefore explores whether the Differential Evolution optimization method is a good approach for defining the weighting function. Manual and random selection of weights for voting on credit card transactions has previously been carried out; however, a large number of fraudulent transactions were not detected by the classifier models, which means that a technique to overcome the weaknesses of the classifier models is required. Thus, the problem of selecting appropriate weights was viewed in this study as a weight optimization problem. The dataset was downloaded from the Kaggle competition data repository. Various machine learning algorithms were used to weight-vote the class of a transaction, with the Differential Evolution optimization technique serving as the weighting function. In addition, the Synthetic Minority Oversampling Technique (SMOTE) and Safe Level Synthetic Minority Oversampling Technique (SL-SMOTE) oversampling algorithms were modified to preserve the definition of SMOTE while improving performance. The results of this study show that the Differential Evolution optimization method is a good weighting function, which can be adopted as a systematic weighting function for the weighted voting stacking ensemble of various classification methods.
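    As a hedged sketch of the core idea (with assumed data, classifiers, and objective, not the study's exact setup), SciPy's differential_evolution can search for soft-voting weights that maximize the F1 score of a small ensemble on an imbalanced 'fraud' dataset:

# Illustrative sketch: optimize soft-voting weights with differential evolution.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=5000, weights=[0.98, 0.02], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, random_state=0)

models = [RandomForestClassifier(random_state=0),
          LogisticRegression(max_iter=1000),
          GaussianNB()]
probas = [m.fit(X_tr, y_tr).predict_proba(X_val)[:, 1] for m in models]
probas = np.column_stack(probas)          # shape: (n_val, n_models)

def neg_f1(weights):
    # Normalize weights, form the weighted soft vote, and score it.
    w = np.asarray(weights) / (np.sum(weights) + 1e-12)
    votes = (probas @ w >= 0.5).astype(int)
    return -f1_score(y_val, votes)

result = differential_evolution(neg_f1, bounds=[(0, 1)] * len(models), seed=0)
print("Best weights:", result.x / result.x.sum(), "F1:", -result.fun)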

    Advancing natural language processing in political science

    Get PDF

    Uncovering the Potential of Federated Learning: Addressing Algorithmic and Data-driven Challenges under Privacy Restrictions

    Get PDF
    Federated learning is a groundbreaking distributed machine learning paradigm that allows for the collaborative training of models across various entities without directly sharing sensitive data, ensuring privacy and robustness. This Ph.D. dissertation delves into the intricacies of federated learning, investigating the algorithmic and data-driven challenges of deep learning models in the presence of additive noise in this framework. The main objective is to provide strategies to measure the generalization, stability, and privacy-preserving capabilities of these models and further improve them. To this end, five noise infusion mechanisms at varying noise levels within centralized and federated learning settings are explored. As model complexity is a key component of the generalization and stability of deep learning models during training and evaluation, a comparative analysis of three Convolutional Neural Network (CNN) architectures is provided. A key contribution of this study is introducing specific metrics for training with noise. Signal-to-Noise Ratio (SNR) is introduced as a quantitative measure of the trade-off between privacy and training accuracy of noise-infused models, aiming to find the noise level that yields optimal privacy and accuracy. Moreover, the Price of Stability and Price of Anarchy are defined in the context of privacy-preserving deep learning, contributing to the systematic investigation of the noise infusion mechanisms to enhance privacy without compromising performance. This research sheds light on the delicate balance between these critical factors, fostering a deeper understanding of the implications of noise-based regularization in machine learning. The present study also explores a real-world application of federated learning in weather prediction, an application that suffers from imbalanced datasets. Utilizing data from multiple sources combined with advanced data augmentation techniques improves the accuracy and generalization of weather prediction models, even when dealing with imbalanced datasets. Overall, federated learning is pivotal in harnessing decentralized datasets for real-world applications while safeguarding privacy. By leveraging noise as a tool for regularization and privacy enhancement, this research study aims to contribute to the development of robust, privacy-aware algorithms, ensuring that AI-driven solutions prioritize both utility and privacy.
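    A hedged, purely synthetic sketch of the noise-infusion idea described above (not the dissertation's experiments or metric definitions): Gaussian noise is added to client updates before FedAvg-style aggregation, and a simple signal-to-noise ratio of the aggregated update is reported at each noise level.

# Illustrative sketch: noisy federated averaging and the SNR of the aggregate.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_params = 10, 1000
true_update = rng.normal(size=n_params)               # shared "signal"
client_updates = true_update + 0.1 * rng.normal(size=(n_clients, n_params))

for sigma in [0.0, 0.1, 0.5, 1.0]:                    # noise levels to compare
    noisy = client_updates + sigma * rng.normal(size=client_updates.shape)
    aggregated = noisy.mean(axis=0)                   # FedAvg-style aggregation
    noise = aggregated - client_updates.mean(axis=0)  # deviation from clean mean
    snr = np.sum(client_updates.mean(axis=0) ** 2) / (np.sum(noise ** 2) + 1e-12)
    print(f"sigma={sigma:.1f}  SNR={10 * np.log10(snr + 1e-12):.1f} dB")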