Fraud detection for online banking for scalable and distributed data
Online fraud causes billions of dollars in losses for banks, making online banking fraud detection an important field of study. However, research in fraud detection faces many challenges. One constraint is the unavailability of bank datasets for research, or of the required characteristics of the attributes of the data. Numeric data usually provides better performance for machine learning algorithms, yet most transaction data also contain categorical or nominal features. Moreover, some platforms, such as Apache Spark, recognize only numeric data. Techniques such as one-hot encoding (OHE) are therefore needed to transform categorical features into numeric features; however, OHE has challenges of its own, including the sparseness of the transformed data and the fact that the distinct values of an attribute are not always known in advance. Efficient feature engineering can improve an algorithm's performance but usually requires detailed domain knowledge to identify the correct features. Techniques like Ripple Down Rules (RDR) are suitable for fraud detection because of their low maintenance and incremental learning features. However, achieving high classification accuracy on mixed datasets, especially at scale, is challenging, and evaluating RDR on distributed platforms is also difficult because it is not available on them. The thesis proposes the following solutions to these challenges:
• We developed Highly Correlated Rule Based Uniformly Distribution (HCRUD), a technique to generate highly correlated, rule-based, uniformly distributed synthetic data.
• We developed One-hot Encoded Extended Compact (OHE-EC), a technique to transform categorical features into numeric features by compacting sparse data even when not all distinct values are known in advance.
• We developed Feature Engineering and Compact Unified Expressions (FECUE), a technique to improve model efficiency through feature engineering when the domain of the data is not known in advance.
• A Unified Expression RDR fraud detection technique (UE-RDR) for Big Data has been proposed and evaluated on the Spark platform.
Empirical tests were executed on a multi-node Hadoop cluster using well-known classifiers on bank data, synthetic bank datasets and publicly available datasets from the UCI repository. These evaluations demonstrated substantial improvements in classification accuracy, ruleset compactness and execution speed.
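The two OHE problems the abstract names, sparse output and unknown distinct values, are easy to see in a minimal sketch. The code below is plain one-hot encoding for illustration, not the thesis's OHE-EC compaction; the `transactions` data and column name are made up.

```python
from collections import OrderedDict

def one_hot_encode(rows, column):
    """Plain one-hot encoding: one binary column per distinct value.

    Shows the two issues the abstract mentions: the output matrix is
    mostly zeros (sparse), and the category list must be known up
    front -- a value unseen at fit time would have no column.
    """
    # collect distinct values in first-seen order
    categories = list(OrderedDict.fromkeys(row[column] for row in rows))
    encoded = [[1 if row[column] == c else 0 for c in categories]
               for row in rows]
    return categories, encoded

transactions = [  # toy transaction records, invented for this sketch
    {"channel": "web"}, {"channel": "atm"},
    {"channel": "web"}, {"channel": "branch"},
]
cats, matrix = one_hot_encode(transactions, "channel")
# cats   -> ['web', 'atm', 'branch']
# matrix -> [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```

With thousands of distinct values per attribute, each row carries thousands of zeros, which is the sparseness that compaction schemes like OHE-EC target.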
Effective techniques for handling incomplete data using decision trees
Decision Trees (DTs) have been recognized as one of the most successful formalisms for knowledge representation and reasoning and are currently applied to a variety of data mining and knowledge discovery applications, particularly classification problems. There are several efficient methods to learn a DT from data. However, these methods often rest on the assumption that the data are complete.
In this thesis, some contributions to the field of machine learning and statistics that solve the problem of extracting DTs for learning and classification tasks from incomplete databases are presented. The methodology underlying the thesis blends together well-established statistical theories with the most advanced techniques for machine learning and automated reasoning with uncertainty.
The first contribution is an extensive simulation study of the impact of missing data on the predictive accuracy of existing DTs that can cope with missing values, whether the missing values occur in both the training and test sets or in only one of the two. All simulations are performed under the missing completely at random, missing at random and informatively missing (IM) mechanisms, and for different missing data patterns and proportions.
The next contribution is a simple, novel, yet effective procedure for training and testing decision trees in the presence of missing data. Original and simple splitting criteria for attribute selection in tree building are put forward. The proposed technique is evaluated and validated in empirical tests over many real-world application domains. The proposed algorithm matches (and sometimes exceeds) the outstanding accuracy of multiple imputation, especially on datasets containing mixed attributes and purely nominal attributes, and it greatly improves accuracy on IM data. Another major advantage of this method over multiple imputation is the substantial saving in computational resources due to its simplicity.
The next contribution is the proposal of three versions of simple probabilistic techniques for classifying incomplete vectors using decision trees built from complete data. The proposed procedure is superficially similar to that of fractional cases but more effective. The experimental results demonstrate that these approaches achieve quality comparable to sophisticated algorithms like multiple imputation and are therefore applicable to all kinds of datasets.
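The fractional-cases idea that these techniques are compared against can be sketched compactly: when the attribute tested at a node is missing, descend into every branch and weight each child's prediction by the fraction of training cases that followed it. The node layout below is hypothetical, invented purely for illustration; it is not the thesis's algorithm.

```python
def classify_with_missing(node, x):
    """Classify a possibly-incomplete vector with a decision tree.

    Hypothetical node format for this sketch:
      leaf:     {"dist": {class: prob}}
      internal: {"attr": index, "children": {value: subtree},
                 "weights": {value: training fraction}}
    A missing attribute value is represented as None.
    """
    if "dist" in node:                       # leaf: return class distribution
        return dict(node["dist"])
    value = x[node["attr"]]
    if value is not None:                    # attribute observed: follow branch
        return classify_with_missing(node["children"][value], x)
    blended = {}                             # attribute missing: blend branches
    for v, child in node["children"].items():
        sub = classify_with_missing(child, x)
        for cls, p in sub.items():
            blended[cls] = blended.get(cls, 0.0) + node["weights"][v] * p
    return blended

tree = {  # toy stump: 60% of training cases went down the "yes" branch
    "attr": 0,
    "weights": {"yes": 0.6, "no": 0.4},
    "children": {
        "yes": {"dist": {"fraud": 0.9, "ok": 0.1}},
        "no":  {"dist": {"fraud": 0.2, "ok": 0.8}},
    },
}
print(classify_with_missing(tree, [None]))  # fraud: 0.62, ok: 0.38
```

The blended distribution (0.6 × 0.9 + 0.4 × 0.2 = 0.62 for "fraud") is what a single incomplete vector contributes when it is split fractionally across branches.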
Finally, two novel ensemble procedures for handling incomplete training and test data are proposed and discussed. The algorithms combine the two best approaches, either with resampling (REMIMIA) or without resampling (EMIMIA) of the training data, before growing the decision trees. Empirical tests evaluate and validate the success of the proposed ensemble methods against individual missing-data techniques; EMIMIA attains the highest overall level of prediction accuracy.
OFSET_mine: an integrated framework for cardiovascular diseases risk prediction based on retinal vascular function
As cardiovascular disease (CVD) represents a spectrum of disorders that often manifest for the first time through an acute life-threatening event, early identification of seemingly healthy subjects with various degrees of risk is a priority. More recently, traditional scores used for early identification of CVD risk are slowly being replaced by more sensitive biomarkers that assess individual, rather than population, risks for CVD. Among these, retinal vascular function, as assessed by the retinal vessel analysis (RVA) method, has been proven an accurate reflection of subclinical CVD in groups of participants without overt disease but with certain inherited or acquired risk factors. Furthermore, in order to correctly detect individual risk at an early stage, specialized machine learning methods and feature selection techniques that can cope with the characteristics of the data need to be devised. The main contribution of this thesis is an integrated framework, OFSET_mine, that combines novel machine learning methods to produce a bespoke solution for cardiovascular risk prediction based on RVA data that is also applicable to other medical datasets with similar characteristics. The three identified essential characteristics are 1) imbalanced datasets, 2) high dimensionality, and 3) overlapping feature ranges with the possibility of acquiring new samples. The thesis proposes FiltADASYN as an oversampling method that deals with imbalance, DD_Rank as a feature selection method that handles high dimensionality, and GCO_mine as a method for individual-based classification, all three integrated within the OFSET_mine framework. The new oversampling method FiltADASYN extends Adaptive Synthetic Oversampling (ADASYN) with an additional step that filters the generated samples and improves the reliability of the resultant sample set. The feature selection method DD_Rank is based on the Restricted Boltzmann Machine (RBM) and ranks features according to their stability and discrimination power.
GCO_mine is a lazy learning method based on Graph Cut Optimization (GCO), which considers both the local arrangements and the global structure of the data. OFSET_mine compares favourably to well-established composite techniques. It exhibits high classification performance when applied to a wide range of benchmark medical datasets with variable sample sizes, dimensionality and imbalance ratios. When applying OFSET_mine to our RVA data, an accuracy of 99.52% is achieved. In addition, using OFSET, the hybrid solution of FiltADASYN and DD_Rank, with Random Forest on our RVA data produces risk-group classifications with an accuracy of 99.68%. This not only reflects the success of the framework but also establishes RVA as a valuable cardiovascular risk predictor.
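The core move in ADASYN-style oversampling (of which FiltADASYN is a filtered extension) is to create synthetic minority points by interpolating between existing minority samples. The sketch below shows only that interpolation step, under invented 2-D toy data; it omits ADASYN's density-based allocation and FiltADASYN's filtering.

```python
import random

def interpolate_minority(minority, n_new, rng=random.Random(0)):
    """Generate synthetic minority-class points on the line segment
    between two existing minority samples -- the interpolation idea
    shared by SMOTE/ADASYN-style oversamplers. Deliberately minimal:
    no nearest-neighbour search, no density weighting, no filtering.
    """
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)          # pick two minority samples
        lam = rng.random()                      # position along the segment
        synthetic.append(tuple(ai + lam * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # toy minority class
new_points = interpolate_minority(minority, 5)
# each synthetic point lies between two existing minority samples
```

FiltADASYN's contribution, per the abstract, is the extra filtering pass over such generated points to keep only reliable ones; that step is not reproduced here.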
Mapping (Dis-)Information Flow about the MH17 Plane Crash
Digital media enables not only fast sharing of information, but also
disinformation. One prominent case of an event leading to circulation of
disinformation on social media is the MH17 plane crash. Studies analysing the
spread of information about this event on Twitter have focused on small,
manually annotated datasets, or used proxies for data annotation. In this work,
we examine to what extent text classifiers can be used to label data for
subsequent content analysis; in particular, we focus on predicting pro-Russian
and pro-Ukrainian Twitter content related to the MH17 plane crash. Even though
we find that a neural classifier improves over a hashtag based baseline,
labeling pro-Russian and pro-Ukrainian content with high precision remains a
challenging problem. We provide an error analysis underlining the difficulty of
the task and identify factors that might help improve classification in future
work. Finally, we show how the classifier can facilitate the annotation task
for human annotators.
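The hashtag-based baseline the abstract compares against can be sketched as a simple lookup: label a tweet by which side's hashtags it contains. The tag sets below are invented placeholders, not the study's actual annotation lists.

```python
def hashtag_baseline(tweet, pro_ru_tags, pro_ua_tags):
    """Toy hashtag-based stance baseline: assign a label according to
    which side's hashtags appear in the tweet. Ambiguous tweets (both
    sides, or neither) fall back to an unknown label.
    """
    # normalise tokens and drop a leading '#' plus trailing punctuation
    tokens = {tok.lower().strip("#.,!?") for tok in tweet.split()}
    ru_hits = tokens & pro_ru_tags
    ua_hits = tokens & pro_ua_tags
    if ru_hits and not ua_hits:
        return "pro-russian"
    if ua_hits and not ru_hits:
        return "pro-ukrainian"
    return "unknown"

pro_ru = {"ukrainedidit"}        # placeholder hashtag lists, not from the paper
pro_ua = {"russiadidit"}
print(hashtag_baseline("New #MH17 report out #russiadidit", pro_ru, pro_ua))
# -> pro-ukrainian
```

Such a baseline has high precision only when hashtags are used sincerely; the paper's finding that a neural classifier improves on it, yet still struggles, suggests much of the stance signal lies outside explicit hashtags.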
A Comprehensive Survey on Rare Event Prediction
Rare event prediction involves identifying and forecasting events with a low
probability using machine learning and data analysis. Due to the imbalanced
data distributions, where the frequency of common events vastly outweighs that
of rare events, it requires using specialized methods within each step of the
machine learning pipeline, i.e., from data processing to algorithms to
evaluation protocols. Predicting the occurrences of rare events is important
for real-world applications, such as Industry 4.0, and is an active research
area in statistics and machine learning. This paper comprehensively reviews
the current approaches for rare event prediction along four dimensions: rare
event data, data processing, algorithmic approaches, and evaluation approaches.
Specifically, we consider 73 datasets from different modalities (i.e.,
numerical, image, text, and audio), four major categories of data processing,
five major algorithmic groupings, and two broader evaluation approaches. This
paper aims to identify gaps in the current literature and highlight the
challenges of predicting rare events. It also suggests potential research
directions, which can help guide practitioners and researchers.
Fault Prognostics Using Logical Analysis of Data and Non-Parametric Reliability Estimation Methods
Estimating the remaining useful life (RUL) of a system working under different operating conditions represents a big challenge to researchers in the condition-based maintenance (CBM) domain. The reason is that the relationship between the covariates that represent those operating conditions and the RUL is not fully understood in many practical cases, due to the high degree of correlation between such covariates and their dependence on time. It is also difficult, or even impossible, for experts to acquire and accumulate knowledge from a complex system, where the failure of the whole system is regarded as the result of interaction and competition between several failure modes. This thesis presents systematic CBM prognostic methodologies based on a pattern-based machine learning and knowledge discovery approach called Logical Analysis of Data (LAD). The proposed methodologies comprise different implementations of the LAD approach combined with non-parametric reliability estimation methods.
The objective of these methodologies is to predict the RUL of the monitored system while considering the analysis of single or multiple failure modes. Three different methodologies are presented: two deal with a single failure mode and one deals with multiple failure modes. The two methodologies for single-mode prognostics differ in the way they represent the data. The prognostic methodologies in this doctoral research have been tested and validated on a set of widely known tests. In these tests, the methodologies were compared to well-known prognostic techniques: the proportional hazards model (PHM), artificial neural networks (ANNs) and support vector machines (SVMs). Two datasets were used to illustrate the performance of the three methodologies: the turbofan engine dataset available in the NASA prognostics data repository, and another dataset collected from a real application in industry. The results of these comparisons indicate that each of the proposed methodologies provides an accurate prediction of the RUL of the monitored system. This doctoral research concludes that the LAD approach has attractive merits and advantages that add benefits to the field of prognostics. It is capable of dealing with CBM data that are correlated and time-varying. Another advantage is its generation of interpretable knowledge that is beneficial to the maintenance personnel.
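The abstract does not name which non-parametric reliability estimators are combined with LAD, but the standard example of the family is the Kaplan-Meier product-limit estimator of the survival (reliability) function, sketched below on invented failure/censoring data.

```python
def kaplan_meier(samples):
    """Kaplan-Meier product-limit estimate of the survival function.

    `samples` is a list of (time, event) pairs, where event=1 marks an
    observed failure and event=0 a right-censored observation. Returns
    (time, estimated survival probability) pairs at each failure time.
    """
    failure_times = sorted({t for t, e in samples if e == 1})
    survival, s = [], 1.0
    for t in failure_times:
        d = sum(1 for ti, e in samples if ti == t and e == 1)  # deaths at t
        at_risk = sum(1 for ti, _ in samples if ti >= t)       # still at risk
        s *= 1.0 - d / at_risk                                 # product-limit step
        survival.append((t, s))
    return survival

# toy run-to-failure data: failures at t=2, 3, 5; censored at t=4 and t=7
data = [(2, 1), (3, 1), (4, 0), (5, 1), (7, 0)]
print(kaplan_meier(data))  # approximately [(2, 0.8), (3, 0.6), (5, 0.3)]
```

In a CBM setting, an estimator like this supplies the baseline reliability curve that a pattern-based classifier such as LAD can condition on, though the thesis's exact combination is not detailed in the abstract.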