
    Novel Intrusion Detection using Probabilistic Neural Network and Adaptive Boosting

    This article applies Machine Learning techniques to intrusion detection in computer networks. Due to the complex and dynamic nature of computer networks and hacking techniques, detecting malicious activities remains a challenging task for security experts: currently available defense systems suffer from low detection capability and high false alarm rates. To overcome these limitations, we propose a novel Machine Learning algorithm, the Boosted Subspace Probabilistic Neural Network (BSPNN), which integrates an adaptive boosting technique and a semi-parametric neural network to obtain a good tradeoff between accuracy and generality, significantly reducing both learning bias and generalization variance. Substantial experiments on the KDD 99 intrusion benchmark indicate that our model outperforms other state-of-the-art learning algorithms, with significantly improved detection accuracy, minimal false alarms, and relatively small computational complexity.
    Comment: 9 pages, IEEE format, International Journal of Computer Science and Information Security (IJCSIS) 2009, ISSN 1947-5500, Impact Factor 0.423, http://sites.google.com/site/ijcsis
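    The BSPNN algorithm itself is not reproduced in the abstract; the following is a minimal sketch of the underlying idea only, namely AdaBoost-style reweighting over base learners trained on random feature subspaces. It assumes NumPy/scikit-learn, binary labels in {0, 1}, and uses an MLP as a stand-in for the semi-parametric probabilistic neural network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier  # stand-in for the PNN base learner

def boosted_subspace_fit(X, y, n_rounds=10, subspace_frac=0.5, seed=0):
    """AdaBoost-style reweighting over base learners trained on random feature
    subspaces -- a sketch of the idea behind BSPNN, not the authors' exact
    algorithm. Binary labels in {0, 1} assumed."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(n, 1.0 / n)                       # per-sample boosting weights
    learners, alphas, subspaces = [], [], []
    for _ in range(n_rounds):
        feats = rng.choice(d, size=max(1, int(subspace_frac * d)), replace=False)
        idx = rng.choice(n, size=n, replace=True, p=w)   # boosting by resampling
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
        clf.fit(X[idx][:, feats], y[idx])
        pred = clf.predict(X[:, feats])
        err = np.sum(w * (pred != y)) / np.sum(w)
        if err >= 0.5:                            # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w *= np.exp(alpha * (2 * (pred != y) - 1))  # up-weight the mistakes
        w /= w.sum()
        learners.append(clf); alphas.append(alpha); subspaces.append(feats)
    return learners, alphas, subspaces

def boosted_subspace_predict(X, learners, alphas, subspaces):
    score = np.zeros(X.shape[0])
    for clf, a, feats in zip(learners, alphas, subspaces):
        score += a * (2 * clf.predict(X[:, feats]) - 1)  # weighted votes in {-1, +1}
    return (score > 0).astype(int)
```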

    Prediction of high-performance concrete compressive strength through a comparison of machine learning techniques

    Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science.
    High-performance concrete (HPC) is a highly complex composite material whose characteristics are extremely difficult to model. One of those characteristics is the concrete compressive strength, a nonlinear function of the same ingredients that compose HPC: cement, fly ash, blast furnace slag, water, superplasticizer, age, and coarse and fine aggregates. Research has shown time and time again that concrete strength is not determined solely by the water-to-cement ratio, which was for years the go-to metric. In addition, traditional methods that attempt to model HPC, such as regression analysis, do not provide sufficient predictive power due to the nonlinear properties of the mixture. Therefore, this study attempts to optimize the prediction and modeling of the compressive strength of HPC by analyzing seven different machine learning (ML) algorithms: three regularization algorithms (Lasso, Ridge, and Elastic Net), three ensemble algorithms (Random Forest, Gradient Boost, and AdaBoost), and Artificial Neural Networks. All techniques were built and tested with a dataset composed of data from 17 different concrete strength test laboratories, under the same experimental conditions, which enabled a fair comparison among them and with previous studies in the field. Feature importance analysis and outlier analysis were also performed, and all models were subjected to a Wilcoxon Signed-Ranks Test to ensure statistically significant results. The final results show that the more complex ML algorithms provided greater accuracy than the regularization techniques, with Gradient Boost being the superior model among them, providing more accurate predictions than the state-of-the-art. Better results were achieved using all variables and without removing outlier observations.
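    The thesis' exact dataset, preprocessing, and hyperparameters are not given in the abstract; a sketch of such a seven-model comparison with scikit-learn cross-validation, on synthetic stand-in data and with illustrative hyperparameters, might look as follows.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, AdaBoostRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# stand-in for the concrete data: features = mixture proportions + age,
# target = compressive strength
X, y = make_regression(n_samples=1000, n_features=8, noise=10.0, random_state=0)

models = {
    "Lasso": Lasso(alpha=0.1),
    "Ridge": Ridge(alpha=1.0),
    "ElasticNet": ElasticNet(alpha=0.1, l1_ratio=0.5),
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
    "GradientBoost": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(n_estimators=300, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)  # scaling matters for linear/ANN models
    rmse = -cross_val_score(pipe, X, y, cv=10, scoring="neg_root_mean_squared_error")
    print(f"{name:>14}: RMSE {rmse.mean():.2f} +/- {rmse.std():.2f}")
```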

    Machine learning approach for credit score analysis : a case study of predicting mortgage loan defaults

    Dissertation submitted in partial fulfilment of the requirements for the degree of Master's in Statistics and Information Management, specialization in Risk Analysis and Management.
    To manage credit score analysis effectively, financial institutions have instituted techniques and models designed mainly to improve the assessment of creditworthiness during the credit evaluation process. The foremost objective is to classify their clients – borrowers – into either the non-defaulter group, which is more likely to meet its financial obligations, or the defaulter group, which has a higher probability of failing to pay its debts. In this paper, we apply machine learning models to the prediction of mortgage defaults. This study employs several single-classifier machine learning methodologies, including Logistic Regression, Classification and Regression Trees, Random Forest, K-Nearest Neighbors, and Support Vector Machine. To further improve the predictive power, a meta-algorithm ensemble approach – stacking – is introduced to combine the outputs – probabilities – of the aforementioned methods. The sample for this study is based solely on the dataset publicly provided by Freddie Mac. With this approach, we achieve an improvement in predictive performance. We then compare the performance of each model, and of the meta-learner, by plotting the ROC curve and computing the AUC. This study extends various preceding studies that used different techniques to further enhance model predictivity. Finally, our results are compared with the work of different authors.
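    As a rough illustration of the stacking setup described above – with hypothetical hyperparameters and synthetic stand-in data in place of the Freddie Mac sample – scikit-learn's StackingClassifier combines out-of-fold base-model probabilities under a logistic meta-learner:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# imbalanced stand-in for loan-level data (label 1 = default)
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("cart", DecisionTreeClassifier(max_depth=6)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=15)),
    ("svm", SVC(probability=True)),  # probability=True so the stacker receives scores
]
# the meta-learner is fitted on out-of-fold predicted probabilities of the base models
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)
stack.fit(X_tr, y_tr)
print("stacked AUC:", roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1]))
```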

    Methods to Improve the Prediction Accuracy and Performance of Ensemble Models

    The application of ensemble predictive models has been an important research area in medical diagnostics, engineering diagnostics, and related smart devices and technologies. Most current predictive models are complex and unreliable despite numerous past efforts by the research community. The performance accuracy of predictive models has not always been realised, due to factors such as complexity and class imbalance. There is therefore a need to improve the predictive accuracy of current ensemble models and to enhance their applications, reliability, and use as non-invasive predictive tools. The research work presented in this thesis adopts a pragmatic phased approach to propose and develop new ensemble models using multiple methods, validating them through rigorous testing and implementation in different phases. The first phase comprises empirical investigations on standalone and ensemble algorithms, carried out to ascertain how the complexity and simplicity of the classifiers affect performance. The second phase comprises an improved ensemble model based on the integration of the Extended Kalman Filter (EKF), Radial Basis Function Network (RBFN), and AdaBoost algorithms. The third phase comprises an extended model based on early-stopping concepts, the AdaBoost algorithm, and the statistical performance of the training samples, designed to minimize overfitting in the proposed model. The fourth phase comprises an enhanced analytical multivariate logistic regression predictive model developed to minimize complexity and improve the prediction accuracy of the logistic regression model. To facilitate the practical application of the proposed models, an ensemble non-invasive analytical tool is proposed and developed; the tool bridges the gap between theoretical concepts and their practical application in predicting breast cancer survivability. The empirical findings suggest that: (1) increasing the complexity and topology of algorithms does not necessarily lead to better algorithmic performance; (2) boosting by resampling performs slightly better than boosting by reweighting; (3) the proposed ensemble EKF-RBFN-AdaBoost model achieved better prediction accuracy than several established ensemble models; (4) the proposed early-stopped model converges faster and minimizes overfitting better than other models; (5) the proposed multivariate logistic regression concept minimizes model complexity; and (6) the proposed analytical non-invasive tool performed comparatively better than many of the benchmark analytical tools used in predicting breast cancer and diabetic ailments. The research contributions to ensemble practice are: (1) the integration and development of the EKF, RBFN, and AdaBoost algorithms as an ensemble model; (2) the development and validation of an ensemble model based on early-stopping concepts, AdaBoost, and statistical properties of the training samples; (3) the development and validation of a predictive logistic regression model for breast cancer; and (4) the development and validation of non-invasive breast cancer analytic tools based on the predictive models proposed and developed in this thesis. To validate the prediction accuracy of the ensemble models, the proposed models were applied to breast cancer survivability and diabetes diagnostic tasks; in comparison with other established models, the simulation results showed improved predictive accuracy.
    The research outlines the benefits of the proposed models and proposes new directions for future work that could further extend and improve the models discussed in this thesis.
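    The thesis' early-stop criterion is based on statistical properties of the training samples; a simpler validation-based variant of the same idea – truncating AdaBoost at the round where held-out accuracy peaks – can be sketched with scikit-learn's staged_predict, here using the scikit-learn breast-cancer dataset as a stand-in for the thesis' task:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

model = AdaBoostClassifier(n_estimators=400, random_state=0).fit(X_tr, y_tr)
# walk the boosting stages and keep the round where validation accuracy peaks
val_acc = [accuracy_score(y_val, pred) for pred in model.staged_predict(X_val)]
best_round = int(np.argmax(val_acc)) + 1
print(f"stop at round {best_round}/400, validation accuracy {max(val_acc):.3f}")
model.set_params(n_estimators=best_round).fit(X_tr, y_tr)  # refit the truncated ensemble
```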

    Super learner implementation in corrosion rate prediction

    This thesis proposes a new machine learning model for predicting the corrosion rate of 3C steel in seawater. The corrosion rate of a material depends not just on the nature of the material but also on the material's environmental conditions. The proposed machine learning model comes with a selection framework, based on a hyperparameter optimization method and a performance evaluation metric, that determines which models qualify for inclusion in the proposed model's ensemble architecture. The main aim of the selection framework is to select the smallest number of models that, once hyperparameter-optimized, fit efficiently into the architecture of the proposed model. The proposed predictive model is then fitted on a portion of a dataset generated from an experiment on corrosion rate in five different seawater conditions; the remaining portion of the dataset is used for estimating the corrosion rate. The performance of the proposed model's predictions was evaluated using three major performance evaluation metrics. These metrics were also used to evaluate two hyperparameter-optimized models – the Smart Firefly Algorithm with Least Squares Support Vector Regression (SFA-LSSVR) and Support Vector Regression integrating Leave-One-Out Cross-Validation (SVR-LOOCV) – to facilitate their comparison with the proposed predictive model and its constituent models. The test results show that the proposed model performs slightly below the SFA-LSSVR model and above the SVR-LOOCV model, with an RMSE score difference of 0.305 and an RMSE score of 0.792. Despite falling short of the SFA-LSSVR model, the super learner model outperforms both hyperparameter-optimized models in memory utilization and computation time (presented graphically in this thesis).
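    The abstract does not name the constituent models; a minimal super learner sketch in the same spirit – cross-validated base-model predictions combined by a linear meta-learner – can be written with scikit-learn's StackingRegressor, using an illustrative candidate library and synthetic stand-in corrosion data:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# stand-in for the seawater corrosion measurements (features such as temperature,
# salinity, dissolved oxygen, pH; target: corrosion rate)
X, y = make_regression(n_samples=500, n_features=5, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# candidate library; in the thesis each member would first be
# hyperparameter-optimized and filtered by the selection framework
library = [
    ("svr", SVR(C=10.0)),
    ("knn", KNeighborsRegressor(n_neighbors=7)),
    ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
]
super_learner = StackingRegressor(estimators=library,
                                  final_estimator=LinearRegression(), cv=5)
super_learner.fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, super_learner.predict(X_te)))
print(f"super learner RMSE: {rmse:.3f}")
```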

    BagStack Classification for Data Imbalance Problems with Application to Defect Detection and Labeling in Semiconductor Units

    Despite the fact that machine learning supports the development of computer vision applications by shortening the development cycle, finding a general learning algorithm that solves a wide range of applications is still bounded by the "no free lunch" theorem. The search for the right algorithm to solve a specific problem is driven by the problem itself, data availability, and many other requirements. Automated visual inspection (AVI) systems represent a major part of these challenging computer vision applications. They are gaining growing interest in the manufacturing industry as a way to detect defective products and keep them from reaching customers. The process of defect detection and classification in semiconductor units is challenging because of the range of acceptable variations that the manufacturing process introduces. Further variations are typically introduced by optical inspection systems themselves, due to changes in lighting conditions and misalignment of the imaged units, which makes defect detection more challenging still. In this thesis, a BagStack classification framework is proposed, which makes use of stacking and bagging concepts to handle both variance and bias errors. The classifier is designed to handle data imbalance and overfitting by adaptively transforming the multi-class classification problem into multiple binary classification problems, applying a bagging approach to train a set of base learners for each specific problem, adaptively specifying the number of base learners assigned to each problem, adaptively specifying the number of samples to use from each class, applying a novel data-imbalance-aware cross-validation technique to generate the meta-data while taking the data imbalance problem into account at the meta-data level, and, finally, using a multi-response random forest regression classifier as a meta-classifier. The BagStack classifier makes use of multiple features to solve the defect classification problem. To detect defects, a locally adaptive statistical background modeling approach is proposed. The proposed BagStack classifier outperforms state-of-the-art image classification techniques on our dataset in terms of overall classification accuracy and average per-class classification accuracy, and the proposed detection method achieves high performance on the considered dataset in terms of recall and precision.
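    A heavily simplified sketch of the BagStack idea – one bagged binary problem per class, out-of-fold probabilities as meta-features, and a multi-response random forest regressor as the meta-classifier – omitting the adaptive base-learner/sample-count selection and the data-imbalance-aware cross-validation; data, names, and parameters are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestRegressor
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split, cross_val_predict

# imbalanced multi-class stand-in for the defect-image feature data
X, y = make_classification(n_samples=3000, n_features=30, n_informative=10,
                           n_classes=4, weights=[0.7, 0.15, 0.1, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
classes = np.unique(y_tr)

# one bagged binary (one-vs-rest) problem per class; out-of-fold probabilities
# become the meta-features, so the meta-level sees no training leakage
binaries, meta_cols = [], []
for c in classes:
    bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
    oof = cross_val_predict(bag, X_tr, (y_tr == c).astype(int), cv=5,
                            method="predict_proba")[:, 1]
    meta_cols.append(oof)
    binaries.append(bag.fit(X_tr, (y_tr == c).astype(int)))

# multi-response random forest regression as the meta-classifier (one-hot targets)
meta = RandomForestRegressor(n_estimators=200, random_state=0)
meta.fit(np.column_stack(meta_cols), np.eye(len(classes))[y_tr])

meta_te = np.column_stack([b.predict_proba(X_te)[:, 1] for b in binaries])
y_pred = classes[np.argmax(meta.predict(meta_te), axis=1)]
print("overall accuracy:", np.mean(y_pred == y_te))
```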

    Online Machine Learning Algorithms Review and Comparison in Healthcare

    Currently, the healthcare industry uses Big Data for essential patient care information. Electronic Health Records (EHR) store massive amounts of data and are continuously updated with information such as laboratory results, medication, and clinical events. Healthcare data is generated and collected through various channels, including databases, healthcare websites, mobile applications, wearable technologies, and sensors. This continuous flow of data will improve healthcare services, medical diagnostic research and, ultimately, patient care. It is therefore important to implement advanced data analysis techniques to obtain more precise prediction results.

    Machine Learning (ML) has acquired an important place in Big Healthcare Data (BHD). ML has the capability to run predictive analysis, detect patterns or red flags, and connect the dots to enhance personalized treatment plans. Because predictive models have dependent and independent variables, ML algorithms perform mathematical calculations to find the equations that best predict the dependent variables from a given set of independent variables. Model performance depends on the dataset and on the type of response (dependent) variable, such as binary or multi-class, supervised or unsupervised.

    The current research analyzed the performance of incremental (streaming, or online) algorithms against offline (batch) learning – these terms are used interchangeably – using performance measures such as accuracy, model complexity, and time consumption. Batch learning algorithms are provided with the complete dataset up front, which constrains the size of the dataset through memory consumption. For incremental algorithms, data arrive sequentially, governed by hyperparameter choices such as chunk size, tree split criteria, or the Hoeffding bound. The model complexity of an incremental learning algorithm is based on a number of parameters, which in turn determine memory consumption.
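    A minimal illustration of the batch/online contrast using scikit-learn's partial_fit (Hoeffding-tree learners would need a dedicated streaming library); the chunked, test-then-train loop below is a sketch on synthetic stand-in data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# stand-in for an EHR event stream arriving in fixed-size chunks
X, y = make_classification(n_samples=10000, n_features=20, random_state=0)
classes = np.unique(y)

online = SGDClassifier(loss="log_loss", random_state=0)
for start in range(0, len(X), 500):              # chunk size is a tuning knob
    X_chunk, y_chunk = X[start:start + 500], y[start:start + 500]
    if start:                                    # prequential evaluation:
        acc = online.score(X_chunk, y_chunk)     # test on the chunk first...
        print(f"chunk at {start}: accuracy {acc:.3f}")
    online.partial_fit(X_chunk, y_chunk, classes=classes)  # ...then learn from it
```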

    A Hybrid Genetic Algorithm-Random Forest Regression Method for Optimum Driver Selection in Online Food Delivery

    Online food delivery has grown rapidly since the COVID-19 pandemic limited mobility, and the broad challenge in an online food delivery system is maximizing quality of service (QoS). Studies show that driver selection and delivery time are important to customer satisfaction. Our research aim is therefore the selection of optimal drivers for online food delivery using random forest regression and a genetic algorithm (GA). Our contribution is a novel approach to minimizing delivery time in online food delivery by combining a random forest regression model with a genetic algorithm. We compare random forest regression with three other state-of-the-art regression models: linear regression, k-nearest neighbors (KNN), and adaptive boosting (AdaBoost) regression. We compare the four models on metrics including the coefficient of determination (R²), mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE), and use the best model as the fitness function in the GA. The test results show that random forest performs better than linear, KNN, and AdaBoost regression, with R², RMSE, and MAE values of 0.98, 54.3, and 11, respectively. We leverage this optimal random forest regression model as the GA fitness function. The best efficiency achieved is a reduction in delivery time from 54 to 15 minutes, obtained through rigorous testing on various cases. In completing this research we also arrive at practical implications, such as increased customer satisfaction, reduced cost, and findings relevant to data-driven decision-making. The first key finding is an optimal driver selection model based on random forest regression; the second is an optimal driver selection model based on the GA.
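    The paper's exact GA encoding and operators are not given in the abstract; the sketch below illustrates the general scheme only – a trained random forest as the fitness oracle scoring candidate driver-to-order assignments – with all data, names, and parameters hypothetical:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# stand-in data: each row describes one (driver, order) pairing; the target is
# the delivery time in minutes. The trained forest acts as the GA fitness oracle.
X, y = make_regression(n_samples=2000, n_features=6, noise=8.0, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

n_drivers, n_orders = 30, 10
pair_features = rng.normal(size=(n_drivers, n_orders, 6))  # hypothetical pairings

def fitness(assign):
    """Total predicted delivery time when driver assign[j] handles order j."""
    feats = pair_features[assign, np.arange(n_orders)]
    return rf.predict(feats).sum()

# a tiny GA: tournament selection, one-point crossover, random-reset mutation
pop = [rng.integers(n_drivers, size=n_orders) for _ in range(60)]
for gen in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[int(np.argmin(scores))]]                  # elitism: keep the best
    while len(new_pop) < len(pop):
        a, b = (min(rng.integers(len(pop), size=3), key=lambda i: scores[i])
                for _ in range(2))                           # two 3-way tournaments
        cut = rng.integers(1, n_orders)                      # one-point crossover
        child = np.concatenate([pop[a][:cut], pop[b][cut:]])
        if rng.random() < 0.2:                               # random-reset mutation
            child[rng.integers(n_orders)] = rng.integers(n_drivers)
        new_pop.append(child)
    pop = new_pop
best = min(pop, key=fitness)
print("best assignment:", best, "predicted total minutes:", fitness(best))
```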