
    ALOJA: A framework for benchmarking and predictive analytics in Hadoop deployments

    This article presents the ALOJA project and its analytics tools, which leverage machine learning to interpret Big Data benchmark performance data and guide tuning. ALOJA is part of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, currently focusing on Hadoop. Hadoop presents a complex run-time environment, where costs and performance depend on a large number of configuration choices. The ALOJA project has created an open, vendor-neutral repository, featuring over 40,000 Hadoop job executions and their performance details. The repository is accompanied by a test-bed and tools to deploy and evaluate the cost-effectiveness of different hardware configurations, parameters, and Cloud services. Despite early success within ALOJA, a comprehensive study requires automation of modeling procedures to allow an analysis of large and resource-constrained search spaces. The predictive analytics extension, ALOJA-ML, provides an automated system allowing knowledge discovery by modeling environments from observed executions. The resulting models can forecast execution behaviors, predicting execution times for new configurations and hardware choices. This also enables model-based anomaly detection and efficient benchmark guidance by prioritizing executions. In addition, the community can benefit from the ALOJA data-sets and framework to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051. Peer Reviewed. Postprint (published version).

    Real-time big data processing for anomaly detection : a survey

    The advent of connected devices and the omnipresence of the Internet have paved the way for intruders to attack networks, leading to cyber-attacks, financial loss, information theft in healthcare, and cyber war. Hence, network security analytics has become an important area of concern and has of late gained intensive attention among researchers, specifically in the domain of network anomaly detection, which is considered crucial for network security. However, preliminary investigations have revealed that the existing approaches to detecting anomalies in networks are not effective enough, particularly in real time. The inefficacy of current approaches is mainly due to the amassment of massive volumes of data through the connected devices. Therefore, it is crucial to propose a framework that effectively handles real-time big data processing and detects anomalies in networks. In this regard, this paper attempts to address the issue of detecting anomalies in real time. Accordingly, this paper surveys the state-of-the-art real-time big data processing technologies related to anomaly detection and the vital characteristics of the associated machine learning algorithms. The paper begins with an explanation of the essential contexts and a taxonomy of real-time big data processing, anomaly detection, and machine learning algorithms, followed by a review of big data processing technologies. Finally, the identified research challenges of real-time big data processing in anomaly detection are discussed. © 2018 Elsevier Ltd.

    Open Source Analytics Solutions for Maintenance

    The current paper reviews existing data mining and big data analytics open source solutions. In the area of industrial maintenance engineering, the algorithms that are part of these solutions have started to be studied and introduced into the domain. In addition, interest in big data and analytics has increased in several areas because of the increased amount of data produced, the remarkable speed at which it is generated, and its variation, i.e. the so-called 3 V’s (Volume, Velocity, and Variety). Companies and organizations have seen the need to optimize their decision-making processes with the support of data mining and big data analytics. The development of this kind of solution can be a long process and, for some companies, something that is not within their reach for many reasons. It is, therefore, important to understand the characteristics of the open source solutions. Consequently, the authors use a framework to organize their findings: the knowledge discovery in databases (KDD) process for extracting useful knowledge from volumes of data. The authors suggest a modified KDD framework in order to understand whether the respective data mining/big data solutions are adequate and suitable for use in the domain of industrial maintenance engineering.

    Towards Large-Scale, Heterogeneous Anomaly Detection Systems in Industrial Networks: A Survey of Current Trends

    Industrial Networks (INs) are widespread environments where heterogeneous devices collaborate to control and monitor physical processes. Some of the controlled processes belong to Critical Infrastructures (CIs), and, as such, IN protection is an active research field. Among different types of security solutions, IN Anomaly Detection Systems (ADSs) have received wide attention from the scientific community. While INs have grown in size and in complexity, requiring the development of novel Big Data solutions for data processing, IN ADSs have not evolved at the same pace. In parallel, the development of Big Data frameworks such as Hadoop or Spark has led the way for applying Big Data Analytics to the field of cyber-security, mainly focusing on the Information Technology (IT) domain. However, due to the particularities of INs, it is not feasible to directly apply IT security mechanisms in INs, as IN ADSs face unique characteristics. In this work we introduce three main contributions. First, we survey the area of Big Data ADSs that could be applicable to INs and compare the surveyed works. Second, we develop a novel taxonomy to classify existing IN-based ADSs. Finally, we present a discussion of open problems in the field of Big Data ADSs for INs that can lead to further development.

    Latest Trending Techniques and ML Models for Big Data Analytics - A Review

    As businesses increasingly rely on Big Data for decision-making, understanding cutting-edge methods is crucial. This review synthesizes recent advancements, highlighting the most impactful techniques and ML algorithms, which are essential for analyzing and drawing insights from vast amounts of data. The paper explores the latest techniques and ML algorithms used in big data analytics, including deep learning, neural networks, and decision trees. The challenges of big data analytics are discussed, along with how these techniques can help overcome them. Real-world examples of how these techniques have been used in big data analytics are also provided.

    Securing cloud-enabled smart cities by detecting intrusion using spark-based stacking ensemble of machine learning algorithms

    With the use of cloud computing, which provides the infrastructure necessary for the efficient delivery of smart city services to every citizen over the internet, intelligent systems may be readily integrated into smart cities and communicate with one another. Any smart system at home, in a car, or in the workplace can be remotely controlled and directed by the individual at any time. Continuous cloud service availability is becoming a critical subscriber requirement within smart cities. However, these cost-cutting measures and service improvements will make smart city cloud networks more vulnerable and at risk. The primary function of Intrusion Detection Systems (IDS) has become increasingly challenging due to the enormous proliferation of data created in the cloud networks of smart cities. To alleviate these concerns, we provide a framework for automatic, reliable, and uninterrupted cloud availability of services for the network data security of intelligent connected devices. This framework enables IDS to defend against security threats and to provide services that meet users' Quality of Service (QoS) expectations. This study's intrusion detection solution for cloud network data from smart cities employs Spark and the Waikato Environment for Knowledge Analysis (WEKA). WEKA and Spark are linked and made scalable and distributed. The storage advantages of the Hadoop Distributed File System (HDFS) are combined with WEKA's Knowledge Flow for processing cloud network data for smart cities. Utilizing HDFS components, WEKA's machine learning algorithms receive cloud network data from smart cities. This research utilizes a wrapper-based Feature Selection (FS) approach for IDS, employing both the Pigeon Inspired Optimizer (PIO) and Particle Swarm Optimization (PSO). For classifying the cloud network traffic of smart cities, a tree-based Stacking Ensemble Method (SEM) of J48, Random Forest (RF), and eXtreme Gradient Boosting (XGBoost) is applied.
    Performance evaluations of our system were conducted using the UNSW-NB15 and NSL-KDD datasets. Our technique is superior to previous works in terms of sensitivity, specificity, precision, false positive rate (FPR), accuracy, F1 score, and Matthews correlation coefficient (MCC).
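    The tree-based stacking idea above can be sketched generically: level-0 learners each classify the traffic, and a level-1 meta-learner is trained on their predictions. The sketch below is illustrative only, assuming a simple fit/predict interface; the toy threshold "stumps" stand in for J48, RF, and XGBoost, and the records are hypothetical, not UNSW-NB15 or NSL-KDD data.

```python
class Stump:
    """One-feature threshold classifier (a stand-in for a real tree learner)."""
    def __init__(self, feature):
        self.feature = feature
        self.threshold = 0.0

    def fit(self, X, y):
        # Pick the threshold as the mean of the chosen feature's values.
        vals = [row[self.feature] for row in X]
        self.threshold = sum(vals) / len(vals)
        return self

    def predict(self, X):
        return [1 if row[self.feature] > self.threshold else 0 for row in X]


class StackingEnsemble:
    """Level-0 learners feed their predictions to a level-1 meta-learner."""
    def __init__(self, base_learners, meta_learner):
        self.base_learners = base_learners
        self.meta_learner = meta_learner

    def fit(self, X, y):
        for learner in self.base_learners:
            learner.fit(X, y)
        self.meta_learner.fit(self._meta_features(X), y)
        return self

    def _meta_features(self, X):
        # One row per sample, one column per base learner's prediction.
        preds = [learner.predict(X) for learner in self.base_learners]
        return [list(col) for col in zip(*preds)]

    def predict(self, X):
        return self.meta_learner.predict(self._meta_features(X))


# Toy traffic records: [duration, bytes]; label 1 = attack (hypothetical data).
X = [[0.1, 900.0], [0.2, 950.0], [5.0, 10.0], [6.0, 5.0]]
y = [1, 1, 0, 0]
model = StackingEnsemble([Stump(0), Stump(1)], Stump(1)).fit(X, y)
print(model.predict(X))
```

    In a real deployment the stumps would be replaced by the actual tree learners, and the meta-features would come from out-of-fold predictions to avoid leaking training labels into the meta-learner.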

    Fraud detection for online banking for scalable and distributed data

    Online fraud causes billions of dollars in losses for banks. Therefore, online banking fraud detection is an important field of study. However, there are many challenges in conducting research in fraud detection. One of the constraints is the unavailability of bank datasets for research, or that the required characteristics of the data attributes are not available. Numeric data usually provides better performance for machine learning algorithms. Most transaction data, however, have categorical or nominal features as well. Moreover, some platforms such as Apache Spark only recognize numeric data. So there is a need for techniques, e.g. one-hot encoding (OHE), to transform categorical features into numerical features; however, OHE has challenges, including the sparseness of the transformed data and the fact that the distinct values of an attribute are not always known in advance. Efficient feature engineering can improve an algorithm's performance but usually requires detailed domain knowledge to identify the correct features. Techniques like Ripple Down Rules (RDR) are suitable for fraud detection because of their low maintenance and incremental learning features. However, high classification accuracy on mixed datasets, especially for scalable data, is challenging. Evaluation of RDR on distributed platforms is also challenging, as it is not available on these platforms. The thesis proposes the following solutions to these challenges:
    • We developed a technique, Highly Correlated Rule Based Uniformly Distribution (HCRUD), to generate highly correlated rule-based uniformly-distributed synthetic data.
    • We developed a technique, One-hot Encoded Extended Compact (OHE-EC), to transform categorical features into numeric features by compacting sparse data even if not all distinct values are known.
    • We developed a technique, Feature Engineering and Compact Unified Expressions (FECUE), to improve model efficiency through feature engineering where the domain of the data is not known in advance.
    • A Unified Expression RDR fraud detection technique (UE-RDR) for Big Data has been proposed and evaluated on the Spark platform. Empirical tests were executed on a multi-node Hadoop cluster using well-known classifiers on bank data, synthetic bank datasets, and publicly available datasets from the UCI repository. These evaluations demonstrated substantial improvements in terms of classification accuracy, ruleset compactness, and execution speed. Doctor of Philosophy
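    For the OHE challenge mentioned above (the distinct values of an attribute are not always known in advance), one common workaround is to reserve an explicit slot for unseen categories. The sketch below illustrates that generic idea under stated assumptions; it is not the thesis's OHE-EC technique, and the card-type values are hypothetical.

```python
class OneHotEncoder:
    """One-hot encoder with a trailing slot for categories unseen at fit time."""
    def __init__(self):
        self.categories = []   # values seen during fit, in first-seen order
        self.index = {}        # value -> column position

    def fit(self, values):
        for v in values:
            if v not in self.index:
                self.index[v] = len(self.categories)
                self.categories.append(v)
        return self

    def transform(self, values):
        # Width = known categories + 1 extra column for unknown values.
        width = len(self.categories) + 1
        rows = []
        for v in values:
            row = [0] * width
            row[self.index.get(v, width - 1)] = 1
            rows.append(row)
        return rows


enc = OneHotEncoder().fit(["visa", "mastercard"])
print(enc.transform(["visa", "amex"]))  # "amex" falls into the unknown slot
```

    Libraries take the same approach, e.g. scikit-learn's `OneHotEncoder(handle_unknown="ignore")`, though that variant emits an all-zero row instead of a dedicated column; the sparseness problem the thesis targets remains in either case.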

    ALOJA-ML: a framework for automating characterization and knowledge discovery in Hadoop deployments

    This article presents ALOJA-Machine Learning (ALOJA-ML), an extension to the ALOJA project that uses machine learning techniques to interpret Hadoop benchmark performance data and guide performance tuning; here we detail the approach, the efficacy of the models, and initial results. The ALOJA-ML project is the latest phase of a long-term collaboration between BSC and Microsoft to automate the characterization of cost-effectiveness of Big Data deployments, focusing on Hadoop. Hadoop presents a complex execution environment, where costs and performance depend on a large number of software (SW) configurations and on multiple hardware (HW) deployment choices. Recently the ALOJA project presented an open, vendor-neutral repository, featuring over 16,000 Hadoop executions. These results are accompanied by a test bed and tools to deploy and evaluate the cost-effectiveness of the different hardware configurations, parameter tunings, and Cloud services. Despite early success within ALOJA from expert-guided benchmarking, it became clear that a genuinely comprehensive study requires automation of modeling procedures to allow a systematic analysis of large and resource-constrained search spaces. ALOJA-ML provides such an automated system, allowing knowledge discovery by modeling Hadoop executions from observed benchmarks across a broad set of configuration parameters. The resulting empirically-derived performance models can be used to forecast the execution behavior of various workloads; they allow a priori prediction of the execution times for new configurations and HW choices, and they offer a route to model-based anomaly detection. In addition, these models can guide the benchmarking exploration efficiently by automatically prioritizing candidate future benchmark tests. Insights from ALOJA-ML's models can be used to reduce the operational time on clusters, speed up the data acquisition and knowledge discovery process, and, importantly, reduce running costs.
    In addition to learning from the methodology presented in this work, the community can benefit in general from the ALOJA data-sets, framework, and derived insights to improve the design and deployment of Big Data applications. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 639595). This work is partially supported by the Ministry of Economy of Spain under contracts TIN2012-34557 and 2014SGR1051. Peer Reviewed. Postprint (published version).
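    The forecasting idea described above, predicting execution times for unseen configurations from observed runs, can be illustrated with an ordinary least-squares fit on a single configuration feature. ALOJA-ML's actual learners and features are not reproduced here; the feature (number of mappers) and the timing numbers are hypothetical, and this is only a minimal sketch of model-based prediction.

```python
def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error over the samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x


# Observed benchmark runs: mapper count -> execution time in seconds (toy data).
mappers = [4, 8, 16, 32]
times = [400.0, 220.0, 130.0, 85.0]

slope, intercept = fit_linear(mappers, times)
predicted = slope * 24 + intercept  # forecast an unseen configuration
```

    A model like this also supports the anomaly-detection use mentioned in the abstract: a new run whose measured time falls far outside the model's prediction is flagged for inspection. In practice a non-linear learner over many SW/HW features would replace the single-feature fit.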

    A Survey on Vertical and Horizontal Scaling Platforms for Big Data Analytics

    There is no doubt that we are entering the era of big data. The challenge is how to store, search, and analyze the huge amount of data that is being generated every second. One of the main obstacles for big data researchers is finding the appropriate big data analysis platform. The basic aim of this work is to present a complete investigation of all the available platforms for big data analysis in terms of vertical and horizontal scaling, together with their compatible frameworks and applications, in detail. Finally, this article outlines some research trends and other open issues in big data analytics.