
    Spatiotemporal anomaly detection: streaming architecture and algorithms

    Anomaly detection is the science of identifying one or more rare or unexplainable samples or events in a dataset or data stream. The field has been extensively studied by mathematicians, statisticians, economists, engineers, and computer scientists. One open research question remains the design of distributed cloud-based architectures and algorithms that can accurately identify anomalies in previously unseen, unlabeled, streaming, multivariate spatiotemporal data. With streaming data, time is of the essence and insights are perishable. Real-world streaming spatiotemporal data originate from many sources, including mobile phones, supervisory control and data acquisition (SCADA) devices, the Internet of Things (IoT), distributed sensor networks, and social media. Baseline experiments are performed on four non-streaming, static multivariate anomaly detection datasets using unsupervised offline traditional machine learning (TML) and unsupervised neural network techniques. Multiple architectures, including autoencoders, generative adversarial networks, convolutional networks, and recurrent networks, are adapted for experimentation. Extensive experimentation demonstrates that neural networks produce superior detection accuracy over TML techniques. These same neural network architectures can be extended to process unlabeled spatiotemporal streaming data using online learning. Space and time relationships are further exploited to provide additional insights and increased anomaly detection accuracy. A novel domain-independent architecture and set of algorithms called the Spatiotemporal Anomaly Detection Environment (STADE) is formulated. STADE is based on a federated learning architecture. STADE's streaming algorithms are based on geographically unique, persistently executing neural networks trained with online stochastic gradient descent (SGD). STADE is designed to be pluggable, meaning that alternative algorithms may be substituted or combined to form an ensemble. STADE incorporates a Stream Anomaly Detector (SAD) and a Federated Anomaly Detector (FAD). The SAD executes at multiple locations on streaming data, while the FAD executes at a single server and identifies global patterns and relationships among the site anomalies. Each STADE site streams anomaly scores to the centralized FAD server for further spatiotemporal dependency analysis and logging. The FAD is based on recent advances in DNN-based federated learning. A STADE testbed is implemented to facilitate globally distributed experimentation using low-cost, commercial cloud infrastructure provided by Microsoft™. STADE testbed sites are situated in the cloud within each continent: Africa, Asia, Australia, Europe, North America, and South America. Communication occurs over the commercial internet. Three STADE case studies are investigated: the first processes commercial air traffic flows, the second processes global earthquake measurements, and the third processes social media (i.e., Twitter™) feeds. These case studies confirm that STADE is a viable architecture for the near real-time identification of anomalies in streaming data originating from (possibly) computationally disadvantaged, geographically dispersed sites. Moreover, the addition of the FAD provides enhanced anomaly detection capability. Since STADE is domain-independent, these findings can be easily extended to additional application domains and use cases.
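
    The abstract describes site-level detectors trained with online SGD that stream anomaly scores to a central federated server. The sketch below illustrates that split with a tiny linear autoencoder scored by reconstruction error and a simple z-score aggregation rule; the names (StreamAnomalyDetector, federated_collect), the model choice, and the threshold are illustrative assumptions, not the STADE implementation.

import numpy as np

class StreamAnomalyDetector:
    """Per-site detector: a tiny linear autoencoder updated with online SGD."""
    def __init__(self, n_features, n_hidden=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.normal(scale=0.1, size=(n_features, n_hidden))
        self.W_dec = rng.normal(scale=0.1, size=(n_hidden, n_features))
        self.lr = lr

    def score_and_update(self, x):
        """Return the reconstruction error of x, then take one SGD step."""
        h = x @ self.W_enc                       # encode
        x_hat = h @ self.W_dec                   # decode
        err = x_hat - x
        score = float(np.mean(err ** 2))         # anomaly score = reconstruction MSE
        grad_dec = np.outer(h, err)              # gradient w.r.t. decoder weights
        grad_enc = np.outer(x, err @ self.W_dec.T)   # gradient w.r.t. encoder weights
        self.W_dec -= self.lr * grad_dec
        self.W_enc -= self.lr * grad_enc
        return score

def federated_collect(site_scores, z_threshold=3.0):
    """Central FAD-like step: flag sites whose score is an outlier across all sites."""
    scores = np.array(list(site_scores.values()))
    mu, sigma = scores.mean(), scores.std() + 1e-9
    return {site: (s - mu) / sigma > z_threshold for site, s in site_scores.items()}

# Usage: one detector per site; latest scores are streamed to the central aggregator.
rng = np.random.default_rng(1)
sites = {name: StreamAnomalyDetector(n_features=8) for name in ["africa", "asia", "europe"]}
latest = {name: det.score_and_update(rng.normal(size=8)) for name, det in sites.items()}
flags = federated_collect(latest)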

    A Hybridized-Logistic Regression and Deep Learning-based Approaches for Precise Anomaly Detection in Cloud

    Anomaly detection plays a pivotal role in determining abnormal behaviour in the cloud domain. The objective of the manuscript is to present two approaches for precise anomaly detection by hybridizing RBM with LR and SVM models. The phases of the present approach are: (a) data collection; (b) pre-processing and normalization, where a OneHot encoder converts categorical values to numerical values and the binary features are then encoded through normalization; (c) training the data; (d) building the Feedforward Deep Belief Network (EDBN) by hybridizing a Restricted Boltzmann Machine (RBM) with Logistic Regression (LR) and a Support Vector Machine (SVM); and (e) model evaluation using significant metrics such as Precision, Recall, Accuracy, F1-score, and the Confusion Matrix. In the first approach, the RBM model is trained through unsupervised pre-training followed by fine-tuning with an LR model; in the latter approach, fine-tuning uses an SVM model. Both approaches adopt unsupervised pre-training followed by supervised fine-tuning. The experimental evaluations confirm that integrating the RBM with LR and SVM yields effective anomaly detection by capturing the intricate patterns and complex relationships in the data. The proposed approaches pave the way to improved anomaly detection, thereby enhancing security features and anomaly monitoring systems across distinct domains.
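
    As a rough illustration of the first variant (unsupervised RBM pre-training followed by a Logistic Regression head), the scikit-learn sketch below chains BernoulliRBM and LogisticRegression in a pipeline. The synthetic data, feature layout, and hyperparameters are assumptions, and unlike the paper's fine-tuning step, the RBM features here stay fixed once trained.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((500, 20))             # pre-encoded, normalized features (toy data)
y = rng.integers(0, 2, size=500)      # 1 = anomalous, 0 = normal (toy labels)

model = Pipeline([
    ("scale", MinMaxScaler()),                          # RBMs expect inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05,
                         n_iter=20, random_state=0)),   # unsupervised pre-training
    ("lr", LogisticRegression(max_iter=1000)),          # supervised classification head
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
# Swapping LogisticRegression for sklearn.svm.SVC gives the second (RBM + SVM) variant.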

    ExaMon-X: a Predictive Maintenance Framework for Automatic Monitoring in Industrial IoT Systems

    In recent years, the Industrial Internet of Things (IIoT) has led to significant steps forward in many industries, thanks to the exploitation of several technologies, ranging from Big Data processing to Artificial Intelligence (AI). Among the various IIoT scenarios, large-scale data centers can reap significant benefits from adopting Big Data analytics and AI-boosted approaches, since these technologies enable effective predictive maintenance. However, most currently available off-the-shelf solutions are not ideally suited to the HPC context: for example, they do not sufficiently take into account the very heterogeneous data sources and the privacy issues that hinder the adoption of cloud solutions, or they do not fully exploit the computing capabilities available in loco at a supercomputing facility. In this paper, we tackle this issue and propose a holistic, vertical IIoT framework for predictive maintenance in supercomputers. The framework is based on a lightweight big-data monitoring infrastructure, specialized databases suited for heterogeneous data, and a set of high-level AI-based functionalities tailored to the specific needs of HPC actors. We present the deployment and assess the usage of this framework in several in-production HPC systems.
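
    The abstract stays at the architectural level; as a hedged illustration of the kind of AI-based functionality such a framework can expose, the sketch below scores per-node telemetry with an Isolation Forest to surface nodes drifting toward anomalous regimes. The metric names, contamination rate, and cut-off quantile are assumptions, not the ExaMon-X code.

import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy telemetry: one row per (node, timestamp) with heterogeneous metrics.
rng = np.random.default_rng(0)
telemetry = pd.DataFrame({
    "cpu_temp_c": rng.normal(55, 5, 1000),
    "fan_rpm": rng.normal(6000, 300, 1000),
    "power_w": rng.normal(350, 25, 1000),
})

# Fit on a window assumed to be mostly healthy, then score the samples;
# the lowest-scoring rows indicate nodes drifting toward anomalous regimes.
detector = IsolationForest(contamination=0.01, random_state=0).fit(telemetry)
scores = detector.decision_function(telemetry)
suspect_rows = telemetry[scores < np.quantile(scores, 0.01)]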

    Forecasting for Network Management with Joint Statistical Modelling and Machine Learning

    Forecasting is a task of ever-increasing importance for the operation of mobile networks, where it supports anticipatory decisions by network intelligence and enables emerging zero-touch service and network management models. While current trends in forecasting for anticipatory networking lean towards the systematic adoption of models that are purely based on deep learning approaches, we pave the way for a different strategy to the design of predictors for mobile network environments. Specifically, following recent advances in time series prediction, we consider a hybrid approach that blends statistical modelling and machine learning by means of a joint training process of the two methods. By tailoring this mixed forecasting engine to the specific requirements of network traffic demands, we develop a Thresholded Exponential Smoothing and Recurrent Neural Network (TES-RNN) model. We experiment with TES-RNN in two practical network management use cases: (i) anticipatory allocation of network resources, and (ii) mobile traffic anomaly prediction. Results obtained with extensive traffic workloads collected in an operational mobile network show that TES-RNN can yield substantial performance gains over current state-of-the-art predictors in both applications considered. This work is partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 101017109 (DAEMON). This work is also partially supported by the Spanish Ministry of Economic Affairs and Digital Transformation and the European Union-NextGenerationEU through the UNICO 5G I+D 6GCLARION-OR and AEON-ZERO projects. The authors would like to thank Dario Bega for his contribution to developing forecasting use case (i), and Slawek Smyl for his feedback on the baseline ES-RNN model.
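
    A minimal sketch of the underlying ES-RNN idea (which TES-RNN extends with thresholding) is shown below: a per-series exponential smoothing level with a learnable coefficient is trained jointly with a small GRU that predicts the level-normalized series. The window size, network size, and synthetic traffic trace are assumptions, and the thresholding component is omitted.

import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy traffic trace: a periodic load pattern plus noise (purely synthetic).
series = torch.sin(torch.linspace(0, 20, 400)) + 2.5 + 0.1 * torch.randn(400)

class ESRNN(nn.Module):
    def __init__(self, hidden=16, window=24):
        super().__init__()
        self.alpha_raw = nn.Parameter(torch.tensor(0.0))  # smoothing coefficient (sigmoid-ed)
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.window = window

    def forward(self, y):
        alpha = torch.sigmoid(self.alpha_raw)
        # Exponential smoothing pass producing a slowly varying level.
        levels = [y[0]]
        for t in range(1, len(y)):
            levels.append(alpha * y[t] + (1 - alpha) * levels[-1])
        levels = torch.stack(levels)
        norm = y[1:] / levels[:-1]              # each value relative to the previous level
        w = self.window
        xs = torch.stack([norm[j - w:j] for j in range(w, len(norm))])
        out, _ = self.rnn(xs.unsqueeze(-1))
        preds_norm = self.head(out[:, -1, :]).squeeze(-1)
        preds = preds_norm * levels[w:-1]       # re-apply the last known level
        return preds, y[w + 1:]

model = ESRNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(50):                         # joint training of alpha and the GRU
    preds, actual = model(series)
    loss = nn.functional.mse_loss(preds, actual)
    opt.zero_grad(); loss.backward(); opt.step()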

    Automatic Building of a Powerful IDS for The Cloud Based on Deep Neural Network by Using a Novel Combination of Simulated Annealing Algorithm and Improved Self-Adaptive Genetic Algorithm

    Cloud computing (CC) is the fastest-growing data hosting and computational technology and stands today as a satisfactory answer to the problem of data storage and computing. Most organizations are therefore migrating their services into the cloud due to its appealing features and tangible advantages. Nevertheless, providing privacy and security to protect cloud assets and resources is still a very challenging issue. To address these issues, we propose a smart approach that automatically constructs an efficient and effective anomaly network IDS based on a Deep Neural Network, using a novel hybrid optimization framework, "ISAGASAA". The ISAGASAA framework combines our new self-adaptive heuristic search algorithm, the "Improved Self-Adaptive Genetic Algorithm" (ISAGA), with a Simulated Annealing Algorithm (SAA). Our approach uses ISAGASAA to seek the optimal or near-optimal combination of the most pertinent values of the parameters involved in building the DNN-based IDS or impacting its performance, which guarantees a high detection rate, high accuracy, and a low false alarm rate. The experimental results demonstrate the capability of our IDS to uncover intrusions with high detection accuracy and a low false alarm rate, and show its superiority in comparison with state-of-the-art methods.
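
    The sketch below is an illustrative reading of the approach, not the ISAGASAA framework itself: a small genetic algorithm searches DNN hyperparameters, and each offspring passes through a simulated-annealing acceptance test before joining the population. The search space, the fitness function (cross-validated accuracy of an MLP stand-in), and the toy data are assumptions.

import math, random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
SPACE = {"hidden": [(32,), (64,), (64, 32)],
         "lr": [1e-3, 1e-2, 1e-1],
         "alpha": [1e-5, 1e-4, 1e-3]}           # candidate hyperparameter values

def fitness(ind):
    """Cross-validated accuracy of an MLP stand-in for the DNN-based IDS."""
    clf = MLPClassifier(hidden_layer_sizes=ind["hidden"], learning_rate_init=ind["lr"],
                        alpha=ind["alpha"], max_iter=200, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

def random_ind():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate_with_sa(ind, temp):
    """Mutate one gene; accept a worse offspring with a temperature-dependent probability."""
    child = dict(ind)
    gene = random.choice(list(SPACE))
    child[gene] = random.choice(SPACE[gene])
    f_old, f_new = fitness(ind), fitness(child)
    if f_new >= f_old or random.random() < math.exp((f_new - f_old) / temp):
        return child
    return ind

population = [random_ind() for _ in range(6)]
temp = 0.05
for generation in range(4):
    population.sort(key=fitness, reverse=True)
    parents = population[:3]                                   # elitist selection
    children = [mutate_with_sa(random.choice(parents), temp) for _ in range(3)]
    population = parents + children
    temp *= 0.7                                                # annealing schedule
best = max(population, key=fitness)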

    New Anomaly Network Intrusion Detection System in Cloud Environment Based on Optimized Back Propagation Neural Network Using Improved Genetic Algorithm

    Cloud computing is a distributed architecture that provides computing facilities and storage resources as a service over an open environment (the Internet), which leads to various issues related to security and privacy in cloud computing. Thus, defending network-accessible cloud resources and services from various threats and attacks is of great concern. To address this issue, it is essential to create an efficient and effective Network Intrusion Detection System (NIDS) able to detect both outsider and insider intruders with high detection precision in the cloud environment. NIDS has become popular as an important component of the network security infrastructure, detecting malicious activities by monitoring network traffic. In this work, we propose to optimize a very popular soft computing tool widely used for intrusion detection, namely the Back Propagation Neural Network (BPNN), using an Improved Genetic Algorithm (IGA). The Genetic Algorithm (GA) is improved through two optimization strategies, Parallel Processing and Fitness Value Hashing, which reduce execution time and convergence time and save processing power. Since the learning rate and momentum term are among the most relevant parameters impacting the performance of the BPNN classifier, we employ the IGA to find the optimal or near-optimal values of these two parameters, which ensure a high detection rate, high accuracy, and a low false alarm rate. The CloudSim 4.0 simulator and DARPA's KDD Cup 1999 dataset are used for simulation. The detailed performance analysis makes clear that the proposed system, called "ANIDS BPNN-IGA" (Anomaly NIDS based on BPNN and IGA), outperforms several state-of-the-art methods and is well suited for network anomaly detection.
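
    The sketch below illustrates the core mechanism on a stand-in classifier: a genetic algorithm searches the learning rate and momentum of an SGD-trained MLP, and fitness value hashing caches evaluations so repeated genomes are not re-trained. Parameter ranges, population size, and the toy data are assumptions; the parallel-processing strategy is omitted.

import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

random.seed(1)
X, y = make_classification(n_samples=600, n_features=20, random_state=1)
fitness_cache = {}                                  # "fitness value hashing"

def fitness(genome):
    """Cross-validated accuracy of an SGD-trained MLP with the genome's lr and momentum."""
    key = (round(genome[0], 4), round(genome[1], 4))
    if key not in fitness_cache:                    # skip re-training repeated genomes
        lr, momentum = genome
        clf = MLPClassifier(hidden_layer_sizes=(32,), solver="sgd",
                            learning_rate_init=lr, momentum=momentum,
                            max_iter=300, random_state=1)
        fitness_cache[key] = cross_val_score(clf, X, y, cv=3).mean()
    return fitness_cache[key]

def crossover(a, b):
    return (a[0], b[1]) if random.random() < 0.5 else (b[0], a[1])

def mutate(g):
    lr = min(0.5, max(1e-4, g[0] * random.uniform(0.5, 1.5)))
    momentum = min(0.99, max(0.0, g[1] + random.uniform(-0.1, 0.1)))
    return (lr, momentum)

population = [(random.uniform(1e-3, 0.3), random.uniform(0.1, 0.95)) for _ in range(8)]
for generation in range(5):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]
    offspring = [mutate(crossover(*random.sample(parents, 2))) for _ in range(4)]
    population = parents + offspring
best_lr, best_momentum = max(population, key=fitness)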

    Enhanced non-parametric sequence learning scheme for internet of things sensory data in cloud infrastructure

    The Internet of Things (IoT) Cloud is an emerging technology that enables machine-to-machine, human-to-machine and human-to-human interaction through the Internet. IoT sensor devices tend to generate sensory data known for their dynamic and heterogeneous nature, which makes such data difficult to manage on the sensor devices themselves because of their limited computation power and storage space. The Cloud Infrastructure as a Service (IaaS) compensates for these limitations by making its computation power and storage resources available to process IoT sensory data. In IoT-Cloud IaaS, resource allocation is the process of distributing optimal resources to execute data request tasks that comprise data filtering operations. Recently, machine learning, non-heuristic, multi-objective and hybrid algorithms have been applied for efficient resource allocation to execute IoT sensory data filtering request tasks in IoT-enabled Cloud IaaS. However, the filtering task is still prone to several challenges: global search entrapment for event and error outlier detection as the dimension of the dataset increases, the inability to recover missing data for effective redundant data elimination, and local search entrapment that leads to unbalanced workloads on the resources required for task execution. In this thesis, enhancements of the Non-Parametric Sequence Learning (NPSL), Perceptually Important Point (PIP) and Efficient Energy Resource Ranking-Virtual Machine Selection (ERVS) algorithms are proposed. The Non-Parametric Sequence-based Agglomerative Gaussian Mixture Model (NPSAGMM) technique is first utilized to improve the detection of event and error outliers in the global space as the dimension of the dataset increases. Then, the Perceptually Important Points K-means-enabled Cosine and Manhattan (PIP-KCM) technique is employed to recover missing data and thereby improve the elimination of duplicate sensed data records. Finally, an Efficient Resource Balance Ranking-based Glowworm Swarm Optimization (ERBV-GSO) technique is used to resolve local search entrapment for near-optimal solutions and to reduce workload imbalance on the resources available for task execution on the IoT-Cloud IaaS platform. Experiments were carried out using the NetworkX simulator, and the results of the NPSAGMM, PIP-KCM and ERBV-GSO techniques were compared with the NPSL, PIP, ERVS and Resource Fragmentation Aware (RF-Aware) algorithms. The experimental results showed that the proposed NPSAGMM, PIP-KCM, and ERBV-GSO techniques produced substantial improvements: 3.602%/6.74% in Precision, 9.724%/8.77% in Recall and 5.350%/4.42% in Area Under the Curve for the detection of event and error outliers; a 94.273% F1-score, a 0.143 reduction ratio and a minimum 0.149% Root Mean Squared Error for redundant data elimination; and a minimum of 608 virtual machine migrations, 47.62% resource utilization and a 41.13% load balancing degree for the allocation of the resources deployed to execute sensory data filtering tasks. The proposed techniques have therefore proven effective for improving load balancing when allocating the resources needed for efficient outlier (event and error) detection and for eliminating redundant data records in the IoT-based Cloud IaaS infrastructure.
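
    As a hedged illustration of the outlier-detection step (not the NPSAGMM algorithm itself), the sketch below fits a Gaussian mixture to multivariate sensor readings and flags low-likelihood samples as event/error outlier candidates; the component count, injected faults, and cut-off quantile are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
normal = rng.normal(loc=[20.0, 55.0], scale=[1.0, 3.0], size=(950, 2))    # temperature, humidity
faults = rng.normal(loc=[40.0, 10.0], scale=[2.0, 2.0], size=(50, 2))     # injected outliers
readings = np.vstack([normal, faults])

gmm = GaussianMixture(n_components=3, random_state=0).fit(readings)
log_density = gmm.score_samples(readings)
threshold = np.quantile(log_density, 0.05)       # bottom 5% of likelihood flagged
flagged = readings[log_density < threshold]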

    A Novel Hybrid Spotted Hyena-Swarm Optimization (HS-FFO) Framework for Effective Feature Selection in IoT Based Cloud Security Data

    The Internet of Things (IoT) has gained major traction in terms of deployment and applications. Since IoT exhibits highly heterogeneous characteristics in transmitting real-time application data, these data are vulnerable to many security threats. To safeguard the data, machine learning and deep learning based security systems have been proposed, but such systems suffer from a computational burden that impedes their threat detection capability. Hence, feature selection plays an important role in designing complexity-aware IoT systems to defend against security attacks. This paper proposes a novel ensemble of the spotted hyena and firefly algorithms to choose the best features and minimise redundant data features, which can boost the detection system's computational effectiveness. Firstly, an effective firefly-optimized feature correlation method is developed. Then, in order to enhance the exploration and search path, firefly operators are combined with the Spotted Hyena optimizer to assist the swarms in escaping regionally best solutions. The experimentation has been carried out using different IoT cloud security datasets, such as the NSL-KDD-99, UNSW and CIDCC-001 datasets, and contrasted with ten cutting-edge feature selection techniques, including PSO (Particle Swarm Optimization), BAT, Firefly, ACO (Ant Colony Optimization), Improved PSO, CAT, RAT, Spotted Hyena, SHO and BOC (Bee-Colony Optimization). The results demonstrate that the proposed hybrid model achieves a better feature selection mechanism with shorter convergence time and better supports an intelligent threat detection system with high detection performance.
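
    A hedged sketch of the general wrapper-style swarm selection idea follows; it is not the paper's HS-FFO operators. Binary feature masks are pulled toward the best mask found so far, with random flips for exploration, and fitness rewards cross-validated accuracy while lightly penalizing subset size. All settings and the toy data are assumptions.

import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

random.seed(0)
X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)
n_features = X.shape[1]

def fitness(mask):
    """Cross-validated accuracy on the selected columns, minus a small size penalty."""
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), X[:, idx], y, cv=3).mean()
    return acc - 0.002 * len(idx)

def move_toward(mask, best, attract=0.6, flip=0.1):
    """Pull each bit toward the best mask with prob `attract`, flip it with prob `flip`."""
    new = []
    for m, b in zip(mask, best):
        r = random.random()
        new.append(b if r < attract else (1 - m if r < attract + flip else m))
    return new

swarm = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(8)]
for iteration in range(10):
    swarm.sort(key=fitness, reverse=True)
    best = swarm[0]
    swarm = [best] + [move_toward(m, best) for m in swarm[1:]]
selected = [i for i, bit in enumerate(max(swarm, key=fitness)) if bit]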

    Machine Learning Defence Mechanism for Securing the Cloud Environment

    A computing paradigm known as "cloud computing" offers end users on-demand, scalable, and measurable services. Today's businesses rely heavily on computer technology for a variety of reasons, including cost savings, infrastructure, development platforms, data processing, and data analytics. End users can access the cloud service providers' (CSP) services from any location at any time using a web application. The protection of the cloud infrastructure is of the highest significance, and several studies using a variety of technologies have been conducted to develop more effective defenses against cloud threats. In recent years, machine learning technology has been shown to be particularly effective in securing the cloud environment. To create models that can automate the process of identifying cloud threats with better accuracy than any other technology, machine learning algorithms are trained on a variety of real-world datasets. In this study, various recent research publications that use machine learning as a defense mechanism against cloud threats are reviewed.

    Edge computing infrastructure for 5G networks: a placement optimization solution

    This thesis focuses on how to optimize the placement of the Edge Computing infrastructure for upcoming 5G networks. To this aim, the core contributions of this research are twofold: 1) a novel heuristic called Hybrid Simulated Annealing to tackle the NP-hard nature of the problem and, 2) a framework called EdgeON providing a practical tool for real-life deployment optimization. In more detail, Edge Computing has grown into a key solution to 5G latency, reliability and scalability requirements. By bringing computing, storage and networking resources to the edge of the network, delay-sensitive applications, location-aware systems and upcoming real-time services leverage the benefits of a reduced physical and logical path between the end-user and the data or service host. Nevertheless, the edge node placement problem raises critical concerns regarding deployment and operational expenditures (i.e., mainly due to the number of nodes to be deployed), current backhaul network capabilities and non-technical placement limitations. Common approaches to the placement of edge nodes are based on: Mobile Edge Computing (MEC), where the processing capabilities are deployed at the Radio Access Network nodes and Facility Location Problem variations, where a simplistic cost function is used to determine where to optimally place the infrastructure. However, these methods typically lack the flexibility to be used for edge node placement under the strict technical requirements identified for 5G networks. They fail to place resources at the network edge for 5G ultra-dense networking environments in a network-aware manner. This doctoral thesis focuses on rigorously defining the Edge Node Placement Problem (ENPP) for 5G use cases and proposes a novel framework called EdgeON aiming at reducing the overall expenses when deploying and operating an Edge Computing network, taking into account the usage and characteristics of the in-place backhaul network and the strict requirements of a 5G-EC ecosystem. The developed framework implements several placement and optimization strategies thoroughly assessing its suitability to solve the network-aware ENPP. The core of the framework is an in-house developed heuristic called Hybrid Simulated Annealing (HSA), seeking to address the high complexity of the ENPP while avoiding the non-convergent behavior of other traditional heuristics (i.e., when applied to similar problems). The findings of this work validate our approach to solve the network-aware ENPP, the effectiveness of the heuristic proposed and the overall applicability of EdgeON. Thorough performance evaluations were conducted on the core placement solutions implemented revealing the superiority of HSA when compared to widely used heuristics and common edge placement approaches (i.e., a MEC-based strategy). Furthermore, the practicality of EdgeON was tested through two main case studies placing services and virtual network functions over the previously optimally placed edge nodes. Overall, our proposal is an easy-to-use, effective and fully extensible tool that can be used by operators seeking to optimize the placement of computing, storage and networking infrastructure at the users’ vicinity. 
Therefore, our main contributions not only set strong foundations towards a cost-effective deployment and operation of an Edge Computing network, but also directly impact the feasibility of upcoming 5G services and use cases and the extensive existing research regarding the placement of services and even network service chains at the edge.
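
    As a hedged illustration of the simulated-annealing core (not the HSA heuristic or EdgeON itself), the sketch below toggles candidate edge sites on and off to minimize a cost that combines node count with a penalty for users left outside a latency budget. The coordinates, costs, budget, and cooling schedule are assumptions.

import math, random

random.seed(0)
users = [(random.random(), random.random()) for _ in range(200)]       # user locations
candidates = [(random.random(), random.random()) for _ in range(30)]   # candidate edge sites
LATENCY_BUDGET = 0.25       # max user-to-node distance, a proxy for latency
NODE_COST, PENALTY = 1.0, 10.0

def cost(active):
    """Deployment cost plus a penalty for every user outside the latency budget."""
    nodes = [candidates[i] for i in active]
    uncovered = sum(
        1 for u in users
        if not nodes or min(math.dist(u, n) for n in nodes) > LATENCY_BUDGET
    )
    return NODE_COST * len(active) + PENALTY * uncovered

state = set(random.sample(range(len(candidates)), 10))
current = cost(state)
best, best_cost, temp = set(state), current, 5.0
for step in range(2000):
    neighbor = set(state)
    neighbor.symmetric_difference_update({random.randrange(len(candidates))})  # toggle one site
    cand = cost(neighbor)
    if cand < current or random.random() < math.exp((current - cand) / temp):
        state, current = neighbor, cand
        if current < best_cost:
            best, best_cost = set(state), current
    temp *= 0.999                                    # geometric cooling schedule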