4,304 research outputs found

    The Challenges in SDN/ML Based Network Security: A Survey

    Machine learning is gaining popularity in the network security domain as more network-enabled devices get connected, as malicious activities become stealthier, and as new technologies like Software Defined Networking (SDN) emerge. Sitting at the application layer and communicating with the control layer, machine-learning-based SDN security models exert a huge influence on the routing/switching of the entire SDN. Compromising these models is consequently a very desirable goal for attackers. Previous surveys have covered either adversarial machine learning or the general vulnerabilities of SDNs, but not both. Through examination of the latest ML-based SDN security applications and a close look at ML/SDN-specific vulnerabilities together with common attack methods on ML, this paper serves as a unique survey, making a case for more secure development processes for ML-based SDN security applications. Comment: 8 pages. arXiv admin note: substantial text overlap with arXiv:1705.0056
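The evasion threat this survey is concerned with can be illustrated with a minimal, purely hypothetical sketch: a linear anomaly scorer stands in for an ML-based SDN security model, and an FGSM-style perturbation nudges a malicious flow's features until the model misclassifies it. The weights, feature vector, and epsilon budget below are all invented for illustration.

```python
import numpy as np

# Hypothetical linear anomaly scorer standing in for an ML-based SDN
# security model: score > 0 means "malicious", score <= 0 means "benign".
# Weights, features, and epsilon are invented for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.0

def score(x):
    return float(w @ x + b)

# A flow-feature vector the model correctly flags as malicious.
x = np.array([0.6, -0.2, 0.4])

# FGSM-style evasion: step each feature against the gradient of the
# score, i.e. along -sign(w), to push the score below the threshold.
eps = 0.5
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # the perturbed flow now scores as benign
```

For a linear model the attack is exact: the score drops by eps times the L1 norm of the weights, which is why small per-feature budgets suffice against such models.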

    Automated Anomaly Detection in Virtualized Services Using Deep Packet Inspection

    Virtualization technologies have proven to be important drivers for the fast and cost-efficient development and deployment of services. While the benefits are tremendous, there are many challenges to be faced when developing or porting services to virtualized infrastructure. Especially critical applications like Virtualized Network Functions must meet high requirements in terms of reliability and resilience. An important tool for meeting such requirements is detecting anomalous system components and resolving the anomaly before it turns into a fault and subsequently into a failure visible to the client. Anomaly detection for virtualized services relies on collecting system metrics that represent the normal operation state of every component and allow machine learning algorithms to automatically build models representing that state. This paper presents an approach for collecting service-layer metrics while treating services as black boxes. This allows service providers to implement anomaly detection on the application layer without the need to modify third-party software. Deep Packet Inspection is used to analyse the traffic of virtual machines on the hypervisor layer, producing both generic and protocol-specific communication metrics. An evaluation shows that the resulting metrics represent the normal operation state of an example Virtualized Network Function and are therefore a valuable contribution to automatic anomaly detection in virtualized services.
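As a rough illustration of the black-box idea (the metric names, values, and threshold here are assumptions for this sketch, not the paper's), a "normal operation" model over DPI-derived service-layer metrics can be as simple as per-metric baseline statistics:

```python
import numpy as np

# Assumed service-layer metrics per interval (names invented):
# [requests per second, mean response size in bytes], extracted by DPI
# from a VM's traffic. "Normal operation" is modelled by per-metric
# mean and standard deviation learned from a clean baseline period.
rng = np.random.default_rng(0)
baseline = rng.normal([100.0, 512.0], [5.0, 20.0], size=(500, 2))
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

def is_anomalous(sample, k=4.0):
    # Flag the interval if any metric deviates by more than k sigmas.
    z = np.abs((sample - mu) / sigma)
    return bool((z > k).any())

print(is_anomalous(np.array([101.0, 520.0])))  # typical interval
print(is_anomalous(np.array([400.0, 512.0])))  # request-rate spike
```

Because the metrics come from traffic observed at the hypervisor, this monitoring needs no agent inside the guest and no changes to third-party software, which is the point of the black-box approach.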

    Using Machine-Learning for the Damage Detection of Harbour Structures

    The ageing infrastructure in ports requires regular inspection. This inspection is currently carried out manually by divers who sense the entire below-water infrastructure by hand. This process is cost-intensive as it involves a lot of time and human resources. To overcome these difficulties, we propose scanning the above- and below-water port structure with a multi-sensor system and classifying the resulting point cloud into damaged and undamaged zones in a fully automated process. We make use of simulated training data to test our approach because not enough training data with corresponding class labels are available yet. Accordingly, we build a rasterised height field of a point cloud of a sheet pile wall by subtracting a computer-aided design model. This height field is propagated through a convolutional neural network, which detects anomalies. We make use of two methods: the VGG19 deep neural network and local outlier factors. We show that our approach can achieve fully automated, reproducible, quality-controlled damage detection that analyses the whole structure, rather than the sample-wise manual inspection by divers. We were able to achieve valuable results for our application: the accuracy of the proposed method is 98.8% at a target recall of 95%. The proposed strategy is also applicable to other infrastructure objects, such as bridges and high-rise buildings.
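A simplified stand-in for the outlier-detection step might look like the following sketch (the grid size, residual magnitudes, and threshold are invented for illustration): cells of the height-field residual are scored by the distance to their k-th nearest residual value, a crude one-dimensional analogue of density-based methods like the local outlier factor.

```python
import numpy as np

# Invented example data: residuals (in metres) of a rasterised height
# field after subtracting the CAD model. Undamaged cells sit near zero;
# a small patch simulates spalling damage on the sheet pile wall.
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 0.002, size=(50, 50))
residuals[20:23, 30:33] += 0.05

def knn_outlier_scores(values, k=20):
    # Score each cell by the distance to its k-th nearest residual
    # value -- a crude 1-D analogue of density-based outlier detection.
    v = values.ravel()
    d = np.abs(v[:, None] - v[None, :])
    d.sort(axis=1)
    return d[:, k].reshape(values.shape)

scores = knn_outlier_scores(residuals)
damaged = scores > 0.01  # absolute threshold chosen for this sketch
print(int(damaged.sum()))  # cells classified as damaged
```

The damaged patch sits far from the dense cluster of near-zero residuals, so its k-NN distances are orders of magnitude larger; a full implementation would use the local outlier factor or a CNN over the height field, as the paper describes.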

    Comparative study of machine learning algorithms for anomaly detection in Cloud infrastructure

    Cloud computing is one of the emerging technologies in the field of computer science and is extremely popular because of its use of elastic resources to provide optimized, cost-effective and on-demand services. As the technology grows in scale and complexity, the need for automated anomaly detection and monitoring systems has become important. Inappropriate exploitation of cloud resources can often lead to faults such as crashing of VMs or decreased efficiency of the cloud system, thereby leading to violations of the Service Level Agreement (SLA). These faults are often preceded by anomalies in the behavior of the VMs. Hence, the anomalies can be used as indicators of faults which potentially violate the SLAs. We have created a system that monitors the VMs, detects anomalies and warns the system administrator before any problem escalates. We present in this paper a comparative study of various machine learning algorithms used for detecting anomalies in the cloud.
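A comparison harness in this spirit (the detectors, metric ranges, and labels below are synthetic stand-ins, not the paper's algorithms or data) scores two simple detectors on the same labelled VM-metric samples, which is the essential shape of such a comparative study:

```python
import numpy as np

# Synthetic stand-in data (assumed, not the paper's): per-sample VM
# metrics are CPU % and memory %, labelled normal (0) or anomalous (1).
rng = np.random.default_rng(2)
normal = rng.normal([40.0, 50.0], [5.0, 5.0], size=(400, 2))
anomal = rng.normal([90.0, 95.0], [5.0, 5.0], size=(100, 2))
X = np.vstack([normal, anomal])
y = np.array([0] * 400 + [1] * 100)

def gaussian_detector(train, x, k=3.0):
    # Flag a sample when any metric is more than k sigmas from the
    # baseline learned on normal-only training data.
    mu, sd = train.mean(axis=0), train.std(axis=0)
    return (np.abs((x - mu) / sd) > k).any(axis=1).astype(int)

def centroid_detector(train_n, train_a, x):
    # Nearest-centroid classification using both labelled classes.
    cn, ca = train_n.mean(axis=0), train_a.mean(axis=0)
    dn = np.linalg.norm(x - cn, axis=1)
    da = np.linalg.norm(x - ca, axis=1)
    return (da < dn).astype(int)

accs = {}
for name, pred in [("gaussian", gaussian_detector(normal, X)),
                   ("centroid", centroid_detector(normal, anomal, X))]:
    accs[name] = float((pred == y).mean())
    print(name, round(accs[name], 3))
```

Evaluating every candidate on an identical labelled set keeps the comparison fair; a real study would add cross-validation and metrics beyond accuracy, such as precision and recall.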

    Effective and efficient network anomaly detection system using machine learning algorithm

    A network anomaly detection system makes it possible to monitor a computer network for behaviour that deviates from the network protocol, and such systems are implemented in many domains. Yet a problem arises because different application domains define the anomalies in their environment differently. This makes it difficult, and far from straightforward, to choose the algorithm that best suits and fulfils the requirements of a given domain. Additionally, centralization raises the issue that powerful malicious code injected into the system can cause fatal damage to the entire network. Therefore, in this paper we conduct experiments using supervised Machine Learning (ML) for a network anomaly detection system with low communication cost and minimized network bandwidth, using the UNSW-NB15 dataset to compare the algorithms' performance in terms of accuracy (effectiveness) and the processing time a classifier needs to build a model (efficiency). Supervised machine learning takes the important features into account by labelling them in the datasets. The best machine learning algorithm for this network dataset is AODE, with an accuracy of 97.26% and a model-building time of approximately 7 seconds. A distributed algorithm also resolves the centralization issue, with accuracy and processing time still comparable to a centralized algorithm, despite a small drop in accuracy and a slightly longer runtime.
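AODE averages one-dependence estimators and is typically run in Weka; as a simplified, hypothetical stand-in, the sketch below trains a related but simpler model, a Gaussian naive Bayes classifier, on synthetic flow features loosely shaped like UNSW-NB15 columns (the feature names and distributions are invented), and reports both accuracy and model-building time, mirroring the effectiveness/efficiency comparison:

```python
import time
import numpy as np

# Hypothetical stand-in for the Weka-based AODE experiment: Gaussian
# naive Bayes on synthetic flow features loosely modelled on
# UNSW-NB15 columns (duration, bytes, packets -- assumed, not real data).
rng = np.random.default_rng(3)
X_norm = rng.normal([1.0, 500.0, 10.0], [0.3, 100.0, 3.0], size=(1000, 3))
X_atk = rng.normal([5.0, 5000.0, 80.0], [1.0, 800.0, 10.0], size=(1000, 3))
X = np.vstack([X_norm, X_atk])
y = np.array([0] * 1000 + [1] * 1000)

# "Training" is just estimating per-class feature means and deviations,
# timed to mirror the paper's model-building-time measurement.
t0 = time.perf_counter()
params = {c: (X[y == c].mean(axis=0), X[y == c].std(axis=0)) for c in (0, 1)}
fit_seconds = time.perf_counter() - t0

def predict(x):
    # Higher Gaussian log-likelihood wins (equal priors, independent features).
    ll = []
    for c in (0, 1):
        mu, sd = params[c]
        ll.append(-0.5 * (((x - mu) / sd) ** 2 + 2.0 * np.log(sd)).sum(axis=1))
    return (ll[1] > ll[0]).astype(int)

accuracy = float((predict(X) == y).mean())
print(round(accuracy, 3), round(fit_seconds, 4))
```

Naive Bayes assumes features are independent given the class; AODE relaxes exactly this assumption by averaging over estimators that each condition on one extra attribute, which is why it tends to be more accurate at modest additional cost.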