3 research outputs found

    Features Extraction on IoT Intrusion Detection System Using Principal Components Analysis (PCA)

    There are several ways to improve detection accuracy in intrusion detection systems (IDS); one of them is feature extraction, in which the original features are filtered and converted into features of lower dimension. This paper uses Principal Components Analysis (PCA) for feature extraction in an intrusion detection system, with the aim of improving the accuracy and precision of detection. The impact of feature extraction on attack detection was examined. Experiments were conducted on a network traffic dataset created from an Internet of Things (IoT) testbed network topology, and the results show that detection accuracy reaches 100 percent.
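
    The abstract does not specify the full pipeline, but the following is a minimal sketch of the PCA step it describes: standardize the traffic features, project them onto a lower-dimensional subspace, and feed the reduced features to a detector. The synthetic data, the component count, and the RandomForest detector are illustrative assumptions (scikit-learn stands in for whatever tooling the paper actually used).

    # Minimal sketch of PCA-based feature extraction for an IDS pipeline.
    # The data, component count, and classifier are placeholder assumptions.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 40))    # placeholder for real traffic features
    y = rng.integers(0, 2, size=1000)  # placeholder attack/benign labels

    # Standardize first so high-variance features do not dominate the PCA.
    X_scaled = StandardScaler().fit_transform(X)

    # Project the original 40 features onto a 10-dimensional subspace.
    pca = PCA(n_components=10)
    X_reduced = pca.fit_transform(X_scaled)

    # Train a detector on the reduced features and report accuracy/precision.
    X_tr, X_te, y_tr, y_te = train_test_split(X_reduced, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("accuracy:", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))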

    Time efficiency on computational performance of PCA, FA and TSVD on ransomware detection

    Ransomware can attack and take over access to a targeted user's computer, after which the attackers demand a ransom to restore the user's access rights. Ransomware detection, especially on big data, suffers from long computational processing times (slow detection speed), so a dimensionality reduction method is required for computational efficiency. This research investigates the efficiency of three dimensionality reduction methods: Principal Component Analysis (PCA), Factor Analysis (FA), and Truncated Singular Value Decomposition (TSVD). Experimental results on the CICAndMal2017 dataset show that PCA is the fastest of the three in the computational process, with an average detection time of 34.33 s. Furthermore, the accuracy, precision, and recall results also show that PCA is superior to FA and TSVD.
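
    A minimal sketch of the kind of timing comparison the abstract describes, measuring fit-and-transform wall-clock time for PCA, FA, and TSVD with scikit-learn. The synthetic data shape and component count are assumptions; the CICAndMal2017 features and the downstream detection step are not reproduced here.

    # Compare wall-clock fit+transform time of three reduction methods.
    # Data shape and component count are illustrative, not the paper's setup.
    import time
    import numpy as np
    from sklearn.decomposition import PCA, FactorAnalysis, TruncatedSVD

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 80))  # placeholder for real malware features

    methods = {
        "PCA": PCA(n_components=20),
        "FA": FactorAnalysis(n_components=20),
        "TSVD": TruncatedSVD(n_components=20),
    }

    for name, reducer in methods.items():
        start = time.perf_counter()
        reducer.fit_transform(X)
        elapsed = time.perf_counter() - start
        print(f"{name}: {elapsed:.3f}s")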

    Improving efficiency and resilience in large-scale computing systems through analytics and data-driven management

    Applications running in large-scale computing systems such as high performance computing (HPC) or cloud data centers are essential to many aspects of modern society, from weather forecasting to financial services. As the number and size of data centers increase with the growing computing demand, scalable and efficient management becomes crucial. However, data center management is a challenging task due to the complex interactions between applications, middleware, and hardware layers such as processors, network, and cooling units. This thesis claims that to improve robustness and efficiency of large-scale computing systems, significantly higher levels of automated support than what is available in today's systems are needed, and this automation should leverage the data continuously collected from various system layers. Towards this claim, we propose novel methodologies to automatically diagnose the root causes of performance and configuration problems and to improve efficiency through data-driven system management.

    We first propose a framework to diagnose software and hardware anomalies that cause undesired performance variations in large-scale computing systems. We show that by training machine learning models on resource usage and performance data collected from servers, our approach successfully diagnoses 98% of the injected anomalies at runtime in real-world HPC clusters with negligible computational overhead. We then introduce an analytics framework to address another major source of performance anomalies in cloud data centers: software misconfigurations. Our framework discovers and extracts configuration information from cloud instances such as containers or virtual machines. This is the first framework to provide comprehensive visibility into software configurations in multi-tenant cloud platforms, enabling systematic analysis for validating the correctness of software configurations.

    This thesis also contributes to the design of robust and efficient system management methods that leverage continuously monitored resource usage data. To improve performance under power constraints, we propose a workload- and cooling-aware power budgeting algorithm that distributes the available power among servers and cooling units in a data center, achieving up to 21% improvement in throughput per Watt compared to the state-of-the-art. Additionally, we design a network- and communication-aware HPC workload placement policy that reduces communication overhead by up to 30% in terms of hop-bytes compared to existing policies.
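
    As a rough illustration of the anomaly-diagnosis idea (a classifier trained on monitored resource-usage features that labels the anomaly type at runtime), here is a hedged sketch. The feature set, the hypothetical anomaly classes, and the RandomForest model are assumptions for illustration; the thesis's actual telemetry and models are not reproduced here.

    # Sketch: classify anomaly type from per-server resource-usage features.
    # Features, labels, and model choice are illustrative assumptions only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 2000
    # Placeholder window statistics over monitored metrics (CPU, memory,
    # network); a real system would compute these from collected telemetry.
    X = rng.normal(size=(n, 12))
    # Hypothetical classes: 0 = healthy, 1 = CPU contention, 2 = memory leak.
    y = rng.integers(0, 3, size=n)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("cross-validated diagnosis accuracy:", scores.mean())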