
    Building an Emulation Environment for Cyber Security Analyses of Complex Networked Systems

    Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats to Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions require separate test environments that best emulate the complexity of a real system. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under real and controllable traffic and attack conditions. In this paper, we propose a methodology that combines network and security assessment techniques with cloud technologies to build an emulation environment with an adjustable degree of affinity with respect to actual reference networks or planned systems. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available.
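
    The abstract above hinges on an emulation environment with an adjustable degree of affinity to a reference network. As a loose illustration of that idea only (not the paper's method), the following sketch computes a crude affinity score between a reference topology and an emulated replica from node counts, degree distributions and exposed services; the data structures, fields and equal weighting are all assumptions.

    # Hypothetical sketch: a coarse "affinity" score between a reference network
    # and an emulated replica. Not taken from the paper; field names and the
    # equal weighting of the three components are illustrative assumptions.
    from collections import Counter

    def degree_histogram(edges):
        """Histogram of node degrees: {degree: number of nodes with it}."""
        deg = Counter()
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        return Counter(deg.values())

    def affinity(reference, emulated):
        """Score in [0, 1]; 1.0 means the replica matches on these coarse features."""
        # Node-count similarity.
        n_ref, n_emu = len(reference["nodes"]), len(emulated["nodes"])
        node_score = min(n_ref, n_emu) / max(n_ref, n_emu)
        # Degree-distribution overlap (shared degree counts over reference counts).
        h_ref = degree_histogram(reference["edges"])
        h_emu = degree_histogram(emulated["edges"])
        degree_score = sum((h_ref & h_emu).values()) / max(sum(h_ref.values()), 1)
        # Exposed-service overlap (Jaccard index).
        s_ref, s_emu = set(reference["services"]), set(emulated["services"])
        service_score = len(s_ref & s_emu) / max(len(s_ref | s_emu), 1)
        return (node_score + degree_score + service_score) / 3

    reference = {"nodes": ["fw", "web", "db"], "edges": [("fw", "web"), ("web", "db")],
                 "services": ["http", "ssh"]}
    emulated = {"nodes": ["fw", "web", "db"], "edges": [("fw", "web"), ("web", "db")],
                "services": ["http"]}
    print(round(affinity(reference, emulated), 2))  # 0.83 for this toy pair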

    Impregnable Defence Architecture using Dynamic Correlation-based Graded Intrusion Detection System for Cloud

    Data security and privacy are perennial concerns related to cloud migration, whether it concerns applications, businesses or customers. In this paper, a novel security architecture for the cloud environment is designed with intrusion detection and prevention system (IDPS) components arranged as a graded multi-tier defence framework. It is a defensive formation of collaborative IDPS components, with dynamically revolving alert data placed across multiple tiers of virtual local area networks (VLANs). The model makes two significant contributions to impregnable protection: the first is to reduce alert generation delay through dynamic correlation, and the second is to support supervised learning for malware detection through system call analysis. The defence formation facilitates malware detection with a linear support vector machine trained by stochastic gradient descent (SVM-SGD), a statistical algorithm that requires little computational effort to counter distributed, coordinated attacks efficiently. The framework then takes a distributed port scan attack as an example, and its efficiency is assessed in terms of the reduction in alert generation delay, the number of false positives and the learning time, through comparison with existing techniques.
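
    To make the SVM-SGD component concrete, here is a minimal sketch (not the paper's implementation) of a linear support vector machine trained with stochastic gradient descent over system-call bigram counts, using scikit-learn's SGDClassifier with hinge loss; the toy traces and labels below are invented.

    # Minimal SVM-SGD sketch over system-call n-grams; illustrative data only.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import SGDClassifier

    # Each sample is a whitespace-separated system-call trace (made up).
    traces = [
        "open read write close",          # benign
        "open read read write close",     # benign
        "fork exec ptrace connect send",  # malicious
        "fork exec connect send send",    # malicious
    ]
    labels = [0, 0, 1, 1]  # 0 = benign, 1 = malicious

    # Bag of system-call bigrams as features.
    vectorizer = CountVectorizer(ngram_range=(2, 2), token_pattern=r"\S+")
    X = vectorizer.fit_transform(traces)

    # hinge loss == linear SVM objective, optimized by stochastic gradient descent.
    clf = SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3, random_state=0)
    clf.fit(X, labels)

    tests = vectorizer.transform(["open read write close", "fork exec ptrace connect"])
    print(clf.predict(tests))  # expected [0 1] on this toy data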

    A Framework for Hybrid Intrusion Detection Systems

    Web application security breaches are a definite threat to the world’s information technology infrastructure. The Open Web Application Security Project (OWASP) generally defines web application security violations as unauthorized or unintentional exposure, disclosure, or loss of personal information. These breaches occur without the company’s knowledge, and it often takes a while before the web application attack is revealed to the public, typically not until the security violations have been fixed. Due to the need to protect their reputation, organizations have begun researching solutions to these problems. The most widely accepted solution is the use of an Intrusion Detection System (IDS). Such systems currently rely on either signatures of the attack used for the data breach or changes in the behavior patterns of the system to identify an intruder. These systems, whether signature-based or anomaly-based, are readily understood by attackers. Issues arise when attacks are not noticed by an existing IDS because they do not fit the pre-defined attack signatures the IDS is implemented to discover. Despite the capabilities of current IDSs, little research has identified a method to detect all potential attacks on a system. This thesis intends to address this problem, with particular emphasis on detecting advanced attacks, such as those that take place at the application layer. These attacks are able to bypass existing IDSs, increasing the potential for a web application security breach to occur and go undetected. The attacks under study are all web application layer attacks; those included in this thesis are SQL injection, cross-site scripting, directory traversal and remote file inclusion. This work identifies common and existing data breach detection methods as well as the necessary improvements to IDS models. Ultimately, the proposed approach combines an anomaly detection technique based on cross entropy with a signature-based attack detection framework utilizing a genetic algorithm. The proposed hybrid model for data breach detection benefits organizations by increasing security measures and allowing attacks to be identified in less time and more efficiently.
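
    As a rough illustration of the anomaly-detection half of the proposed hybrid model (not the thesis code), the sketch below scores web requests by the cross entropy between a benign character distribution and each request's own character distribution, so unusual payloads such as SQL injection stand out; the baseline requests, Laplace smoothing and printable-ASCII alphabet are assumptions.

    # Illustrative cross-entropy anomaly scoring of request strings.
    import math
    from collections import Counter

    ALPHABET = [chr(i) for i in range(32, 127)]  # printable ASCII (assumed)

    def char_distribution(text):
        counts = Counter(text)
        total = sum(counts[c] for c in ALPHABET)
        # Laplace smoothing so unseen characters never get probability zero.
        return {c: (counts[c] + 1) / (total + len(ALPHABET)) for c in ALPHABET}

    def cross_entropy(p, q):
        """H(p, q) = -sum_x p(x) * log2 q(x); larger means less like the baseline."""
        return -sum(p[c] * math.log2(q[c]) for c in ALPHABET)

    benign = ["id=42&page=3", "user=alice&lang=en", "q=network+security"]
    baseline = char_distribution("".join(benign))

    for req in ["q=intrusion+detection", "id=1' OR '1'='1' --"]:
        score = cross_entropy(char_distribution(req), baseline)
        print(f"{score:6.2f} bits  {req}")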

    Non-intrusive anomaly detection for encrypted networks

    The use of encryption is steadily increasing, and encrypted packet payloads are becoming increasingly difficult to analyze using IDSs. This investigation uses a new non-intrusive IDS approach to detect network intrusions using a K-Means clustering methodology. This work utilized the KDD '99 and NSL-KDD evaluation datasets for testing. It was found that this approach was able to detect many intrusions in these datasets while maintaining the encrypted confidentiality of packet information.
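
    A minimal sketch of the general approach (not the paper's pipeline): cluster flow-level features with K-Means, so no packet payloads need to be decrypted or inspected, and treat the minority cluster as suspicious. The synthetic features below stand in for the KDD '99 / NSL-KDD records used in the paper, and the two-cluster, minority-cluster heuristic is an assumption.

    # K-Means over payload-free flow features; synthetic stand-in data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Columns: duration (s), bytes transferred, packet count -- all invented.
    normal = rng.normal(loc=[1.0, 500, 10], scale=[0.2, 50, 2], size=(200, 3))
    attack = rng.normal(loc=[0.1, 9000, 300], scale=[0.05, 500, 30], size=(10, 3))
    flows = np.vstack([normal, attack])

    X = StandardScaler().fit_transform(flows)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

    sizes = np.bincount(km.labels_)              # points per cluster
    suspicious = km.labels_ == np.argmin(sizes)  # minority cluster
    print("flows flagged as suspicious:", int(suspicious.sum()))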

    Xanthus: Push-button Orchestration of Host Provenance Data Collection

    Host-based anomaly detectors generate alarms by inspecting audit logs for suspicious behavior. Unfortunately, evaluating these anomaly detectors is hard. There are few high-quality, publicly-available audit logs, and there are no pre-existing frameworks that enable push-button creation of realistic system traces. To make trace generation easier, we created Xanthus, an automated tool that orchestrates virtual machines to generate realistic audit logs. Using Xanthus' simple management interface, administrators select a base VM image, configure a particular tracing framework to use within that VM, and define post-launch scripts that collect and save trace data. Once data collection is finished, Xanthus creates a self-describing archive, which contains the VM, its configuration parameters, and the collected trace data. We demonstrate that Xanthus hides many of the tedious (yet subtle) orchestration tasks that humans often get wrong; Xanthus avoids mistakes that lead to non-replicable experiments.
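
    For a sense of the orchestration steps a tool like Xanthus automates, here is a hypothetical sketch (this is not Xanthus' actual interface): boot a VM, run a trace-collection command inside it, and bundle the configuration and the collected trace into a self-describing archive. The Vagrant commands, the auditd/ausearch collection step and the file names are all assumptions.

    # Hypothetical push-button trace collection; not Xanthus' real API.
    import json
    import subprocess
    import tarfile
    from pathlib import Path

    config = {
        "base_image": "ubuntu/focal64",           # assumed base VM image
        "tracer": "auditd",                       # assumed tracing framework
        "collect_cmd": "sudo ausearch -ts boot",  # assumed collection command
    }

    subprocess.run(["vagrant", "up"], check=True)            # provision the VM
    trace = subprocess.run(["vagrant", "ssh", "-c", config["collect_cmd"]],
                           capture_output=True, text=True, check=True).stdout
    Path("trace.log").write_text(trace)
    Path("config.json").write_text(json.dumps(config, indent=2))
    subprocess.run(["vagrant", "halt"], check=True)          # shut the VM down

    # Self-describing archive: configuration plus the collected trace.
    with tarfile.open("experiment.tar.gz", "w:gz") as tar:
        tar.add("config.json")
        tar.add("trace.log")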