198 research outputs found

    Intrusion Detection in Containerized Environments

    In this paper, we present the results of using Hidden Markov Models to learn the behavior of Docker containers, for use in an anomaly-detection-based intrusion detection system. Containers provide isolation between the host system and the containerized environment by efficiently packaging applications along with their dependencies; in this way, containers become a portable software environment in which applications can run and scale. Unlike virtual machines, containers share the kernel of the host operating system. We leverage this to monitor a container's system calls from the host for anomaly detection. Thus, the monitoring system requires no knowledge of the container's contents, and neither the host system nor the monitored container needs to be modified.
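The core idea — learn transition statistics over syscall sequences from normal traces, then flag sequences the model finds improbable — can be sketched with a plain first-order Markov chain. This is a simplified stand-in for the paper's Hidden Markov Models; the class, syscall names, and smoothing scheme below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict
import math

class SyscallMarkovModel:
    """First-order Markov model over syscall sequences (a toy stand-in
    for the HMMs described in the abstract)."""

    def __init__(self, smoothing=1e-6):
        self.smoothing = smoothing
        self.counts = defaultdict(lambda: defaultdict(int))  # prev -> next -> count
        self.vocab = set()

    def fit(self, traces):
        # Count syscall-to-syscall transitions in normal (training) traces.
        for trace in traces:
            for prev, cur in zip(trace, trace[1:]):
                self.counts[prev][cur] += 1
                self.vocab.update((prev, cur))

    def log_likelihood(self, trace):
        # Average per-transition log-probability; low values signal anomalies.
        total = 0.0
        for prev, cur in zip(trace, trace[1:]):
            row = self.counts[prev]
            denom = sum(row.values()) + self.smoothing * len(self.vocab)
            total += math.log((row.get(cur, 0) + self.smoothing) / denom)
        return total / max(len(trace) - 1, 1)

# Train on repeated "normal" behavior, then score a normal and an odd trace.
normal_traces = [["open", "read", "close"]] * 50
model = SyscallMarkovModel()
model.fit(normal_traces)
normal_score = model.log_likelihood(["open", "read", "close"])
odd_score = model.log_likelihood(["open", "execve", "socket"])
```

A deployed detector would compare such scores against a threshold calibrated on held-out normal traces; sequences scoring below it are reported as anomalous.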

    A Costing Framework for the Dynamic Computational Efficiency of the Network Security Detection Function

    This study developed a comprehensive framework to systematically evaluate the economic implications of security policy implementation in IT-centric business processes. Focusing on the detection function of the NIST Cybersecurity Framework, the research explored the interrelation between business operations, computational efficiency, and security protocols. The framework comprises nine components and addresses the gap between cost projections and security policy enforcement. The insights provide valuable perspectives on managing security expenses and resource allocation in information security, ensuring alignment with revenue and expenditure outcomes while emphasizing the need for a comprehensive approach to cost management in information security management.

    In Search of netUnicorn: A Data-Collection Platform to Develop Generalizable ML Models for Network Security Problems

    The remarkable success of machine learning-based solutions for network security problems has been impeded by the developed ML models' inability to maintain efficacy when used in different network environments exhibiting different network behaviors. This issue is commonly referred to as the generalizability problem of ML models. The community has recognized the critical role that training datasets play in this context and has developed various techniques to improve dataset curation to overcome this problem. Unfortunately, these methods are generally ill-suited or even counterproductive in the network security domain, where they often result in unrealistic or poor-quality datasets. To address this issue, we propose an augmented ML pipeline that leverages explainable ML tools to guide network data collection in an iterative fashion. To ensure the data's realism and quality, we require that new datasets be endogenously collected in this iterative process, thus advocating a gradual removal of data-related problems to improve model generalizability. To realize this capability, we develop a data-collection platform, netUnicorn, that takes inspiration from the classic "hourglass" model and is implemented as its "thin waist" to simplify data collection for different learning problems from diverse network environments. The proposed system decouples data-collection intents from the deployment mechanisms and disaggregates these high-level intents into smaller, reusable, self-contained tasks. We demonstrate how netUnicorn simplifies collecting data for different learning problems from multiple network environments and how the proposed iterative data collection improves a model's generalizability.
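The "intents disaggregated into reusable tasks" idea can be sketched as a small pipeline abstraction: a high-level data-collection intent is just an ordered list of self-contained tasks, and the deployment backend that ships the pipeline to network nodes is kept separate. The class and task names below are invented for illustration; the real netUnicorn API differs.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Task:
    """A small, self-contained unit of work; reads and extends a shared context."""
    name: str
    run: Callable[[Dict], Dict]

@dataclass
class Pipeline:
    """A high-level data-collection intent, disaggregated into tasks.
    Execution is decoupled from deployment: a backend could ship the same
    pipeline to many network environments; here we just run it locally."""
    tasks: List[Task] = field(default_factory=list)

    def then(self, task: Task) -> "Pipeline":
        self.tasks.append(task)
        return self

    def execute(self, context: Dict) -> Dict:
        for task in self.tasks:
            context = task.run(context)
        return context

# Compose an intent ("measure a target, then label the sample") from tasks.
ping = Task("ping", lambda ctx: {**ctx, "rtt_ms": 12.0})   # stubbed measurement
label = Task("label", lambda ctx: {**ctx, "label": "benign"})
result = Pipeline().then(ping).then(label).execute({"target": "10.0.0.1"})
```

Because tasks only communicate through the context dictionary, the same tasks can be recombined into new intents, which is the reuse property the abstract emphasizes.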

    Scalability Assessment of Microservice Architecture Deployment Configurations: A Domain-based Approach Leveraging Operational Profiles and Load Tests

    Microservices have emerged as an architectural style for developing distributed applications. Assessing the performance of architecture deployment configurations — e.g., with respect to deployment alternatives — is challenging and must be aligned with the system usage in the production environment. In this paper, we introduce an approach for using operational profiles to generate load tests to automatically assess scalability pass/fail criteria of microservice configuration alternatives. The approach provides a Domain-based metric for each alternative that can, for instance, be applied to make informed decisions about the selection of alternatives and to conduct production monitoring regarding performance-related system properties, e.g., anomaly detection. We have evaluated our approach using extensive experiments in a large bare metal host environment and a virtualized environment. First, the data presented in this paper supports the need to carefully evaluate the impact of increasing the level of computing resources on performance. Specifically, for the experiments presented in this paper, we observed that the evaluated Domain-based metric is a non-increasing function of the number of CPU resources for one of the environments under study. In a subsequent series of experiments, we investigate the application of the approach to assess the impact of security attacks on the performance of architecture deployment configurations.
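One plausible reading of a domain-based pass/fail assessment is: draw workload intensities ("domains") from the operational profile, run a load test per domain, check each measured latency against an SLO, and report the profile-weighted fraction of domains that pass. The function, numbers, and exact metric definition below are assumptions for illustration, not the paper's formulas.

```python
def domain_metric(measurements, slo_ms):
    """Profile-weighted fraction of workload domains meeting the SLO.

    measurements: load level -> (observed latency in ms, operational-profile weight)
    """
    passed = sum(weight for latency, weight in measurements.values()
                 if latency <= slo_ms)
    total = sum(weight for _, weight in measurements.values())
    return passed / total

# Hypothetical load-test results: three load levels with their measured
# p95 latency and how often each load level occurs in production.
profile = {
    10: (80.0, 0.5),
    50: (120.0, 0.3),
    100: (300.0, 0.2),  # this domain violates the 200 ms SLO
}
score = domain_metric(profile, slo_ms=200.0)
```

Comparing such scores across configuration alternatives (e.g., different CPU allocations) is what supports the selection decisions the abstract describes.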

    Deployment and Operation of Complex Software in Heterogeneous Execution Environments

    This open access book provides an overview of the work developed within the SODALITE project, which aims at facilitating the deployment and operation of distributed software on top of heterogeneous infrastructures, including cloud, HPC, and edge resources. The experts participating in the project describe how SODALITE works and how it can be exploited by end users. While multiple languages and tools are available in the literature to support DevOps teams in automating deployment and operation steps, these activities still require specific know-how and skills that cannot be found in average teams. The SODALITE framework tackles this problem by offering modelling and smart-editing features that allow those we call Application Ops Experts to work without knowing low-level details about the adopted, potentially heterogeneous, infrastructures. The framework also offers mechanisms to verify the quality of the defined models, generate the corresponding executable infrastructural code, automatically wrap application components within proper execution containers, orchestrate all activities concerned with the deployment and operation of all system components, and support on-the-fly self-adaptation and refactoring.

    CREATING SYNTHETIC ATTACKS WITH EVOLUTIONARY ALGORITHMS FOR INDUSTRIAL-CONTROL-SYSTEM SECURITY TESTING

    Cybersecurity defenders can use honeypots (decoy systems) to capture and study adversarial activities. An issue with honeypots is obtaining enough data on rare attacks. To improve data collection, we created a tool that uses machine learning to generate plausible artificial attacks on two protocols, Hypertext Transfer Protocol (HTTP) and IEC 60870-5-104 (“IEC 104” for short, an industrial-control-system protocol). It uses evolutionary algorithms to create new variants of two cyberattacks: Log4j exploits (described in CVE-2021-44228 as severely critical) and the Industroyer2 malware (allegedly used in Russian attacks on Ukrainian power grids). Our synthetic attack generator (SAGO) effectively created synthetic attacks at success rates of up to 70 and 40 percent for Log4j and IEC 104, respectively. We tested over 5,200 unique variations of Log4j exploits and 256 unique variations of the approach used by Industroyer2. Based on a power-grid honeypot’s response to these attacks, we identified changes to improve interactivity, which should entice intruders to mount more revealing attacks and aid defenders in hardening against new attack variants. This work provides a technique to proactively identify cybersecurity weaknesses in critical infrastructure and Department of Defense assets.
    Captain, United States Marine Corps. Approved for public release; distribution is unlimited.
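The evolutionary mechanism — mutate a seed payload, score each variant, and keep the fittest for the next generation — can be shown with a deliberately small genetic loop. The seed string, mutation alphabet, and fitness function below are toy stand-ins (real fitness would come from a target's response); this is not SAGO's actual implementation.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz${}:/."
SEED = "${jndi:ldap://x.example/a}"  # illustrative Log4j-style payload shape

def mutate(payload, rate=0.1):
    # Replace each character with a random one at the given rate.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in payload)

def fitness(payload):
    # Stub: reward variants that keep the structural markers an exploit
    # parser would look for, so the variant stays "plausible".
    return sum(token in payload for token in ("${", "jndi", "}"))

def evolve(seed, generations=20, pop_size=30):
    # Elitist loop: parents and mutated offspring compete; the top
    # pop_size individuals by fitness survive each generation.
    population = [seed] * pop_size
    for _ in range(generations):
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:pop_size]
    return population[0]

random.seed(0)
best = evolve(SEED)
```

In the tool described above, the surviving variants would then be replayed against the honeypot, and its responses would both score the variants and reveal where the honeypot's interactivity breaks down.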