
    Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study

    Cloud computing has emerged as a popular paradigm and an attractive model for providing reliable distributed computing. It is attracting increasing attention in both academic research and industrial initiatives. Cloud deployments are paramount for institutions and organizations of all scales. The availability of a flexible, free, open-source cloud platform designed with no proprietary software, together with the ability to integrate it with legacy systems and third-party applications, is fundamental. OpenStack is free and open-source software released under the terms of the Apache License, with a fragmented and distributed architecture that makes it highly flexible. This project was initiated with the aim of designing a secured cloud infrastructure called BradStack, built on OpenStack in the Computing Laboratory at the University of Bradford. In this report, we present and discuss the steps required to deploy a secured BradStack multi-node cloud infrastructure and to conduct penetration testing on OpenStack services to validate the effectiveness of the security controls on the BradStack platform. This report serves as a practical guideline, focusing on security and practical infrastructure-related issues. It also serves as a reference for institutions considering the implementation of a secured cloud solution. Comment: 38 pages, 19 figures
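    A first step in penetration testing an OpenStack deployment is checking which service API endpoints are exposed. The sketch below probes the standard default ports for a few core OpenStack services (Keystone 5000, Nova 8774, Glance 9292, Neutron 9696); a real deployment may remap these, and a real assessment would go far beyond a TCP reachability check.

```python
import socket

# Standard default API ports for common OpenStack services.
# A hardened deployment may remap or firewall these.
SERVICES = {
    "keystone": 5000,   # identity
    "nova": 8774,       # compute
    "glance": 9292,     # image
    "neutron": 9696,    # networking
}

def exposed_services(host, timeout=1.0):
    """Return the subset of SERVICES reachable on `host` via TCP connect."""
    reachable = {}
    for name, port in SERVICES.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                reachable[name] = port
    return reachable
```

Running this from outside the management network should ideally return an empty result for everything except the intended public endpoints.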

    NetGlance NMS - An integrated network monitoring system

    Double-degree master's programme with the Kuban State Agrarian University. This work concerns IT infrastructure and, in particular, the computer networks of KubSAU and IPB. It also describes "NetGlance NMS", a network monitoring system developed for the KubSAU System Administration Department. The objective of the work is to optimize the management of the information structure of both institutions. During the work, the following tasks were completed: surveying the existing IPB information structure; comparing the information structures of KubSAU and IPB; modelling the IPB computer network (topology, services); investigating bottlenecks and potential pitfalls in the IPB data center and computer network; studying the information security mechanisms in the IPB computer network; and organizing the monitoring process for the KubSAU computer network. The most important impact of the work is increased network productivity and improved user experience resulting from the creation and deployment of the monitoring software.
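    The core loop of a monitoring system like the one described is a periodic sweep over an inventory of hosts and services. This is a minimal sketch of that idea; the target names, addresses, and TCP-based health check are illustrative assumptions, not NetGlance's actual design.

```python
import socket
import time

def check(host, port, timeout=1.0):
    """TCP reachability check: True if a connection can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll(targets, checker=check):
    """One monitoring sweep; returns {name: (is_up, timestamp)}."""
    now = time.time()
    return {name: (checker(h, p), now) for name, (h, p) in targets.items()}

# Hypothetical inventory; a real NMS would load this from configuration.
TARGETS = {
    "dns": ("10.0.0.53", 53),
    "web": ("10.0.0.80", 80),
}
```

A production system would run `poll` on a schedule, persist the history, and raise alerts on state transitions rather than on every failed sweep.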

    Automated Verification of Virtualized Infrastructures

    Virtualized infrastructures and clouds present new challenges for security analysis and formal verification: they are complex environments that continuously change their shape, and that give rise to non-trivial security goals such as isolation and failure resilience requirements. We present a platform that connects declarative and expressive description languages with state-of-the-art verification methods. The languages integrate homogeneously descriptions of virtualized infrastructures, their transformations, their desired goals, and evaluation strategies. The different verification tools range from model checking to theorem proving; this allows us to exploit the complementary strengths of methods, and also to understand how to best represent the analysis problems in different contexts. We consider first the static case where the topology of the virtual infrastructure is fixed and demonstrate that our platform allows for the declarative specification of a large class of properties. Even though tools that are specialized to checking particular properties perform better than our generic approach, we show with a real-world case study that our approach is practically feasible. We finally consider also the dynamic case where the intruder can actively change the topology (by migrating machines). The combination of a complex topology and changes to it by an intruder is a problem that lies beyond the scope of previous analysis tools and to which we can give first positive verification results.
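    One of the security goals named above, tenant isolation, can be pictured as a reachability question over a graph model of the infrastructure: no resource of one tenant should be able to reach a resource of another. The sketch below is a toy illustration of that idea (the graph and tenant labels are invented), not the paper's actual description languages or verification tooling.

```python
from collections import deque

def reachable(graph, src):
    """BFS over a directed configuration/topology graph."""
    seen = {src}
    q = deque([src])
    while q:
        for nxt in graph.get(q.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                q.append(nxt)
    return seen

def isolation_violations(graph, tenant_of):
    """Pairs (a, b) where a resource of one tenant can reach another tenant's."""
    bad = []
    for node in graph:
        for other in reachable(graph, node) - {node}:
            if (tenant_of.get(node) and tenant_of.get(other)
                    and tenant_of[node] != tenant_of[other]):
                bad.append((node, other))
    return bad
```

For example, two VMs of different tenants attached to the same unpartitioned switch would show up as a violation pair.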

    Facilitating dynamic network control with software-defined networking

    This dissertation starts by recognizing that network management is a complex and error-prone task. The major causes are identified through interviews and systematic analysis of network configuration data on two large campus networks. This dissertation finds that network events and dynamic reactions to them should be programmatically encoded in the network control program by operators, and that some events should be handled automatically when the desired reaction is general. This dissertation presents two new solutions for managing and configuring networks using the Software-Defined Networking (SDN) paradigm: Kinetic and Coronet. Kinetic is a programming language and central control platform that allows operators to implement traffic control applications that react to various kinds of network events in a concise, intuitive way. The event-reaction logic is checked for correctness before deployment to prevent misconfigurations. Coronet is a data-plane failure recovery service for arbitrary SDN control applications. Coronet pre-plans primary and backup routing paths for any given topology; such pre-planning guarantees that Coronet can perform fast recovery when a failure occurs. Multiple techniques are used to ensure that the solution scales to large networks with more than 100 switches. Performance and usability evaluations show that both solutions are feasible and are strong alternatives to current mechanisms for reducing misconfigurations. Ph.D.
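    Kinetic's event-reaction logic is expressed as finite state machines whose states determine forwarding behavior. The toy sketch below captures that flavor in plain Python; the state names, events, and actions are illustrative assumptions and not Kinetic's actual language or API.

```python
# A per-host policy FSM in the spirit of Kinetic: network events drive
# state transitions, and the current state determines the forwarding action.
TRANSITIONS = {
    ("allow", "intrusion_alert"): "quarantine",
    ("quarantine", "host_cleaned"): "allow",
}
ACTIONS = {"allow": "forward", "quarantine": "drop"}

class HostPolicy:
    def __init__(self):
        self.state = "allow"

    def on_event(self, event):
        """Apply one event; unknown events leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return ACTIONS[self.state]
```

Because the whole policy is a finite transition table, properties like "a quarantined host never forwards traffic" can be model-checked before deployment, which is the correctness-checking idea the abstract describes.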

    Automating Cyber Analytics

    Model based security metrics are a growing area of cyber security research concerned with measuring the risk exposure of an information system. These metrics are typically studied in isolation, with the formulation of the test itself being the primary finding in publications. As a result, there is a flood of metric specifications available in the literature but a corresponding dearth of analyses verifying results for a given metric calculation under different conditions or comparing the efficacy of one measurement technique over another. The motivation of this thesis is to create a systematic methodology for model based security metric development, analysis, integration, and validation. In doing so we hope to fill a critical gap in the way we view and improve a system’s security. In order to understand the security posture of a system before it is rolled out and as it evolves, we present in this dissertation an end-to-end solution for the automated measurement of security metrics needed to identify risk early and accurately. To our knowledge this is a novel capability in design-time security analysis which provides the foundation for ongoing research into predictive cyber security analytics. Modern development environments contain a wealth of information in infrastructure-as-code repositories, continuous build systems, and container descriptions that could inform security models, but risk evaluation based on these sources is ad hoc at best, and often simply left until deployment. Our goal in this work is to lay the groundwork for security measurement to be a practical part of the system design, development, and integration lifecycle. In this thesis we provide a framework for the systematic validation of the existing security metrics body of knowledge. In doing so we endeavour not only to survey the current state of the art, but to create a common platform for future research in the area to be conducted.
    We then demonstrate the utility of our framework through the evaluation of leading security metrics against a reference set of system models we have created. We investigate how to calibrate security metrics for different use cases and establish a new methodology for security metric benchmarking. We further explore the research avenues unlocked by automation through our concept of an API-driven S-MaaS (Security Metrics-as-a-Service) offering. We review our design considerations in packaging security metrics for programmatic access, and discuss how various client access patterns are anticipated in our implementation strategy. Using existing metric processing pipelines as reference, we show how the simple, modular interfaces in S-MaaS support dynamic composition and orchestration. Next, we review aspects of our framework which can benefit from optimization and further automation through machine learning. First we create a dataset of network models labeled with the corresponding security metrics. By training classifiers to predict security values based only on network inputs, we can avoid the computationally expensive attack graph generation steps. We use our findings from this simple experiment to motivate our current lines of research into supervised and unsupervised techniques such as network embeddings, interaction rule synthesis, and reinforcement learning environments. Finally, we examine the results of our case studies. We summarize our security analysis of a large scale network migration, and list the friction points along the way which are remediated by this work. We relate how our research for a large-scale performance benchmarking project has influenced our vision for the future of security metrics collection and analysis through DevOps automation. We then describe how we applied our framework to measure the incremental security impact of running a distributed stream processing system inside a hardware trusted execution environment.
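    The labeling step in the machine-learning experiment above needs a ground-truth metric computed from each network model, alongside cheap structural features a classifier can learn from instead. This sketch uses an invented toy metric, shortest attacker hop count, as a stand-in for the expensive attack-graph computation; the thesis's actual metrics and features are not specified here.

```python
from collections import deque

def shortest_attack_path(adj, entry, target):
    """Toy security metric: hop count of the shortest attacker path from
    an entry point to a critical asset (stand-in for attack-graph analysis)."""
    dist = {entry: 0}
    q = deque([entry])
    while q:
        u = q.popleft()
        if u == target:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return None  # target unreachable: no attack path in this model

def features(adj):
    """Cheap structural features (nodes, edges, mean out-degree) that a
    classifier could use to predict the metric without building the graph
    analysis at all."""
    n = len(adj)
    e = sum(len(v) for v in adj.values())
    return (n, e, e / max(n, 1))
```

A dataset would then pair `features(model)` with `shortest_attack_path(model, ...)` across many generated models, exactly the structure a supervised learner needs.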

    Management of operating system hardening configuration in an automation environment

    Hardening improves security by removing unnecessary features from the system. Hardening can be performed for a network, a device, an operating system, and single applications. As virtualization is added, the virtualization environment must also be hardened. In this thesis, the focus is on operating system hardening and its management. Frequent operating system updates cause system changes that make hardening management challenging. System hardening is presented using the ICS lifecycle model. This includes tasks such as designing the hardening configuration, implementation and testing, and maintaining the system hardening. To make implementation and maintenance of the hardening configuration possible, two PowerShell scripts were created: one for automating hardening and another for auditing Windows hosts. The scripts use a new hardening configuration template which is designed in this thesis. As a result, effective scripts were implemented, though some features had to be dropped due to a lack of proper tools. Discarded features and other development ideas are presented in the further development section. Additionally, several challenges in hardening and using Windows 10 in control systems are observed in this thesis. The most notable discovery is that Windows 10 restores hardened settings and even broke system operation without any apparent reason. For this reason, the hardening configuration should be monitored and its management continued throughout the system's lifecycle.
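    The thesis implements its audit as a PowerShell script against Windows hosts; the language-agnostic sketch below shows the underlying idea, comparing the live configuration against a hardening baseline and reporting drift. The setting names are illustrative, not the thesis's actual template.

```python
# Illustrative hardening baseline; a real template (e.g. derived from a
# published benchmark) would contain hundreds of settings.
BASELINE = {
    "smb1_enabled": False,
    "rdp_nla_required": True,
    "guest_account_enabled": False,
}

def audit(current, baseline=BASELINE):
    """Return {setting: (expected, actual)} for every deviation from the
    baseline, including settings missing from the current configuration."""
    return {k: (v, current.get(k)) for k, v in baseline.items()
            if current.get(k) != v}
```

Re-running such an audit after every OS update is precisely what catches the behavior the abstract reports, where Windows 10 silently restores previously hardened settings.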

    Automated Security Analysis of Virtualized Infrastructures

    Virtualization enables the increasing efficiency and elasticity of modern IT infrastructures, including Infrastructure as a Service. However, the operational complexity of virtualized infrastructures is high, due to their dynamics, multi-tenancy, and size. Misconfigurations and insider attacks carry significant operational and security risks, such as breaches in tenant isolation, which put both the infrastructure provider and tenants at risk. In this thesis we study whether it is possible to model and analyze complex, scalable, and dynamic virtualized infrastructures with regard to user-defined security and operational policies in an automated way. We establish a new practical and automated security analysis framework for virtualized infrastructures. First, we propose a novel tool that automatically extracts the configuration of heterogeneous environments and builds up a unified graph model of the configuration and topology. The tool is further extended with a monitoring component and a set of algorithms that translate system changes into graph model changes. The benefits of maintaining such a dynamic model are reduced model-population time and a narrower window in which transient security violations go undetected. Our analysis is the first to lift static information flow analysis to the entire virtualized infrastructure, in order to detect isolation failures between tenants on all resources. The analysis is configurable using customized rules to reflect the different trust assumptions of the users. We apply and evaluate our analysis system on the production infrastructure of a global financial institution. For the information flow analysis of dynamic infrastructures we propose the concept of dynamic rule-based information flow graphs and develop a set of algorithms that maintain such information flow graphs for dynamic system models.
    We generalize the analysis of isolation properties and establish a new generic analysis platform for virtualized infrastructures that allows expressing a diverse set of security and operational policies in a formal language. The policy requirements are studied in a case study with a cloud service provider. We are the first to employ a variety of theorem provers and model checkers to verify the state of a virtualized infrastructure against its policies. Additionally, we analyze dynamic behavior such as VM migrations. For the analysis of dynamic infrastructures we pursue both a reactive and a proactive approach. A reactive analysis system is developed that reduces the time between a system change and the analysis result. The system monitors the infrastructure for changes and employs dynamic information flow graphs to verify, for instance, tenant isolation. For the proactive analysis we propose a new model, the Operations Transition Model, which captures the changes of operations in the virtualized infrastructure as graph transformations. We build a novel analysis system using this model that performs automated run-time analysis of operations and also offers change planning. The operations transition model forms the basis for further research in model checking of virtualized infrastructures.
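    The reactive approach described above hinges on translating a system change, such as a VM migration, into an incremental update of the graph model, then re-checking policies on the updated model instead of rebuilding it. This is a minimal sketch of that pattern; the model shape (hosts mapping to sets of VMs) and the co-location policy are invented for illustration.

```python
# Sketch of reactive model maintenance: apply a change event to the graph
# model in place, then re-run a policy check on the updated model.

def apply_migration(model, vm, src_host, dst_host):
    """Translate a 'VM migrated' event into a model update."""
    model[src_host].discard(vm)
    model[dst_host].add(vm)

def colocated(model, vm_a, vm_b):
    """Example policy predicate: do two VMs share a physical host?
    (A tenant-isolation policy might forbid this for rival tenants.)"""
    return any(vm_a in vms and vm_b in vms for vms in model.values())
```

The payoff is that each monitored event costs one small update plus one check, rather than a full re-extraction of the infrastructure configuration.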

    Facilitating Data Driven Research Through a Hardware- and Software-Based Cyberinfrastructure Architecture

    Cyberinfrastructure is the backbone of research and modern industry. As such, to have an environment conducive to research advancements, cyberinfrastructure must be well maintained and accessible to all researchers. Presented in this thesis is a method of centralizing aspects of cyberinfrastructure to allow for ease of collaboration and data management by researchers without requiring these researchers to manage the involved systems themselves. This centralized architecture includes dedicated machines for data transfers, a cluster designed to run microservices surrounding the method, a dashboard for performance and health monitoring, and network telemetry collection. As system administrators are responsible for maintaining the systems in place, a user study was conducted to assess the functionality of the dashboard they would use to receive alerts and to quickly gauge the status of the involved hardware. This thesis aims to provide a template for deploying centralized data transfer cyberinfrastructure and a manual for utilizing these systems to support data-driven research.
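    Behind a health-monitoring dashboard of the kind evaluated in the user study sits a simple rule: compare each incoming telemetry sample against per-metric limits and surface anything out of range. The metric names and thresholds below are assumptions for illustration, not the thesis's actual configuration.

```python
# Illustrative alert thresholds for a data-transfer node.
THRESHOLDS = {"cpu_pct": 90.0, "disk_pct": 85.0, "xfer_errors": 1}

def alerts(sample, thresholds=THRESHOLDS):
    """Return one alert string for every metric at or above its limit;
    metrics absent from the sample are treated as zero."""
    return [f"{k}={sample[k]} exceeds {v}"
            for k, v in thresholds.items()
            if sample.get(k, 0) >= v]
```

A dashboard would render these alerts prominently and push them to administrators, so the hardware status can be gauged at a glance, as the abstract describes.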

    Bearicade: A Novel High-Performance Computing User and Security Management System Augmented with Machine Learning Technology

    Despite the rising development and popularity of HPC systems, there have been insufficient advancements in the security of HPC systems. The substantial computational power, high-bandwidth networks, and massive storage capacity provided in the HPC environment are desirable targets for attackers. The majority of educational-institution HPC centres provide their users with simple access methods that lack modern security safeguards, increasing the systems' exposure to modern cyber-attacks. Current implementations of HPC access points, such as web portals, offer users direct access to the HPC systems; consequently, such web portal implementations expose the HPC system to the same security challenges faced by cloud providers and web applications. Although attempts have been made to secure HPC systems, most of these implementations are outdated, fall short of current security standards, or do not integrate well with modern HPC access solutions. To address these security issues, Bearicade, a novel High-Performance Computing (HPC) user and security management system, was designed, developed, implemented, and evaluated. Bearicade is a data-driven, secure, unified framework for managing HPC users and system security. This framework is an add-on layer to existing HPC system software, collecting over 50 different types of information from multiple sources within the HPC systems. It offers AI-based security solutions with added usability and accessibility, without adversely affecting the performance and functionality of the HPC systems. Throughout this study, the security and usability of Bearicade were validated by implementing multiple machine learning models. It has been deployed for over three years as a production system for students and researchers at the University of Huddersfield QueensGate Grid (QGG) with considerable success, protecting the QGG systems from the summer 2020 attacks that affected many other HPC systems in research and educational establishments.
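    One simple example of the kind of data-driven detection such a layer can build on the telemetry it collects is statistical outlier scoring, for instance over per-host failed-login counts. The sketch below is a toy z-score detector; Bearicade's actual models and features are not described here and are certainly more sophisticated.

```python
import math

def zscores(counts):
    """Standardize each host's count against the fleet mean."""
    vals = list(counts.values())
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    sd = math.sqrt(var) or 1.0  # avoid division by zero on uniform data
    return {k: (v - mean) / sd for k, v in counts.items()}

def suspicious(counts, threshold=1.5):
    """Hosts whose count sits well above the fleet average."""
    return [k for k, z in zscores(counts).items() if z > threshold]
```

In production, the same scoring would run continuously over the 50+ collected data types, feeding alerts into the user and security management workflow.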