1,825 research outputs found

    Technical Report on Deploying a highly secured OpenStack Cloud Infrastructure using BradStack as a Case Study

    Full text link
    Cloud computing has emerged as a popular paradigm and an attractive model for providing reliable distributed computing. It is increasingly attracting attention in both academic research and industrial initiatives. Cloud deployments are paramount for institutions and organizations of all scales. The availability of a flexible, free, open-source cloud platform designed with no proprietary software, and its ability to integrate with legacy systems and third-party applications, are fundamental. OpenStack is free and open-source software released under the terms of the Apache license, with a modular, distributed architecture that makes it highly flexible. This project was initiated with the aim of designing a secured cloud infrastructure called BradStack, built on OpenStack in the Computing Laboratory at the University of Bradford. In this report, we present and discuss the steps required to deploy a secured BradStack multi-node cloud infrastructure and to conduct penetration testing on OpenStack services to validate the effectiveness of the security controls on the BradStack platform. This report serves as a practical guideline, focusing on security and practical infrastructure-related issues. It also serves as a reference for institutions exploring the possibility of implementing a secured cloud solution.
    Comment: 38 pages, 19 figures
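
    A minimal sketch of the kind of instance-level control such a hardened deployment relies on (and that a penetration test would probe): restricting SSH access with an OpenStack security group via openstacksdk. The cloud name "bradstack" and the management CIDR are illustrative assumptions, not taken from the report.

    # Sketch only: lock SSH down to a management subnet instead of 0.0.0.0/0.
    import openstack

    conn = openstack.connect(cloud="bradstack")  # hypothetical clouds.yaml entry

    # Dedicated security group rather than loosening the project default.
    sg = conn.network.create_security_group(
        name="mgmt-ssh-only",
        description="SSH restricted to the management subnet",
    )

    # Allow TCP/22 only from the management subnet; all other ingress stays denied.
    conn.network.create_security_group_rule(
        security_group_id=sg.id,
        direction="ingress",
        ethertype="IPv4",
        protocol="tcp",
        port_range_min=22,
        port_range_max=22,
        remote_ip_prefix="10.0.0.0/24",  # illustrative management CIDR
    )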

    Experimental Study and Performance Analysis of Cloud Computing Architectures for Industrial Control Systems

    Get PDF
    This thesis proposes an Open-Source Cloud Computing Infrastructure (OpenStack) based cloud computing architecture for industrial control systems, called the OpenStack-supported virtualized controller. The underlying virtualization technology is QEMU with the Real-Time Kernel-based Virtual Machine (KVM-rt). Through literature research, practical integration, and systematic experiments and evaluation, the feasibility of the OpenStack-supported virtualized controller has been verified. The Key Performance Indicator (KPI) used in this verification is the control-loop latency. The communication between the OpenStack-supported virtualized controller and the control target is carried over a User Datagram Protocol (UDP) based industrial control protocol called Network Variables, and both wired networks (e.g., Industrial Ethernet) and wireless networks (e.g., Wi-Fi 6) between the controller and the control target are covered. Analysis of the experiment results identified three factors that can significantly impact the performance of the OpenStack-supported virtualized controller: the network medium, the number of Virtual Central Processing Units (vCPUs) of the OpenStack Virtual Machine (VM), and the cycle time set for the controller. Furthermore, a more advanced architecture is envisioned: an OpenStack and Kubernetes-based cloud computing architecture called the OpenStack-supported containerized controller. Both virtualization and containerization technologies are applied to this controller; the virtualization components are QEMU and KVM-rt, and the containerization tool is Docker Engine. As the software Programmable Logic Controller (PLC) used in this thesis does not officially support containerization, several strategies were used to bypass these restrictions. Preliminary experiments have been conducted to verify the feasibility of the OpenStack-supported containerized controller. As with the virtualized controller, the KPI is the control-loop latency, the communication with the control target is carried over the UDP-based Network Variables protocol, and both wired (e.g., Industrial Ethernet) and wireless (e.g., Wi-Fi 6) networks are covered. The experiment results confirm the feasibility of applying containerization to industrial control systems; thus, the OpenStack-supported containerized controller could be put into practice once the software PLC officially supports containerization.
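
    A minimal, standard-library sketch of how a control-loop latency KPI over UDP can be measured; this is not the thesis toolchain, and the echo endpoint address and 4 ms cycle time are illustrative assumptions.

    # Sketch only: UDP round-trip time as a stand-in for control-loop latency.
    import socket
    import statistics
    import time

    PEER = ("192.0.2.10", 5005)   # hypothetical control-target echo endpoint
    CYCLE_TIME = 0.004            # illustrative 4 ms controller cycle

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)

    latencies = []
    for seq in range(1000):
        payload = seq.to_bytes(4, "big")
        t0 = time.perf_counter()
        sock.sendto(payload, PEER)
        data, _ = sock.recvfrom(64)          # wait for the echoed datagram
        latencies.append(time.perf_counter() - t0)
        time.sleep(CYCLE_TIME)               # emulate the controller cycle

    print(f"mean {statistics.mean(latencies) * 1e3:.3f} ms, "
          f"p99 {statistics.quantiles(latencies, n=100)[98] * 1e3:.3f} ms")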

    On the Fly Orchestration of Unikernels: Tuning and Performance Evaluation of Virtual Infrastructure Managers

    Full text link
    Network operators are facing significant challenges in meeting the demand for more bandwidth, agile infrastructures, and innovative services, while keeping costs low. Network Functions Virtualization (NFV) and Cloud Computing are emerging as key trends of 5G network architectures, providing flexibility, fast instantiation times, support for Commercial Off The Shelf hardware, and significant cost savings. NFV leverages Cloud Computing principles to move data-plane network functions from expensive, closed, and proprietary hardware to so-called Virtual Network Functions (VNFs). In this paper we deal with the management of virtual computing resources (Unikernels) for the execution of VNFs. This functionality is performed by the Virtual Infrastructure Manager (VIM) in the NFV MANagement and Orchestration (MANO) reference architecture. We discuss the instantiation process of virtual resources and propose a generic reference model, starting from the analysis of three open source VIMs, namely OpenStack, Nomad and OpenVIM. We extend the aforementioned VIMs with support for special-purpose Unikernels, aiming at reducing the duration of the instantiation process. We evaluate some performance aspects of the VIMs, considering both stock and tuned versions. The VIM extensions and performance evaluation tools are available under a liberal open source licence.
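
    A minimal sketch of the instantiation measurement the paper is concerned with, here against OpenStack through openstacksdk: time from the create request until the instance reaches ACTIVE. The cloud name and the image, flavor, and network identifiers are placeholders, not values from the paper.

    # Sketch only: time the VIM instantiation path for one instance.
    import time
    import openstack

    conn = openstack.connect(cloud="devstack")  # hypothetical clouds.yaml entry

    t0 = time.perf_counter()
    server = conn.compute.create_server(
        name="unikernel-test",
        image_id="IMAGE_UUID",        # placeholder, e.g. a unikernel image
        flavor_id="FLAVOR_UUID",      # placeholder
        networks=[{"uuid": "NET_UUID"}],
    )
    server = conn.compute.wait_for_server(server)  # blocks until ACTIVE or errors
    print(f"instantiation took {time.perf_counter() - t0:.2f} s")

    conn.compute.delete_server(server)  # clean up the test instance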

    Leveraging OpenStack and Ceph for a Controlled-Access Data Cloud

    Full text link
    While traditional HPC has satisfied, and continues to satisfy, most workflows, a new generation of researchers has emerged looking for sophisticated, scalable, on-demand, and self-service control of compute infrastructure in a cloud-like environment. Many also seek safe harbors to operate on or store sensitive and/or controlled-access data in a high-capacity environment. To cater to these modern users, the Minnesota Supercomputing Institute designed and deployed Stratus, a locally-hosted cloud environment powered by the OpenStack platform and backed by Ceph storage. The subscription-based service complements existing HPC systems by satisfying the following unmet needs of our users: a) on-demand availability of compute resources, b) long-running jobs (i.e., > 30 days), c) container-based computing with Docker, and d) adequate security controls to comply with controlled-access data requirements. This document provides an in-depth look at the design of Stratus with respect to security and compliance with the NIH's controlled-access data policy. Emphasis is placed on lessons learned while integrating OpenStack and Ceph features into a so-called "walled garden", and how those technologies influenced the security design. Many features of Stratus, including tiered secure storage with the introduction of a controlled-access data "cache", fault-tolerant live-migrations, and fully integrated two-factor authentication, depend on recent OpenStack and Ceph features.
    Comment: 7 pages, 5 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22--26, 2018, Pittsburgh, PA, US
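
    A minimal sketch of the user-facing side of such a Ceph-backed cloud: provisioning a block volume through the OpenStack API and waiting for the backend to report it usable. The cloud name "stratus" and the volume type are illustrative assumptions; actual type names are site-specific.

    # Sketch only: create a Ceph-backed block volume via openstacksdk.
    import openstack

    conn = openstack.connect(cloud="stratus")  # hypothetical clouds.yaml entry

    volume = conn.block_storage.create_volume(
        name="project-data",
        size=100,                       # GiB
        volume_type="ceph-standard",    # illustrative Ceph-backed type name
    )

    # Block until the backend reports the volume as usable (or errored).
    volume = conn.block_storage.wait_for_status(
        volume, status="available", failures=["error"], interval=2, wait=300
    )
    print(volume.id, volume.status)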

    Algorithms for advance bandwidth reservation in media production networks

    Get PDF
    Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
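
    A toy sketch of an advance-reservation ILP, far simpler than the paper's formulation: a single link, requests with known time windows and bandwidth demands, and a binary admission variable per request, expressed with PuLP. The capacity, time horizon, and request data are illustrative; the paper's model additionally covers routing and other media-production specifics.

    # Sketch only: admit as much demand as fits the link in every time slot.
    from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum, value

    CAPACITY = 10          # link capacity in Gb/s (illustrative)
    HORIZON = range(24)    # one-hour slots over a day

    # request -> (start_slot, end_slot_exclusive, demand_gbps), illustrative data
    requests = {"r1": (0, 6, 4), "r2": (3, 9, 6), "r3": (5, 12, 5), "r4": (10, 20, 8)}

    prob = LpProblem("advance_bandwidth_reservation", LpMaximize)
    x = {r: LpVariable(f"x_{r}", cat=LpBinary) for r in requests}

    # Objective: maximize the total demand of admitted reservations.
    prob += lpSum(requests[r][2] * x[r] for r in requests)

    # Capacity constraint per time slot: admitted demands must fit the link.
    for t in HORIZON:
        prob += lpSum(
            requests[r][2] * x[r]
            for r in requests
            if requests[r][0] <= t < requests[r][1]
        ) <= CAPACITY

    prob.solve()
    print({r: int(value(x[r])) for r in requests})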