    Cloud Computing cost and energy optimization through Federated Cloud SoS

    2017 Fall. Includes bibliographical references. The two most significant differentiators among contemporary Cloud Computing service providers are increased green energy use and datacenter resource utilization. This work addresses these two issues from a systems-architecture optimization viewpoint. The proposed approach allows multiple cloud providers to utilize their individual computing resources in three ways: (1) cutting the number of datacenters needed, (2) scheduling available datacenter grid energy via aggregators to reduce costs and power outages, and (3) utilizing, where appropriate, more renewable and carbon-free energy sources. Altogether, the proposed approach creates an alternative paradigm, a Federated Cloud SoS. This paradigm employs a novel control methodology tuned to obtain both financial and environmental advantages. It also supports dynamic expansion and contraction of computing capabilities to handle sudden variations in service demand and to maximize the use of time-varying green energy supplies. This work analyzes the core SoS requirements, concept synthesis, and functional architecture with an eye toward avoiding inadvertent cascading conditions, and it suggests a physical architecture that simulates the primary SoS emergent behavior to diminish unwanted outcomes while encouraging desirable results. In this approach, the constituent cloud services retain their independent ownership, objectives, funding, and sustainability means. The report analyzes optimal computing generation methods, optimal energy utilization for computing generation, and a procedure for building optimal datacenters using a unique hardware computing system design based on the openCompute community as an illustrative collaboration platform. Finally, the research concludes with the security features a cloud federation must support to protect its constituents, its constituents' tenants, and itself from security risks.
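
    As a rough illustration of the green-energy-aware scheduling idea described above, the sketch below shows a greedy placement policy that favors federation members with unused renewable supply and cheaper grid energy. It is a minimal sketch only: the datacenter names, capacities, and prices are hypothetical, and it is not the control methodology developed in the work itself.

        # Minimal greedy placement sketch (hypothetical data, not the work's algorithm):
        # prefer datacenters with spare capacity and the largest unused green supply,
        # breaking ties on the cheapest grid price.
        from __future__ import annotations
        from dataclasses import dataclass

        @dataclass
        class Datacenter:
            name: str
            capacity_kw: float   # total compute power budget
            load_kw: float       # power currently in use
            green_kw: float      # renewable supply available right now
            grid_price: float    # cost per kWh of non-renewable grid energy

            def headroom(self) -> float:
                return self.capacity_kw - self.load_kw

        def place(job_kw: float, federation: list[Datacenter]) -> Datacenter | None:
            """Pick a datacenter for a job, favoring unused green energy."""
            candidates = [dc for dc in federation if dc.headroom() >= job_kw]
            if not candidates:
                return None  # the federation would need to expand capacity
            best = max(candidates,
                       key=lambda dc: (dc.green_kw - dc.load_kw, -dc.grid_price))
            best.load_kw += job_kw
            return best

        if __name__ == "__main__":
            federation = [
                Datacenter("dc-west", 1000, 700, 900, 0.12),
                Datacenter("dc-east", 1000, 400, 300, 0.09),
            ]
            chosen = place(50, federation)
            print(chosen.name if chosen else "no capacity in federation")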

    Rule-Based Security Monitoring of Containerized Environments

    Containers have to be secured in a multi-tenant environment. To secure the use of containerized environments, the effectiveness of a rule-based security monitoring approach has been investigated. The approach of this paper can be used to detect a wide range of potentially malicious behaviour of workloads in containerized environments. Additionally, it is able to monitor the actual container runtime for misuse and misconfiguration. In order to evaluate the detection capabilities of the open-source tools utilized in a container, various scenarios of undesired behaviour are closely examined. In addition, the performance overhead and functional limitations associated with workload monitoring are discussed. The proposed approach is effective in many of the scenarios examined, and its performance overhead is adequate if appropriate event filtering is applied.
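
    The following minimal sketch illustrates the general idea of rule-based monitoring of container runtime events. The event schema, field names, and example rules are assumptions made for illustration; they do not reproduce the paper's actual tooling or rule set.

        # Minimal rule-based event matching sketch (illustrative only; the event
        # fields and rules are assumptions, not the paper's actual configuration).
        from dataclasses import dataclass
        from typing import Callable, Iterable, Iterator, Tuple

        @dataclass
        class Event:
            container: str
            process: str
            syscall: str
            path: str = ""

        @dataclass
        class Rule:
            name: str
            predicate: Callable[[Event], bool]

        RULES = [
            Rule("shell spawned in container",
                 lambda e: e.syscall == "execve" and e.process in {"sh", "bash"}),
            Rule("write below /etc",
                 lambda e: e.syscall == "openat" and e.path.startswith("/etc/")),
        ]

        def monitor(events: Iterable[Event]) -> Iterator[Tuple[Rule, Event]]:
            """Yield (rule, event) pairs for every event matching a rule."""
            for event in events:
                for rule in RULES:
                    if rule.predicate(event):
                        yield rule, event

        if __name__ == "__main__":
            sample = [
                Event("web-1", "bash", "execve"),
                Event("web-1", "nginx", "openat", "/etc/passwd"),
            ]
            for rule, event in monitor(sample):
                print(f"ALERT [{rule.name}] container={event.container} process={event.process}")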

    Security, Performance and Energy Trade-offs of Hardware-assisted Memory Protection Mechanisms

    The deployment of large-scale distributed systems, e.g., publish-subscribe platforms, that operate over sensitive data using the infrastructure of public cloud providers is nowadays heavily hindered by the growing lack of trust toward cloud operators. Although purely software-based solutions exist to protect the confidentiality of data and of the processing itself, such as homomorphic encryption schemes, their performance is far from practical under real-world workloads. This practical-experience report describes the performance trade-offs of two novel hardware-assisted memory protection mechanisms currently available on the market to tackle this problem, namely AMD SEV and Intel SGX. Specifically, we implement a publish/subscribe use-case and evaluate the impact of the memory protection mechanisms on the resulting performance. This paper reports on the experience gained while building this system, in particular when having to cope with the technical limitations imposed by SEV and SGX. Micro- and macro-benchmarks exhibit several trade-offs that provide valuable insights in terms of latency, throughput, processing time, and energy requirements. Comment: European Commission Project: LEGaTO - Low Energy Toolset for Heterogeneous Computing (EC-H2020-780681).
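
    To make the evaluation setup concrete, the sketch below shows the shape of a simple publish latency and throughput micro-benchmark against a local echo "broker", of the kind one might run once per configuration (e.g., native, SEV-backed, SGX-backed) and compare. The broker, payload size, and iteration count are illustrative assumptions and not the paper's actual harness.

        # Micro-benchmark sketch: round-trip latency and throughput of small publish
        # messages against a stand-in echo "broker" on localhost.  Payload size and
        # iteration count are assumed values for illustration only.
        import socket
        import statistics
        import threading
        import time

        HOST, PORT = "127.0.0.1", 5599
        MSG = b"x" * 256          # small publish payload
        ITERATIONS = 5000

        def echo_broker():
            """Stand-in broker: echoes each message back to the publisher."""
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind((HOST, PORT))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn:
                    while data := conn.recv(4096):
                        conn.sendall(data)

        def benchmark():
            latencies = []
            with socket.create_connection((HOST, PORT)) as cli:
                start = time.perf_counter()
                for _ in range(ITERATIONS):
                    t0 = time.perf_counter()
                    cli.sendall(MSG)
                    cli.recv(4096)
                    latencies.append(time.perf_counter() - t0)
                elapsed = time.perf_counter() - start
            print(f"p50 latency: {statistics.median(latencies) * 1e6:.1f} us")
            print(f"throughput:  {ITERATIONS / elapsed:.0f} msg/s")

        if __name__ == "__main__":
            threading.Thread(target=echo_broker, daemon=True).start()
            time.sleep(0.2)  # give the broker time to start listening
            benchmark()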

    Cloud-native RStudio on Kubernetes for Hopsworks

    In order to fully benefit from cloud computing, services are designed following the "multi-tenant" architectural model, which is aimed at maximizing resource sharing among users. However, multi-tenancy introduces challenges of security, performance isolation, scaling, and customization. RStudio Server is an open-source Integrated Development Environment (IDE) for the R programming language that is accessible over a web browser. We present the design and implementation of a multi-user distributed system on Hopsworks, a data-intensive AI platform, following the multi-tenant model that provides RStudio as Software as a Service (SaaS). We use the most popular cloud-native technologies, Docker and Kubernetes, to solve the problems of performance isolation, security, and scaling that are present in a multi-tenant environment. We further enable secure data sharing in RStudio Server instances to provide data privacy and allow collaboration among RStudio users. We integrate our system with Apache Spark, which can scale and handle Big Data processing workloads. We also provide a UI where users can supply custom configurations and have full control of their own RStudio Server instances. Our system was tested on a Google Cloud Platform cluster with four worker nodes, each with 30 GB of RAM. The tests on this cluster showed that 44 RStudio servers, each with 2 GB of RAM, can be run concurrently. Our system can scale out to potentially support hundreds of concurrently running RStudio servers by adding more resources (CPUs and RAM) to the cluster. Comment: 8 pages, 4 figures.
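
    A quick back-of-the-envelope check of the reported capacity follows. The per-node overhead figure below is a hypothetical assumption used only for illustration; it is not a value reported by the authors.

        # Cluster capacity arithmetic for the reported setup: 4 worker nodes with
        # 30 GB RAM each running 2 GB RStudio servers.  The per-node RAM reserved
        # for the OS, Kubernetes components, and other services is an assumed value
        # chosen only to illustrate why fewer than the theoretical maximum fit.
        NODES = 4
        RAM_PER_NODE_GB = 30
        RAM_PER_SERVER_GB = 2
        ASSUMED_OVERHEAD_PER_NODE_GB = 8   # hypothetical, not from the paper

        total_ram = NODES * RAM_PER_NODE_GB                     # 120 GB
        theoretical_max = total_ram // RAM_PER_SERVER_GB        # 60 servers
        usable_ram = NODES * (RAM_PER_NODE_GB - ASSUMED_OVERHEAD_PER_NODE_GB)
        estimated_max = usable_ram // RAM_PER_SERVER_GB         # 44 servers

        print(f"theoretical maximum servers: {theoretical_max}")
        print(f"estimate with assumed overhead: {estimated_max}")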

    Scalability evaluation of VPN technologies for secure container networking

    For years, containers have been a popular choice for lightweight virtualization in the cloud. With the rise of more powerful and flexible edge devices, container deployment strategies have arisen that leverage the computational power of edge devices for optimal workload distribution. This move from a secure data center network to heterogeneous public and private networks presents some issues in terms of security and network topology that can be partially solved by using a Virtual Private Network (VPN) to connect edge nodes to the cloud. In this paper, the scalability of VPN software is evaluated to determine if and how it can be used in large-scale clusters containing edge nodes. Benchmarks are performed to determine the maximum number of VPN-connected nodes and the influence of network degradation on VPN performance, primarily using traffic typical of edge devices generating IoT data. Some high-level conclusions are drawn from the results, indicating that WireGuard is an excellent choice of VPN software for connecting edge nodes in a cluster. Analysis of the results also shows the strengths and weaknesses of other VPN software.
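
    The sketch below shows the kind of lightweight IoT-style traffic generator such benchmarks rely on: many small periodic UDP datagrams sent from an edge node toward a collector reachable over the VPN overlay. The overlay address, payload contents, message rate, and duration are illustrative assumptions, not the paper's test parameters.

        # IoT-style traffic generator sketch (illustrative only): small periodic UDP
        # telemetry messages sent toward a collector over a VPN overlay address.
        import json
        import random
        import socket
        import time

        COLLECTOR = ("10.0.0.1", 9999)   # hypothetical address inside the VPN overlay
        MESSAGES_PER_SECOND = 10
        DURATION_S = 30

        def run_sensor(sensor_id: str) -> None:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            interval = 1.0 / MESSAGES_PER_SECOND
            deadline = time.time() + DURATION_S
            seq = 0
            while time.time() < deadline:
                payload = json.dumps({
                    "sensor": sensor_id,
                    "seq": seq,
                    "ts": time.time(),
                    "value": round(random.uniform(20.0, 25.0), 2),
                }).encode()
                sock.sendto(payload, COLLECTOR)   # small datagram, typical IoT telemetry
                seq += 1
                time.sleep(interval)
            sock.close()

        if __name__ == "__main__":
            run_sensor("edge-node-01")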