
    Leveraging OpenStack and Ceph for a Controlled-Access Data Cloud

    While traditional HPC has satisfied, and continues to satisfy, most workflows, a new generation of researchers has emerged looking for sophisticated, scalable, on-demand, and self-service control of compute infrastructure in a cloud-like environment. Many also seek safe harbors to operate on or store sensitive and/or controlled-access data in a high-capacity environment. To cater to these modern users, the Minnesota Supercomputing Institute designed and deployed Stratus, a locally-hosted cloud environment powered by the OpenStack platform and backed by Ceph storage. The subscription-based service complements existing HPC systems by satisfying the following unmet needs of our users: a) on-demand availability of compute resources, b) long-running jobs (i.e., > 30 days), c) container-based computing with Docker, and d) adequate security controls to comply with controlled-access data requirements. This document provides an in-depth look at the design of Stratus with respect to security and compliance with the NIH's controlled-access data policy. Emphasis is placed on lessons learned while integrating OpenStack and Ceph features into a so-called "walled garden", and how those technologies influenced the security design. Many features of Stratus, including tiered secure storage with the introduction of a controlled-access data "cache", fault-tolerant live-migrations, and fully integrated two-factor authentication, depend on recent OpenStack and Ceph features. Comment: 7 pages, 5 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, USA
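
    The abstract describes the self-service, Ceph-backed model only at a high level; as a hedged illustration of what that looks like from a user's side, the sketch below uses the openstacksdk cloud layer to boot an instance from a new volume so the root disk lives on the volume service (Ceph-backed in a deployment like the one described) rather than local hypervisor disk, which keeps instance data on shared storage and simplifies live migration. The cloud name "stratus" and the image, flavor, and network names are illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming an openstacksdk clouds.yaml entry named "stratus"
# (hypothetical). Booting from a new volume places the root disk on the
# volume service (Ceph RBD in a deployment of this kind) instead of local
# hypervisor disk, keeping instance data on shared storage.
import openstack

conn = openstack.connect(cloud="stratus")   # credentials come from clouds.yaml

server = conn.create_server(
    name="analysis-node",        # hypothetical instance name
    image="ubuntu-22.04",        # assumed image name
    flavor="m1.medium",          # assumed flavor name
    network="project-net",       # assumed tenant network
    boot_from_volume=True,       # root disk becomes a volume, not local disk
    volume_size=40,              # GiB, illustrative
    wait=True,
)
print(server.status)             # ACTIVE once the instance is up
```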

    Time Driven Priority Router Implementation and First Experiments

    This paper reports on the implementation of Time-Driven Priority (TDP) scheduling on a FreeBSD platform. This work is part of a TDP prototyping and demonstration project aimed at showing the implications of TDP deployment in packet-switched networks, especially its benefits for real-time applications. The paper focuses on practical aspects of implementing the technology on a Personal Computer (PC)-based router and presents the experimental results obtained on a testbed network. The basic building blocks of a TDP router are described and implementation choices are discussed. The results achieved and presented here fall into two categories: qualitative results, including the successful integration of all needed blocks and the insight gained into the complexity of implementing a TDP router, and quantitative ones, including measurements of achievable network utilization and of jitter experienced on a fully loaded TDP network. The outcome demonstrates the effectiveness of the presented implementation while confirming TDP's points of strength.
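
    To make the scheduling idea concrete, the following sketch illustrates the basic TDP forwarding loop in simplified, user-space form; it is not the FreeBSD kernel implementation described above. Time is divided into fixed-length frames, packets are transmitted during the frame they were scheduled for, and best-effort traffic only consumes whatever capacity is left in each frame. The frame duration, link rate, and function names are illustrative assumptions.

```python
# Simplified sketch of Time-Driven Priority forwarding; frame length and
# link rate are assumptions, not values from the testbed described above.
from collections import deque

TIME_FRAME = 125e-6                 # assumed frame duration: 125 microseconds
LINK_RATE = 1e9 / 8                 # assumed 1 Gb/s link, in bytes per second
FRAME_BUDGET = int(LINK_RATE * TIME_FRAME)   # bytes transmittable per frame

tdp_frames = {}                     # frame index -> deque of (size, payload)
best_effort = deque()               # (size, payload) packets with no schedule

def enqueue_tdp(frame_idx, size, payload):
    """Place a TDP packet in the frame it is scheduled to depart in."""
    tdp_frames.setdefault(frame_idx, deque()).append((size, payload))

def enqueue_best_effort(size, payload):
    best_effort.append((size, payload))

def serve_frame(frame_idx, send):
    """Transmit one time frame: scheduled TDP packets first, then fill the
    residual capacity with best-effort packets (non-preemptive)."""
    budget = FRAME_BUDGET
    for size, payload in tdp_frames.pop(frame_idx, deque()):
        send(payload)
        budget -= size
    while best_effort and best_effort[0][0] <= budget:
        size, payload = best_effort.popleft()
        send(payload)
        budget -= size
```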

    The Regulation of Transport Price Competition


    Differentiated Predictive Fair Service for TCP Flows

    The majority of the traffic (bytes) flowing over the Internet today has been attributed to the Transmission Control Protocol (TCP). This strong presence of TCP has recently spurred further investigations into its congestion avoidance mechanism and its effect on the performance of short and long data transfers. At the same time, the rising interest in enhancing Internet services while keeping the implementation cost low has led to several service-differentiation proposals. In such service-differentiation architectures, much of the complexity is placed only in access routers, which classify and mark packets from different flows. Core routers can then allocate enough resources to each class of packets so as to satisfy delivery requirements, such as predictable (consistent) and fair service. In this paper, we investigate the interaction among short and long TCP flows, and how TCP service can be improved by employing a low-cost service-differentiation scheme. Through control-theoretic arguments and extensive simulations, we show the utility of isolating TCP flows into two classes based on their lifetime/size, namely one class of short flows and another of long flows. With such class-based isolation, short and long TCP flows have separate service queues at routers. This protects each class of flows from the other, as they possess different characteristics, such as burstiness of arrivals/departures and congestion/sending window dynamics. We show the benefits of isolation, in terms of better predictability and fairness, over traditional shared queueing systems with both tail-drop and Random Early Drop (RED) packet dropping policies. The proposed class-based isolation of TCP flows has several advantages: (1) the implementation cost is low since it only requires core routers to maintain per-class (rather than per-flow) state; (2) it promises to be an effective traffic engineering tool for improved predictability and fairness for both short and long TCP flows; and (3) stringent delay requirements of short interactive transfers can be met by increasing the amount of resources allocated to the class of short flows. National Science Foundation (CAREER ANI-0096045, MRI EIA-9871022)
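
    As a rough sketch of the class-based isolation described above, under stated assumptions (the size cutoff, queue limit, and tail-drop policy are illustrative, not the paper's parameters), an access router can mark a packet "short" or "long" from the bytes its flow has sent so far, while a core router keeps only per-class queues:

```python
# Illustrative sketch of class-based isolation of TCP flows. The cutoff and
# queue limit are assumptions; the paper also evaluates RED, but plain tail
# drop is used here to keep the example short.
from collections import defaultdict, deque

SHORT_FLOW_BYTES = 64 * 1024        # assumed cutoff between short and long flows
QUEUE_LIMIT = 256                   # packets per class queue

flow_bytes = defaultdict(int)       # access-router state: bytes seen per flow
class_queues = {"short": deque(), "long": deque()}

def classify_and_mark(flow_id, pkt_size):
    """Access router: mark the packet's class from its flow's size so far."""
    flow_bytes[flow_id] += pkt_size
    return "short" if flow_bytes[flow_id] <= SHORT_FLOW_BYTES else "long"

def core_enqueue(pkt, pkt_class):
    """Core router: per-class (not per-flow) state; tail drop when full."""
    queue = class_queues[pkt_class]
    if len(queue) >= QUEUE_LIMIT:
        return False                # packet dropped
    queue.append(pkt)
    return True
```

    Serving the two queues with separate capacity shares (e.g., weighted round-robin) then lets more resources be allocated to the short-flow class when interactive transfers need tighter delays, which is the lever mentioned in advantage (3).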

    Supporting Excess Real-Time Traffic With Active Drop Queue

    Real-time applications often stand to benefit from service guarantees, and in particular delay guarantees. However, most mechanisms that provide delay guarantees also hard-limit the amount of traffic the application can generate, i.e., they enforce a traffic contract. This can be a significant constraint and interfere with the operation of many real-time applications. Our purpose in this paper is to propose and investigate solutions that overcome this limitation. We have four major goals: 1) guarantee a delay bound to a contracted amount of real-time traffic; 2) transmit, with the same delay bound, as many excess real-time packets as possible; 3) enforce a given link sharing ratio between excess real-time traffic and other service classes, e.g., best-effort; and 4) preserve the ordering of real-time packets, if required. Our approach is based on a combination of buffer management and scheduling mechanisms that guarantees delay bounds while allowing the transmission of excess traffic. We evaluate the cost of our scheme by measuring the processing overhead of an actual implementation, and we investigate its performance by means of simulations using video traffic traces.
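
    A minimal sketch of the queueing side of this idea follows, under stated assumptions: a single per-hop delay bound, a FIFO real-time queue, and upstream tagging of contracted versus excess packets (the actual scheme combines buffer management with a scheduler and a configurable sharing ratio). Each packet carries a deadline, and packets whose deadline has already passed are actively dropped at dequeue time instead of consuming link capacity:

```python
# Hedged sketch of an "active drop" real-time queue; the delay bound is an
# illustrative value, and tagging of contracted vs. excess packets is assumed
# to happen upstream (e.g., by a policer).
import time
from collections import deque

DELAY_BOUND = 0.010                  # assumed per-hop delay bound: 10 ms

rt_queue = deque()                   # (deadline, is_excess, payload), FIFO

def enqueue_rt(payload, is_excess, now=None):
    now = time.monotonic() if now is None else now
    rt_queue.append((now + DELAY_BOUND, is_excess, payload))

def dequeue_rt(now=None):
    """Return the next packet that can still meet its delay bound, or None.
    Packets whose deadline has passed (typically excess ones admitted
    opportunistically) are dropped rather than transmitted late; FIFO order
    preserves packet ordering within the class."""
    now = time.monotonic() if now is None else now
    while rt_queue:
        deadline, is_excess, payload = rt_queue.popleft()
        if deadline >= now:
            return payload
    return None
```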

    Final report on the evaluation of RRM/CRRM algorithms

    Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.