
    Scalable Bandwidth Management in Software-Defined Networks

    There has been a growing demand to manage bandwidth as network traffic increases. The use of network applications such as real-time video streaming, voice over IP and video conferencing in IP networks has risen rapidly in recent years and is projected to keep growing. These applications consume a lot of bandwidth, putting increasing pressure on the networks. To deal with such challenges, modern networks must be application-aware and able to offer Quality of Service (QoS) based on application requirements. Network paradigms such as Software Defined Networking (SDN) allow for direct network programmability, so that network behavior can be changed to suit application needs. The objective of this dissertation is to investigate whether SDN can provide scalable QoS to a set of dynamic traffic flows. Methods are implemented to attain scalable bandwidth management and provide high QoS with SDN. Differentiated Services Code Point (DSCP) values and DSCP remarking with Meters are used to implement high QoS requirements such that bandwidth guarantees are provided to a selected set of traffic flows. The theoretical methodology for achieving QoS is implemented, and experiments are conducted to validate and illustrate that QoS can be realised in SDN; high QoS, however, could not be achieved because Meters with DSCP remarking were not supported. The research work presented in this dissertation aims to identify and address the critical aspects of SDN-based QoS provisioning using flow aggregation techniques. Several tests and demonstrations are conducted using virtualization methods. The tests are aimed at supporting the proposed ideas and at creating an improved understanding of practical SDN use cases and the challenges that emerge in virtualized environments. DiffServ Assured Forwarding is chosen as the QoS architecture for implementation. Bandwidth management scalability in SDN is demonstrated through throughput analysis under two conditions, i.e. 1) per-flow QoS operation and 2) QoS using DiffServ operation, in an SDN environment with the Ryu controller. The results show that better QoS performance and bandwidth management are achieved with the DiffServ-based operation in SDN than with per-flow QoS operation.
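    As a rough illustration of the building blocks the abstract describes (DSCP matching plus an OpenFlow 1.3 meter with a DSCP-remark band, driven from a Ryu controller), the sketch below shows how such rules could be installed. It is not the dissertation's implementation; the rate, burst size, DSCP value (AF11) and table priorities are illustrative assumptions, and it presumes a switch that actually supports DSCP-remark meter bands, which the abstract notes was a limiting factor in practice.

```python
# Minimal Ryu sketch: a meter whose band raises the AF drop precedence when a
# flow exceeds a configured rate, plus a flow entry sending AF11 traffic
# through that meter. All numeric values are illustrative assumptions.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class DscpRemarkQoS(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Meter 1: above 10 Mbit/s, raise the drop precedence by one level
        # (e.g. AF11 -> AF12) instead of dropping packets outright.
        band = parser.OFPMeterBandDscpRemark(rate=10000, burst_size=1000,
                                             prec_level=1)
        dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                                       flags=ofp.OFPMF_KBPS, meter_id=1,
                                       bands=[band]))

        # Flow entry: IPv4 traffic already marked AF11 (DSCP 10) passes
        # through meter 1 and is then forwarded normally.
        match = parser.OFPMatch(eth_type=0x0800, ip_dscp=10)
        inst = [parser.OFPInstructionMeter(meter_id=1, type_=ofp.OFPIT_METER),
                parser.OFPInstructionActions(
                    ofp.OFPIT_APPLY_ACTIONS,
                    [parser.OFPActionOutput(ofp.OFPP_NORMAL)])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```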

    Deploying Jupyter Notebooks at scale on XSEDE resources for Science Gateways and workshops

    Jupyter Notebooks have become a mainstream tool for interactive computing in every field of science. Jupyter Notebooks are suitable as companion applications for Science Gateways, providing more flexibility and post-processing capability to the users. Moreover, they are often used in training events and workshops to provide immediate access to a pre-configured interactive computing environment. The Jupyter team released the JupyterHub web application to provide a platform where multiple users can log in and access a Jupyter Notebook environment. When the number of users and memory requirements are low, it is easy to set up JupyterHub on a single server. However, setup becomes more complicated when we need to serve Jupyter Notebooks at scale to tens or hundreds of users. In this paper we will present three strategies for deploying JupyterHub at scale on XSEDE resources. All options share the deployment of JupyterHub on a Virtual Machine on XSEDE Jetstream. In the first scenario, JupyterHub connects to a supercomputer, launches a single-node job on behalf of each user, and proxies the Notebook from the compute node back to the user's browser. In the second scenario, implemented in the context of an XSEDE consultation for the IRIS consortium for Seismology, we deploy Docker in Swarm mode to coordinate many XSEDE Jetstream virtual machines to provide Notebooks with persistent storage and quota. In the last scenario we install the Kubernetes container orchestration framework on Jetstream to provide a fault-tolerant JupyterHub deployment with a distributed filesystem and the capability to scale to thousands of users. In the conclusion section we provide a link to step-by-step tutorials complete with all the necessary commands and configuration files to replicate these deployments.
    Comment: 7 pages, 3 figures, PEARC '18: Practice and Experience in Advanced Research Computing, July 22-26, 2018, Pittsburgh, PA, US
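    To give a concrete flavour of the first scenario (JupyterHub on a Jetstream VM launching one batch job per user on an HPC system), here is a minimal configuration sketch. It is not the authors' actual configuration from the paper's tutorials; it assumes the third-party batchspawner package and a Slurm-managed cluster, and the partition name, resource requests and ports are hypothetical placeholders.

```python
# jupyterhub_config.py -- minimal sketch of the "batch job per user" scenario.
# Assumes batchspawner is installed and the target cluster runs Slurm; all
# partition names and resource values below are illustrative assumptions.
c = get_config()  # noqa: F821  (injected by JupyterHub at startup)

# Spawn each user's notebook server as a single-node batch job.
c.JupyterHub.spawner_class = 'batchspawner.SlurmSpawner'
c.SlurmSpawner.req_partition = 'shared'      # hypothetical partition name
c.SlurmSpawner.req_nprocs = '4'
c.SlurmSpawner.req_memory = '8G'
c.SlurmSpawner.req_runtime = '04:00:00'

# The hub runs on the public-facing VM; notebook servers on the compute nodes
# register with it, and the hub proxies them back to the user's browser.
c.JupyterHub.hub_ip = '0.0.0.0'
c.JupyterHub.port = 443
```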

    Configurable Strategies for Work-stealing

    Work-stealing systems are typically oblivious to the nature of the tasks they are scheduling. For instance, they do not know or take into account how long a task will take to execute or how many subtasks it will spawn. Moreover, the actual task execution order is typically determined by the underlying task storage data structure and cannot be changed. There are thus possibilities for optimizing task-parallel executions by providing information on specific tasks and their preferred execution order to the scheduling system. We introduce scheduling strategies to enable applications to dynamically provide hints to the task-scheduling system on the nature of specific tasks. Scheduling strategies can be used to independently control both local task execution order and steal order. In contrast to conventional scheduling policies, which are normally global in scope, strategies allow the scheduler to apply optimizations to individual tasks. This flexibility greatly improves composability, as it allows the scheduler to apply different, specific scheduling choices for different parts of an application simultaneously. We present a number of benchmarks that highlight diverse, beneficial effects that can be achieved with scheduling strategies. Some benchmarks (branch-and-bound, single-source shortest path) show that prioritization of tasks can reduce the total amount of work compared to the standard work-stealing execution order. For other benchmarks (triangle strip generation), qualitatively better results can be achieved in a shorter time. Other optimizations, such as dynamic merging of tasks or stealing half the work instead of half the tasks, are also shown to improve performance. Composability is demonstrated by examples that combine different strategies, both within the same kernel (prefix sum) and when scheduling multiple kernels (prefix sum and unbalanced tree search).
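    The core idea, per-task hints that steer both local execution order and steal order, can be illustrated with a small toy model. The sketch below is not the paper's scheduler or its API; the names SchedulingStrategy, Worker, priority and steal_fraction are hypothetical, and a real work-stealing runtime would use lock-free deques and per-thread workers rather than this simplified single-threaded Python model.

```python
# Toy sketch of strategy-driven work-stealing: a strategy supplies a per-task
# priority (local execution order) and how large a share to take when stealing.
import heapq
import itertools
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SchedulingStrategy:
    priority: Callable[[object], float]   # lower value = run earlier
    steal_fraction: float = 0.5           # e.g. steal half the tasks


class Worker:
    def __init__(self, strategy: SchedulingStrategy):
        self.strategy = strategy
        self._seq = itertools.count()     # tie-breaker for equal priorities
        self.heap: List[tuple] = []       # local queue of (prio, seq, task)

    def push(self, task):
        heapq.heappush(self.heap,
                       (self.strategy.priority(task), next(self._seq), task))

    def pop_local(self):
        # Local execution order follows the strategy's priority, not the
        # fixed LIFO/FIFO order of an ordinary work-stealing deque.
        return heapq.heappop(self.heap)[2] if self.heap else None

    def steal_from(self, victim: "Worker"):
        # Take a strategy-defined share of the victim's least urgent tasks.
        n = max(1, int(len(victim.heap) * victim.strategy.steal_fraction))
        stolen = heapq.nlargest(n, victim.heap)
        for item in stolen:
            victim.heap.remove(item)
        heapq.heapify(victim.heap)
        for _, _, task in stolen:
            self.push(task)


# Example: a branch-and-bound style strategy that runs the most promising
# subproblem (smallest lower bound) first, which can prune work earlier.
bnb = SchedulingStrategy(priority=lambda task: task["lower_bound"])
w0, w1 = Worker(bnb), Worker(bnb)
for bound in (7, 3, 9, 1, 5):
    w0.push({"lower_bound": bound})
w1.steal_from(w0)                # w1 takes roughly half of w0's tasks
print(w0.pop_local(), w1.pop_local())
```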