6 research outputs found

    RAPID: Enabling Fast Online Policy Learning in Dynamic Public Cloud Environments

    Resource sharing between multiple workloads has become a prominent practice among cloud service providers, motivated by demand for improved resource utilization and reduced cost of ownership. Effective resource sharing, however, remains an open challenge due to the adverse effects that resource contention can have on high-priority, user-facing workloads with strict Quality of Service (QoS) requirements. Although recent approaches have demonstrated promising results, they remain largely impractical in public cloud environments, since workloads are not known in advance and may only run for a brief period, thus prohibiting offline learning and significantly hindering online learning. In this paper, we propose RAPID, a novel framework for fast, fully online resource allocation policy learning in highly dynamic operating environments. RAPID leverages lightweight QoS predictions, enabled by domain-knowledge-inspired techniques for sample efficiency and bias reduction, to decouple control from conventional feedback sources and guide policy learning at a rate orders of magnitude faster than prior work. Evaluation on a real-world server platform with representative cloud workloads confirms that RAPID can learn stable resource allocation policies in minutes, compared with hours for the prior state of the art, while improving QoS by 9.0x and increasing best-effort workload performance by 19-43%.
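
    As a concrete illustration of the control loop the abstract describes, here is a minimal sketch assuming a toy linear QoS predictor and stubbed sensors/actuators; it is not RAPID's actual implementation, and every function name and constant below is a hypothetical placeholder. The loop picks the largest best-effort allocation the predictor deems QoS-safe and keeps updating the predictor fully online from delayed measurements.

```python
# Hypothetical sketch of a fully online resource-allocation loop guided by a
# lightweight QoS predictor instead of slow end-to-end feedback. All names,
# the linear predictor, and the stubbed sensors/actuators are illustrative
# assumptions, not RAPID's actual code.
import random

ALLOCATIONS = range(1, 11)      # candidate shares (e.g. cache ways) for best-effort work


def observe_features():
    # Stand-in for lightweight per-interval hardware/software counters.
    return [random.random() for _ in range(3)]


def apply_allocation(allocation):
    # Stand-in for an actuator such as cache partitioning or core allocation.
    pass


def measure_qos(features, allocation):
    # Stand-in for the delayed, noisy measured QoS of the latency-critical workload.
    true_weights = [0.5, 0.3, 0.2]
    return (sum(w * f for w, f in zip(true_weights, features))
            - 0.04 * allocation + random.gauss(0, 0.01))


def predict_qos(features, allocation, weights):
    # Lightweight linear estimate of QoS under a candidate allocation.
    return sum(w * f for w, f in zip(weights, features)) - 0.04 * allocation


def online_policy_loop(qos_target=0.2, steps=200, lr=0.05):
    weights = [0.0, 0.0, 0.0]   # predictor parameters, learned fully online
    for _ in range(steps):
        feats = observe_features()
        # Give best-effort work the largest share the predictor deems QoS-safe.
        safe = [a for a in ALLOCATIONS if predict_qos(feats, a, weights) >= qos_target]
        allocation = max(safe) if safe else min(ALLOCATIONS)
        apply_allocation(allocation)
        # Online correction keeps the predictor unbiased as workloads change.
        error = measure_qos(feats, allocation) - predict_qos(feats, allocation, weights)
        weights = [w + lr * error * f for w, f in zip(weights, feats)]
    return weights


if __name__ == "__main__":
    print(online_policy_loop())
```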

    QoS Management on Heterogeneous Architecture for Parallel Applications

    Quality of service (QoS) management is widely employed to provide differentiated performance to programs with distinct priorities on conventional chip multi-processor (CMP) platforms. Recently, heterogeneous architectures integrating diverse processor cores on the same silicon have been proposed to better serve various application domains, and they are expected to be an important design paradigm for future processors. QoS management on emerging heterogeneous systems will therefore be of great significance. On the other hand, parallel applications are becoming increasingly important in the modern computing community as a way to exploit thread-level parallelism on CMPs. However, given the diverse characteristics of thread synchronization, data sharing, and parallelization patterns, governing the execution of multiple parallel programs with different performance requirements becomes a complicated yet significant problem. In this paper, we study QoS management for parallel applications running on heterogeneous CMP systems. We comprehensively assess a series of task-to-core mapping policies on real heterogeneous hardware (QuickIA) by characterizing their impact on the performance of individual applications. Our evaluation results show that the proposed QoS policies are effective in improving the performance of the highest-priority programs while striking a good tradeoff with system fairness.
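
    A rough sketch of what one priority-aware task-to-core mapping policy could look like is given below; it is not the paper's policy, and the big/small core counts, program definitions, and helper names are assumptions made for the example. Threads of the highest-priority parallel program are placed on big cores first, and lower-priority programs take whatever cores remain.

```python
# Hypothetical priority-aware task-to-core mapping on a heterogeneous CMP:
# the highest-priority parallel program gets big cores first, lower-priority
# programs share the remaining (small) cores. Not the paper's implementation.
from dataclasses import dataclass


@dataclass
class Program:
    name: str
    priority: int   # larger value means higher priority
    threads: int


def map_threads(programs, big_cores, small_cores):
    """Return {program name: list of core ids}; ids 0..big_cores-1 are big cores."""
    free_cores = list(range(big_cores + small_cores))
    mapping = {}
    for prog in sorted(programs, key=lambda p: p.priority, reverse=True):
        assigned = []
        for i in range(prog.threads):
            if free_cores:
                assigned.append(free_cores.pop(0))          # big cores handed out first
            else:
                # Oversubscribed: round-robin the extra threads over small cores.
                assigned.append(big_cores + i % small_cores)
        mapping[prog.name] = assigned
    return mapping


if __name__ == "__main__":
    apps = [Program("latency-critical", priority=2, threads=2),
            Program("best-effort", priority=1, threads=4)]
    print(map_threads(apps, big_cores=2, small_cores=4))
```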

    Using SMT to accelerate nested virtualization

    IaaS datacenters offer virtual machines (VMs) to their clients, who in turn sometimes deploy their own virtualized environments, thereby running a VM inside a VM. This is known as nested virtualization. VMs are intrinsically slower than bare-metal execution, as they often trap into their hypervisor to perform tasks like operating virtual I/O devices. Each VM trap requires loading and storing dozens of registers to switch between the VM and hypervisor contexts, thereby incurring costly runtime overheads. Nested virtualization further magnifies these overheads, as every VM trap in a traditional virtualized environment triggers at least twice as many traps. We propose to leverage the replicated thread execution resources in simultaneous multithreaded (SMT) cores to alleviate the overheads of VM traps in nested virtualization. Our proposed architecture introduces a simple mechanism to colocate different VMs and hypervisors on separate hardware threads of a core, and replaces the costly context switches of VM traps with simple thread stall and resume events. More concretely, as each thread in an SMT core has its own register set, trapping between VMs and hypervisors does not involve costly context switches, but simply requires the core to fetch instructions from a different hardware thread. Furthermore, our inter-thread communication mechanism allows a hypervisor to directly access and manipulate the registers of its subordinate VMs, given that they both share the same in-core physical register file. A model of our architecture shows up to 2.3× and 2.6× better I/O latency and bandwidth, respectively. We also show a software-only prototype of the system using existing SMT architectures, with up to 1.3× and 1.5× better I/O latency and bandwidth, respectively, and 1.2-2.2× speedups on various real-world applications.
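
    The trade-off the abstract describes can be illustrated with a toy cycle-count model, shown below; this is not the paper's model, and all costs are made-up assumptions. A conventional nested trap saves and restores a full register context at every virtualization level, whereas the SMT scheme keeps each level's context resident in a separate hardware thread and pays only a stall/resume cost per level crossing.

```python
# Toy cycle-count model contrasting conventional nested-virtualization traps
# (full register context save/restore at every level) with the SMT scheme from
# the abstract (contexts stay resident in per-thread register files, so a trap
# is just a thread stall/resume plus a fetch redirect). All costs are assumed.
REGS_PER_CONTEXT = 64     # registers saved/restored per context switch (assumption)
CYCLES_PER_REG = 4        # cost to save or restore one register (assumption)
THREAD_SWITCH = 50        # cost to stall one SMT thread and resume another (assumption)


def conventional_trap_cost(nesting_levels):
    # Each trap crosses every virtualization level twice (exit and re-entry),
    # saving the guest context and restoring the hypervisor context each time.
    one_switch = 2 * REGS_PER_CONTEXT * CYCLES_PER_REG
    return 2 * nesting_levels * one_switch


def smt_trap_cost(nesting_levels):
    # Registers are never copied; each level crossing is a thread stall/resume.
    return 2 * nesting_levels * THREAD_SWITCH


if __name__ == "__main__":
    for levels in (1, 2):   # 1 = single-level virtualization, 2 = nested
        conv, smt = conventional_trap_cost(levels), smt_trap_cost(levels)
        print(f"levels={levels}: conventional={conv} cycles, SMT={smt} cycles, "
              f"ratio={conv / smt:.1f}x")
```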

    PIRATE

    No full text

    SARIM PLUS—sample return of comet 67P/CG and of interstellar matter

    The Stardust mission returned cometary, interplanetary and (probably) interstellar dust to Earth in 2006, which has since been analysed in laboratories worldwide. Results of this mission have changed our view and knowledge of the early solar nebula. The Rosetta mission is on its way to land on comet 67P/Churyumov-Gerasimenko and, starting in 2014, will investigate the comet nucleus and its environment in great detail for the first time. Additional astronomy and planetary space missions will further contribute to our understanding of dust generation, evolution and destruction in interstellar and interplanetary space, and will provide constraints on solar system formation and the processes that led to the origin of life on Earth. One of these missions, SARIM-PLUS, will provide a unique perspective by measuring interplanetary and interstellar dust with high accuracy and sensitivity in the inner solar system between 1 and 2 AU. SARIM-PLUS employs the latest in-situ techniques for a full characterisation of individual micrometeoroids (flux, mass, charge, trajectory, composition) and collects and returns these samples to Earth for detailed analysis. The opportunity to revisit the target comet of the Rosetta mission, 67P/Churyumov-Gerasimenko, and to investigate its dusty environment six years after Rosetta with complementary methods is unique and strongly enhances and supports the scientific exploration of this target and the entire Rosetta mission. Launch opportunities are in 2020, with a backup window starting in early 2026. The comet encounter occurs in September 2021 and the reentry takes place in early 2024. An encounter speed of 6 km/s ensures results comparable to those of the Stardust mission.