7 research outputs found

    Scalability in the Presence of Variability

    Supercomputers are used to solve some of the world’s most computationally demanding problems. Exascale systems, composed of over one million cores and capable of 10^18 floating-point operations per second, will probably exist by the early 2020s and will provide unprecedented computational power for parallel computing workloads. Unfortunately, while these machines hold tremendous promise and opportunity for applications in High Performance Computing (HPC), graph processing, and machine learning, it will be a major challenge to fully realize their potential, because doing so requires balanced execution across the entire system and its millions of processing elements. When different processors take different amounts of time to perform the same amount of work, performance imbalance arises, large portions of the system sit idle, and time and energy are wasted. Larger systems incorporate more processors and thus offer greater opportunity for imbalance to arise, as well as larger performance and energy penalties when it does. This phenomenon is referred to as performance variability and is the focus of this dissertation. In this dissertation, we explain how to design system software to mitigate variability on large-scale parallel machines. Our approaches span (1) the design, implementation, and evaluation of a new high-performance operating system that reduces some classes of performance variability, (2) a new performance evaluation framework that holistically characterizes key features of variability on new and emerging architectures, and (3) a distributed modeling framework that derives predictions of how and where imbalance is manifesting in order to drive reactive operations such as load balancing and speed scaling. Collectively, these efforts provide a holistic set of tools to promote scalability through the mitigation of variability.
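    A reactive approach like the one described needs a way to quantify imbalance. A common metric (not specific to this dissertation; every name below is illustrative) is the ratio of the slowest worker's time to the mean time, which a runtime could compare against a threshold before triggering load balancing or speed scaling. A minimal sketch:

```cpp
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// Load-imbalance factor: ratio of the slowest worker's time to the mean.
// 1.0 means perfectly balanced; larger values mean more idle time.
double imbalance_factor(const std::vector<double>& per_worker_seconds) {
    double max_t  = *std::max_element(per_worker_seconds.begin(),
                                      per_worker_seconds.end());
    double mean_t = std::accumulate(per_worker_seconds.begin(),
                                    per_worker_seconds.end(), 0.0) /
                    per_worker_seconds.size();
    return max_t / mean_t;
}

int main() {
    // Example timings for four workers given "the same amount of work".
    std::vector<double> t = {1.00, 1.02, 0.98, 1.40};
    std::cout << "imbalance factor: " << imbalance_factor(t) << "\n";
    // A reactive runtime might rebalance or scale frequencies when this
    // factor exceeds some chosen threshold (a policy decision, not shown).
}
```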

    Real-time systems on multicore platforms: managing hardware resources for predictable execution

    Shared hardware resources in commodity multicore processors are subject to contention from co-running threads. The resulting interference can lead to highly variable performance for individual applications. This is particularly problematic for real-time applications, which require predictable timing guarantees, and it leads to pessimistic estimates of the Worst Case Execution Time (WCET) of every real-time application. More CPU time must be reserved, so fewer applications can be admitted to the system; and because the average execution time is usually far below the WCET, a significant amount of the reserved CPU resource is wasted. Previous work has attempted to partition the shared resources among CPUs or processes to improve performance isolation, but these approaches have not proven to be both efficient and effective. In this thesis, we propose several mechanisms and frameworks that manage the shared caches and memory buses on multicore platforms. First, we introduce a multicore real-time scheduling framework with a foreground/background scheduling model: by combining real-time load balancing with background scheduling, it greatly improves CPU utilization. In addition, a memory-bus management mechanism is implemented on top of the background scheduling, keeping bus contention under control while utilizing otherwise unused CPU cycles. Cache partitioning is also studied thoroughly in this thesis, with a cache-aware load balancing algorithm and a dynamic cache partitioning framework proposed. Lastly, we describe a system architecture that integrates the above solutions. It tackles one of the toughest problems in OS innovation, legacy support, by converting existing OSes into libraries in a virtualized environment. Thus, within a single multicore platform, we benefit from the fine-grained resource control of a real-time OS and the rich functionality of a general-purpose OS.
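    One common way to partition a shared last-level cache, as discussed above, is page coloring: physical pages are grouped by the cache sets they can occupy, and cores are given disjoint color sets so their working sets cannot evict each other. The sketch below illustrates only that general technique, not the thesis's framework; the cache geometry constants are assumptions chosen for the example.

```cpp
#include <cstdint>
#include <iostream>

// Assumed cache geometry for illustration only.
constexpr uint64_t kPageSize      = 4096;               // 4 KiB pages
constexpr uint64_t kCacheSize     = 8ull * 1024 * 1024; // 8 MiB shared LLC
constexpr uint64_t kAssociativity = 16;

// Number of distinct page colors: how many page-sized regions fit in one
// cache way (cache_size / associativity / page_size).
constexpr uint64_t kNumColors = kCacheSize / kAssociativity / kPageSize;

// A physical page's color is its page frame number modulo the number of
// colors; pages of different colors map to disjoint sets of cache lines,
// so assigning cores disjoint color ranges isolates them in the LLC.
uint64_t page_color(uint64_t phys_addr) {
    return (phys_addr / kPageSize) % kNumColors;
}

int main() {
    std::cout << "colors available: " << kNumColors << "\n";
    std::cout << "color of 0x12345000: " << page_color(0x12345000) << "\n";
}
```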

    Customization and reuse in datacenter operating systems

    Increasingly, computing has moved to large-scale datacenters where application performance is critical. Stagnating CPU clock speeds, coupled with increasingly higher-bandwidth and lower-latency networking and storage, put an increased focus on the operating system to enable high performance. The challenge of providing high performance is made more difficult by the diversity of datacenter workloads, such as search, video processing, distributed storage, and machine learning tasks. Our existing general-purpose operating systems must sacrifice the performance of any one application in order to support a broad set of applications. We observe that a common model for application deployment is to dedicate a physical or virtual machine to a single application. In this context, our operating systems can be specialized to the purposes of that application. In this dissertation, we explore the design of the Elastic Building Block Runtime (EbbRT), a framework for constructing high-performance, customizable operating systems while keeping developer effort low. EbbRT adopts a lightweight execution environment that enables applications to directly manage hardware resources and specialize their system behavior. An EbbRT operating system is composed of objects called Elastic Building Blocks (Ebbs), which encapsulate functionality so it can be incrementally extended or optimized. Finally, EbbRT adopts a unique heterogeneous and distributed architecture in which an application can be split between a server running an existing general-purpose operating system and a server running a customized library operating system. The library operating system provides the mechanisms for application execution, including primitives for event-driven programming, componentization, memory management, and I/O. We demonstrate that EbbRT enables memcached, an in-memory caching server, to achieve more than double the performance it achieves on Linux. We also demonstrate that EbbRT can support more full-featured applications, such as a port of Google’s V8 JavaScript engine and Node.js, a JavaScript server runtime.
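    The Ebb idea described above, a single logical object whose calls are served by per-core representatives, can be sketched roughly as follows. This is a toy illustration of the pattern, not EbbRT's actual API; every name in it is made up, and a real implementation would resolve representatives through a lock-free per-core translation table rather than a mutex-protected map.

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <unordered_map>

// Stand-in for "which core am I on?"; a kernel-level runtime would read
// this from hardware instead of hashing the thread id (assumption).
static int core_id() {
    return static_cast<int>(
        std::hash<std::thread::id>{}(std::this_thread::get_id()) % 4);
}

// Toy version of the pattern: one logical counter whose operations are
// dispatched to a representative created lazily for each core.
class CounterEbb {
public:
    void increment() {
        std::lock_guard<std::mutex> g(mu_);
        reps_[core_id()].local_count++;   // first use on a core creates its rep
    }

    long total() const {                  // slow path: gather from every rep
        std::lock_guard<std::mutex> g(mu_);
        long sum = 0;
        for (const auto& kv : reps_) sum += kv.second.local_count;
        return sum;
    }

private:
    struct Rep { long local_count = 0; };
    mutable std::mutex mu_;
    std::unordered_map<int, Rep> reps_;
};

int main() {
    CounterEbb counter;
    counter.increment();
    counter.increment();
    std::cout << "total = " << counter.total() << "\n";
}
```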

    The Murray Ledger and Times, May 19, 1994


    Tools for Nonlinear Control Systems Design

    This is a brief statement of the research progress made on Grant NAG2-243, titled "Tools for Nonlinear Control Systems Design", which ran from 1983 until December 1996. The initial set of PIs on the grant were C. A. Desoer, E. L. Polak, and myself (for 1983). From 1984 until 1991 Desoer and I were the PIs, and I was the sole PI from 1991 until the end of 1996. The project has been an unusually long-standing and extremely fruitful partnership, with many technical exchanges, visits, workshops, and new avenues of investigation begun on this grant. There were student visits, long-term visitors on the grant, and many interesting joint projects. In this final report I will give only a cursory description of the technical work done on the grant, since there was a tradition of annual progress reports and a proposal for the succeeding year. These progress reports cum proposals are attached as Appendix A to this report. Appendix B consists of papers co-authored by me and my students, sorted chronologically. When there are multiple related versions of a paper, such as a conference version and a journal version, they are listed together. Appendix C consists of papers by Desoer and his students, as well as 'solo' publications by other researchers supported on this grant, similarly sorted chronologically.

    Scalable system software for high performance large-scale applications

    In recent decades, high-performance large-scale systems have been a fundamental tool for scientific discovery and engineering advances. The sustained growth of supercomputing performance and the concurrent reduction in cost have made this technology available to a large number of scientists and engineers working on many different problems. The design of next-generation supercomputers will include traditional HPC requirements as well as new requirements for handling data-intensive computations. Data-intensive applications will hence play an important role in a variety of fields, and are the current focus of several research trends in HPC. Due to the challenges of scalability and power efficiency, next-generation supercomputers need a redesign of the whole software stack. Being at the bottom of the software stack, system software is expected to change drastically to support the upcoming hardware and to meet new application requirements.

    This PhD thesis addresses the scalability of system software. The thesis starts at the operating system level, first studying general-purpose OSes (e.g., Linux) and then lightweight kernels (e.g., CNK). We then focus on the runtime system: we implement a runtime system for distributed-memory systems that includes many of the system services required by next-generation applications. Finally, we focus on hardware features that can be exploited at user level to improve application performance and potentially be incorporated into our advanced runtime system. The thesis contributions are the following:

    Operating System Scalability: We provide an accurate study of the scalability problems of modern operating systems for HPC. We design and implement a methodology whereby detailed quantitative information may be obtained for each OS noise event. We validate our approach by comparing it to other well-known standard techniques for analyzing OS noise, such as FTQ (Fixed Time Quantum).

    Evaluation of address translation management for a lightweight kernel: We provide a performance evaluation of different TLB management approaches (dynamic memory mapping, static memory mapping with replaceable TLB entries, and static memory mapping with fixed TLB entries, i.e., no TLB misses) on an IBM BlueGene/P system.

    Runtime System Scalability: We show that a runtime system can efficiently incorporate system services and improve scalability for a specific class of applications. We design and implement a full-featured runtime system and programming model to execute irregular applications on a commodity cluster. The runtime library, called the Global Memory and Threading library (GMT), integrates a locality-aware Partitioned Global Address Space communication model with a fork/join program structure. It supports massive lightweight multi-threading, overlapping of communication and computation, and aggregation of small messages to tolerate network latencies. We compare GMT to other PGAS models, hand-optimized MPI code, and custom architectures (Cray XMT) on a set of large-scale irregular applications: breadth-first search, random walk, and concurrent hash map access. Our runtime system shows performance orders of magnitude higher than other solutions on commodity clusters, and competitive with custom architectures.

    User-level Scalability Exploiting Hardware Features: We show the high complexity of low-level hardware optimizations for single applications, as a motivation to incorporate this logic into an adaptive runtime system. We evaluate the effects of the controllable hardware-thread priority mechanism, which controls the rate at which each hardware thread decodes instructions, on IBM POWER5 and POWER6 processors. Finally, we show how to effectively exploit cache locality and the network-on-chip on the Tilera many-core architecture to improve intra-core scalability.
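    FTQ, the Fixed Time Quantum benchmark mentioned among the comparison techniques, measures OS noise by counting how much work completes in each fixed-length quantum; dips in the count reveal interference. Below is a minimal sketch of that idea only, not the thesis's methodology, with an arbitrary quantum length and iteration count.

```cpp
#include <chrono>
#include <iostream>
#include <vector>

// Fixed-Time-Quantum-style probe: in each quantum, count how many
// iterations of a tiny unit of work complete. On a quiet core the counts
// are nearly constant; drops indicate OS noise (interrupts, daemons,
// scheduler activity) stealing cycles from the application.
int main() {
    using clock = std::chrono::steady_clock;
    const auto quantum = std::chrono::milliseconds(1);  // arbitrary choice
    const int num_quanta = 1000;

    volatile unsigned sink = 0;          // keeps the work from being optimized away
    std::vector<unsigned long> counts;
    counts.reserve(num_quanta);

    for (int q = 0; q < num_quanta; ++q) {
        unsigned long work = 0;
        const auto end = clock::now() + quantum;
        while (clock::now() < end) {
            sink = sink * 1664525u + 1013904223u;       // tiny unit of work
            ++work;
        }
        counts.push_back(work);
    }

    // Summarize: the gap between the best and worst quantum is a rough
    // indicator of how much noise the core experienced.
    unsigned long min_c = counts[0], max_c = counts[0];
    for (unsigned long c : counts) {
        if (c < min_c) min_c = c;
        if (c > max_c) max_c = c;
    }
    std::cout << "work per quantum: min=" << min_c
              << " max=" << max_c << "\n";
}
```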