
    Predictable cloud computing

    The standard tools for cloud computing—processor and network virtualization—make it difficult to achieve dependability, both in terms of real-time operation and fault tolerance. Virtualization multiplexes virtual resources onto physical ones, typically by time division or statistical multiplexing. Time, in the virtual machine, is therefore as virtual as the machine itself. And fault tolerance is difficult to achieve when redundancy and independent failure in the virtual environment do not necessarily map to those properties in the physical environment. Virtualization adds a level of indirection that creates overhead and makes it all but impossible to achieve predictable performance. Osprey uses an alternative to virtualization that achieves the same goals of scalability and flexibility but carries neither the overhead of virtualization nor its restrictions on dependability. The result is a programming environment that achieves most of the compatibility offered by traditional virtualization efforts while providing much better and much more predictable performance. One technique we use, called Library OS, stems from high-performance computing. It consists of linking applications with a library that implements most services normally provided by the operating system, creating an application that can run practically stand-alone, or at least on a very minimal operating system. The Library OS approach moves the boundary between application and operating system down to a level where interactions with the operating system consist of sending and receiving messages (e.g., network packets) and scheduling resources (processor, memory, network bandwidth, and device access). These interactions, as we demonstrate, form a relatively weak bond between an application and the particular instance of the operating system on which it runs—one that can be broken and re-established elsewhere. In fact, we make sure this is the case. Legacy applications that cannot be recompiled or relinked can make use of a Library OS server that runs as a tandem process alongside the legacy application processes. System calls from the legacy process are forwarded to the Library OS server, which executes them. Applications can still migrate, taking their server process along with them.
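    A minimal sketch of the tandem-process arrangement, in Python: a stand-in "legacy" process ships each system call as a message to a separate Library OS server process, which executes it and returns the result. The message format, the handler names, and the use of a pipe are illustrative assumptions, not Osprey's actual interface.

```python
import multiprocessing as mp

def libos_server(conn):
    """Tandem process standing in for the Library OS server: it
    receives system-call messages and executes them locally."""
    handlers = {
        "getpid": lambda: mp.current_process().pid,
        "write": lambda fd, data: print(f"[fd {fd}] {data}") or len(data),
    }
    while True:
        msg = conn.recv()
        if msg is None:                      # shutdown sentinel
            break
        name, args = msg
        conn.send(handlers[name](*args))     # execute and return the result

def legacy_app(conn):
    """Stand-in for the legacy process: every 'system call' is shipped
    over the message channel instead of trapping into a kernel."""
    def syscall(name, *args):
        conn.send((name, args))
        return conn.recv()

    syscall("write", 1, "hello from the legacy side")
    conn.send(None)                          # tell the server to exit

if __name__ == "__main__":
    server_end, app_end = mp.Pipe()
    server = mp.Process(target=libos_server, args=(server_end,))
    server.start()
    legacy_app(app_end)
    server.join()
```

    Because the only bond between application and server is this message channel, migrating the pair together amounts, conceptually, to re-establishing the channel on another machine.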

    Optimization of energy efficiency in data and WEB hosting centers

    This thesis tackles the optimization of energy efficiency in data centers, in terms of both network and server utilization. On the networking side, the work focuses on Energy Efficient Ethernet (EEE), the IEEE 802.3az standard, which is the energy-aware alternative to legacy Ethernet and an important component of current and future green data centers. The first contribution of this thesis is an analytical model of gigabit EEE links with coalescing, built on M/G/1 queues with sleep and wake-up periods. Packet coalescing has been proposed to save energy by extending the sojourn in the Low Power Idle state of EEE. The model approximates with good accuracy both the energy saving and the average packet delay using a few significant traffic descriptors. While coalescing greatly improves the energy efficiency of EEE, it is still far from achieving energy consumption proportional to traffic, and it can introduce high delays. To address this, the thesis uses sensitivity analysis to evaluate the impact of coalescing timers and buffer sizes, and sheds light on the delay incurred by coalescing schemes. Accordingly, it proposes the design and study of a first family of dynamic algorithms, measurement-based coalescing control (MBCC), whose schemes tune the coalescing parameters on the fly according to the instantaneous load and the coalescing delay experienced by packets. The thesis also discusses a second family of dynamic algorithms, NT-policy coalescing control (NTCC), which adjusts the coalescing parameters based solely on the occurrence of timeouts and buffer fill-ups. The performance of static as well as dynamic coalescing schemes is investigated using real traffic traces. The results show that, by relying on run-time delay measurements, simple and practical MBCC adaptive coalescing schemes outperform traditional static and dynamic coalescing, while NTCC schemes offer practically no advantage over static coalescing when delay guarantees must be provided. Notably, MBCC schemes double the energy-saving benefit of legacy EEE coalescing and make it possible to control the coalescing delay. On the server side, the thesis presents an exhaustive empirical characterization of the power requirements of multiple components of data center servers. This characterization, the second key contribution of the thesis, is achieved by devising experiments that stress server components, taking into account the multiple available CPU frequencies and the presence of multicore servers. These experiments measure the energy consumption of server components and identify their optimal operating points. The study proves that the curve defining minimal CPU power utilization, as a function of load expressed in Active Cycles Per Second, is neither concave nor purely convex; rather, it shows a clearly superlinear dependence on the load. The results also illustrate how to improve the efficiency of network cards and disks.
    Finally, the accuracy of the model derived from the server component characterization is validated by comparing the real energy consumed by two Hadoop applications, PageRank and WordCount, with the model's estimates, obtaining errors below 4.1% on average. This work has been partially supported by IMDEA Networks Institute and the Greek State Scholarships Foundation.
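    As a rough illustration of the coalescing policies above, the Python sketch below implements a release rule in the N/T style (send the buffered batch when N packets have accumulated or a timeout T expires) plus a simple measurement-based nudge of the timeout, in the spirit of the MBCC schemes. The parameter values and the multiplicative adaptation rule are assumptions made for illustration, not the thesis's algorithms.

```python
class Coalescer:
    """Toy packet coalescer with an N/T release policy and an MBCC-style
    timeout adjustment driven by measured coalescing delay."""

    def __init__(self, n=8, t=0.5, target_delay=1.0):
        self.n, self.t = n, t                # buffer threshold, timeout (ms)
        self.target_delay = target_delay     # per-packet delay budget (ms)
        self.buf = []                        # (arrival_time, packet) pairs

    def arrival(self, now, pkt):
        """Buffer an incoming packet; release on buffer fill-up."""
        self.buf.append((now, pkt))
        return self.release(now) if len(self.buf) >= self.n else None

    def timeout_check(self, now):
        """Called periodically; release when the oldest packet times out."""
        if self.buf and now - self.buf[0][0] >= self.t:
            return self.release(now)
        return None

    def release(self, now):
        """Wake the link, hand back the batch, and nudge the timeout
        using the measured mean coalescing delay of this batch."""
        mean_delay = sum(now - t0 for t0, _ in self.buf) / len(self.buf)
        self.t *= 0.9 if mean_delay > self.target_delay else 1.1
        batch = [pkt for _, pkt in self.buf]
        self.buf.clear()
        return batch
```

    A pure NT policy is the same machinery with self.t held fixed; per the results above, it is the run-time delay measurement feeding back into the parameters that delivers the extra savings and the delay control.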

    Decentralising resource management in operating systems

    This dissertation explores operating system mechanisms that allow resource-aware applications to be involved in the process of managing resources, under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities, with the operating system providing only mechanism. Furthermore, in contrast to centralised approaches, application-specific features can be exploited more easily. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals, which they use to adjust their resource consumption. It can be shown theoretically that such a system, with consumers merely acting in their own self-interest, will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management in this way. In particular, it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are generally required for the accurate management of resources, pricing & charging and credit accounting are novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
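    The feedback loop lends itself to a compact sketch. In the toy Python model below, the central mechanism only computes a congestion charge from total demand, while each self-interested application compares the charge against its private utility and adjusts its own demand. The price function, the 80% congestion threshold, and the adaptation factors are illustrative assumptions, not the dissertation's mechanisms.

```python
CAPACITY = 100.0          # units of some shared resource (e.g., CPU shares)

def congestion_price(total_demand):
    """Charge grows with load and is zero while the resource is uncongested."""
    overload = max(0.0, total_demand - 0.8 * CAPACITY)
    return overload / CAPACITY

class App:
    """A resource-aware application with a private valuation of the resource."""

    def __init__(self, demand, utility_per_unit):
        self.demand = demand
        self.utility = utility_per_unit

    def adapt(self, price):
        # Self-interested adjustment: back off when the congestion charge
        # exceeds the marginal utility, cautiously grow otherwise.
        self.demand *= 0.9 if price > self.utility else 1.05

apps = [App(30, 0.05), App(50, 0.10), App(40, 0.02)]
for _ in range(50):
    price = congestion_price(sum(a.demand for a in apps))
    for a in apps:
        a.adapt(price)
print([round(a.demand, 1) for a in apps])   # demands settle near the threshold
```

    No central entity ever inspects or schedules the applications' internal behaviour: policy lives in the applications, and the system only prices congestion, which is the decentralisation argued for above.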

    The exokernel operating system architecture

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. On traditional operating systems only trusted software such as privileged servers or the kernel can manage resources. This thesis proposes a new approach, the exokernel architecture, which makes resource management unprivileged but safe by separating management from protection: an exokernel protects resources, while untrusted application-level software manages them. As a result, in an exokernel system, untrusted software (e.g., library operating systems) can implement abstractions such as virtual memory, file systems, and networking. The main thrusts of this thesis are: (1) how to build an exokernel system; (2) whether it is possible to build a real one; and (3) whether doing so is a good idea. Our results, drawn from two exokernel systems [25, 48], show that the approach yields dramatic benefits. For example, Xok, an exokernel, runs a web server an order of magnitude faster than the closest equivalent on the same hardware, runs common unaltered Unix applications up to three times faster, and improves global system performance by up to a factor of five. The thesis also discusses some of the new techniques we have used to remove the overhead of protection. The most unusual technique, untrusted deterministic functions, enables an exokernel to verify that applications correctly track the resources they own, eliminating the need for the kernel to do so. Additionally, the thesis reflects on the subtle issues in using downloaded code for extensibility and the sometimes painful lessons learned in building three exokernel-based systems. By Dawson R. Engler.
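    The untrusted-deterministic-functions idea admits a loose sketch: the application supplies a pure function mapping its own metadata to the set of resources that metadata claims, and the kernel re-runs that function across each metadata update to check that nothing unowned is acquired. The Python below is an analogy under assumed names (inode_owned_blocks, kernel_checked_update) and a made-up metadata layout; it glosses over how a real exokernel sandboxes the function to guarantee determinism.

```python
def inode_owned_blocks(metadata: bytes) -> set[int]:
    """Application-supplied 'UDF': deterministically maps raw file-system
    metadata to the disk blocks it references. The layout here (consecutive
    4-byte little-endian block numbers) is invented for the example."""
    return {int.from_bytes(metadata[i:i + 4], "little")
            for i in range(0, len(metadata), 4)}

def kernel_checked_update(metadata_before: bytes,
                          metadata_after: bytes,
                          allocated: set[int]) -> bytes:
    """Kernel side: re-run the UDF on both versions of the metadata and
    accept the update only if every newly referenced block was in fact
    allocated to this application. The kernel never needs to understand
    the metadata format; it relies only on the function's determinism."""
    newly_claimed = (inode_owned_blocks(metadata_after)
                     - inode_owned_blocks(metadata_before))
    if not newly_claimed <= allocated:
        raise PermissionError("metadata claims blocks it does not own")
    return metadata_after        # safe to install

# Example: the application holds blocks {7, 9} and updates its metadata
# to reference block 7 in addition to the block 3 it already referenced.
before = (3).to_bytes(4, "little")
after = before + (7).to_bytes(4, "little")
kernel_checked_update(before, after, allocated={7, 9})
```

    The payoff named in the abstract follows directly: because the kernel can re-check ownership itself, it no longer has to track per-application resource state on the applications' behalf.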