
    Performance modelling with adaptive hidden Markov models and discriminatory processor sharing queues

    In modern computer systems, workload varies at different times and locations. It is important to model the performance of such systems via workload models that are both representative and efficient. For example, model-generated workloads represent realistic system behaviour, especially during peak times, when it is crucial to predict and address performance bottlenecks. In this thesis, we model performance, namely throughput and delay, using adaptive models and discrete queues. Hidden Markov models (HMMs) parsimoniously capture the correlation and burstiness of workloads with spatiotemporal characteristics. By adapting the batch training of standard HMMs to incremental learning, online HMMs act as benchmarks on workloads obtained from live systems (e.g. storage systems and financial markets) and reduce the time complexity of the Baum-Welch algorithm. Similarly, by extending the HMM to train on multiple traces simultaneously, workloads of different types are modelled in parallel by a multi-input HMM. Typically, the HMM-generated traces verify the throughput and burstiness of the real data. Applications of adaptive HMMs include predicting user behaviour in social networks and performance-energy measurements in smartphone applications. Equally important is measuring system delay through response times. For example, workloads such as Internet traffic arriving at routers are affected by queueing delays. To meet quality-of-service needs, queueing delays must be minimised and, hence, it is important to model and predict such queueing delays in an efficient and cost-effective manner. Therefore, we propose a class of discrete, processor-sharing queues for approximating queueing delay as response time distributions, which represent service level agreements at specific spatiotemporal levels. We adapt discrete queues to model job arrivals governed by a Markov-modulated Poisson process (MMPP) and served under discriminatory processor-sharing scheduling. Further, we propose a dynamic service allocation strategy to minimise delays in UDP traffic flows whilst maximising a utility function.
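    A rough sketch of the arrival model mentioned above: the snippet below generates job arrival times from a two-state Markov-modulated Poisson process (MMPP-2) by simulating competing exponentials for arrivals and phase switches. The rates, phase-switching parameters and function name are illustrative assumptions, not values or code from the thesis.

import random

def mmpp2_arrivals(n_arrivals, lam=(0.5, 5.0), q=(0.1, 0.3), seed=42):
    """Generate n_arrivals arrival times from a two-phase MMPP.

    lam[i] -- Poisson arrival rate while the modulating chain is in phase i
    q[i]   -- rate of leaving phase i (exponentially distributed sojourns)
    Parameter values are placeholders, not estimates from the thesis.
    """
    rng = random.Random(seed)
    t, phase, times = 0.0, 0, []
    while len(times) < n_arrivals:
        total_rate = lam[phase] + q[phase]
        t += rng.expovariate(total_rate)            # time to the next event
        if rng.random() < lam[phase] / total_rate:
            times.append(t)                         # the event is a job arrival
        else:
            phase = 1 - phase                       # the event is a phase switch
    return times

if __name__ == "__main__":
    print(["%.2f" % a for a in mmpp2_arrivals(10)])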

    Computational Sprinting: Exceeding Sustainable Power in Thermally Constrained Systems

    Although process technology trends predict that transistor sizes will continue to shrink for a few more generations, voltage scaling has stalled, and thus future chips are projected to be increasingly more power-hungry than previous generations. Particularly in mobile devices, which are severely cooling-constrained, it is estimated that the peak operation of a future chip could generate heat ten times faster than the device can sustainably vent. However, many mobile applications do not demand sustained performance; rather, they comprise short bursts of computation in response to sporadic user activity. To improve responsiveness for such applications, this dissertation proposes computational sprinting, in which a system greatly exceeds sustainable power margins (by up to 10×) to provide up to a few seconds of high-performance computation when a user interacts with the device. Computational sprinting exploits the material property of thermal capacitance to temporarily store the excess heat generated when sprinting. After sprinting, the chip returns to sustainable power levels and dissipates the stored heat when the system is idle. This dissertation: (i) broadly analyzes thermal, electrical, hardware, and software considerations to assess the feasibility of engineering a system which can provide the responsiveness of a platform with 10× higher sustainable power within today's cooling constraints, (ii) leverages existing sources of thermal capacitance to demonstrate sprinting on a real system today, and (iii) identifies the energy-performance characteristics of sprinting operation to determine runtime sprint pacing policies.
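    The sprinting argument rests on a simple lumped thermal model: the sprint's excess power charges a thermal capacitance until a temperature limit is reached, after which the chip must fall back to sustainable power. The sketch below only illustrates that model; every parameter value (power levels, thermal resistance and capacitance, temperature limit) is a made-up assumption rather than a figure from the dissertation.

def simulate_sprint(p_sprint=10.0, p_sustain=1.0, t_ambient=25.0, t_limit=75.0,
                    r_th=50.0, c_th=2.0, dt=0.01, duration=120.0):
    """Integrate C*dT/dt = P - (T - T_amb)/R and report how long a sprint lasts.

    All parameters are illustrative assumptions; p_sustain is chosen to equal
    (t_limit - t_ambient) / r_th, the steady-state sustainable power.
    """
    temp, time, sprint_end = t_ambient, 0.0, None
    while time < duration:
        power = p_sprint if sprint_end is None else p_sustain
        # Forward-Euler step of the single-node RC thermal model.
        temp += dt * (power - (temp - t_ambient) / r_th) / c_th
        time += dt
        if sprint_end is None and temp >= t_limit:
            sprint_end = time                       # thermal headroom exhausted
    return sprint_end, temp

if __name__ == "__main__":
    seconds, final_temp = simulate_sprint()
    print(f"sprint lasted ~{seconds:.1f}s, final temperature {final_temp:.1f} C")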

    Decentralising resource management in operating systems

    This dissertation explores operating system mechanisms to allow resource-aware applications to be involved in the process of managing resources, under the premise that these applications (1) potentially have some (implicit) notion of their future resource demands and (2) can adapt their resource demands. The general idea is to provide feedback to resource-aware applications so that they can proactively participate in the management of resources. This approach has the benefit that resource management policies can be removed from central entities and the operating system only has to provide mechanisms. Furthermore, in contrast to centralised approaches, application-specific features can be more easily exploited. To achieve this aim, I propose to deploy a microeconomic theory, namely congestion or shadow pricing, which has recently received attention for managing congestion in communication networks. Applications are charged based on the potential "damage" they cause to other consumers by using resources. Consumers interpret these congestion charges as feedback signals which they use to adjust their resource consumption. It can be shown theoretically that such a system, with consumers merely acting in their own self-interest, will converge to a social optimum. This dissertation focuses on the operating system mechanisms required to decentralise resource management this way. In particular, it identifies four mechanisms: pricing & charging, credit accounting, resource usage accounting, and multiplexing. While the latter two are mechanisms generally required for the accurate management of resources, pricing & charging and credit accounting are novel mechanisms. It is argued that congestion prices are the correct economic model in this context and provide appropriate feedback to applications. The credit accounting mechanism is necessary to ensure the overall stability of the system by assigning value to credits.
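    To make the shadow-pricing feedback loop concrete, here is a minimal sketch in which consumers with logarithmic utilities respond to a congestion charge and the price converges so that total demand matches capacity. The weights, capacity and price-update rule are illustrative assumptions, not the dissertation's actual mechanism.

def shadow_pricing(weights=(1.0, 2.0, 3.0), capacity=10.0,
                   price=0.1, step=0.05, iterations=200):
    """Consumer i with log utility w_i*log(x_i) demands x_i = w_i / price."""
    for _ in range(iterations):
        demands = [w / price for w in weights]      # self-interested responses
        excess = sum(demands) - capacity            # congestion signal
        price = max(1e-6, price + step * excess)    # raise the charge if congested
    demands = [w / price for w in weights]          # allocations at the final price
    return price, demands

if __name__ == "__main__":
    price, demands = shadow_pricing()
    # Converges to price = sum(weights) / capacity, i.e. allocations proportional
    # to the weights, which is the social optimum for these log utilities.
    print(f"equilibrium price {price:.3f}, allocations {[round(d, 2) for d in demands]}")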

    New techniques to model energy-aware I/O architectures based on SSD and hard disk drives

    For years, performance improvements in the computer I/O subsystem and in other subsystems have advanced at their own pace, with the I/O subsystem improving more slowly and thus making overall system speed dependent on I/O subsystem speed. One of the main factors behind this imbalance is the inherent nature of disk drives, which has allowed big advances in disk densities but not as many in disk performance. Thus, to improve I/O subsystem performance, disk drives have become an object of study for many researchers, who in some cases have to use different kinds of models. Other research studies aim to improve I/O subsystem performance by tuning more abstract I/O levels. Since disk drives lie behind those levels, either real disk drives or models of them need to be used. One of the most common techniques to evaluate the performance of a computer I/O subsystem is detailed simulation models that include specific features of storage devices such as disk geometry, zone splitting, caching, read-ahead buffers and request reordering. However, as soon as a new technological innovation is added, those models need to be reworked to include the new characteristics, making it difficult to keep general models up to date. Our alternative is to model a storage device as a black-box probabilistic model, where the storage device itself, its interface and the interconnection mechanisms are modeled as a single stochastic process, defining the service time as a random variable with an unknown distribution. This approach generates disk service times with less computational power by means of a variate generator included in a simulator, and thereby achieves greater scalability in simulation-based I/O subsystem performance evaluations. Lately, energy saving in computing systems has become an important need. In mobile computers, battery life is limited to a certain amount of time, and not wasting energy in certain parts would extend the usage of the computer. Here, again, the computer I/O subsystem stands out as a field of study, because disk drives, which are a main part of it, are among the most power-consuming elements due to their mechanical nature. In server or enterprise computers, where the number of disks increases considerably, power saving may reduce the cooling requirements for heat dissipation and, thus, large monetary costs. This dissertation also considers the question of saving energy in the disk drive by taking advantage of diverse devices in hybrid storage systems composed of Solid State Disks (SSDs) and disk drives. SSDs and disk drives have different power characteristics, with SSDs consuming much less power than disk drives. In this thesis, several techniques that use SSDs as supporting devices for disk drives are proposed. Various options for managing SSD and disk devices in such hybrid systems are examined, and it is shown that the proposed methods save energy and monetary costs in diverse scenarios. A simulator composed of disk and SSD devices was implemented. This thesis studies the design and evaluation of the proposed approaches with the help of realistic workloads.
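    As an illustration of the black-box approach, the following sketch draws synthetic service times from an empirical distribution fitted to measured ones (inverse-transform sampling over the empirical quantiles). The class name and the measured sample are hypothetical placeholders, not data or code from the thesis.

import random

class EmpiricalServiceTime:
    """Variate generator: inverse-transform sampling from an empirical CDF."""

    def __init__(self, measured_times, seed=1):
        self.samples = sorted(measured_times)       # measured service times (ms)
        self.rng = random.Random(seed)

    def draw(self):
        # Pick a uniform quantile and return the matching order statistic,
        # interpolating linearly between neighbouring measurements.
        u = self.rng.random() * (len(self.samples) - 1)
        lo = int(u)
        hi = min(lo + 1, len(self.samples) - 1)
        return self.samples[lo] + (u - lo) * (self.samples[hi] - self.samples[lo])

if __name__ == "__main__":
    measured_ms = [0.8, 1.1, 1.3, 2.0, 2.4, 3.1, 4.8, 7.5, 9.9, 12.0]  # placeholder data
    gen = EmpiricalServiceTime(measured_ms)
    print([round(gen.draw(), 2) for _ in range(5)])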