48 research outputs found

    A Bag-of-Tasks Scheduler Tolerant to Temporal Failures in Clouds

    Full text link
    Cloud platforms have emerged as a prominent environment for executing high performance computing (HPC) applications, providing on-demand resources as well as scalability. They usually offer different classes of Virtual Machines (VMs) which ensure different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand VMs, while spot VMs are unused instances available at a lower price. Despite the monetary advantages, a spot VM can be terminated, stopped, or hibernated by EC2 at any moment. Using both hibernation-prone spot VMs (for cost's sake) and on-demand VMs, we propose in this paper a static scheduling strategy for HPC applications composed of independent tasks (bag-of-tasks) with deadline constraints. If a spot VM hibernates and does not resume within a time that still guarantees the application's deadline, a temporal failure takes place. Our scheduling therefore aims at minimizing the monetary cost of bag-of-tasks applications in the EC2 cloud while respecting their deadlines and avoiding temporal failures. To this end, our algorithm statically creates two scheduling maps: (i) the first one contains, for each task, its starting time and the VM (i.e., an available spot or on-demand VM with the current lowest price) on which the task should execute; (ii) the second one contains, for each task allocated to a spot VM in the first map, its starting time and the on-demand VM on which it should be executed to meet the application deadline and avoid a temporal failure. The latter map is used whenever the hibernation period of a spot VM exceeds a time limit. Performance results from simulations with task execution traces, the configuration of Amazon EC2 VM classes, and VM market price history confirm that our scheduling is effective and tolerates temporal failures.
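
    As a rough sketch of the two-map idea described above (the data structures, names, and fallback check are assumptions for illustration, not the paper's implementation), the primary map places tasks on the cheapest available VMs and the backup map is consulted only when a spot VM's hibernation exceeds the time limit:

```python
# Illustrative sketch of the two-map static schedule; names and structures
# are assumptions, not the paper's actual implementation.
from dataclasses import dataclass

@dataclass
class Placement:
    vm_id: str         # VM the task is assigned to
    start_time: float  # planned start time (seconds from submission)

# Map 1: primary plan, tasks on the cheapest available spot/on-demand VMs.
primary_map = {
    "task-1": Placement("spot-a", 0.0),
    "task-2": Placement("ondemand-b", 0.0),
}
# Map 2: backup plan, only for tasks placed on spot VMs in Map 1.
backup_map = {
    "task-1": Placement("ondemand-c", 120.0),
}

def placement_for(task, spot_hibernation_exceeded):
    """Fall back to the backup map when a spot VM's hibernation lasts longer
    than the limit that still allows the deadline to be met."""
    if spot_hibernation_exceeded(primary_map[task].vm_id):
        return backup_map.get(task, primary_map[task])
    return primary_map[task]

# Example: spot-a has been hibernated for too long, so task-1 moves to Map 2.
print(placement_for("task-1", spot_hibernation_exceeded=lambda vm: vm == "spot-a"))
# Placement(vm_id='ondemand-c', start_time=120.0)
```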

    Real-time sidewalk slope calculation through integration of GPS trajectory and image data to assist people with disabilities in navigation

    Get PDF
    People with disabilities face many obstacles in everyday outdoor travel. One of the most notable obstacles is a steep slope on a sidewalk segment. Current navigation systems and services do not all support map databases with slope attributes and cannot calculate sidewalk slope in real time. In this paper, we present a technique for calculating the slopes of sidewalk segments from image data and predicting the most suitable route for each individual user through integration with GPS trajectories. Our technique uses GPS trajectory data to identify the sidewalk segment on which the traveler will most probably pass, together with images of the identified segment. Through edge detection techniques we detect the edges of background objects such as buildings, billboards, and walls. The slope of the segment is then calculated by comparing its line representation in the map with the detected edges. Our experimental results indicate effective calculation of sidewalk slopes.
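
    A minimal sketch of how such a slope angle could be estimated from detected edges, assuming an OpenCV-based pipeline (Canny edges plus a probabilistic Hough transform); the thresholds, the median aggregation, and the comparison against the map's reference line are illustrative assumptions, not the authors' exact method:

```python
# Minimal sketch: estimate a slope angle from edges in a sidewalk image.
# Thresholds, median aggregation, and the reference-line comparison are
# illustrative assumptions, not the authors' actual pipeline.
import cv2
import numpy as np

def estimate_slope_deg(image_path, map_reference_angle_deg=0.0):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=80, minLineLength=40, maxLineGap=10)
    if lines is None:
        return None
    # Angle of each detected edge segment relative to the horizontal axis.
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    # Compare the dominant edge orientation with the map's line representation.
    return float(np.median(angles)) - map_reference_angle_deg
```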

    Strategies for improving the sustainability of data centers via energy mix, energy conservation, and circular energy

    Get PDF
    Information and communication technologies (ICT) are increasingly permeating our daily life, and we ever more commit our data to the cloud. Events like the COVID-19 pandemic put an exceptional burden upon ICT. This drives the deployment and use of data centers, which increases energy use and environmental impact. The scope of this work is to summarize the present situation of data centers with respect to environmental impact and opportunities for improvement. First, we introduce the topic, presenting estimated energy use and emissions. Then, we review proposed strategies for energy efficiency and conservation in data centers. Energy use pertains to power distribution, ICT equipment, and non-ICT equipment (e.g., cooling). Existing and prospective strategies and initiatives in these sectors are identified. Key elements include innovative cooling techniques, natural resources, automation, low-power electronics, and equipment with extended thermal limits. Research perspectives are identified and estimates of improvement opportunities are mentioned. Finally, we present an overview of existing metrics, regulatory frameworks, and the bodies concerned.
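
    Among the metrics such surveys cover, Power Usage Effectiveness (PUE), the ratio of total facility energy to IT equipment energy, is the most widely used; a small worked example with invented figures:

```python
# Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
# The figures below are invented purely for illustration.
it_energy_kwh = 1_000_000        # annual energy of servers, storage, network
facility_energy_kwh = 1_500_000  # IT energy plus cooling, power distribution, lighting

pue = facility_energy_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")  # 1.50: every IT kWh costs an extra 0.5 kWh of overhead
```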

    A Hibernation Aware Dynamic Scheduler for Cloud Environments

    Get PDF
    Nowadays, cloud platforms usually offer several types of Virtual Machines (VMs) which provide different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per hour for on-demand VMs, while spot VMs are unused instances available at a lower price. Despite the monetary advantages, a spot VM can be terminated or hibernated by EC2 at any moment. In this work, we propose the Hibernation-Aware Dynamic Scheduler (HADS) to schedule applications composed of independent tasks (bag-of-tasks) with deadline constraints on both hibernation-prone spot VMs (for cost's sake) and on-demand VMs. We also consider the problem of temporal failures, which occur when a spot VM hibernates and does not resume within a time that still guarantees the application's deadline. Our dynamic scheduling approach aims at minimizing the monetary cost of executing bag-of-tasks applications while respecting their deadlines even in the presence of hibernation. It can also avoid temporal failures by using task migration and work-stealing techniques. Experimental results with real executions on Amazon EC2 VMs confirm the effectiveness of our scheduling, in terms of monetary cost and execution time, when compared with approaches based only on on-demand VMs. It is also shown that our strategy can tolerate temporal failures.
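
    A simplified sketch of the kind of check that could trigger task migration when a hibernated spot VM threatens the deadline (the function, its inputs, and the example numbers are assumptions, not HADS's actual logic):

```python
# Sketch of a hibernation-triggered migration check; thresholds and time
# estimates are assumptions made for illustration.
def must_migrate(now, deadline, remaining_work, resume_estimate, ondemand_rate=1.0):
    """Return True when waiting for the spot VM to resume would no longer
    leave enough time to finish the remaining work before the deadline."""
    time_left_if_we_wait = deadline - max(now, resume_estimate)
    return remaining_work / ondemand_rate > time_left_if_we_wait

# Example: 600 s of work left, spot VM expected back at t=500, deadline t=1000.
print(must_migrate(now=300, deadline=1000, remaining_work=600, resume_estimate=500))
# True -> migrate the tasks to an on-demand VM (or let idle VMs steal them).
```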

    TCEQ 2014 Annual Report Volume I: Technical Report

    Get PDF
    The Energy Systems Laboratory (Laboratory), at the Texas A&M Engineering Experiment Station of The Texas A&M University System, in fulfillment of its responsibilities under Texas Health and Safety Code Ann. § 388.003 (e), submits its annual report, Energy Efficiency/Renewable Energy (EE/RE) Impact in the Texas Emissions Reduction Plan (TERP) to the Texas Commission on Environmental Quality

    A reference model for integrated energy and power management of HPC systems

    Get PDF
    Optimizing a computer for highest performance dictates the efficient use of its limited resources. Computers as a whole are rather complex, so it is not sufficient to optimize hardware and software components independently. Instead, a holistic view that manages the interactions of all components is essential to achieve system-wide efficiency. For High Performance Computing (HPC) systems, the major limiting resources today are energy and power. The hardware mechanisms to measure and control energy and power are exposed to software, and the software systems using these mechanisms range from firmware, operating system, and system software to tools and applications. Efforts to improve the energy and power efficiency of HPC systems and the infrastructure of HPC centers achieve perpetual advances, but in isolation these efforts cannot cope with the rising energy and power demands of large-scale systems. A systematic way to integrate multiple optimization strategies, which build on complementary, interacting hardware and software systems, is missing. This work provides a reference model for integrated energy and power management of HPC systems: the Open Integrated Energy and Power (OIEP) reference model. The goal is to enable the implementation, setup, and maintenance of modular, system-wide energy and power management solutions. The proposed model goes beyond current practices, which focus on individual HPC centers or implementations, in that it can universally describe hierarchical energy and power management systems with a multitude of requirements. The model builds solid foundations so that it is understandable and verifiable and guarantees stable interaction of hardware and software components within a known and trusted chain of command. This work identifies the main building blocks of the OIEP reference model, describes their abstract setup, and shows concrete instances thereof. A principal aspect is how the individual components are connected and interface in a hierarchical manner, so that they can optimize for the global policy pursued as a computing center's operating strategy. In addition to the reference model itself, a method for applying it is presented and used to show the practicality of the model and its application. For future research in energy and power management of HPC systems, the OIEP reference model forms a cornerstone for planning, developing, and integrating innovative energy and power management solutions. For HPC systems themselves, it supports transparent management of current systems with their inherent complexity, allows novel solutions to be integrated into existing setups, and enables new systems to be designed from scratch. In fact, the OIEP reference model represents a basis for holistic, efficient optimization.
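
    As a toy illustration of the hierarchical chain-of-command idea, the sketch below delegates a power budget from a site-level component down to subsystems; the component names, weights, and proportional split are invented and are not part of the OIEP specification:

```python
# Toy illustration of a hierarchical chain of command for power budgets.
# Component names, weights, and the proportional split are invented; the
# OIEP model is a reference model, not a concrete implementation.
class Component:
    def __init__(self, name, weight=1.0, children=None):
        self.name = name
        self.weight = weight
        self.children = children or []

    def delegate(self, budget_watts, level=0):
        """Pass a power budget down the hierarchy proportionally to weights."""
        print(f'{"  " * level}{self.name}: {budget_watts:.0f} W')
        total = sum(c.weight for c in self.children)
        for child in self.children:
            child.delegate(budget_watts * child.weight / total, level + 1)

site = Component("site", children=[
    Component("hpc-system", weight=4.0, children=[
        Component("partition-a", weight=1.0),
        Component("partition-b", weight=3.0),
    ]),
    Component("cooling", weight=1.0),
])
site.delegate(1_000_000)  # a 1 MW site-level budget, split down the tree
```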

    A dynamic task scheduler tolerant to multiple hibernations in cloud environments

    Get PDF
    Cloud platforms usually offer several types of Virtual Machines (VMs) with different guarantees in terms of availability and volatility, provisioning the same resource through multiple pricing models. For instance, in the Amazon EC2 cloud, the user pays per use for on-demand VMs, while spot VMs are instances available at lower prices. However, a spot VM can be terminated or hibernated by EC2 at any moment. In this work, we propose the Hibernation-Aware Dynamic Scheduler (HADS), which schedules Bag-of-Tasks (BoT) applications with deadline constraints on both hibernation-prone spot VMs and on-demand VMs. HADS aims at minimizing the monetary cost of executing BoT applications on clouds while ensuring that their deadlines are respected even in the presence of multiple hibernations. Results collected from experiments on Amazon EC2 VMs, using synthetic applications and a NAS benchmark application, show the effectiveness of HADS in terms of monetary cost when compared to solutions using only on-demand VMs.
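
    A back-of-the-envelope comparison of the monetary costs involved, with invented prices and runtimes (real EC2 prices vary by region and over time), just to make the spot-versus-on-demand trade-off concrete:

```python
# Back-of-the-envelope spot vs. on-demand cost for a BoT run. All prices and
# runtimes below are invented for illustration only.
tasks, task_hours = 100, 0.5              # 100 independent tasks of 30 min each
spot_price, ondemand_price = 0.03, 0.10   # assumed $/VM-hour
ondemand_fallback_hours = 5.0             # hours re-run on on-demand after hibernations

spot_cost = tasks * task_hours * spot_price + ondemand_fallback_hours * ondemand_price
ondemand_only_cost = tasks * task_hours * ondemand_price
print(f"spot + fallback: ${spot_cost:.2f}  vs  on-demand only: ${ondemand_only_cost:.2f}")
# spot + fallback: $2.00  vs  on-demand only: $5.00
```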

    Alternative Energy Sources

    Get PDF
    The search for alternative sources of energy is an attempt to solve two of the main problems facing the modern world. Today's energy supply is mainly based on fossil fuels such as coal, oil, and natural gas. The first problem is the expected and observed depletion of deposits, not only of those readily available but also of less accessible ones. The other is global warming caused by emissions of greenhouse gases (mainly carbon dioxide) as well as other pollutants into the atmosphere. Mitigating the harmful effects of fossil fuel use is an obvious challenge for mankind. This Special Issue includes articles on the search for new raw materials and new technologies for obtaining energy, such as naturally occurring resources, methane hydrates, and biomass; new, more efficient technologies for generating electricity; and analyses of the possibilities and conditions for using these resources in practical applications.

    Contributions to the modeling of WCET calculation in cache memory environments

    Get PDF
    Real-time systems are becoming increasingly important in many areas. Good scheduling of these systems requires a precise and safe analysis of the worst-case execution time (WCET), with the analysis of the memory hierarchy being one of the main challenges. In this work we focus on improving the efficiency of the memory hierarchy in hard real-time systems with respect to its predictability, although other aspects such as energy consumption are also considered. This goal is achieved by reducing both the WCET bound and its analysis time, and by studying memory access patterns of tasks relevant to real-time systems. We begin by analyzing the impact of the instruction cache on the WCET, focusing on the Lock-MS WCET analysis method. In order to use this method, we design the algorithm needed to transform the control flow graph of the binary into a tree structure. This algorithm reduces the WCET analysis time without losing precision for a lockable instruction cache. We propose a loop-based dynamic locking heuristic that, applied to this method, obtains the optimal cache contents for the WCET in each of the regions determined by the heuristic. Besides reducing the WCET, since it exploits temporal reuse, it also reduces the analysis time. We then extend the WCET analysis by considering the instructions resulting from automatic vectorization. We find that code vectorization can be a good option to effectively reduce the WCET if it is applied to the loops that concentrate most of the execution time. It is therefore worthwhile to invest time and resources in good code vectorization in the context of real-time systems. Finally, we focus on the impact of the data cache by studying the data access pattern of matrix transposition and bounding its ideal hit ratio in the tiled version. From this study we obtain expressions on the cache parameters that guarantee that the ideal hit ratio is reached. Specifically, when the tile dimension equals the cache line size, the ideal hit ratio is reached with very few sets and only two ways in a set-associative cache. In addition, we compare our results with a cache-oblivious transposition algorithm.
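
    To make the tiled access pattern concrete, the sketch below performs a blocked matrix transpose; the matrix and tile sizes are arbitrary, and the ideal-hit-ratio guarantee mentioned above depends on the cache-parameter conditions derived in the thesis, not on this code as such:

```python
# Sketch of a tiled (blocked) matrix transpose. Tile and matrix sizes are
# arbitrary; the ideal-hit-ratio result discussed above depends on the cache
# parameters analyzed in the thesis, not on this code in particular.
def transpose_tiled(a, n, tile):
    b = [[0] * n for _ in range(n)]
    for ii in range(0, n, tile):                      # iterate over tiles
        for jj in range(0, n, tile):
            for i in range(ii, min(ii + tile, n)):    # transpose one tile
                for j in range(jj, min(jj + tile, n)):
                    b[j][i] = a[i][j]
    return b

n, tile = 8, 4   # e.g., tile dimension matching the cache line size in elements
a = [[i * n + j for j in range(n)] for i in range(n)]
assert transpose_tiled(a, n, tile) == [list(row) for row in zip(*a)]
```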