Control techniques for thermal-aware energy-efficient real-time multiprocessor scheduling
The use of multicore microprocessors is not only attractive to industry; in many domains it is the only option. Real-time scheduling on these platforms is far more complex than on uniprocessors and generally worsens the over-provisioning problem, leading to the use of many more processors/cores than necessary. Algorithms based on fluid scheduling have been proposed that optimize processor utilization, but so far they generally suffer from drawbacks that keep them from practical application, not least the high number of context switches and migrations.

This thesis starts from the hypothesis that it is possible to design fluid-scheduling-based algorithms that optimize processor utilization while meeting timing, thermal, and energy constraints, with a low number of context switches and migrations, and that are compatible both with the offline generation of cyclic executives attractive to industry and with schedulers that integrate run-time control techniques for the efficient handling of aperiodic tasks as well as parametric deviations or small perturbations.

To this end, the thesis contributes several solutions. First, it improves a modeling methodology that represents all dimensions of the problem under a single formalism (Timed Continuous Petri Nets). Second, it proposes a method to generate a cyclic executive, computed in processor cycles, for a hard real-time task set on multiprocessors that optimizes the utilization of the processing cores while also respecting thermal and energy constraints, on the basis of a fluid schedule.

Accounting for the overhead induced by context switches and migrations in a cyclic executive poses a causality dilemma: the number of context switches (and hence their overhead) is not known until the cyclic executive has been generated, yet that number cannot be minimized until it has been computed. The thesis resolves this dilemma with an iterative method of proven convergence that minimizes the aforementioned overhead. In short, the thesis exploits the idea of fluid scheduling to maximize utilization (a major concern in industry) while generating a simple cyclic executive of minimal overhead (overhead being a major weakness of fluid-scheduling-based schedulers).

Finally, a method is proposed to use the references of the offline schedule established in the cyclic executive as set-points tracked by an online frequency controller, so that small perturbations and parametric variations can be handled and the management of (soft real-time) aperiodic tasks integrated, while guaranteeing the integrity of the hard real-time task set's execution.

These contributions constitute a novelty in the field, endorsed by the publications derived from this thesis work.
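The core idea of discretizing a fluid schedule into per-frame cycle budgets can be illustrated with a minimal sketch (the task set is hypothetical, and thermal/energy constraints and the iterative overhead refinement are deliberately omitted):

```python
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def fluid_frame_allocations(tasks, frame):
    """Give each task, per frame, a slice of processor cycles proportional
    to its fluid rate C_i / T_i (WCET over period)."""
    return {name: round(frame * c / t) for name, (c, t) in tasks.items()}

# Hypothetical hard real-time task set: name -> (WCET, period), both in cycles
tasks = {"t1": (2, 4), "t2": (1, 8), "t3": (3, 8)}
hyperperiod = reduce(lcm, (period for _, period in tasks.values()))
alloc = fluid_frame_allocations(tasks, frame=hyperperiod)
```

In an actual cyclic executive these budgets would be laid out within each frame so that the number of context switches and migrations stays low, which is precisely the part the iterative method above optimizes.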
Scheduling Storms and Streams in the Cloud
Motivated by emerging big streaming data processing paradigms (e.g., Twitter
Storm, Streaming MapReduce), we investigate the problem of scheduling graphs
over a large cluster of servers. Each graph is a job, where nodes represent
compute tasks and edges indicate data-flows between these compute tasks. Jobs
(graphs) arrive randomly over time, and upon completion, leave the system. When
a job arrives, the scheduler needs to partition the graph and distribute it
over the servers to satisfy load balancing and cost considerations.
Specifically, neighboring compute tasks in the graph that are mapped to
different servers incur load on the network; thus a mapping of the jobs among
the servers incurs a cost that is proportional to the number of "broken edges".
We propose a low complexity randomized scheduling algorithm that, without
service preemptions, stabilizes the system with graph arrivals/departures; more
importantly, it allows a smooth trade-off between minimizing average
partitioning cost and average queue lengths. Interestingly, to avoid service
preemptions, our approach does not rely on a Gibbs sampler; instead, we show
that the corresponding limiting invariant measure has an interpretation
stemming from a loss system. Comment: 14 pages.
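The "broken edges" cost described above is simple to state concretely. The following sketch (with a hypothetical 4-task job graph) counts edges whose endpoint tasks land on different servers:

```python
def partition_cost(edges, placement):
    """Cost of a job placement: the number of 'broken edges', i.e. edges
    whose endpoint tasks are mapped to different servers."""
    return sum(1 for u, v in edges if placement[u] != placement[v])

# Hypothetical 4-task job graph mapped onto two servers (0 and 1)
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d")]
placement = {"a": 0, "b": 0, "c": 1, "d": 1}
cost = partition_cost(edges, placement)  # edges b-c and a-d are broken
```

The scheduler's trade-off is between keeping this cost low (fewer broken edges, less network load) and keeping queues short.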
3E: Energy-Efficient Elastic Scheduling for Independent Tasks in Heterogeneous Computing Systems
Reducing energy consumption is a major design constraint for modern heterogeneous computing systems, in order to minimize electricity cost, improve system reliability and protect the environment. Conventional energy-efficient scheduling strategies developed for these systems do not sufficiently exploit the system's elasticity and adaptability for maximum energy savings, and do not simultaneously take into account user-expected finish times. In this paper, we develop a novel scheduling strategy named energy-efficient elastic (3E) scheduling for aperiodic, independent and non-real-time tasks with user-expected finish times on DVFS-enabled heterogeneous computing systems. The 3E strategy adjusts processors' supply voltages and frequencies according to the system workload, and makes trade-offs between energy consumption and user-expected finish times. Compared with other energy-efficient strategies, 3E significantly improves the scheduling quality and effectively enhances the system elasticity.
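The underlying DVFS trade-off can be sketched as follows. This is not the 3E algorithm itself, just a minimal illustration of the principle it exploits: with dynamic power scaling roughly as f^3 and execution time as 1/f, energy per task scales roughly as f^2, so the lowest frequency that still meets the user-expected finish time is the most energy-efficient choice (frequency levels and workload are hypothetical):

```python
def pick_frequency(cycles, finish_time, freq_levels, k=1.0):
    """Pick the lowest DVFS frequency that completes `cycles` of work by
    the user-expected `finish_time`; lower frequency => lower energy."""
    for f in sorted(freq_levels):
        if cycles / f <= finish_time:
            return f, k * cycles * f ** 2  # (frequency, rough energy estimate)
    return None, None  # infeasible even at the highest frequency

freq, energy = pick_frequency(cycles=2e9, finish_time=2.0,
                              freq_levels=[1e9, 1.5e9, 2e9])
```

Tightening the expected finish time forces a higher frequency and thus higher energy, which is exactly the trade-off the 3E strategy negotiates across a heterogeneous set of processors.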
Eventually-Consistent Federated Scheduling for Data Center Workloads
Data center schedulers operate at unprecedented scales today to accommodate
the growing demand for computing and storage power. The challenge that
schedulers face is meeting the requirements of scheduling speeds despite the
scale. To do so, most scheduler architectures use parallelism. However, these
architectures consist of multiple parallel scheduling entities that can only
utilize partial knowledge of the data center's state, as maintaining consistent
global knowledge or state would involve considerable communication overhead.
The disadvantage of scheduling without global knowledge is sub-optimal
placements-tasks may be made to wait in queues even though there are resources
available in zones outside the scope of the scheduling entity's state. This
leads to unnecessary queuing overheads and lower resource utilization of the
data center. In this paper, we extend our previous work on Megha, a federated
decentralized data center scheduling architecture that uses eventual
consistency. The architecture utilizes both parallelism and an
eventually-consistent global state in each of its scheduling entities to make
fast decisions in a scalable manner. In our work, we compare Megha with 3
scheduling architectures: Sparrow, Eagle, and Pigeon, using simulation. We also
evaluate Megha's prototype on a 123-node cluster and compare its performance
with Pigeon's prototype using cluster traces. The results of our experiments
show that Megha consistently reduces delays in job completion time when
compared to other architectures.Comment: 26 pages. Submitted to Elsevier's Ad Hoc Networks Journa
Comparison of Batch Scheduling for Identical Multi-Tasks Jobs on Heterogeneous Platforms
In this paper we consider the scheduling of a batch of identical jobs on a heterogeneous execution platform. A job is represented by a directed acyclic graph without forks (an in-tree) but with typed tasks. The execution resources are distributed and each resource can carry out a set of task types. The objective is to minimize the makespan of the batch execution. Three algorithms are studied in this context: an on-line algorithm, a genetic algorithm and a steady-state algorithm. The contribution of this paper lies in the experimental analysis of these algorithms and in their adaptation to this context. We show that their performance depends on the size of the batch and on the characteristics of the execution platform.
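A natural baseline in this setting is a greedy on-line earliest-finish-time rule for typed tasks on typed resources. The sketch below is not any of the three algorithms from the paper, just an illustration of the problem shape; the platform and tasks are hypothetical, and in-tree precedence constraints are ignored for simplicity:

```python
def online_eft(tasks, resources, supports, speed):
    """Send each task to the supporting resource with the earliest
    finish time; returns (makespan, assignment)."""
    ready = {r: 0.0 for r in resources}   # time at which each resource frees up
    assignment = {}
    for name, ttype, work in tasks:
        feasible = [r for r in resources if ttype in supports[r]]
        best = min(feasible, key=lambda r: ready[r] + work / speed[r])
        ready[best] += work / speed[best]
        assignment[name] = best
    return max(ready.values()), assignment

# Hypothetical platform: r1 runs types A and B at speed 2, r2 only B at speed 1
resources = ["r1", "r2"]
supports = {"r1": {"A", "B"}, "r2": {"B"}}
speed = {"r1": 2.0, "r2": 1.0}
tasks = [("t1", "A", 4.0), ("t2", "B", 2.0), ("t3", "B", 2.0)]
makespan, assignment = online_eft(tasks, resources, supports, speed)
```

The batch size matters because a larger batch gives off-line methods (genetic, steady-state) more regular structure to exploit, whereas the on-line rule commits task by task.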
Design and development of deadline based scheduling mechanisms for multiprocessor systems
Multiprocessor systems are nowadays the de facto standard for both personal computers and server workstations. The benefits of multicore technology will reach embedded devices and cellular phones in the next few years as well. Linux, as a General Purpose Operating System (GPOS), must support many different hardware platforms, from workstations to mobile devices. Unfortunately, Linux has not been designed to be a Real-Time Operating System (RTOS). As a consequence, time-sensitive applications (e.g. audio/video players) or simply real-time interactive applications may suffer degradations in their QoS. In this thesis we extend the implementation of the “Earliest Deadline First” algorithm in the Linux kernel from single-processor to multicore systems, allowing process migration among the CPUs. We also discuss the design choices and present experimental results that show the potential of our work.