Joint Cache Partition and Job Assignment on Multi-Core Processors
Multicore shared cache processors pose a challenge for designers of embedded
systems who try to achieve minimal and predictable execution time of workloads
consisting of several jobs. To address this challenge, the cache is statically
partitioned among the cores and the jobs are assigned to the cores so as to
minimize the makespan. Several heuristic algorithms have been proposed that
jointly decide how to partition the cache among the cores and assign the jobs.
We initiate a theoretical study of this problem which we call the joint cache
partition and job assignment problem.
By a careful analysis of the possible cache partitions we obtain a constant
approximation algorithm for this problem. For some practical special cases we
obtain a 2-approximation algorithm, and show how to improve the approximation
factor even further by allowing the algorithm to use additional cache. We also
study possible improvements that can be obtained by allowing dynamic cache
partitions and dynamic job assignments.
We define a natural special case of the well-known scheduling problem on
unrelated machines, in which machines are ordered by "strength". Our joint cache
partition and job assignment problem generalizes this scheduling problem, which
we believe is of independent interest. We give a polynomial-time algorithm for
this scheduling problem for instances obtained by fixing the cache partition in
a practical case of the joint cache partition and job assignment problem where
job loads are step functions.
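The joint problem described above can be made concrete with a tiny exhaustive-search sketch. The step-function load model and the brute-force enumeration below are illustrative assumptions, not the paper's approximation algorithm: a job's load is a non-increasing step function of its core's cache share, and we search all cache partitions and job assignments for the minimum makespan.

```python
from itertools import product

def partitions(total, cores):
    """All ways to split `total` cache units among `cores` cores."""
    if cores == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in partitions(total - first, cores - 1):
            yield (first,) + rest

def step_load(steps):
    """Build a non-increasing step load function from (threshold, load)
    pairs sorted by threshold: the load is the value attached to the
    largest threshold not exceeding the cache size."""
    def load(cache):
        value = steps[0][1]
        for threshold, v in steps:
            if cache >= threshold:
                value = v
        return value
    return load

def best_schedule(jobs, total_cache, cores):
    """Exhaustively search cache partitions and job assignments; returns
    (makespan, partition, assignment). Exponential time -- for tiny
    instances only, to illustrate the joint problem, not to solve it."""
    best = (float("inf"), None, None)
    for part in partitions(total_cache, cores):
        for assign in product(range(cores), repeat=len(jobs)):
            spans = [0.0] * cores
            for j, core in enumerate(assign):
                spans[core] += jobs[j](part[core])
            ms = max(spans)
            if ms < best[0]:
                best = (ms, part, assign)
    return best
```

For example, two identical jobs whose load drops from 4 to 1 once their core owns 2 cache units are best co-located on a single core holding the whole cache, giving makespan 2 rather than 4.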
Algorithms for Hierarchical and Semi-Partitioned Parallel Scheduling
We propose a model for scheduling jobs in a parallel machine setting that takes into account the cost of migrations by assuming that the processing time of a job may depend on the specific set of machines among which the job is migrated. For the makespan minimization objective, the model generalizes classical scheduling problems such as unrelated parallel machine scheduling, as well as novel ones such as semi-partitioned and clustered scheduling. In the case of a hierarchical family of machines, we derive a compact integer linear programming formulation of the problem and leverage its fractional relaxation to obtain a polynomial-time 2-approximation algorithm. Extensions that incorporate memory capacity constraints are also discussed.
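The migration-aware processing-time model can be illustrated with a toy makespan computation. Splitting each job's work evenly across its machine set is a simplifying assumption of this sketch (real schedules may split work unevenly), and the data layout is hypothetical, not the paper's ILP formulation.

```python
def makespan(proc_time, assignment, machines):
    """Makespan under a migration-aware model: proc_time[j][S] is job j's
    total processing time when it may migrate within machine set S (a
    frozenset) -- a wider set means cheaper balancing but migration
    overhead may inflate the total. As a simplification, each job's work
    is split evenly across its chosen set."""
    load = {m: 0.0 for m in machines}
    for j, S in enumerate(assignment):
        share = proc_time[j][S] / len(S)
        for m in S:
            load[m] += share
    return max(load.values())
```

A single job that takes 4 units pinned to one machine but 6 units (due to migration cost) when spread over two machines still finishes in makespan 3 when migrated, showing why the model's trade-off is nontrivial.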
Co-scheduling algorithms for cache-partitioned systems
Cache-partitioned architectures allow subsections of the shared last-level cache (LLC) to be exclusively reserved for some applications. This technique dramatically limits interactions between applications that are concurrently executing on a multi-core machine. Consider n applications that execute concurrently, with the objective to minimize the makespan, defined as the maximum completion time of the n applications. Key scheduling questions are: (i) which proportion of cache and (ii) how many processors should be given to each application? Here, we assign rational numbers of processors to each application, since they can be shared across applications through multi-threading. In this paper, we provide answers to (i) and (ii) for perfectly parallel applications. Even though the problem is shown to be NP-complete, we give key elements to determine the subset of applications that should share the LLC (while the remaining ones only use their smaller private cache). Building upon these results, we design efficient heuristics for general applications. Extensive simulations demonstrate the usefulness of co-scheduling when our efficient cache-partitioning strategies are deployed.
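For perfectly parallel applications with rational processor shares, a fixed cache split admits a closed-form optimal makespan: giving application i a share p_i = w_i / (s_i T) of the P processors finishes every application at T = (1/P) * sum_i w_i / s_i. The sketch below enumerates integer cache allocations under this formula; the linear speed-vs-cache functions used in the example are an assumed, illustrative model, not the paper's application profiles.

```python
from itertools import product

def makespan(works, speeds, processors):
    """Optimal makespan for perfectly parallel jobs with rational
    processor sharing: T = (1/P) * sum_i w_i / s_i, since giving job i
    the share p_i = w_i / (s_i * T) finishes all jobs exactly at T."""
    return sum(w / s for w, s in zip(works, speeds)) / processors

def best_cache_split(works, speed_fns, total_cache, processors):
    """Enumerate integer cache allocations and keep the split with the
    smallest makespan. speed_fns[i](c) is job i's per-processor speed
    when granted c units of LLC (a hypothetical profile)."""
    best = (float("inf"), None)
    for split in product(range(total_cache + 1), repeat=len(works)):
        if sum(split) > total_cache:
            continue
        speeds = [f(c) for f, c in zip(speed_fns, split)]
        t = makespan(works, speeds, processors)
        if t < best[0]:
            best = (t, split)
    return best
```

With two identical jobs of work 4, speed 1 + c, 2 cache units, and 2 processors, splitting the cache evenly gives makespan 2.0, beating either lopsided allocation.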
OS-Assisted Task Preemption for Hadoop
This work introduces a new task preemption primitive for Hadoop that allows
tasks to be suspended and resumed by exploiting existing memory management
mechanisms readily available in modern operating systems. Our technique fills
the gap between the two extreme cases of killing tasks (which wastes work)
and waiting for their completion (which introduces latency): experimental
results indicate superior performance and very small overheads compared to
existing alternatives.
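The suspend/resume idea can be illustrated with POSIX job-control signals; note this is only a sketch of stopping and continuing a task, while the paper's primitive additionally relies on the OS paging out a suspended task's memory. The `sleep` worker stands in for a Hadoop task.

```python
import os
import signal
import subprocess
import time

def suspend_resume_demo():
    """Start a worker, suspend it (it stops consuming CPU but keeps its
    state), resume it, then stop it. Returns the worker's final return
    code, which is the negative signal number on POSIX systems."""
    proc = subprocess.Popen(["sleep", "30"])
    os.kill(proc.pid, signal.SIGSTOP)   # suspend: work preserved, no CPU used
    time.sleep(0.1)                     # ...another task could run here...
    os.kill(proc.pid, signal.SIGCONT)   # resume exactly where it stopped
    proc.terminate()                    # clean up the demo worker
    return proc.wait()
```

Unlike killing the task, the suspended process resumes from its exact pre-suspension state, which is the gap between "kill" and "wait" that the preemption primitive targets.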
MISO: Exploiting Multi-Instance GPU Capability on Multi-Tenant Systems for Machine Learning
GPU technology has been improving at an expedited pace in terms of size and
performance, empowering HPC and AI/ML researchers to advance the scientific
discovery process. However, this also leads to inefficient resource usage, as
most GPU workloads, including complicated AI/ML models, are not able to utilize
the GPU resources to their fullest extent -- encouraging support for GPU
multi-tenancy. We propose MISO, a technique to exploit the Multi-Instance GPU
(MIG) capability on the latest NVIDIA datacenter GPUs (e.g., A100, H100) to
dynamically partition GPU resources among co-located jobs. MISO's key insight
is to use the lightweight, more flexible Multi-Process Service (MPS) capability
to predict the best MIG partition allocation for different jobs, without
incurring the overhead of implementing them during exploration. Due to its
ability to utilize GPU resources more efficiently, MISO achieves 49% and 16%
lower average job completion time than the unpartitioned and optimal static GPU
partition schemes, respectively.
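MISO's partition-selection step can be sketched as picking, among candidate MIG layouts, the one whose slowest job finishes earliest according to cheaply estimated runtimes. The runtime table and candidate layouts below are hypothetical stand-ins for the MPS-based predictions the paper describes; the slice counts summing to 7 mirror A100 MIG geometry.

```python
def pick_mig_partition(job_times, candidates):
    """job_times[j][s] = predicted runtime of job j on a MIG instance
    with s compute slices (estimated cheaply, e.g. from MPS co-runs).
    Pick the candidate partition minimizing the longest job, assuming
    one job per instance and len(candidate) == number of jobs."""
    def worst(partition):
        return max(job_times[j][s] for j, s in enumerate(partition))
    return min(candidates, key=worst)
```

For two jobs where job 0 benefits more from extra slices than job 1, the selector favors the layout giving job 0 the larger instance, without ever reconfiguring MIG during exploration.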
Framework for Automated Partitioning of Scientific Workflows on the Cloud
Scientific workflows have become a standardized way for scientists to represent a set of tasks that solve a certain problem. These workflows usually consist of a large number of jobs, both CPU-heavy and I/O-intensive, that are executed by a workflow management system on clouds, grids, supercomputers, etc. It has previously been shown that using a k-way partitioning algorithm to distribute a workflow's tasks between multiple machines in the cloud reduces the overall data communication and therefore lowers the cost of bandwidth usage. In this thesis, a framework was built to automate this process: it partitions, with ease, any workflow submitted by a scientist that is meant to be run on the Pegasus workflow management system in the cloud.
The framework provisions the instances in the cloud using CloudML, configures and installs all the software needed for the execution, runs and partitions the scientific workflow, and finally shows a time estimate for the workflow, so that the user has an approximate guideline for how many resources to provision in order to finish an experiment within a certain time frame.
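The partitioning step can be illustrated with a dependency-free greedy sketch: place each task on the machine block that already holds most of its neighbors, keeping blocks balanced, so that few edges (data transfers) cross blocks. This simple heuristic is a stand-in for a real k-way partitioner such as the one the framework applies, not its actual algorithm.

```python
def kway_partition(edges, num_tasks, k):
    """Greedy k-way partition of a workflow graph: assign tasks one by
    one to the block sharing the most edges with the task, breaking ties
    toward the least-loaded block, with balanced block capacities."""
    neighbors = [set() for _ in range(num_tasks)]
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    blocks = [set() for _ in range(k)]
    capacity = -(-num_tasks // k)  # ceil(num_tasks / k) keeps blocks balanced
    for task in range(num_tasks):
        best = None
        for b in range(k):
            if len(blocks[b]) >= capacity:
                continue
            score = (len(neighbors[task] & blocks[b]), -len(blocks[b]))
            if best is None or score > best[0]:
                best = (score, b)
        blocks[best[1]].add(task)
    return blocks

def cut_edges(edges, blocks):
    """Edges crossing blocks -- the data communicated between machines."""
    where = {}
    for b, block in enumerate(blocks):
        for t in block:
            where[t] = b
    return sum(1 for u, v in edges if where[u] != where[v])
```

On a workflow made of two independent task clusters, the heuristic recovers the natural split with zero cut edges, i.e. no inter-machine data transfer.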