ERA: A Framework for Economic Resource Allocation for the Cloud
Cloud computing has reached significant maturity from a systems perspective,
but currently deployed solutions rely on rather basic economic mechanisms that
yield a suboptimal allocation of the costly hardware resources. In this paper we
present Economic Resource Allocation (ERA), a complete framework for scheduling
and pricing cloud resources, aimed at increasing the efficiency of cloud
resource usage by allocating resources according to economic principles. The
ERA architecture carefully abstracts the underlying cloud infrastructure,
enabling the development of scheduling and pricing algorithms independently of
the concrete lower-level cloud infrastructure and independently of its
concerns. Specifically, ERA is designed as a flexible layer that can sit on top
of any cloud system and interfaces with both the cloud resource manager and
with the users who reserve resources to run their jobs. The jobs are scheduled
based on prices that are dynamically calculated according to the predicted
demand. Additionally, ERA provides a key internal API to pluggable algorithmic
modules that include scheduling, pricing and demand prediction. We provide a
proof-of-concept software and demonstrate the effectiveness of the architecture
by testing ERA over both public and private cloud systems -- Microsoft's Azure
Batch and Hadoop/YARN. A broader intent of our work is to foster collaboration
between the economics and systems communities. To that end, we have developed a
simulation platform through which economics and systems experts can test their
algorithmic implementations.
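The pluggable scheduling, pricing, and demand-prediction modules described above can be sketched as follows. This is a minimal illustration, not ERA's actual API: all class and parameter names are hypothetical, the moving-average predictor and linear price rule are placeholder algorithms, and admission is a simple greedy pass over bids.

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_id: str
    cores: int
    max_price: float  # user's maximum willingness to pay per core-hour

class DemandPredictor:
    """Hypothetical pluggable module: predicts demand from recent requests."""
    def __init__(self):
        self.history = []

    def observe(self, requested_cores):
        self.history.append(requested_cores)

    def predicted_demand(self):
        # Placeholder prediction: moving average of the last three observations.
        recent = self.history[-3:]
        return sum(recent) / max(len(recent), 1)

class DynamicPricer:
    """Hypothetical pluggable module: price rises with predicted utilization."""
    def __init__(self, base_price, capacity):
        self.base_price = base_price
        self.capacity = capacity

    def price(self, predicted_demand):
        utilization = min(predicted_demand / self.capacity, 1.0)
        return self.base_price * (1.0 + utilization)

def schedule(jobs, pricer, predictor, free_cores):
    """Admit jobs whose bid meets the current dynamic price, highest bid first."""
    current_price = pricer.price(predictor.predicted_demand())
    admitted = []
    for job in sorted(jobs, key=lambda j: j.max_price, reverse=True):
        if job.max_price >= current_price and job.cores <= free_cores:
            admitted.append(job.job_id)
            free_cores -= job.cores
    return current_price, admitted
```

The point of the sketch is the separation of concerns: the scheduler only consumes the `price()` and `predicted_demand()` interfaces, so either module can be swapped without touching the others.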
A Lazy Bailout Approach for Dual-Criticality Systems on Uniprocessor Platforms
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. A challenge in the design of cyber-physical systems is to integrate the scheduling of tasks of different criticality while still providing service guarantees for the higher-criticality tasks in case of resource shortages caused by faults. While standard real-time scheduling is agnostic to the criticality of tasks, the scheduling of tasks with different criticalities is called mixed-criticality scheduling. In this paper we present the Lazy Bailout Protocol (LBP), a mixed-criticality scheduling method in which low-criticality jobs overrunning their time budget cannot threaten the timeliness of high-criticality jobs, while at the same time the method tries to complete as many low-criticality jobs as possible. The key principle of LBP is, instead of immediately abandoning low-criticality jobs when a high-criticality job overruns its optimistic WCET estimate, to put them in a low-priority queue for later execution. To compare mixed-criticality scheduling methods we introduce a formal quality criterion which, above all else, compares the schedulability of high-criticality jobs and only afterwards the schedulability of low-criticality jobs. Based on this criterion we prove that LBP behaves better than the original Bailout Protocol (BP). We show that LBP can be further improved by slack-time exploitation and by gain-time collection at runtime, resulting in LBPSG. We also show that these improvements of LBP perform better than the analogous improvements based on BP.
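The key LBP principle, demoting rather than abandoning low-criticality jobs on a budget overrun, can be sketched as below. This is an illustrative simplification, not the paper's protocol: the class and method names are hypothetical, and budget accounting, priorities, and timing analysis are omitted.

```python
from collections import deque

class LBPScheduler:
    """Sketch of the Lazy Bailout idea: on a HI-criticality budget overrun,
    LO-criticality jobs are moved to a low-priority queue instead of being
    abandoned, and are run later when spare capacity allows."""

    def __init__(self):
        self.ready = deque()     # normal-priority queue of (job_id, criticality)
        self.deferred = deque()  # demoted LO jobs awaiting spare capacity
        self.completed = []

    def submit(self, job_id, criticality):
        self.ready.append((job_id, criticality))

    def bailout(self):
        """A HI job overran its optimistic WCET estimate: keep HI jobs in the
        ready queue, demote LO jobs to the low-priority queue (lazy bailout)."""
        keep = deque()
        while self.ready:
            job = self.ready.popleft()
            (keep if job[1] == "HI" else self.deferred).append(job)
        self.ready = keep

    def run(self, slack_jobs=0):
        """Run all ready jobs, then up to `slack_jobs` deferred LO jobs,
        standing in for slack-time exploitation at runtime."""
        while self.ready:
            self.completed.append(self.ready.popleft()[0])
        for _ in range(min(slack_jobs, len(self.deferred))):
            self.completed.append(self.deferred.popleft()[0])
```

The contrast with the original Bailout Protocol is in `bailout()`: BP would drop the LO jobs outright, whereas here they survive in `deferred` and complete whenever slack or gain time materializes.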
An Analytical Solution for Probabilistic Guarantees of Reservation Based Soft Real-Time Systems
We show a methodology for the computation of the probability of deadline miss
for a periodic real-time task scheduled by a resource reservation algorithm. We
propose a modelling technique for the system that reduces the computation of
such a probability to that of the steady-state probability of an infinite-state
Discrete-Time Markov Chain with a periodic structure. This structure is
exploited to develop an efficient numeric solution where different
accuracy/computation time trade-offs can be obtained by operating on the
granularity of the model. More importantly, we offer a closed-form conservative
bound for the probability of a deadline miss. Our experiments reveal that the
bound remains reasonably close to the experimental probability in one real-time
application of practical interest. When this bound is used for the optimisation
of the overall Quality of Service for a set of tasks sharing the CPU, it
produces a good sub-optimal solution in a small amount of time.
Comment: IEEE Transactions on Parallel and Distributed Systems, Volume 27,
Issue 3, March 2016
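The reduction to a Markov-chain steady state can be illustrated with a much-simplified numeric sketch. This is not the paper's model or its closed-form bound: it truncates the backlog chain to a finite state space, uses plain power iteration for the steady state, and treats "deadline miss" as carried-over backlog plus new demand exceeding the per-period reservation budget, all of which are assumptions of this illustration.

```python
def steady_state_miss_probability(demand_pmf, budget, max_backlog=50, iters=2000):
    """Estimate the steady-state deadline-miss probability of a periodic task
    under a reservation serving `budget` execution units per period.

    demand_pmf: dict mapping per-period demand c to its probability.
    The backlog evolves as b' = clamp(b + c - budget, 0, max_backlog),
    a truncated Discrete-Time Markov Chain solved by power iteration.
    """
    n = max_backlog + 1
    dist = [0.0] * n
    dist[0] = 1.0  # start with an empty backlog
    for _ in range(iters):
        new = [0.0] * n
        for b, p in enumerate(dist):
            if p == 0.0:
                continue
            for c, pc in demand_pmf.items():
                nb = min(max(b + c - budget, 0), max_backlog)
                new[nb] += p * pc
        dist = new
    # A period "misses" when pending backlog plus fresh demand exceeds the budget.
    miss = 0.0
    for b, p in enumerate(dist):
        for c, pc in demand_pmf.items():
            if b + c > budget:
                miss += p * pc
    return miss
```

For a demand of 1 unit with probability 0.9 and 3 units with probability 0.1 under a budget of 2, the backlog behaves like a birth-death chain with downward drift, and the estimate converges to the analytic value 1/9, which is the kind of cross-check the accuracy/computation-time trade-off in the abstract refers to.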
Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior
This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic
causal model for predicting the behavior generated by modern percept-driven
robot plans. PHAMs represent aspects of robot behavior that cannot be
represented by most action models used in AI planning: the temporal structure
of continuous control processes, their non-deterministic effects, several modes
of their interferences, and the achievement of triggering conditions in
closed-loop robot plans.
The main contributions of this article are: (1) PHAMs, a model of concurrent
percept-driven behavior, its formalization, and proofs that the model generates
probably qualitatively accurate predictions; and (2) a resource-efficient
inference method for PHAMs based on sampling projections from probabilistic
action models and state descriptions. We show how PHAMs can be applied to
planning the course of action of an autonomous robot office courier, based on
analytical and experimental results.
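The sampling-based projection idea, drawing sample runs from probabilistic models of continuous control processes and their uncertain outcomes, can be sketched as follows. This is a toy stand-in, not PHAM inference: the exponential duration model, the per-step success probabilities, and all names are assumptions of this illustration.

```python
import random

def project_plan(steps, n_samples=10000, deadline=10.0, seed=0):
    """Monte Carlo projection of a plan's outcome, in the spirit of
    sampling projections from probabilistic action models.

    steps: list of (mean_duration, success_prob) pairs; durations are sampled
    from an exponential distribution (an assumed model), and each step may
    fail independently. Returns the estimated probability that all steps
    succeed and the plan finishes before `deadline`.
    """
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_samples):
        elapsed, ok = 0.0, True
        for mean_duration, success_prob in steps:
            elapsed += rng.expovariate(1.0 / mean_duration)  # sampled duration
            if rng.random() > success_prob:                  # sampled outcome
                ok = False
                break
        if ok and elapsed <= deadline:
            successes += 1
    return successes / n_samples
```

The resource efficiency claimed in the abstract comes from exactly this structure: each projected run is cheap, and accuracy is traded against the number of samples drawn.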
Scheduling in Mapreduce Clusters
MapReduce is a framework proposed by Google for processing huge amounts of data in a distributed environment. The simplicity of the programming model and the fault-tolerance feature of the framework make it very popular in Big Data processing.
As MapReduce clusters grow in popularity, their scheduling becomes increasingly important. On one hand, many MapReduce applications have high performance requirements, for example, on response time and/or throughput. On the other hand, with the increasing size of MapReduce clusters, energy-efficient scheduling of MapReduce clusters becomes a necessity. These scheduling challenges, however, have not been systematically studied.
The objective of this dissertation is to provide MapReduce applications with low cost and energy consumption through the development of scheduling theory and algorithms, energy models, and energy-aware resource management. In particular, we will investigate energy-efficient scheduling in hybrid CPU-GPU MapReduce clusters. This research is expected to enable breakthroughs in Big Data processing, particularly in providing green computing to Big Data applications such as social network analysis, medical care data mining, and financial fraud detection. The tools we propose to develop are expected to increase utilization and reduce energy consumption for MapReduce clusters. In this PhD dissertation, we propose to address the aforementioned challenges by investigating and developing 1) a match-making scheduling algorithm for improving the data locality of MapReduce applications, 2) a real-time scheduling algorithm for heterogeneous MapReduce clusters, and 3) an energy-efficient scheduler for hybrid CPU-GPU MapReduce clusters.
Advisers: Ying Lu and David Swanson
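The match-making idea for data locality can be sketched as below. This is an illustrative simplification, not the dissertation's algorithm: the function, the task representation, and the one-heartbeat "marker" rule are all assumptions of this sketch.

```python
def pick_task(node, pending_tasks, locality_marker):
    """Locality-aware match-making sketch: when `node` reports a free slot,
    prefer a task whose input block has a replica on that node. If none is
    local, let the node pass once (recorded in `locality_marker`) so that
    other nodes get a chance to claim their local tasks, and only assign a
    remote task on the node's second consecutive miss."""
    for task in pending_tasks:
        if node in task["replica_nodes"]:
            pending_tasks.remove(task)
            locality_marker.discard(node)
            return task                       # local assignment
    if node not in locality_marker:
        locality_marker.add(node)             # first miss: skip this heartbeat
        return None
    locality_marker.discard(node)             # second miss: accept remote work
    return pending_tasks.pop(0) if pending_tasks else None
```

The design trade-off the sketch captures is latency versus locality: deferring a non-local assignment by one heartbeat costs a little response time but raises the fraction of map tasks that read their input from local disk.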