1,693 research outputs found
Design and Implementation of Distributed Resource Management for Time Sensitive Applications
In this paper, we address distributed convergence to fair allocations of CPU
resources for time-sensitive applications. We propose a novel resource
management framework where a centralized objective for fair allocations is
decomposed into a pair of performance-driven recursive processes for updating:
(a) the allocation of computing bandwidth to the applications (resource
adaptation), executed by the resource manager, and (b) the service level of
each application (service-level adaptation), executed by each application
independently. We provide conditions under which the distributed recursive
scheme exhibits convergence to solutions of the centralized objective (i.e.,
fair allocations). Contrary to prior work on centralized optimization schemes,
the proposed framework exhibits adaptivity and robustness to changes in both
the number and nature of applications, while assuming minimal information
available to both the applications and the resource manager. Finally, we
validate our framework with simulations using the TrueTime toolbox in
MATLAB/Simulink.
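The abstract's pair of coupled recursions can be illustrated with a minimal sketch. All function names, step sizes, and the performance model below are assumptions for illustration, not the paper's actual formulation: the resource manager shifts bandwidth shares toward under-performing applications, while each application independently adjusts its own service level to its allotted share.

```python
def resource_adaptation(shares, performance, step=0.1):
    """Resource manager: shift bandwidth toward under-performing apps,
    keeping the allocation a valid share vector (non-negative, sums to 1)."""
    avg = sum(performance) / len(performance)
    shares = [max(s + step * (avg - p), 0.0) for s, p in zip(shares, performance)]
    total = sum(shares)
    return [s / total for s in shares]

def service_level_adaptation(level, share, demand_per_level=0.2, step=0.5):
    """Each application: raise its service level while its bandwidth share
    covers the implied demand, lower it otherwise (run independently per app)."""
    slack = share - level * demand_per_level
    return min(max(level + step * slack, 0.0), 1.0)

# One round of the distributed scheme for three applications.
shares = [1/3, 1/3, 1/3]
levels = [0.5, 0.5, 0.5]
performance = [0.4, 0.6, 0.5]   # assumed per-app performance readings
shares = resource_adaptation(shares, performance)
levels = [service_level_adaptation(l, s) for l, s in zip(levels, shares)]
```

Repeating this round drives the pair of recursions toward a balanced operating point, which is the convergence behavior the paper analyzes formally.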
Resource Management Algorithms for Computing Hardware Design and Operations: From Circuits to Systems
The complexity of computing hardware has increased at an unprecedented rate over the last few decades. At the chip level, we have entered the era of multi/many-core processors made of billions of transistors. With a transistor budget of this scale, many functions are integrated into a single chip, so chips today consist of many heterogeneous cores with intensive interaction among them. At the circuit level, with the end of Dennard scaling, continuously shrinking process technology has imposed a grand challenge on power density, and circuit variation further exacerbates the problem by consuming a substantial timing margin. At the system level, the rise of warehouse-scale computers and data centers has put resource management into a new perspective: the ability to dynamically provision computation resources in these gigantic systems is crucial to their performance. In this thesis, three resource management algorithms are discussed. The first assigns adaptivity resources to circuit blocks under a constraint on overhead; the adaptivity improves the circuit's resilience to variation in a cost-effective way. The second manages link bandwidth in application-specific Networks-on-Chip, guaranteeing Quality-of-Service for time-critical traffic with an emphasis on power. The third manages the computation resources of a data center with precautions against ill states of the system; Q-learning is employed to cope with the dynamic nature of the system, and Linear Temporal Logic is leveraged to describe temporal constraints. All three algorithms are evaluated in various experiments, and the results are compared against several prior works, showing the advantage of our methods.
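The third algorithm's core idea can be sketched with minimal tabular Q-learning. The states, actions, rewards, and toy environment below are illustrative assumptions, not the thesis's actual formulation; the heavy penalty on the "ill" state stands in for the LTL-based temporal constraints mentioned above.

```python
import random

random.seed(0)
STATES = ["low_load", "high_load", "ill"]      # assumed system states
ACTIONS = ["scale_down", "hold", "scale_up"]   # assumed allocation actions
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy environment: failing to scale up under high load enters the
    ill state, which the temporal constraint is meant to avoid."""
    if state == "high_load" and action != "scale_up":
        return "ill", -10.0           # constraint violated: heavy penalty
    if state == "ill":
        return "low_load", -1.0       # recovery cost
    return random.choice(["low_load", "high_load"]), 1.0

state = "low_load"
for _ in range(2000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, reward = step(state, action)
    # standard Q-learning update toward the bootstrapped target
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt
```

After training, the greedy policy learns to scale up under high load, steering the system away from the penalized ill state.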
Intelligent Management of Mobile Systems through Computational Self-Awareness
Runtime resource management for many-core systems is increasingly complex.
The complexity can be due to diverse workload characteristics with conflicting
demands, or limited shared resources such as memory bandwidth and power.
Resource management strategies for many-core systems must distribute shared
resource(s) appropriately across workloads, while coordinating the high-level
system goals at runtime in a scalable and robust manner.
To address the complexity of dynamic resource management in many-core
systems, state-of-the-art techniques based on heuristics have been proposed.
These methods, however, lack the formalism needed to provide robustness
against unexpected runtime behavior. A common remedy is to deploy classical
control approaches, which offer bounds and formal guarantees; yet traditional
control-theoretic methods lack the ability to adapt to (1) changing goals at
runtime (i.e., self-adaptivity) and (2) changing dynamics of the modeled
system (i.e., self-optimization).
In this chapter, we explore adaptive resource management techniques that
provide self-optimization and self-adaptivity by employing principles of
computational self-awareness, specifically reflection. By supporting these
self-awareness properties, the system can reason about the actions it takes by
considering the significance of competing objectives, user requirements, and
operating conditions while executing unpredictable workloads.
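Reflection in this sense means consulting a model of the system itself before acting. The sketch below is a toy illustration under assumed goals, actions, and predicted effects, not the chapter's actual implementation: the manager predicts each candidate action's outcome, picks the one matching the current goal (self-adaptivity), and refines its self-model from observations (self-optimization).

```python
class ReflectiveManager:
    def __init__(self, goal="performance"):
        self.goal = goal  # the goal can change at runtime (self-adaptivity)
        # self-model: predicted (performance, power) effect of each action
        self.model = {"boost": (0.9, 0.8), "balance": (0.6, 0.5), "save": (0.3, 0.2)}

    def act(self):
        # Reflection: reason over predicted outcomes instead of reacting blindly.
        if self.goal == "performance":
            return max(self.model, key=lambda a: self.model[a][0])
        return min(self.model, key=lambda a: self.model[a][1])

    def observe(self, action, measured):
        # Self-optimization: refine the self-model from observed behavior.
        perf, power = self.model[action]
        self.model[action] = ((perf + measured[0]) / 2, (power + measured[1]) / 2)

mgr = ReflectiveManager()
assert mgr.act() == "boost"   # performance goal picks highest predicted perf
mgr.goal = "power"            # goal changes at runtime
assert mgr.act() == "save"    # same manager now minimizes predicted power
```

The key design point is that decisions flow through the self-model rather than a fixed heuristic, so changing the goal or updating the model changes behavior without rewriting the policy.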
RAPID: Enabling Fast Online Policy Learning in Dynamic Public Cloud Environments
Resource sharing between multiple workloads has become a prominent practice
among cloud service providers, motivated by demand for improved resource
utilization and reduced cost of ownership. Effective resource sharing, however,
remains an open challenge due to the adverse effects that resource contention
can have on high-priority, user-facing workloads with strict Quality of Service
(QoS) requirements. Although recent approaches have demonstrated promising
results, those works remain largely impractical in public cloud environments
since workloads are not known in advance and may only run for a brief period,
thus prohibiting offline learning and significantly hindering online learning.
In this paper, we propose RAPID, a novel framework for fast, fully-online
resource allocation policy learning in highly dynamic operating environments.
RAPID leverages lightweight QoS predictions, enabled by
domain-knowledge-inspired techniques for sample efficiency and bias reduction,
to decouple control from conventional feedback sources and guide policy
learning at a rate orders of magnitude faster than prior work. Evaluation on a
real-world server platform with representative cloud workloads confirms that
RAPID can learn stable resource allocation policies in minutes, as compared
with hours in prior state-of-the-art, while improving QoS by 9.0x and
increasing best-effort workload performance by 19-43%.
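The core idea of decoupling control from slow feedback can be sketched as follows. The predictor, core counts, and QoS model below are illustrative assumptions, not RAPID's actual design: a lightweight proxy estimates QoS from cheap metrics, so the allocation policy can iterate every step instead of waiting for end-to-end QoS measurements.

```python
def predict_qos(cores, load):
    """Hypothetical lightweight QoS proxy: more cores per unit load -> better,
    saturating at 1.0 (QoS target fully met)."""
    return min(cores / (load * 4.0), 1.0)

def learn_allocation(load, qos_target=0.95, total_cores=16, max_steps=50):
    """Grow the latency-critical allocation until *predicted* QoS meets the
    target, leaving the remaining cores to best-effort workloads."""
    cores = 1
    for _ in range(max_steps):
        if predict_qos(cores, load) >= qos_target:
            break
        cores += 1   # fast inner loop: no waiting on real QoS feedback
    return cores, total_cores - cores   # (latency-critical, best-effort)

lc, be = learn_allocation(load=2.0)
```

Because each iteration costs only a prediction, the policy converges in a few cheap steps; with real QoS feedback, each of those steps would instead cost a full measurement window, which is the gap RAPID exploits.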