Optimal Scheduling for Asymmetric Multi-core Server Processors
The arrival rate of jobs at servers in a data-center can vary significantly over time. The servers in data-centers are typically multi-core processors, which allow jobs to be processed at different degrees of parallelism (DoPs), i.e., with different numbers of threads spawned per job. In this thesis, we show analytically as well as empirically that the DoP which minimizes the service time of jobs varies with the arrival rate of jobs. Moreover, recent trends have shown a move towards asymmetric multi-core server processors. These processors are made up of multiple clusters, each consisting of cores of a different type, of which only one cluster can be turned on at a given point in time while the others remain “dark”. We show that the choice of the optimal cluster also depends on the arrival rate. Based on these observations, we propose a run-time scheduler that determines the optimal DoP and performs inter-cluster migration to minimize the mean total service time. The main contributions of this thesis are:
• We propose a queueing theoretic model to determine the mean service time of jobs as a function of the DoP, the number of parallel jobs, and the cluster choice.
• Based on the queueing theoretic model, we show that both the optimal DoP and the optimal cluster choice depend on the job arrival rate, and propose a run-time scheduler that makes optimal cluster migration and DoP selection decisions to minimize the mean service time.
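The arrival-rate dependence of the optimal DoP can be illustrated with a minimal queueing sketch (not the thesis's actual model): treat a server with C cores as an M/M/m queue in which a job running at DoP k occupies k cores and speeds up per Amdahl's law. The parallel fraction p, the base service rate, and the core count are all illustrative assumptions.

```python
import math

def erlang_c(m, a):
    # Erlang-C: probability an arriving job must wait in an M/M/m queue,
    # where a = lambda/mu is the offered load (requires a < m).
    s = sum(a**k / math.factorial(k) for k in range(m))
    last = a**m / (math.factorial(m) * (1 - a / m))
    return last / (s + last)

def mean_service_time(lam, mu1, cores, dop, p=0.9):
    # Amdahl speedup for a job using `dop` threads (p = parallel fraction).
    speedup = 1.0 / ((1 - p) + p / dop)
    mu = mu1 * speedup          # per-job service rate at this DoP
    m = cores // dop            # jobs that can run concurrently
    if m == 0 or lam >= m * mu:
        return float("inf")     # unstable: queue grows without bound
    a = lam / mu
    wait = erlang_c(m, a) / (m * mu - lam)
    return wait + 1.0 / mu      # mean queueing delay + mean service time

def best_dop(lam, mu1=1.0, cores=16):
    # The DoP minimizing mean total service time shifts with arrival rate:
    # high DoP at low load (speedup dominates), lower DoP at high load
    # (parallelism across jobs dominates).
    return min(range(1, cores + 1),
               key=lambda d: mean_service_time(lam, mu1, cores, d))
```

Under these toy assumptions, a lightly loaded server favors the maximum DoP, while a heavily loaded one favors a moderate DoP that keeps more jobs in service at once — the qualitative effect the thesis establishes.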
Model-Based Design, Analysis, and Implementations for Power and Energy-Efficient Computing Systems
Modern computing systems are becoming increasingly complex. On one end of
the spectrum, personal computers now commonly support multiple processing
cores, and, on the other end, Internet services routinely employ thousands of
servers in distributed locations to provide the desired service to their users. In
such complex systems, concerns about energy usage and power consumption
are increasingly important. Moreover, growing awareness of environmental
issues has added to the overall complexity by introducing new variables to the
problem. In this regard, the ability to abstractly focus on the relevant details
allows model-based design to help significantly in the analysis and solution of
such problems.
In this dissertation, we explore and analyze model-based design for energy
and power considerations in computing systems. Although the presented techniques
are more generally applicable, we focus their application on large-scale
Internet services operating in U.S. electricity markets. Internet services are becoming
increasingly popular in the ICT ecosystem of today. The physical infrastructure
to support such services is commonly based on a group of cooperative
data centers (DCs) operating in tandem. These DCs are geographically
distributed to provide security and timing guarantees for their customers. To
provide services to millions of customers, DCs employ hundreds of thousands
of servers. These servers consume a large amount of energy that is traditionally
produced by burning coal and employing other environmentally hazardous
methods, such as nuclear and gas power generation plants. This large energy
consumption results in significant and fast-growing financial and environmental
costs. Consequently, for protection of local and global environments, governing
bodies around the globe have begun to introduce legislation to encourage
energy consumers, especially corporate entities, to increase the share of
renewable energy (green energy) in their total energy consumption. However,
in U.S. electricity markets, green energy is usually more expensive than energy
generated from traditional sources like coal or petroleum.
We model the overall problem in three sub-areas and explore different approaches
aimed at reducing the environmental footprint and operating costs
of multi-site Internet services, while honoring the Quality of Service (QoS) constraints
as contracted in service level agreements (SLAs).
Firstly, we model the load distribution among member DCs of a multi-site Internet
service. The use of green energy is optimized considering different factors
such as (a) geographically and temporally variable electricity prices, (b)
the multitude of available energy sources to choose from at each DC, (c) the necessity
to support more than one SLA, and, (d) the requirements to offer more
than one service at each DC. Various approaches are presented for solving this
problem and extensive simulations using Google’s setup in North America are
used to evaluate the presented approaches.
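The core of this first sub-area can be sketched as a toy allocation routine: fill the cheapest data centers first, subject to capacity. This is a deliberately simplified stand-in — the dissertation's actual formulation also handles multiple SLAs, multiple services per DC, and a mix of energy sources — and all names and numbers below are illustrative.

```python
def distribute_load(total_load, dcs):
    # dcs: list of (name, capacity, price_per_unit) tuples (hypothetical inputs).
    # Greedily route load to the cheapest DCs first, respecting capacity.
    plan, remaining = {}, total_load
    for name, cap, price in sorted(dcs, key=lambda d: d[2]):
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return plan
```

With geographically and temporally varying prices, rerunning such an optimization each pricing interval shifts load toward whichever sites are currently cheapest or greenest.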
Secondly, we explore the area of shaving the peaks in the energy demand of
large electricity consumers, such as DCs, by using a battery-based energy storage
system. The electrical demand of DCs is typically peaky, driven by the usage
cycles of their customers. The resulting peaks in electrical demand require the development
and maintenance of a costlier energy delivery mechanism, and are
often met using expensive gas or diesel generators which often have a higher
environmental impact. To shave the peak power demand, a battery can be used
which is charged during low load and is discharged during the peak loads.
Since the batteries are costly, we present a scheme to estimate the size of battery
required for any variable electrical load. The electrical load is modeled using
the concept of arrival curves from Network Calculus. Our analysis mechanism
can help determine the appropriate battery size for a given load arrival curve
to reduce the peak.
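In arrival-curve terms, the battery must cover the worst-case energy excess of the load over the capped grid draw, i.e. the supremum over all window lengths of α(Δ) − C·Δ. For a concrete load trace this reduces to a short simulation; the trace and cap below are hypothetical.

```python
def battery_capacity_for_cap(load, grid_cap):
    # load: per-interval energy demand; grid draw is limited to grid_cap.
    # The battery discharges when load > grid_cap and recharges from the
    # grid headroom otherwise. Returns the minimum capacity that keeps
    # total draw at or below grid_cap throughout the trace.
    net_deficit = 0.0   # energy currently owed to the battery
    worst = 0.0
    for d in load:
        net_deficit += d - grid_cap       # positive: discharge, negative: recharge
        net_deficit = max(net_deficit, 0.0)  # a full battery cannot absorb more
        worst = max(worst, net_deficit)
    return worst
```

For example, a trace [3, 5, 9, 8, 4, 2] capped at 6 needs a battery of 5 energy units: the two consecutive over-cap intervals draw 3 + 2 from storage before the load drops and recharging begins.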
Thirdly, we present techniques to employ intra-DC scheduling to regulate the
peak power usage of each DC. The model we develop is equally applicable to
an individual server with multi-/many-core chips as well as a complete DC
with an intermix of homogeneous and heterogeneous servers. We evaluate
these approaches on single-core and multi-core chip processors and present the
results.
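One simple way to picture intra-DC scheduling under a power cap — a toy sketch, not the model developed in the dissertation — is a greedy admission pass that runs as many jobs as fit under the cap and defers the rest to the next scheduling epoch. Job names and power figures are illustrative.

```python
def schedule_under_power_cap(jobs, power_cap):
    # jobs: list of (name, power_draw) tuples (hypothetical model).
    # Admit jobs cheapest-first until the cap is reached; defer the rest.
    running, deferred, used = [], [], 0.0
    for name, p in sorted(jobs, key=lambda j: j[1]):
        if used + p <= power_cap:
            running.append(name)
            used += p
        else:
            deferred.append(name)
    return running, deferred
```

The same cap-and-defer logic applies whether the unit being scheduled is a core on a many-core chip or a server in a heterogeneous DC, which is why a single model can cover both levels.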
Overall, our work demonstrates the value of model-based design for intelligent
load distribution across DCs, storage integration, and per DC optimizations
for efficient energy management to reduce operating costs and environmental
footprint for multi-site Internet services
Bubble budgeting: throughput optimization for dynamic workloads by exploiting dark cores in many core systems
Not all cores of a many-core chip can be active at the same time, due to reasons such as low CPU utilization in server systems and the limited power budget of the dark silicon era. These free cores (referred to as bubbles) can be placed near active cores for heat dissipation so that the active cores can run at a higher frequency level, boosting the performance of applications that run on active cores. Budgeting inactive cores (bubbles) to applications to boost performance faces three challenges. First, the number of bubbles varies due to open workloads. Second, communication distance increases when a bubble is inserted between two communicating tasks (a task is a thread or process of a parallel application), leading to performance degradation. Third, budgeting too many bubbles as coolers to running applications leaves insufficient cores for future applications. To address these challenges, in this paper a bubble budgeting scheme is proposed that budgets free cores to each application so as to optimize the throughput of the whole system. The throughput of the system depends on the execution time of each application and the waiting time incurred by newly arrived applications. Essentially, the proposed algorithm determines the number and locations of bubbles to optimize the performance and waiting time of each application, after which the tasks of each application are mapped to a core region. A Rollout algorithm is used to budget power to the cores as the last step. Experiments show that our approach achieves 50 percent higher throughput compared to state-of-the-art thermal-aware runtime task mapping approaches. The runtime overhead of the proposed algorithm is on the order of 1M cycles, making it an efficient runtime task management method for large-scale many-core systems.
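The budgeting trade-off the abstract describes — boosting running applications without starving future arrivals — can be sketched as a greedy allocator with diminishing returns. This is a toy stand-in, not the paper's actual Rollout-based algorithm, and the per-bubble speedup metric is a hypothetical input.

```python
import heapq

def budget_bubbles(free_cores, apps, reserve):
    # apps: list of (name, marginal_speedup_per_bubble) tuples (hypothetical metric).
    # Hold back `reserve` cores for future arrivals (challenge 3), then hand
    # each remaining bubble to the app with the largest current marginal gain.
    budget = max(free_cores - reserve, 0)
    alloc = {name: 0 for name, _ in apps}
    heap = [(-gain, name, gain) for name, gain in apps]
    heapq.heapify(heap)
    for _ in range(budget):
        _, name, base = heapq.heappop(heap)
        alloc[name] += 1
        gain = base / (alloc[name] + 1)   # diminishing marginal benefit per bubble
        heapq.heappush(heap, (-gain, name, base))
    return alloc
```

A real scheme would additionally weigh bubble *locations* (challenge 2: inserting a bubble between communicating tasks lengthens communication paths), which this sketch ignores.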
Design Space Exploration and Resource Management of Multi/Many-Core Systems
The increasing demand for processing a larger number of applications and the related data on computing platforms has resulted in reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms need to be energy-efficient and reliable, and they need to perform secure computations in the interest of the whole community. This book provides perspectives on the aforementioned aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.
ENERGY-TIME PERFORMANCE OF HETEROGENEOUS COMPUTING SYSTEMS: MODELS AND ANALYSIS
A Survey of Research into Mixed Criticality Systems
This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.
Resource and thermal management in 3D-stacked multi-/many-core systems
Continuous semiconductor technology scaling and the rapid increase in computational needs have stimulated the emergence of multi-/many-core processors. While up to hundreds of cores can be placed on a single chip, the performance capacity of the cores cannot be fully exploited due to high latencies of interconnects and memory, high power consumption, and low manufacturing yield in traditional (2D) chips. 3D stacking is an emerging technology that aims to overcome these limitations of 2D designs by stacking processor dies over each other and using through-silicon-vias (TSVs) for on-chip communication, and thus, provides a large amount of on-chip resources and shortens communication latency. These benefits, however, are limited by challenges in high power densities and temperatures.
3D stacking also enables integrating heterogeneous technologies into a single chip. One example of heterogeneous integration is building many-core systems with silicon-photonic network-on-chip (PNoC), which reduces on-chip communication latency significantly and provides higher bandwidth compared to electrical links. However, silicon-photonic links are vulnerable to on-chip thermal and process variations. These variations can be countered by actively tuning the temperatures of optical devices through micro-heaters, but at the cost of substantial power overhead.
This thesis claims that unearthing the energy efficiency potential of 3D-stacked systems requires intelligent and application-aware resource management. Specifically, the thesis improves energy efficiency of 3D-stacked systems via three major components of computing systems: cache, memory, and on-chip communication. We analyze characteristics of workloads in computation, memory usage, and communication, and present techniques that leverage these characteristics for energy-efficient computing.
This thesis introduces 3D cache resource pooling, a cache design that allows for flexible heterogeneity in cache configuration across a 3D-stacked system and improves cache utilization and system energy efficiency. We also demonstrate the impact of resource pooling on a real prototype 3D system with scratchpad memory.
At the main memory level, we claim that utilizing heterogeneous memory modules and memory object level management significantly helps with energy efficiency. This thesis proposes a memory management scheme at a finer granularity: memory object level, and a page allocation policy to leverage the heterogeneity of available memory modules and cater to the diverse memory requirements of workloads.
On the on-chip communication side, we introduce an approach to limit the power overhead of PNoC in (3D) many-core systems through cross-layer thermal management. Our proposed thermally-aware workload allocation policies, coupled with an adaptive thermal tuning policy, minimize the required thermal tuning power for PNoC and, in this way, help broader integration of PNoC. The thesis also introduces techniques in placement and floorplanning of optical devices to reduce optical loss and, thus, laser source power consumption.
Scalable Task Schedulers for Many-Core Architectures
This thesis develops schedulers for many-cores with different optimization objectives. The proposed schedulers are designed to scale as the number of cores in many-cores increases, while continuing to provide guarantees on the quality of the schedule.