Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted in datacenters and to maximize performance,
it is necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
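The mixed traffic classes described above are typically handled with some form of prioritization. As a minimal illustrative sketch (not a mechanism from the paper; the class names and the priority ordering are assumptions made for the example), a strict-priority queue that serves interactive traffic ahead of deadline-bound and long-running traffic could look like:

```python
import heapq

# Assumed priority ordering for the example: lower number = served first.
PRIORITY = {"interactive": 0, "deadline": 1, "long-running": 2}

class ClassifiedQueue:
    """Serve packets of higher-priority traffic classes first (strict priority)."""
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserving FIFO order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = ClassifiedQueue()
q.enqueue("long-running", "backup-chunk")
q.enqueue("interactive", "search-query")
q.enqueue("deadline", "analytics-shard")
order = [q.dequeue() for _ in range(3)]
# order == ["search-query", "analytics-shard", "backup-chunk"]
```

Strict priority is only one point in the design space the paper surveys; weighted fair queuing or deadline-aware scheduling would avoid starving the long-running class.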
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters, not to survey all
existing solutions (which is virtually impossible given the massive body of
existing research). We hope to give readers a broad view of the options and
factors to weigh when considering traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks, which connect geographically
dispersed datacenters; these networks have been receiving increasing attention
recently and pose interesting, novel research problems.
Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
Optimal Net-Load Balancing in Smart Grids with High PV Penetration
Mitigating Supply-Demand mismatch is critical for smooth power grid
operation. Traditionally, load curtailment techniques such as Demand Response
(DR) have been used for this purpose. However, these cannot be the only
component of a net-load balancing framework for Smart Grids with high PV
penetration. These grids can sometimes exhibit supply surplus causing
over-voltages. Supply curtailment techniques such as Volt-Var Optimizations are
complex and computationally expensive. This increases the complexity of
net-load balancing systems used by the grid operator and limits their
scalability. Recently new technologies have been developed that enable the
rapid and selective connection of PV modules of an installation to the grid.
Taking advantage of these advancements, we develop a unified optimal net-load
balancing framework which performs both load and solar curtailment. We show
that when the available curtailment values are discrete, this problem is
NP-hard and develop bounded approximation algorithms for minimizing the
curtailment cost. Our algorithms produce fast solutions, given the tight timing
constraints required for grid operation. We also incorporate the notion of
fairness to ensure that curtailment is evenly distributed among all the nodes.
Finally, we develop an online algorithm which performs net-load balancing using
only data available for the current interval. Using both theoretical analysis
and practical evaluations, we show that our net-load balancing algorithms
provide solutions which are close to optimal in a small amount of time.
Comment: 11 pages. To be published in the 4th ACM International Conference on
Systems for Energy-Efficient Built Environments (BuildSys 17). Changes from
previous version: fixed a bug in Algorithm 1 which was causing some min-cost
solutions to be missed.
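The discrete-curtailment problem above is NP-hard, which is why the paper develops bounded approximation algorithms. As an illustrative sketch of the underlying combinatorial structure only (not the paper's algorithm; the curtailment amounts and costs below are invented for the example), a small pseudo-polynomial dynamic program that picks a minimum-cost subset of discrete curtailment options to cover a supply surplus could look like:

```python
# Sketch: min-cost covering knapsack over discrete curtailment options.
# options: list of (amount_curtailed, cost); surplus: required total curtailment.
def min_cost_curtailment(options, surplus):
    INF = float("inf")
    # best[s] = min cost to curtail at least s units (s capped at surplus)
    best = [INF] * (surplus + 1)
    best[0] = 0
    for amount, cost in options:
        # iterate s downward so each option is used at most once (0/1 choice)
        for s in range(surplus, -1, -1):
            if best[s] < INF:
                t = min(surplus, s + amount)
                best[t] = min(best[t], best[s] + cost)
    return best[surplus]  # INF if the surplus cannot be covered

# Hypothetical per-node options: (kW curtailed, curtailment cost)
options = [(3, 5), (4, 6), (2, 3), (5, 9)]
cost = min_cost_curtailment(options, surplus=6)
# cost == 9  (curtail the 4 kW and 2 kW options)
```

An exact DP like this runs in time proportional to the surplus, which is why, under the tight timing constraints of grid operation, the paper instead pursues fast bounded-approximation algorithms.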
Improved Analysis of Deterministic Load-Balancing Schemes
We consider the problem of deterministic load balancing of tokens in the
discrete model. A set of $n$ processors is connected into a $d$-regular
undirected network. In every time step, each processor exchanges some of its
tokens with each of its neighbors in the network. The goal is to minimize the
discrepancy between the number of tokens on the most-loaded and the
least-loaded processor as quickly as possible.
Rabani et al. (1998) present a general technique for the analysis of a wide
class of discrete load balancing algorithms. Their approach is to characterize
the deviation between the actual loads of a discrete balancing algorithm and
the distribution generated by a related Markov chain. The Markov chain can also
be regarded as the underlying model of a continuous diffusion algorithm. Rabani
et al. showed that after time $T = O(\log(Kn)/\mu)$, any algorithm of their
class achieves a discrepancy of $O(d\log n/\mu)$, where $\mu$ is the spectral
gap of the transition matrix of the graph, and $K$ is the initial load
discrepancy in the system.
In this work we identify some natural additional conditions on deterministic
balancing algorithms, resulting in a class of algorithms reaching a smaller
discrepancy. This class contains well-known algorithms, e.g., the Rotor-Router.
Specifically, we introduce the notion of cumulatively fair load-balancing
algorithms where in any interval of consecutive time steps, the total number of
tokens sent out over an edge by a node is the same (up to constants) for all
adjacent edges. We prove that algorithms which are cumulatively fair and where
every node retains a sufficient part of its load in each step, achieve a
discrepancy of $O(\min\{d\sqrt{\log n/\mu},\, d\sqrt{n}\})$ in time $O(T)$. We
also show that in general neither of these assumptions may be omitted without
increasing discrepancy. We then show by a combinatorial potential reduction
argument that any cumulatively fair scheme satisfying some additional
assumptions achieves a discrepancy of $O(d)$ almost as quickly as the
continuous diffusion process. This positive result applies to some of the
simplest and most natural discrete load balancing schemes.
Comment: minor corrections; updated literature overview
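To make the setting concrete, here is a small simulation sketch of a rotor-router-style deterministic balancer (assumptions made for the example: the graph is a cycle, i.e. 2-regular; each node retains roughly half its load and deals the rest token-by-token to its two neighbors via a per-node rotor pointer; this illustrates the class of schemes, not the paper's analysis):

```python
# One synchronous step of a rotor-router-style balancer on a cycle of n nodes.
def balance_step(load, rotor):
    n = len(load)
    new_load = [l - (l // 2) for l in load]  # each node retains ~half its tokens
    for v in range(n):
        send = load[v] // 2
        neighbors = ((v - 1) % n, (v + 1) % n)
        for _ in range(send):
            new_load[neighbors[rotor[v]]] += 1
            rotor[v] ^= 1  # advance the rotor: alternate between the two edges
    return new_load

load = [16, 0, 0, 0]        # all tokens on one node: initial discrepancy K = 16
rotor = [0] * len(load)     # per-node rotor state persists across steps
for _ in range(20):
    load = balance_step(load, rotor)
discrepancy = max(load) - min(load)
```

Because the dealing is round-robin rather than randomized, the scheme is both cumulatively fair (over any window, each edge of a node carries the same number of tokens up to a constant) and load-retaining, the two conditions the paper identifies.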
Improving DBMS performance through diverse redundancy
Database replication is widely used to improve both fault tolerance and DBMS performance. Non-diverse database replication has a significant limitation: it is effective against crash failures only. Diverse redundancy is an effective mechanism for tolerating a wider range of failures, including many non-crash failures. However, it has not been adopted in practice because many see DBMS performance as the main concern. In this paper we show experimental evidence that diverse redundancy (diverse replication) can bring benefits in terms of DBMS performance, too. We report on experimental results with an optimistic architecture built with two diverse DBMSs under a load derived from the TPC-C benchmark, which show that a diverse pair performs faster not only than non-diverse pairs but also than the individual copies of the DBMSs used. This result is important because it shows potential for DBMS performance better than anything achievable with the available off-the-shelf servers.
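The optimistic architecture can be sketched as follows (a hedged illustration only: the engine functions below are stand-ins rather than real DBMS drivers, and the paper's middleware is more involved): each read query is sent to both diverse engines concurrently, the first response is returned to the client, and the slower replica's answer can later be compared against it to detect non-crash failures.

```python
import concurrent.futures
import time

def engine_a(query):          # stand-in for DBMS A (the slower engine here)
    time.sleep(0.2)
    return ("A", query.upper())

def engine_b(query):          # stand-in for DBMS B (the faster engine here)
    time.sleep(0.01)
    return ("B", query.upper())

def diverse_pair(query):
    """Run the query on both diverse engines; return the first result."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(engine, query) for engine in (engine_a, engine_b)]
        done, _ = concurrent.futures.wait(
            futures, return_when=concurrent.futures.FIRST_COMPLETED)
        return next(iter(done)).result()

winner, result = diverse_pair("select 1")
# winner == "B": the pair is as fast as its faster member on each query
```

Because the two engines rarely share a worst-case query plan, the pair tends to track the faster engine per query, which is one intuition for the speedup the experiments report.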
A survey of machine learning techniques applied to self organizing cellular networks
In this paper, a survey of the literature of the past fifteen years involving Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. In order for future networks to overcome the current limitations and address the issues of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks but also a classification of each paper in terms of its learning solution, along with examples. The authors also classify each paper in terms of its self-organizing use-case and discuss how each proposed solution performed. In addition, a comparison between the most commonly found ML algorithms in terms of certain SON metrics is performed, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also provides future research directions and new paradigms that the use of more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain and fully enable the concept of SON in the near future.