Demand-Aware Network Designs of Bounded Degree
Traditionally, networks such as datacenter interconnects are designed to optimize worst-case performance under arbitrary traffic patterns. Such network designs, however, can be far from optimal for the actual workloads and traffic patterns they serve. This insight led to the development of demand-aware datacenter interconnects, which can be reconfigured depending on the workload.
Motivated by these trends, this paper initiates the algorithmic study of demand-aware networks (DANs), and in particular the design of bounded-degree networks. The inputs to the network design problem are a discrete communication request distribution, D, defined over communicating pairs from the node set V, and a bound, d, on the maximum degree. In turn, our objective is to design an (undirected) demand-aware network N = (V, E) of bounded degree d, which provides short routing paths between frequently communicating nodes distributed across N. In particular, the designed network should minimize the expected path length on N (with respect to D), a basic measure of the efficiency of the network.
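In the notation above, the objective is the expected path length of N with respect to D (a standard way to write it; the paper's exact notation may differ):

\[
\mathrm{EPL}(N, D) \;=\; \mathbb{E}_{(u,v)\sim D}\!\left[\mathrm{dist}_N(u,v)\right] \;=\; \sum_{(u,v)\in V\times V} D(u,v)\cdot \mathrm{dist}_N(u,v),
\]

minimized over all networks N = (V, E) with maximum degree at most d.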
We show that this fundamental network design problem exhibits interesting connections to several classic combinatorial problems and to information theory. We derive a general lower bound based on the entropy of the communication pattern D, and present asymptotically optimal demand-aware network design algorithms for important distribution families, such as sparse distributions and distributions of locally bounded doubling dimension.
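To make these quantities concrete, here is a small illustrative Python sketch (ours, not the paper's code) that evaluates a candidate design: the expected path length with respect to D, plus the base-d entropy of D as a rough reference point for the entropy-based lower bound (the paper's bound uses conditional entropies and hides constant factors).

import math
import networkx as nx

def expected_path_length(G, demand):
    # demand: dict mapping node pairs (u, v) to probabilities summing to 1
    return sum(p * nx.shortest_path_length(G, u, v)
               for (u, v), p in demand.items())

def entropy_base_d(demand, d):
    # Entropy of the demand distribution in base d (the degree bound);
    # only a rough reference for the paper's entropy lower bound.
    return -sum(p * math.log(p, d) for p in demand.values() if p > 0)

# Example: a 4-node cycle serving a small demand distribution
G = nx.cycle_graph(4)
demand = {(0, 1): 0.4, (0, 2): 0.3, (1, 3): 0.3}
print(expected_path_length(G, demand))  # 0.4*1 + 0.3*2 + 0.3*2 = 1.6
print(entropy_base_d(demand, d=3))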
Demand-Aware Network Design with Steiner Nodes and a Connection to Virtual Network Embedding
Emerging optical and virtualization technologies enable the design of more flexible and demand-aware networked systems, in which resources can be optimized toward the actual workload they serve. For example, in a demand-aware datacenter network, frequently communicating nodes (e.g., two virtual machines or a pair of racks in a datacenter) can be placed topologically closer, reducing communication costs and hence improving the overall network performance.
This paper revisits the bounded-degree network design problem underlying such demand-aware networks. Namely, given a distribution over communicating server pairs, we want to design a network with bounded maximum degree that minimizes the expected communication distance. In addition to this known problem, we introduce and study a variant where we allow Steiner nodes (i.e., additional routers) to be added to augment the network.
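To make the role of Steiner nodes concrete, the following toy construction (our illustration, not the paper's approximation algorithm) connects all servers as leaves of a tree of added routers, keeping every degree at most d:

import networkx as nx

def steiner_tree_network(servers, d):
    # Feasible bounded-degree design using Steiner (router) nodes:
    # servers become leaves (degree 1); each router gets up to d-1
    # children plus one parent, so its degree is at most d.
    assert d >= 3, "a branching tree needs degree bound at least 3"
    G = nx.Graph()
    G.add_nodes_from(servers)
    frontier = list(servers)
    router_id = 0
    while len(frontier) > 1:
        next_frontier = []
        for i in range(0, len(frontier), d - 1):
            router = f"r{router_id}"
            router_id += 1
            for child in frontier[i:i + d - 1]:
                G.add_edge(router, child)
            next_frontier.append(router)
        frontier = next_frontier
    return G

Any two servers are then within O(log n) hops of each other regardless of the demand; demand-aware designs improve on this by placing frequently communicating pairs even closer.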
We improve the understanding of this problem domain in several ways. First, we shed light on the complexity and hardness of the aforementioned problems, and study a connection between them and the virtual network embedding problem. We then provide a constant-factor approximation algorithm for the Steiner node version of the problem, and use it to improve over prior state-of-the-art algorithms for the original version of the problem with sparse communication distributions. Finally, we investigate various heuristic approaches to the bounded-degree network design problem, in particular providing a reliable heuristic algorithm with good experimental performance.
We report on an extensive empirical evaluation, using several real-world traffic traces from datacenters, and find that our approach results in improved demand-aware network designs.
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Going deeper and wider in neural architectures improves accuracy, while limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distractions keep DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, \textit{Liveness Analysis}, \textit{Unified Tensor Pool}, and \textit{Cost-Aware Recomputation}; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers.
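As a framework-free illustration of the recomputation idea (ours; SuperNeurons' actual runtime is a GPU memory scheduler, not this toy), the sketch below keeps only checkpointed activations in the forward pass and replays short segments in the backward pass. A cost-aware policy would checkpoint layers that are expensive to recompute (e.g., convolutions) and recompute cheap ones (e.g., ReLU or pooling):

def forward_with_checkpoints(layers, x, is_checkpoint):
    # Keep only the network input and checkpointed activations;
    # saved[k] is the activation entering layer k.
    saved = {0: x}
    for i, f in enumerate(layers):
        x = f(x)
        if is_checkpoint(i):
            saved[i + 1] = x
    return x, saved

def recompute_segment(layers, saved, i):
    # Recover the activation entering layer i by replaying the forward
    # pass from the nearest earlier checkpoint (compute traded for memory).
    j = max(k for k in saved if k <= i)
    x = saved[j]
    for f in layers[j:i]:
        x = f(x)
    return x

# Example: checkpoint only layer 1's output
layers = [lambda t: 2 * t, lambda t: t + 3, lambda t: t * t]
out, saved = forward_with_checkpoints(layers, 1.0, lambda i: i == 1)
assert recompute_segment(layers, saved, 1) == 2.0  # recomputed, not stored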
We also address the performance issues in those memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains at least 3.2432× deeper networks than current ones with the leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c.

Comment: PPoPP 2018: 23rd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming