4,823 research outputs found
A Constant-Factor Approximation for Wireless Capacity Maximization with Power Control in the SINR Model
In modern wireless networks, devices can set the power for each
transmission they carry out. Both experimental and theoretical results
indicate that such power control can significantly improve network capacity. We
study this problem in the physical interference model using SINR constraints.
In the SINR capacity maximization problem, we are given n pairs of senders
and receivers, located in a metric space (usually a so-called fading metric).
The algorithm shall select a subset of these pairs and choose a power level for
each of them with the objective of maximizing the number of simultaneous
communications. That is, the selected pairs must satisfy the SINR
constraints with respect to the chosen powers.
We present the first algorithm achieving a constant-factor approximation in
fading metrics. The best previous results depend on further network parameters
such as the ratio of the maximum and the minimum distance between a sender and
its receiver. Expressed only in terms of n, they are (trivial) Omega(n)
approximations.
Our algorithm still achieves an O(log n) approximation if we only assume a
general metric space rather than a fading metric. Furthermore, by using
standard techniques the algorithm can also be used in single-hop and multi-hop
scheduling scenarios. Here, we also get polylog(n) approximations.
Comment: 17 pages
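The SINR constraints referred to above can be illustrated with a short sketch. This is not the paper's algorithm, only a hedged feasibility check under standard assumptions: path-loss exponent `alpha`, SINR threshold `beta`, and ambient noise `noise` are illustrative parameter names, and the geometry is the Euclidean plane rather than a general fading metric.

```python
import math

def sinr_feasible(pairs, powers, alpha=3.0, beta=1.0, noise=1e-9):
    """Check whether every selected sender-receiver pair meets its SINR constraint.

    pairs:  list of ((sx, sy), (rx, ry)) sender/receiver coordinates
    powers: transmit power chosen for each pair
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for i, (si, ri) in enumerate(pairs):
        # Received signal strength at receiver i from its own sender.
        signal = powers[i] / dist(si, ri) ** alpha
        # Interference at receiver i from all other selected senders.
        interference = sum(
            powers[j] / dist(sj, ri) ** alpha
            for j, (sj, _) in enumerate(pairs) if j != i
        )
        if signal / (noise + interference) < beta:
            return False
    return True
```

For example, two well-separated links can transmit simultaneously, while two adjacent links at equal power cannot, which is exactly the trade-off the capacity maximization problem navigates when selecting a subset of pairs.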
Leveraging Physical Layer Capabilities: Distributed Scheduling in Interference Networks with Local Views
In most wireless networks, nodes have only limited local information about
the state of the network, which includes connectivity and channel state
information. With limited local information, each node's knowledge of the
network is mismatched with that of its neighbors, so decisions must be made in
a distributed manner. In this paper, we pose the following question: if every
node has network state
information only about a small neighborhood, how and when should nodes choose
to transmit? While link scheduling answers the above question for
point-to-point physical layers which are designed for an interference-avoidance
paradigm, we look for answers in cases when interference can be embraced by
advanced PHY layer design, as suggested by results in network information
theory.
To make progress on this challenging problem, we propose a constructive
distributed algorithm that achieves rates higher than link scheduling based on
interference avoidance, especially if each node knows more than one hop of
network state information. We compare our new aggressive algorithm to a
conservative algorithm we have presented in [1]. Both algorithms schedule
sub-networks such that each sub-network can employ advanced
interference-embracing coding schemes to achieve higher rates. Our innovation
is in the identification, selection and scheduling of sub-networks, especially
when sub-networks are larger than a single link.
Comment: 14 pages, submitted to IEEE/ACM Transactions on Networking, October 201
Topology-aware GPU scheduling for learning workloads in cloud environments
Recent advances in hardware, such as systems with multiple GPUs and their availability in the cloud, are enabling deep learning in various domains including health care, autonomous vehicles, and Internet of Things. Multi-GPU systems exhibit complex connectivity among GPUs and between GPUs and CPUs. Workload schedulers must consider hardware topology and workload communication requirements in order to allocate CPU and GPU resources for optimal execution time and improved utilization in shared cloud environments.
This paper presents a new topology-aware workload placement strategy to schedule deep learning jobs on multi-GPU systems. The placement strategy is evaluated with a prototype on a Power8 machine with Tesla P100 cards, showing speedups of up to ~1.30x compared to state-of-the-art strategies; the proposed algorithm achieves this result by allocating GPUs that satisfy workload requirements while preventing interference. Additionally, a large-scale simulation shows that the proposed strategy provides higher resource utilization and performance in cloud systems.
This project is supported by the IBM/BSC Technology Center for Supercomputing
collaboration agreement. It has also received funding from the European Research Council (ERC) under the European Union's Horizon
2020 research and innovation programme (grant agreement No 639595). It is
also partially supported by the Ministry of Economy of Spain under contract
TIN2015-65316-P and Generalitat de Catalunya under contract 2014SGR1051,
by the ICREA Academia program, and by the BSC-CNS Severo Ochoa program
(SEV-2015-0493). We thank our IBM Research colleagues Alaa Youssef
and Asser Tantawi for the valuable discussions. We also thank SC17 committee
member Blair Bethwaite of Monash University for his constructive feedback on earlier drafts of this paper.
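The topology-aware placement idea above can be sketched in miniature. This is an assumption-laden illustration, not the paper's algorithm: the bandwidth matrix, the scoring rule, and the function name `place_job` are all hypothetical, standing in for real topology information such as NVLink versus cross-socket PCIe links.

```python
import itertools

# Hypothetical relative link bandwidth between GPUs on one machine:
# GPUs 0-1 and 2-3 share fast links (e.g. NVLink), other pairs are slower.
BANDWIDTH = [
    [0, 3, 1, 1],
    [3, 0, 1, 1],
    [1, 1, 0, 3],
    [1, 1, 3, 0],
]

def place_job(free_gpus, k):
    """Pick the k free GPUs with the highest total pairwise bandwidth,
    so a communication-heavy job lands on well-connected devices."""
    best = max(
        itertools.combinations(sorted(free_gpus), k),
        key=lambda gpus: sum(
            BANDWIDTH[a][b] for a, b in itertools.combinations(gpus, 2)
        ),
    )
    return list(best)
```

With all four GPUs free, a 2-GPU job is placed on a fast-linked pair; if GPU 0 is busy, the scheduler falls back to the remaining fast pair rather than splitting a job across slow links, which is the intuition behind topology-aware placement.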