Adaptive Load Balancing: A Study in Multi-Agent Learning
We study the process of multi-agent reinforcement learning in the context of
load balancing in a distributed system, without use of either central
coordination or explicit communication. We first define a precise framework in
which to study adaptive load balancing, important features of which are its
stochastic nature and the purely local information available to individual
agents. Given this framework, we show illuminating results on the interplay
between basic adaptive behavior parameters and their effect on system
efficiency. We then investigate the properties of adaptive load balancing in
heterogeneous populations, and address the issue of exploration vs.
exploitation in that context. Finally, we show that naive use of communication
may not improve, and might even harm system efficiency.
Comment: See http://www.jair.org/ for any accompanying file.
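The exploration-vs-exploitation tension under purely local information can be illustrated with a toy simulation of independent epsilon-greedy learners choosing among servers. This is only a sketch under simplified assumptions (delay equal to current server load, invented parameter values, random priors to break agent symmetry), not the paper's actual framework:

```python
import random

def simulate(num_agents=50, num_servers=5, rounds=2000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Each agent keeps only its own per-server delay estimates (purely local
    # information); random priors break the symmetry between identical agents.
    q = [[rng.random() for _ in range(num_servers)] for _ in range(num_agents)]
    n = [[0] * num_servers for _ in range(num_agents)]
    total = 0.0
    for _ in range(rounds):
        picks = []
        for a in range(num_agents):
            if rng.random() < epsilon:                 # explore a random server
                picks.append(rng.randrange(num_servers))
            else:                                      # exploit lowest estimate
                picks.append(min(range(num_servers), key=lambda s: q[a][s]))
        load = [picks.count(s) for s in range(num_servers)]
        for a, s in enumerate(picks):
            delay = load[s]            # toy model: delay grows with server load
            n[a][s] += 1
            q[a][s] += (delay - q[a][s]) / n[a][s]     # incremental mean update
            total += delay
    return total / (rounds * num_agents)               # mean delay per job
```

With balanced loads the mean delay would be num_agents/num_servers = 10; varying epsilon shows how too little exploration leaves estimates stale while too much adds load noise.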
Millimeter-wave Evolution for 5G Cellular Networks
Triggered by the explosion of mobile traffic, the 5G (5th Generation) cellular
network must evolve to increase the system rate to 1000 times that of current
systems within 10 years. Motivated by this common goal, several studies have
sought to integrate mm-wave access into current cellular networks as
multi-band heterogeneous networks to exploit the ultra-wideband aspect of the
mm-wave band. The authors of this paper have proposed a comprehensive
architecture for cellular networks with mm-wave access, where mm-wave
small-cell base stations and a conventional macro base station are connected to
Centralized-RAN (C-RAN) to effectively operate the system by enabling power
efficient seamless handover as well as centralized resource control including
dynamic cell structuring to match the limited coverage of mm-wave access with
high traffic user locations via user-plane/control-plane splitting. In this
paper, to prove the effectiveness of the proposed 5G cellular networks with
mm-wave access, a system-level simulation is conducted by introducing an
expected future traffic model, a measurement-based mm-wave propagation model,
and a centralized cell association algorithm exploiting the C-RAN architecture.
The numerical results show the effectiveness of the proposed network in
realizing a system rate 1000 times higher than the current network within 10
years, which is not achieved by small cells using the commonly considered 3.5
GHz band.
Furthermore, the paper also gives the latest status of mm-wave devices and
regulations to show the feasibility of using mm-wave in 5G systems.
Comment: 17 pages, 12 figures, accepted for publication in IEICE Transactions
on Communications (Mar. 2015).
Mobility-aware QoS assurance in software-defined radio access networks: an analytical study
Software-defined networking (SDN) has gained tremendous attention in recent years, both in academia and industry. This revolutionary networking paradigm is an attempt to bring the advances in computer science and software engineering into the information and communications technology (ICT) domain. The aim of these efforts is to pave the way for completely programmable networks and control-data plane separation. Recent studies on the feasibility and applicability of SDN concepts in cellular networks show very promising results, and this trend will most likely continue in the near future. In this work, we study the benefits of SDN for the radio resource management (RRM) of future-generation cellular networks. Our considered cellular network architecture is in line with the recently proposed Long-Term Evolution (LTE) Release 12 concepts, such as the user/control plane split, a heterogeneous networks (HetNets) environment, and network densification through the deployment of small cells. In particular, the aim of our RRM scheme is to enable the macro base station (BS) to efficiently allocate radio resources to small cell BSs in order to assure the quality-of-service (QoS) of moving users/vehicles during handovers. We develop an approximate but very time- and space-efficient algorithm for radio resource allocation within a HetNet. Experiments on commodity hardware show algorithm running times on the order of a few seconds, making it suitable even for fast-moving users/vehicles. We also confirm the good accuracy of our proposed algorithm by means of computer simulations.
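As a rough illustration of macro-BS-driven allocation (not the paper's approximation algorithm, which the abstract does not specify), a mobility-aware greedy sketch might serve handover requests from faster-moving users first until the macro's resource-block budget is exhausted; all names and the priority rule here are assumptions:

```python
def allocate_rbs(requests, capacity):
    """Greedy allocation of resource blocks (RBs) from a macro BS to small cells.

    requests: list of (cell_id, rbs_needed, user_speed_mps).
    Faster-moving users are served first, since they face the tightest
    handover deadlines; leftover budget goes to slower users.
    """
    alloc = {}
    remaining = capacity
    for cell, need, _speed in sorted(requests, key=lambda r: -r[2]):
        give = min(need, remaining)   # grant as much as the budget allows
        alloc[cell] = give
        remaining -= give
    return alloc
```

For example, with a budget of 15 RBs and two cells each needing 10, the cell serving a 30 m/s user is fully granted while the 5 m/s user's cell receives the remaining 5.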
A Cross-Domain Approach to Analyzing the Short-Run Impact of COVID-19 on the U.S. Electricity Sector
The novel coronavirus disease (COVID-19) has rapidly spread around the globe
in 2020, with the U.S. becoming the epicenter of COVID-19 cases since late
March. As the U.S. begins to gradually resume economic activity, it is
imperative for policymakers and power system operators to take a scientific
approach to understanding and predicting the impact on the electricity sector.
Here, we release a first-of-its-kind cross-domain open-access data hub,
integrating data from across all existing U.S. wholesale electricity markets
with COVID-19 case, weather, cellular location, and satellite imaging data.
Leveraging cross-domain insights from public health and mobility data, we
uncover a significant reduction in electricity consumption across the U.S. that
is strongly correlated with the rise in the number of COVID-19 cases, the
degree of social distancing, and the level of commercial activity.
Comment: This paper has been accepted for publication by Joule. The manuscript
can also be accessed from EnerarXiv:
http://www.enerarxiv.org/page/thesis.html?id=198
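The kind of cross-domain correlation reported above can be checked on any two aligned series with a plain Pearson coefficient; the sketch below is generic, and any series fed to it (e.g., daily consumption change vs. a mobility index) would be placeholders, not values from the data hub:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length numeric series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Python 3.10+ also ships this as `statistics.correlation`; the point is only that "strongly correlated" corresponds to a coefficient near +/-1.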
Using the High Productivity Language Chapel to Target GPGPU Architectures
It has been widely shown that GPGPU architectures offer large performance gains compared to their traditional CPU counterparts for many applications. The downside to these architectures is that the current programming models present numerous challenges to the programmer: lower-level languages, explicit data movement, loss of portability, and challenges in performance optimization. In this paper, we present novel methods and compiler transformations that increase productivity by enabling users to easily program GPGPU architectures using the high productivity programming language Chapel. Rather than resorting to different parallel libraries or annotations for a given parallel platform, we leverage a language that has been designed from first principles to address the challenge of programming for parallelism and locality. This also has the advantage of being portable across distinct classes of parallel architectures, including desktop multicores, distributed memory clusters, large-scale shared memory, and now CPU-GPU hybrids. We present experimental results from the Parboil benchmark suite which demonstrate that codes written in Chapel achieve performance comparable to the original versions implemented in CUDA.
NSF CCF 0702260; Cray Inc. Cray-SRA-2010-01696; 2010-2011 Nvidia Research Fellowship. Unpublished; not peer reviewed.
A Computational Field Framework for Collaborative Task Execution in Volunteer Clouds
The increasing diffusion of cloud technologies is opening new opportunities for distributed and collaborative computing. Volunteer clouds are a prominent example, where participants join and leave the platform and collaborate by sharing their computational resources. The high dynamism and unpredictability of such scenarios call for decentralized self-* approaches to guarantee QoS. We present a simulation framework for collaborative task execution in volunteer clouds and propose one concrete instance based on Ant Colony Optimization, which is validated through a set of simulation experiments based on Google workload data.
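A minimal Ant Colony Optimization loop for assigning tasks to volunteer nodes can be sketched as follows; the cost model, pheromone parameters, and makespan objective are assumptions for illustration, not the paper's concrete instance or its Google-workload validation:

```python
import random

def aco_assign(costs, ants=20, iters=50, rho=0.3, seed=1):
    """costs[t][n]: execution time of task t on node n.
    Returns (assignment, makespan) found by a minimal ACO loop."""
    rng = random.Random(seed)
    T, N = len(costs), len(costs[0])
    tau = [[1.0] * N for _ in range(T)]          # pheromone per (task, node)
    best, best_span = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            assign, load = [], [0.0] * N
            for t in range(T):
                # desirability = pheromone * heuristic (prefer light, fast nodes)
                w = [tau[t][n] / (1.0 + load[n] + costs[t][n]) for n in range(N)]
                r = rng.random() * sum(w)        # roulette-wheel selection
                n = 0
                while n < N - 1 and r > w[n]:
                    r -= w[n]
                    n += 1
                assign.append(n)
                load[n] += costs[t][n]
            span = max(load)                     # makespan of this ant's tour
            if span < best_span:
                best, best_span = assign, span
        for t in range(T):                       # evaporate, reinforce the best
            for n in range(N):
                tau[t][n] *= (1.0 - rho)
            tau[t][best[t]] += 1.0 / best_span
    return best, best_span
```

Evaporation (`rho`) keeps stale pheromone from locking in early choices, which matters precisely because volunteer nodes come and go.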
Peer to Peer Information Retrieval: An Overview
Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.
CoPhy: A Scalable, Portable, and Interactive Index Advisor for Large Workloads
Index tuning, i.e., selecting the indexes appropriate for a workload, is a
crucial problem in database system tuning. In this paper, we solve index tuning
for large problem instances that are common in practice, e.g., thousands of
queries in the workload, thousands of candidate indexes and several hard and
soft constraints. Our work is the first to reveal that the index tuning problem
has a well-structured space of solutions, and this space can be explored
efficiently with well-known techniques from linear optimization. Experimental
results demonstrate that our approach outperforms state-of-the-art commercial
and research techniques by a significant margin (up to an order of magnitude).
Comment: VLDB201
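The shape of the index tuning problem can be glimpsed on a toy instance: pick a subset of candidate indexes maximizing workload benefit under a storage budget. The brute-force sketch below assumes additive per-index benefits, which real query-index interactions violate, and enumeration only works at toy scale; scaling to thousands of queries and candidates is exactly what a linear-optimization formulation like CoPhy's addresses:

```python
from itertools import combinations

def best_indexes(benefits, sizes, budget):
    """Exhaustive 0/1 index selection on a toy instance.

    benefits[i]: estimated workload speedup from candidate index i.
    sizes[i]:    storage cost of index i.
    budget:      total storage available (a hard constraint).
    """
    n = len(benefits)
    best_set, best_val = (), 0
    for k in range(n + 1):
        for subset in combinations(range(n), k):
            if sum(sizes[i] for i in subset) <= budget:   # feasible under budget
                val = sum(benefits[i] for i in subset)
                if val > best_val:
                    best_set, best_val = subset, val
    return set(best_set), best_val
```

Even this tiny version shows the key trade-off: one large high-benefit index can lose to two smaller ones that fit the same budget.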