Cache-aided Interference Management Using Hypercube Combinatorial Cache Designs
We consider a cache-aided interference network consisting of a library of
files, a set of transmitters, and a set of receivers (users), each transmitter
and receiver equipped with a local cache, and connected via a discrete-time
additive white Gaussian noise channel. Each receiver requests an arbitrary file
from the library. The objective is to design a cache placement, without knowing
the receivers' requests, and a communication scheme such that the sum Degrees
of Freedom (sum-DoF) of the delivery is maximized. This network model was
investigated by Naderializadeh {\em et al.}, who proposed prefetching and
delivery schemes (the NMA scheme) that achieve a near-optimal sum-DoF. One of
the biggest limitations of this scheme is its high subpacketization level. This
paper is, to our knowledge, the first attempt in the literature to reduce the
file subpacketization in such a network. In particular, we propose a new
approach for both the prefetching and the linear delivery schemes based on a
combinatorial design called the {\em hypercube}. We show that the required
number of packets per file can be exponentially reduced compared to the
state-of-the-art NMA scheme. The achievable one-shot sum-DoF of this approach
shows that 1) the one-shot sum-DoF scales linearly with the aggregate cache
size in the network and 2) it is within a constant factor of the
information-theoretic optimum. Surprisingly, the same near-optimal sum-DoF
performance can be achieved using the hypercube approach with much lower file
subpacketization.

Comment: 6 pages, 4 figures, accepted by ICC 201
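To make the hypercube placement idea concrete, here is a minimal sketch of one common way such a placement is laid out; the parameter names (`m`, `q`) and the grid indexing are our illustration, not necessarily the exact construction in the paper:

```python
from itertools import product

def hypercube_placement(m, q):
    """Illustrative hypercube cache placement (assumed parametrization).

    Each file is split into q**m subpackets, indexed by the points of an
    m-dimensional grid of side length q.  User (d, l) caches every
    subpacket whose d-th coordinate equals l, so each of the K = m*q
    users stores q**(m-1) subpackets, i.e. a fraction 1/q of every file,
    and each subpacket is cached by exactly m users.
    """
    return {
        (d, l): [p for p in product(range(q), repeat=m) if p[d] == l]
        for d in range(m)
        for l in range(q)
    }

caches = hypercube_placement(m=2, q=3)  # 6 users, 9 subpackets per file
```

Note that the subpacketization here, q**m, grows only polynomially in q for fixed dimension m, which is the kind of structural saving the hypercube design exploits.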
Edge Computing in the Dark: Leveraging Contextual-Combinatorial Bandit and Coded Computing
With recent advancements in edge computing capabilities, there has been a
significant increase in utilizing the edge cloud for event-driven and
time-sensitive computations. However, large-scale edge computing networks can
suffer substantially from unpredictable and unreliable computing resources
which can result in high variability of service quality. Thus, it is crucial to
design efficient task scheduling policies that guarantee quality of service and
the timeliness of computation queries. In this paper, we study the problem of
computation offloading over unknown edge cloud networks with a sequence of
timely computation jobs. Motivated by the MapReduce computation paradigm, we
assume each computation job can be partitioned into smaller Map functions that
are processed at the edge, and the Reduce function is computed at the user
after the Map results are collected from the edge nodes. We model the service
quality (the probability of returning the result to the user within the
deadline) of each edge device as a function of its context (the collection of factors
that affect edge devices). The user decides the computations to offload to each
device with the goal of receiving a recoverable set of computation results
within the given deadline. Our goal is to design an efficient edge computing
policy in the dark, without knowledge of the context or computation capabilities of
each device. By leveraging the \emph{coded computing} framework in order to
tackle failures or stragglers in computation, we formulate this problem using
contextual-combinatorial multi-armed bandits (CC-MAB), and aim to maximize the
cumulative expected reward. We propose an online learning policy called
\emph{online coded edge computing policy}, which provably achieves
asymptotically-optimal performance in terms of regret loss compared with the
optimal offline policy for the proposed CC-MAB problem.
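As a rough sketch of the bandit component only: the snippet below uses a plain UCB index over per-device success statistics, which is our simplification rather than the paper's online coded edge computing policy (the actual CC-MAB policy also partitions the context space and has a matching regret analysis):

```python
import math

def select_devices(stats, k, t):
    """Pick k edge devices to offload coded Map tasks to (sketch).

    stats[i] = (pulls, successes) observed so far for edge device i under
    the current context.  We offload to the k devices with the highest
    upper confidence bound on their success probability, since an (n, k)
    coded job is recoverable from any k returned Map results.
    """
    def ucb(i):
        n, s = stats[i]
        if n == 0:
            return float("inf")  # explore devices never tried in this context
        return s / n + math.sqrt(2 * math.log(t) / n)  # empirical mean + bonus
    return sorted(stats, key=ucb, reverse=True)[:k]

# devices 0-3 with (pulls, successes) observed so far; round t = 11
chosen = select_devices({0: (10, 9), 1: (10, 1), 2: (0, 0), 3: (10, 5)}, k=2, t=11)
```

Here the untried device 2 is explored first, and the remaining slot goes to the device with the best optimistic estimate.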
Center for Aeronautics and Space Information Sciences
This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Science (CASIS) program. The topics covered are computer architecture, networking, and neural nets.