Optimizing MDS Codes for Caching at the Edge
In this paper we investigate the problem of optimal MDS-encoded cache
placement at the wireless edge to minimize the backhaul rate in heterogeneous
networks. We derive the backhaul rate performance of any caching scheme based
on file splitting and MDS encoding and we formulate the optimal caching scheme
as a convex optimization problem. We then thoroughly investigate the
performance of this optimal scheme for an important heterogeneous network
scenario. We compare it to several other caching strategies and we analyze the
influence of the system parameters, such as the popularity and size of the
library files and the capabilities of the small-cell base stations, on the
overall performance of our optimal caching strategy. Our results show that the
careful placement of MDS-encoded content in caches at the wireless edge leads
to a significant decrease in the load on the network backhaul and hence to a
considerable performance enhancement of the network.
Comment: to appear in Globecom 201
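As a toy illustration of the file-splitting model described above (hypothetical parameters and a simplified single-cache view, not the paper's exact optimization), the key property of MDS encoding is that any k distinct coded fragments of a file suffice to decode it, so only the fragments missing from the edge cache must cross the backhaul:

```python
def backhaul_rate(popularity, cache_fraction, k=10):
    """Expected number of fragments fetched over the backhaul per request.

    popularity[f]     : probability that file f is requested
    cache_fraction[f] : fraction of file f's k fragments cached at the edge
    With MDS encoding, any k distinct coded fragments decode the file, so
    only the missing fragments are fetched over the backhaul.
    """
    return sum(p * max(k - int(q * k), 0)
               for p, q in zip(popularity, cache_fraction))

# Skewing the placement toward popular files lowers the backhaul load:
pop = [0.7, 0.2, 0.1]                       # Zipf-like popularity profile
uniform = backhaul_rate(pop, [1/3, 1/3, 1/3])
skewed = backhaul_rate(pop, [0.8, 0.2, 0.0])
```

Here the optimization the paper formulates would choose `cache_fraction` (subject to a cache-size constraint) to minimize this rate; the sketch only evaluates the objective for two hand-picked placements.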
Fulcrum: Flexible Network Coding for Heterogeneous Devices
We introduce Fulcrum, a network coding framework that achieves three seemingly conflicting objectives: 1) to reduce the coding coefficient overhead down to nearly n bits per packet in a generation of n packets; 2) to conduct the network coding using only Galois field GF(2) operations at intermediate nodes if necessary, dramatically reducing computing complexity in the network; and 3) to deliver an end-to-end performance that is close to that of a high-field network coding system for high-end receivers, while simultaneously catering to low-end receivers that decode in GF(2). As a consequence of 1) and 3), Fulcrum has a unique trait missing so far in the network coding literature: providing the network with the flexibility to distribute computational complexity over different devices depending on their current load, network conditions, or energy constraints. At the core of our framework lies the idea of precoding at the sources using an expansion field GF(2^h), h > 1, to increase the number of dimensions seen by the network. Fulcrum can use any high-field linear code for precoding, e.g., Reed-Solomon or Random Linear Network Coding (RLNC). Our analysis shows that the number of additional dimensions created during precoding controls the trade-off between delay, overhead, and computing complexity.
Our implementation and measurements show that Fulcrum achieves decoding probabilities similar to those of high-field RLNC, but with encoders and decoders that are an order of magnitude faster.
Funding: Green Mobile Cloud project (grant DFF-0602-01372B); Colorcast project (grant DFF-0602-02661B); TuneSCode project (grant DFF-1335-00125); Danish Council for Independent Research (grant DFF-4002-00367); Ministerio de Economía, Industria y Competitividad / Fondo Europeo de Desarrollo Regional (grants MTM2012-36917-C03-03, MTM2015-65764-C3-2-P, MTM2015-69138-REDT); Agencia Estatal de Investigación / Fondo Social Europeo (grant RYC-2016-20208); Aarhus Universitets Forskningsfond Starting (grant AUFF-2017-FLS-7-1
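To make point 2) concrete, here is a minimal sketch (our illustration, not Fulcrum's actual API) of the inner-stage recoding an intermediate node performs: over GF(2), producing a fresh coded packet is just the XOR of a random subset of already-received coded packets, which is why in-network computing cost stays so low.

```python
import random

def gf2_recode(coded_packets, rng=None):
    """Recode at an intermediate node using GF(2) operations only: emit a
    new coded packet as the XOR of a random subset of received packets.
    Returns (coefficient_bits, packet); assumes equal-length packets."""
    rng = rng or random.Random()
    coeffs = [rng.getrandbits(1) for _ in coded_packets]
    out = bytearray(len(coded_packets[0]))
    for c, pkt in zip(coeffs, coded_packets):
        if c:  # include this packet in the XOR with coefficient 1
            for i, b in enumerate(pkt):
                out[i] ^= b
    return coeffs, bytes(out)
```

The per-packet cost is a handful of XORs and one coefficient bit per combined packet, in line with objective 1): roughly n bits of coefficient overhead for a generation of n packets.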
Secure Partial Repair in Wireless Caching Networks with Broadcast Channels
We study security in partial repair in wireless caching networks, where parts
of the stored packets in the caching nodes are susceptible to erasure. Let us
denote a caching node that has lost parts of its stored packets as a sick
caching node and a caching node that has not lost any packet as a healthy
caching node. In partial repair, a set of caching nodes (among sick and healthy
caching nodes) broadcast information to other sick caching nodes to recover the
erased packets. The broadcast information from a caching node is assumed to be
received without error by all other caching nodes. All the sick caching
nodes are then able to recover their erased packets, using the broadcast
information and the nonerased packets in their storage as side information. In
this setting, if an eavesdropper overhears the broadcast channels, it might
obtain some information about the stored file. We thus study secure partial
repair in the senses of information-theoretically strong and weak security. In
both senses, we investigate the secrecy caching capacity, namely, the maximum
amount of information which can be stored in the caching network such that
there is no leakage of information during a partial repair process. We then
deduce the strong and weak secrecy caching capacities, and also derive the
sufficient finite field sizes for achieving the capacities. Finally, we propose
optimal secure codes for exact partial repair, in which the recovered packets
are exactly the same as the erased packets.
Comment: To appear in IEEE Conference on Communications and Network Security (CNS)
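A toy example (our illustration, not the paper's code construction) of the one-time-pad flavor of this security argument: if the broadcast mixes an erased packet with a packet the sick node still stores, the sick node can unmask it using its side information, while an eavesdropper on the broadcast channel sees only a masked packet.

```python
def xor(a, b):
    """Bytewise XOR; assumes equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def broadcast_repair(erased_packet, side_info):
    """Healthy node's broadcast: the erased packet masked with a packet
    that the sick node still stores (its side information)."""
    return xor(erased_packet, side_info)

def recover(broadcast, side_info):
    """Sick node unmasks the broadcast with its nonerased packet."""
    return xor(broadcast, side_info)
```

If `side_info` is uniformly random and unknown to the eavesdropper, the broadcast is statistically independent of the erased packet, which is the intuition behind the strong-secrecy capacity results; the paper's actual codes handle general erasure patterns and multiple repairing nodes.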
Caching at the Edge with LT codes
We study the performance of caching schemes based on LT codes under the
peeling (iterative) decoding algorithm. We assume that users request content
from multiple cache-aided transmitters. The transmitters are connected
through a backhaul link to a master node while no direct link exists between
users and the master node. Each content is fragmented and encoded with an LT
code. Cache placement at each transmitter is optimized such that transmissions
over the backhaul link are minimized. We derive a closed-form expression for the
calculation of the backhaul transmission rate. We compare the performance of a
caching scheme based on LT codes with that of a caching scheme based on
maximum distance separable (MDS) codes. Finally, we show that caching with LT
codes performs as well as caching with MDS codes.
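The peeling decoder referenced above can be sketched in a few lines (a generic illustration with integer-valued fragments, not the paper's implementation): repeatedly find a coded symbol that depends on a single unresolved fragment, recover that fragment, and XOR it out of every other coded symbol that includes it.

```python
def peel_decode(coded, k):
    """Peeling (iterative) LT decoder.

    coded : iterable of (indices, value) pairs, where value is the XOR of
            the integer source fragments listed in indices.
    Returns the k source fragments, or None if peeling stalls."""
    coded = [[set(idx), val] for idx, val in coded]
    source = [None] * k
    progress = True
    while progress:
        progress = False
        for sym in coded:
            if len(sym[0]) == 1:               # degree-one symbol found
                i = next(iter(sym[0]))
                if source[i] is None:
                    source[i] = sym[1]
                    # peel fragment i out of every symbol that includes it
                    for other in coded:
                        if i in other[0]:
                            other[0].discard(i)
                            other[1] ^= source[i]
                    progress = True
    return source if all(s is not None for s in source) else None
```

Peeling succeeds only if a degree-one symbol is always available, which is exactly what the LT degree distribution is designed to ensure with high probability; when the ripple empties early, decoding stalls and more coded symbols are needed.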
Caching at the Edge with Fountain Codes
We address the use of linear random fountain codes in caching schemes for a
heterogeneous satellite network. We consider a system composed of multiple hubs
and a geostationary Earth orbit satellite. Coded content is stored in the
hubs' caches in order to serve user requests immediately and reduce the usage of
the satellite backhaul link. We derive the analytical expression of the average
backhaul rate, as well as a tight upper bound to it with a simple expression.
Furthermore, we derive the optimal caching strategy which minimizes the average
backhaul rate and compare the performance of the linear random fountain code
scheme to that of a scheme using maximum distance separable codes. Our
simulation results indicate that the performance obtained using fountain codes
is similar to that of maximum distance separable codes.
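The gap between fountain codes and MDS codes comes from reception overhead. As a hedged aside (a standard random-matrix result, not the paper's derivation): with a linear random fountain code over GF(2), k + d received symbols decode unless the random (k + d) x k binary coefficient matrix is rank deficient, an event of probability at most 2^(-d), which is why the fountain scheme tracks the MDS scheme so closely for modest d.

```python
def rlf_failure_prob(k, d):
    """Exact probability that k + d random GF(2) combinations of k source
    symbols fail to have full rank k, so peeling-free (Gaussian) decoding
    of a linear random fountain code fails."""
    p_full_rank = 1.0
    for i in range(k):
        # the (i+1)-th dimension is missed with probability 2^(i - (k+d))
        p_full_rank *= 1.0 - 2.0 ** (i - (k + d))
    return 1.0 - p_full_rank
```

For example, a few extra symbols already drive the failure probability below one percent, whereas an MDS code would decode from exactly k symbols.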
Secure and Private Cloud Storage Systems with Random Linear Fountain Codes
An information-theoretic approach to security and privacy called Secure And
Private Information Retrieval (SAPIR) is introduced. SAPIR is applied to
distributed data storage systems. In this approach, random combinations of all
contents are stored across the network. Our coding approach is based on Random
Linear Fountain (RLF) codes. To retrieve a content, a group of servers
collaborate with each other to form a Reconstruction Group (RG). SAPIR achieves
asymptotic perfect secrecy if at least one of the servers within an RG is not
compromised. Further, a Private Information Retrieval (PIR) scheme based on
random queries is proposed. The PIR approach ensures that users can privately
download their desired contents without the servers learning the indices of the
requested contents. The proposed scheme is adaptive and can provide privacy
against a significant number of colluding servers.
Comment: 8 pages, 2 figure
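To illustrate the random-query idea in its simplest form, here is a classic two-server XOR-based PIR sketch (our illustration of the general technique, not the SAPIR protocol, which additionally tolerates collusion): each query in isolation is a uniformly random subset of indices, so neither server alone learns anything about the desired index.

```python
import secrets

def server_answer(store, query):
    """Each server returns the XOR of the contents at the queried indices."""
    ans = 0
    for j in query:
        ans ^= store[j]
    return ans

def pir_retrieve(store, i):
    """Retrieve store[i] from two non-colluding replicas without either
    replica learning i: q1 is uniformly random, and q2 differs from q1
    only at index i, so each query alone is uniformly distributed."""
    n = len(store)
    q1 = {j for j in range(n) if secrets.randbits(1)}
    q2 = q1 ^ {i}                 # symmetric difference flips index i
    return server_answer(store, q1) ^ server_answer(store, q2)
```

The two answers XOR to exactly `store[i]` because every other index appears in both queries or in neither, cancelling out.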
Centralized Coded Caching with User Cooperation
In this paper, we consider the coded-caching broadcast network with user
cooperation, where a server connects with multiple users and the users can
cooperate with each other through a cooperation network. We propose a
centralized coded caching scheme based on a new deterministic placement
strategy and a parallel delivery strategy. It is shown that the new scheme
optimally allocates the communication loads between the server and the users,
obtaining cooperation and parallelization gains that greatly reduce the
transmission delay.
Furthermore, we show that the number of users who send information in parallel
should decrease as the users' cache size increases. In other words, letting
more users send information in parallel could be harmful. Finally, we derive a
constant multiplicative gap between the lower and upper bounds on the
transmission delay, which proves that our scheme is order optimal.
Comment: 9 pages, submitted to ITW201