
    Placement Delivery Array Design for Combination Networks with Edge Caching

    A major practical limitation of the Maddah-Ali-Niesen coded caching techniques is their high subpacketization level. For the simple network with a single server and multiple users, Yan \emph{et al.} proposed an alternative scheme based on the so-called placement delivery arrays (PDA). Such a scheme requires slightly higher transmission rates but significantly reduces the subpacketization level. In this paper, we extend the PDA framework and propose three low-subpacketization schemes for combination networks, i.e., networks with a single server, multiple relays, and multiple cache-aided users that are connected to subsets of relays. One of the schemes achieves the cutset lower bound on the link rate when the cache memories are sufficiently large. Our other two schemes apply only to \emph{resolvable} combination networks. For these networks and for a wide range of cache sizes, the new schemes perform closely to the coded caching schemes that directly apply the Maddah-Ali-Niesen scheme while having significantly reduced subpacketization levels. Comment: 5 pages, published at IEEE International Symposium on Information Theory (ISIT), Jun. 2018, Colorado, US
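As a hedged aside (not part of the abstract above): a placement delivery array is an $F \times K$ array over integers and a star symbol, and its two defining conditions can be checked mechanically. The sketch below, in Python with a hypothetical helper name, verifies them on the Maddah-Ali-Niesen-type PDA for $K = 3$ users.

```python
from itertools import combinations

def is_pda(A):
    """Check the two defining conditions of a placement delivery array:
    each integer appears at most once per row and per column, and any two
    cells sharing an integer form a 2x2 sub-array whose other two cells
    are both '*' (so each user can cancel the interference from cache)."""
    cells = [(i, j) for i, row in enumerate(A)
             for j, v in enumerate(row) if v != '*']
    for (i1, j1), (i2, j2) in combinations(cells, 2):
        if A[i1][j1] == A[i2][j2]:
            # C1: no integer repeats within a row or a column
            if i1 == i2 or j1 == j2:
                return False
            # C2: the symmetric cells must both be stars
            if A[i1][j2] != '*' or A[i2][j1] != '*':
                return False
    return True

# Maddah-Ali-Niesen-type PDA for K=3 users, F=3 packets, Z=1 cached packet
mn_pda = [['*', 1, 2],
          [1, '*', 3],
          [2, 3, '*']]
print(is_pda(mn_pda))  # True: 3 multicast messages serve all demands
```

The star condition is what lets each user subtract, from its cache, every other user's subfile inside a multicast message.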

    A Novel Asymmetric Coded Placement in Combination Networks with end-user Caches

    The tradeoff between the user's memory size and the worst-case download time in the $(H,r,M,N)$ combination network is studied, where a central server communicates with $K$ users through $H$ intermediate relays, and each user has a local cache of size $M$ files and is connected to a different subset of $r$ relays. The main contribution of this paper is the design of a coded caching scheme with asymmetric coded placement by leveraging coordination among the relays, which was not exploited in past work. Mathematical analysis and numerical results show that the proposed schemes outperform existing schemes. Comment: 5 pages, 2 figures, ITA 201

    Achieving Spatial Scalability for Coded Caching over Wireless Networks

    The coded caching scheme proposed by Maddah-Ali and Niesen considers the delivery of files in a given content library to users through a deterministic error-free network where a common multicast message is sent to all users at a fixed rate, independent of the number of users. In order to apply this paradigm to a wireless network, it is important to make sure that the common multicast rate does not vanish as the number of users increases. This paper focuses on a variant of coded caching previously proposed for the so-called combination network, where the multicast message is further encoded by a Maximum Distance Separable (MDS) code and the MDS-coded blocks are simultaneously transmitted from different Edge Nodes (ENs) (e.g., base stations or access points). Each user is equipped with multiple antennas and can select to decode a desired number of EN transmissions, while either nulling or treating the others as noise, depending on their strength. The system is reminiscent of the so-called evolved Multimedia Broadcast Multicast Service (eMBMS), in the sense that the fundamental underlying transmission mechanism is multipoint multicasting, where each user can independently and individually (in a user-centric manner) decide which ENs to decode, without any explicit association of users to ENs. We study the performance of the proposed system when users and ENs are distributed according to homogeneous Poisson Point Processes in the plane and the propagation is affected by Rayleigh fading and distance-dependent pathloss. Our analysis allows the system optimization with respect to the MDS coding rate. Also, we show that the proposed system is fully scalable, in the sense that it can support an arbitrarily large number of users, while maintaining a non-vanishing per-user delivery rate. Comment: 30 pages, 9 figures

    Multi-access Coded Caching Schemes From Cross Resolvable Designs

    We present a novel caching and coded delivery scheme for a multi-access network where multiple users can have access to the same cache (shared cache) and any cache can assist multiple users. This scheme is obtained from resolvable designs satisfying certain conditions which we call {\it cross resolvable designs}. To be able to compare multi-access coded schemes with different numbers of users, we normalize the rate of the schemes by the number of users served. Based on this per-user rate we show that our scheme performs better than the well-known Maddah-Ali-Niesen (MaN) scheme and the recently proposed ("Multi-access coded caching : gains beyond cache-redundancy" by Serbetci, Parrinello and Elia) SPE scheme. It is shown that the resolvable designs from affine planes are cross resolvable designs, and our scheme based on these performs better than the MaN scheme for large memory sizes. The exact size beyond which our performance is better is also presented. The SPE scheme considers only the cases where the product of the number of users and the normalized cache size is 2, whereas the proposed scheme allows different choices depending on the choice of the cross resolvable design. Comment: 14 pages, 7 figures and 9 tables. In this version one subsection in Section IV and a new Section V have been added

    On Combination Networks with Cache-aided Relays and Users

    Caching is an efficient way to reduce peak-hour network traffic congestion by storing some contents at the user's cache without knowledge of later demands. The coded caching strategy was originally proposed by Maddah-Ali and Niesen to give an additional coded caching gain compared to the conventional uncoded scheme. Under practical considerations, the caching model was recently studied in relay networks, in particular the combination network, where the central server communicates with $K=\binom{H}{r}$ users (each with a cache of $M$ files) through $H$ intermediate relays, and each user is connected to a different $r$-subset of relays. Several inner bounds and outer bounds were proposed for combination networks with end-user caches. This paper extends the recent work by the authors on centralized combination networks with end-user caches to a more general setting, where both relays and users have caches. In contrast to the existing schemes in which the packets transmitted from the server are independent of the cached contents of relays, we propose a novel caching scheme that creates an additional coded caching gain on the transmitted load from the server with the help of the cached contents in relays. We also show that the proposed scheme outperforms the state-of-the-art approaches. Comment: 7 pages, 2 figures, WSA 201
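To make the combination-network topology concrete, here is a minimal sketch (an assumed Python helper, not from the paper) that enumerates which relays each of the $K=\binom{H}{r}$ users connects to.

```python
from itertools import combinations
from math import comb

def combination_network(H, r):
    """Enumerate the user-relay topology of an (H, r) combination
    network: one user per r-subset of the H relays."""
    users = list(combinations(range(H), r))
    assert len(users) == comb(H, r)  # K = C(H, r)
    return users

# H = 4 relays, r = 2 gives K = C(4, 2) = 6 users
for k, subset in enumerate(combination_network(4, 2)):
    print(f"user {k} <- relays {subset}")
```

Each user appears exactly once, and each relay serves $\binom{H-1}{r-1}$ users, which is why relay caches can be coordinated across overlapping user subsets.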

    A Survey on Low Latency Towards 5G: RAN, Core Network and Caching Solutions

    The fifth generation (5G) wireless network technology is to be standardized by 2020; its main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch-perception-type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, drastic changes are needed in the network architecture, including the core and radio access network (RAN), to achieve end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low-latency communications considering three different solution domains: RAN, core network, and caching. We also present a general overview of 5G cellular networks composed of software defined networking (SDN), network function virtualization (NFV), caching, and mobile edge computing (MEC), capable of meeting latency and other 5G requirements. Comment: Accepted in IEEE Communications Surveys and Tutorials

    On Secure Coded Caching via Combinatorial Method

    Coded caching is an efficient way to reduce network traffic congestion during peak hours by storing some content in the users' local cache memories without knowledge of later demands. The goal of coded caching design is to minimize the transmission rate and the subpacketization. In practice, the demand of each user is sensitive, since one can infer the other users' preferences from their demands. The first coded caching scheme with private demands was proposed by Wan et al. However, the transmission rate and the subpacketization of their scheme grow with the number of files stored in the library. In this paper we consider the following secure coded caching: prevent wiretappers from obtaining any information about the files in the server, and protect the demands of all the users in the delivery phase. We first introduce a combinatorial structure called a secure placement delivery array (SPDA for short) to realize a coded caching scheme for our security setting. Then we obtain three classes of secure schemes by constructing SPDAs, one of which is optimal. It is worth noting that the transmission rates and the subpacketizations of our schemes are independent of the number of files. Furthermore, compared with the previously known schemes under the same security setting, our schemes have significant advantages in subpacketization and, for some parameters, in transmission rate. Comment: 13 pages

    Linear Coded Caching Scheme for Centralized Networks

    Coded caching systems have been widely studied to reduce the data transmission during the peak traffic time. In practice, two important parameters of a coded caching system should be considered: the rate, which is the maximum amount of data transmitted during the peak traffic time, and the subpacketization level, the number of packets each file is divided into when a coded caching scheme is implemented. We prefer to design a scheme with rate and packet number as small as possible, since they reflect the transmission efficiency and the complexity of the caching scheme, respectively. In this paper, we first characterize a coded caching scheme from the viewpoint of linear algebra and show that designing a linear coded caching scheme is equivalent to constructing three classes of matrices satisfying some rank conditions. Then, based on invariant linear subspaces and combinatorial design theory, several classes of new coded caching schemes over F2\mathbb{F}_2 are obtained by constructing these three classes of matrices. It turns out that the rate of our new schemes is the same as that of the scheme constructed by Yan et al. (IEEE Trans. Inf. Theory 63, 5821-5833, 2017), but the packet number is significantly reduced. A concatenating construction is then used to accommodate flexible numbers of users. Finally, by means of these matrices, we show that minimum storage regenerating codes can also be used to construct coded caching schemes. Comment: 23 pages
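The rate-versus-subpacketization tension the abstract refers to can be made concrete with the standard Maddah-Ali-Niesen formulas (a sketch under the assumption that $t = KM/N$ is an integer; the helper name is ours):

```python
from math import comb

def mn_rate_and_subpacketization(K, M, N):
    """Rate and packet number of the Maddah-Ali-Niesen scheme when
    t = K*M/N is an integer: each file splits into C(K, t) packets and
    the server sends C(K, t+1) coded packets, i.e.
    R = K(1 - M/N) / (1 + K*M/N)."""
    assert K * M % N == 0, "t = KM/N must be an integer here"
    t = K * M // N
    F = comb(K, t)           # subpacketization level
    R = comb(K, t + 1) / F   # equals K(1 - M/N) / (1 + t)
    return R, F

for K in (8, 16, 32):
    R, F = mn_rate_and_subpacketization(K, 1, 2)  # cache ratio M/N = 1/2
    print(K, round(R, 3), F)  # rate stays below 1; F grows combinatorially
```

For $M/N = 1/2$ the rate stays bounded while $F = \binom{K}{K/2}$ explodes, which is exactly the growth that low-subpacketization constructions such as the one above aim to tame.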

    Kelly Cache Networks

    We study networks of M/M/1 queues in which nodes act as caches that store objects. Exogenous requests for objects are routed towards nodes that store them; as a result, object traffic in the network is determined not only by demand but, crucially, by where objects are cached. We determine how to place objects in caches to attain a certain design objective, such as, e.g., minimizing network congestion or retrieval delays. We show that for a broad class of objectives, including minimizing both the expected network delay and the sum of network queue lengths, this optimization problem can be cast as an NP-hard submodular maximization problem. We show that the so-called continuous greedy algorithm attains a ratio arbitrarily close to $1 - 1/e \approx 0.63$ using a deterministic estimation via a power series; this drastically reduces execution time over prior art, which resorts to sampling. Finally, we show that our results generalize, beyond M/M/1 queues, to networks of M/M/k and symmetric M/D/1 queues. Comment: This is the extended version of the Infocom 2019 paper with the same title. The authors gratefully acknowledge support from National Science Foundation grant NeTS-1718355, as well as from research grants by Intel Corp. and Cisco Systems
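For intuition only: the $1 - 1/e$ ratio cited in the abstract is the classical guarantee for greedy maximization of a monotone submodular function. The paper itself uses the continuous greedy algorithm over a matroid constraint, but the plain discrete greedy below (a toy sketch with invented data, not the authors' method) shows the flavor on a coverage-style caching objective under a cardinality constraint.

```python
def greedy_placement(ground, f, budget):
    """Plain greedy for a monotone submodular set function f under a
    cardinality constraint; guarantees f(S) >= (1 - 1/e) * OPT."""
    S = set()
    for _ in range(budget):
        best = max((x for x in ground if x not in S),
                   key=lambda x: f(S | {x}) - f(S), default=None)
        if best is None or f(S | {best}) - f(S) <= 0:
            break  # no remaining element improves the objective
        S.add(best)
    return S

# Toy caching gain: a request is served locally if any one of a set of
# (node, object) placements is cached; "requests covered" is submodular.
requests = [{('a', 1), ('b', 1)}, {('a', 2)},
            {('b', 2), ('c', 2)}, {('c', 3)}]
ground = {p for req in requests for p in req}
f = lambda S: sum(1 for req in requests if req & S)
placement = greedy_placement(ground, f, budget=2)
print(f(placement))  # 2: two cache slots cover 2 of the 4 requests
```

The point of the paper is that evaluating such objectives exactly is expensive in queueing networks, and that a power-series estimate of the multilinear extension avoids the sampling used by earlier continuous greedy implementations.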

    Content Caching and Delivery in Wireless Radio Access Networks

    Today's mobile data traffic is dominated by content-oriented traffic. Caching popular contents at the network edge can alleviate network congestion and reduce content delivery latency. This paper provides a comprehensive and unified study of caching and delivery techniques in wireless radio access networks (RANs) with caches at all edge nodes (ENs) and user equipments (UEs). Three cache-aided RAN architectures are considered: RANs without fronthaul, with dedicated fronthaul, and with wireless fronthaul. The paper first reviews, in a tutorial manner, how caching facilitates interference management in these networks by enabling interference cancellation (IC), zero-forcing (ZF), and interference alignment (IA). Then, two new delivery schemes are presented. One is for RANs with dedicated fronthaul, which considers centralized cache placement at the ENs but both centralized and decentralized placement at the UEs. This scheme combines IA, ZF, and IC together with soft-transfer fronthauling. The other is for RANs with wireless fronthaul, which considers decentralized cache placement at all nodes. It leverages the broadcast nature of wireless fronthaul to fetch not only uncached but also cached contents to boost transmission cooperation among the ENs. Numerical results show that both schemes outperform existing results for a wide range of system parameters, thanks to the various caching gains obtained opportunistically. Comment: To appear in IEEE Transactions on Communications