
    On the Placement Delivery Array Design in Centralized Coded Caching Scheme

    Caching is a promising solution to satisfy the ever-increasing demand for multimedia traffic. In caching networks, coded caching is a recently proposed technique that achieves significant performance gains over uncoded caching schemes. However, to implement a coded caching scheme, each file has to be split into $F$ packets, and $F$ usually increases exponentially with the number of users $K$. Thus, designing caching schemes that decrease the order of $F$ is meaningful for practical implementations. In this paper, by revisiting the Ali-Niesen caching scheme, the placement delivery array (PDA) design problem is first formulated to characterize both the placement and the delivery phase with a single array. Moreover, we show that new centralized coded caching schemes can be discovered by designing appropriate PDAs. Secondly, it is shown that the Ali-Niesen scheme corresponds to a special class of PDAs, which realizes the best coding gain with the least $F$. Thirdly, we present a new construction of PDAs for the centralized caching system, in which the cache size $M$ of each user (identical cache size is assumed at all users) and the number of files $N$ satisfy $M/N=1/q$ or $(q-1)/q$ ($q$ an integer with $q\geq 2$). The new construction decreases the required $F$ from the order $O\left(e^{K\cdot\left(\frac{M}{N}\ln\frac{N}{M}+\left(1-\frac{M}{N}\right)\ln\frac{N}{N-M}\right)}\right)$ of the Ali-Niesen scheme to $O\left(e^{K\cdot\frac{M}{N}\ln\frac{N}{M}}\right)$ or $O\left(e^{K\cdot\left(1-\frac{M}{N}\right)\ln\frac{N}{N-M}}\right)$ respectively, while the coding gain loss is only 1.
    Comment: 21 pages, 2 figures
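    The PDA formulation lends itself to a quick computational check. The sketch below is my own minimal Python illustration (not code from the paper): it builds the Ali-Niesen array for $K$ users with $t = KM/N$ and verifies the standard PDA conditions (equal stars per column; occurrences of each integer in distinct rows and columns, with stars at the crossing entries).

```python
from itertools import combinations
from collections import defaultdict

def mn_pda(K, t):
    """Ali-Niesen placement delivery array: rows are t-subsets of the K
    users; the entry for (row T, user k) is '*' (cached) when k is in T,
    otherwise the label of the (t+1)-subset T | {k} (a multicast round)."""
    label = {A: s for s, A in enumerate(combinations(range(K), t + 1))}
    return [[('*' if k in T else label[tuple(sorted(set(T) | {k}))])
             for k in range(K)]
            for T in combinations(range(K), t)]

def is_pda(P):
    """Verify the defining PDA conditions on an F x K array."""
    F, K = len(P), len(P[0])
    # every column holds the same number of stars (equal cache sizes)
    if len({sum(row[k] == '*' for row in P) for k in range(K)}) != 1:
        return False
    cells = defaultdict(list)
    for j, row in enumerate(P):
        for k, e in enumerate(row):
            if e != '*':
                cells[e].append((j, k))
    # any two occurrences of one integer lie in distinct rows and columns,
    # and the two crossing entries are stars (decodability of the XOR)
    for occ in cells.values():
        for (j1, k1), (j2, k2) in combinations(occ, 2):
            if j1 == j2 or k1 == k2:
                return False
            if P[j1][k2] != '*' or P[j2][k1] != '*':
                return False
    return True
```

    For $K=4$, $t=2$ this yields a $6\times 4$ array with $\binom{4}{3}=4$ integers, i.e., subpacketization $F=\binom{K}{t}$ and coding gain $t+1$.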

    From Centralized to Decentralized Coded Caching

    We consider the problem of designing decentralized schemes for coded caching. In this problem there are $K$ users, each caching $M$ files out of a library of $N$ total files. The goal is to minimize $R$, the number of broadcast transmissions needed to satisfy all user demands. Decentralized schemes allow each cache to be populated independently, so users can join or leave without dependencies. Previous work showed that to achieve a coding gain $g$, i.e. $R \leq K(1-M/N)/g$ transmissions, each file has to be divided into a number of subpackets that is exponential in $g$. In this work we propose a simple translation scheme that converts any constant-rate centralized scheme into a random decentralized placement scheme that guarantees a target coding gain of $g$. If the file size in the original constant-rate centralized scheme is subexponential in $K$, then the file size for the resulting scheme is subexponential in $g$. When new users join, the rest of the system remains the same. However, we require an additional communication overhead of $O(\log K)$ bits to determine the new user's cache state. We also show that the worst-case rate guarantee degrades only by a constant factor due to the dynamics of user arrival and departure.
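    The exponential dependence of subpacketization on the coding gain is easy to see numerically. A small illustration of my own, assuming the Maddah-Ali--Niesen subpacketization $F=\binom{K}{KM/N}$ and coding gain $g=KM/N+1$:

```python
from math import comb

# Maddah-Ali--Niesen scheme with M/N = 1/4: coding gain g = K/4 + 1,
# subpacketization F = C(K, K/4); F explodes as K (hence g) grows
for K in (8, 16, 32, 64):
    t = K // 4
    print(f"K={K:3d}  gain g={t + 1:3d}  F={comb(K, t)}")
```

    Even these small parameters show $F$ outpacing the gain: doubling $K$ (and roughly doubling $g$) multiplies $F$ by orders of magnitude.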

    Linear Coded Caching Scheme for Centralized Networks

    Coded caching systems have been widely studied to reduce data transmission during peak traffic times. In practice, two important parameters of a coded caching system should be considered: the rate, which is the maximum amount of data transmitted during the peak traffic time, and the subpacketization level, i.e., the number of packets each file is divided into when a coded caching scheme is implemented. We prefer to design a scheme with rate and packet number as small as possible, since they reflect the transmission efficiency and the complexity of the caching scheme, respectively. In this paper, we first characterize a coded caching scheme from the viewpoint of linear algebra and show that designing a linear coded caching scheme is equivalent to constructing three classes of matrices satisfying certain rank conditions. Then, based on invariant linear subspaces and combinatorial design theory, several classes of new coded caching schemes over $\mathbb{F}_2$ are obtained by constructing these three classes of matrices. It turns out that the rate of our new schemes is the same as that of the schemes constructed by Yan et al. (IEEE Trans. Inf. Theory 63, 5821-5833, 2017), but the packet number is significantly reduced. A concatenating construction is then used to accommodate a flexible number of users. Finally, by means of these matrices, we show that minimum storage regenerating codes can also be used to construct coded caching schemes.
    Comment: 23 pages

    Towards Practical File Packetizations in Wireless Device-to-Device Caching Networks

    We consider wireless device-to-device (D2D) caching networks with single-hop transmissions. Previous work has demonstrated that caching and coded multicasting can significantly increase per-user throughput. However, the state-of-the-art coded caching schemes for D2D networks are generally impractical because content files are partitioned into a number of packets that is exponential in the number of users when both library and memory sizes are fixed. In this paper, we present two combinatorial approaches to D2D coded caching network design with reduced packetizations and the desired throughput gain over conventional uncoded unicasting. The first approach uses a "hypercube" design, where each user caches a "hyperplane" in this hypercube and the intersections of "hyperplanes" represent coded multicasting codewords. In addition, we extend the hypercube approach to a decentralized design. The second approach uses the Ruzsa-Szemerédi graph to define the cache placement; disjoint matchings on this graph represent coded multicasting codewords. Both approaches yield an exponential reduction in packetizations while providing a per-user throughput that is comparable to the state-of-the-art designs in the literature. Furthermore, we apply spatial reuse to the new D2D network designs to further reduce the required packetizations and significantly improve per-user throughput for some parameter regimes.
    Comment: 32 pages, 5 figures
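    The "hypercube" placement can be sketched concretely. Below is my own minimal reading of it (an illustration, not the authors' code): packets of each file are indexed by points of $[q]^d$, the $K = dq$ users are indexed by pairs $(i,a)$, and user $(i,a)$ caches the hyperplane $x_i = a$, so each user stores a $1/q$ fraction of every file with subpacketization $q^d$.

```python
from itertools import product

d, q = 3, 2                       # K = d*q = 6 users, F = q**d = 8 packets
points = list(product(range(q), repeat=d))

# user (i, a) caches the hyperplane {x : x_i = a}
cache = {(i, a): {x for x in points if x[i] == a}
         for i in range(d) for a in range(q)}

# every user stores exactly a 1/q fraction of each file
assert all(len(S) == q ** (d - 1) for S in cache.values())

# picking one hyperplane per dimension pins down a single point:
# such intersections are where coded multicast opportunities arise
inter = cache[(0, 1)] & cache[(1, 0)] & cache[(2, 1)]
print(inter)                      # {(1, 0, 1)}
```

    For fixed $q$, the subpacketization $q^d = q^{K/q}$ is exponential in $K$ with a much smaller base than $\binom{K}{KM/N}$, which is the source of the claimed reduction.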

    Coded Caching Schemes with Linear Subpacketizations

    In a coded caching system, we prefer to design a coded caching scheme with low subpacketization and a small transmission rate (i.e., low implementation complexity and efficient transmission during peak traffic times). Placement delivery arrays (PDAs) can be used to design coded caching schemes. In this paper we propose a framework for constructing PDAs via Hamming distance. As an application, two classes of coded caching schemes with linear subpacketizations and small transmission rates are obtained.
    Comment: 14 pages

    Some new bounds of placement delivery arrays

    Coded caching is a technique that reduces the load during peak traffic times in a wireless network. The placement delivery array (PDA for short) was first introduced by Yan et al. and can be used to design coded caching schemes. In this paper, we prove some lower bounds for PDAs on the number of elements and some lower bounds on the number of columns. We also give some constructions of optimal PDAs.
    Comment: Coded caching scheme, placement delivery array, optima

    Constructions of Coded Caching Schemes with Flexible Memory Size

    Coded caching has recently become quite popular in wireless networks due to its ability to effectively reduce the transmission amount (denoted by $R$) during peak traffic times. However, to realize a coded caching scheme, each file must be divided into $F$ packets, which usually increases the computational complexity of the scheme. So we prefer to construct caching schemes that decrease the order of $F$ for practical implementations. In this paper, we construct four classes of new schemes: two classes can significantly reduce the value of $F$ by slightly increasing $R$ compared with the well-known scheme proposed by Maddah-Ali and Niesen, while $F$ in the other two classes grows sub-exponentially with $K$ at the cost of a larger $R$. It is worth noting that a tradeoff between $R$ and $F$, a hot topic in the field of caching schemes, emerges from our constructions. In addition, our constructions include all the results constructed by Yan et al. (IEEE Trans. Inf. Theory 63, 5821-5833, 2017) and some main results obtained by Shangguan et al. (arXiv preprint arXiv:1608.03989v1) as special cases.
    Comment: 18 pages

    A framework of constructing placement delivery arrays for centralized coded caching

    In a caching system, it is desirable to design a coded caching scheme with the transmission load $R$ and subpacketization $F$ as small as possible, in order to improve transmission efficiency during peak traffic times and to decrease implementation complexity. Yan et al. reformulated the centralized coded caching scheme as the design of a corresponding $F\times K$ array called a placement delivery array (PDA), where $F$ is the subpacketization and $K$ is the number of users. Motivated by several constructions of PDAs, we introduce a framework for constructing PDAs, in which each row is indexed by a row vector of a matrix called the row index matrix and each column is indexed by an element of a direct product set. Using this framework, a new scheme is obtained, which can be regarded as a generalization of some previously known schemes. When $K$ equals ${m\choose t}q^t$ for positive integers $m$, $t$ with $t<m$ and $q\geq 2$, we show that the row index matrix must be an orthogonal array if all the users have the same memory size. Furthermore, the row index matrix must be a covering array if the coding gain is ${m\choose t}$, the maximal coding gain under our framework. Consequently, lower bounds on the transmission load and subpacketization of the schemes are derived under our framework. Finally, using orthogonal arrays as the row index matrix, we obtain two more explicit classes of schemes which have significant advantages in subpacketization while their transmission load is equal or close to that of the schemes constructed by Shangguan et al. (IEEE Trans. Inf. Theory, 64, 5755-5766, 2018) for the same number of users and memory size.
    Comment: 13 pages
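    The orthogonal-array condition on the row index matrix is easy to test computationally. A minimal sketch of my own (the example matrix is a textbook $OA(4,3,2,2)$, not one taken from the paper):

```python
from itertools import combinations, product
from collections import Counter

def is_orthogonal_array(A, q, t):
    """Strength-t orthogonal array over q symbols: within every choice of
    t columns, each of the q**t possible tuples appears equally often."""
    N = len(A)
    if N % q ** t:
        return False
    lam = N // q ** t                     # the index of the array
    for cols in combinations(range(len(A[0])), t):
        seen = Counter(tuple(row[c] for c in cols) for row in A)
        if any(seen[tup] != lam for tup in product(range(q), repeat=t)):
            return False
    return True

# OA(4, 3, 2, 2): 4 runs, 3 factors, 2 levels, strength 2, index 1
A = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(is_orthogonal_array(A, q=2, t=2))   # True
```

    The strength condition is exactly what makes every pair (more generally, every $t$-tuple) of column values occur uniformly, which is what a uniform-memory placement requires of the row index matrix under this framework.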

    On Combination Networks with Cache-aided Relays and Users

    Caching is an efficient way to reduce peak-hour network traffic congestion by storing some contents in the users' caches without knowledge of later demands. The coded caching strategy was originally proposed by Maddah-Ali and Niesen to provide an additional coded caching gain compared with the conventional uncoded scheme. Under practical considerations, the caching model was recently studied in relay networks, in particular the combination network, where the central server communicates with $K=\binom{H}{r}$ users (each with a cache of $M$ files) through $H$ intermediate relays, and each user is connected to a different $r$-subset of relays. Several inner and outer bounds were proposed for combination networks with end-user caches. This paper extends the authors' recent work on centralized combination networks with end-user caches to a more general setting, where both relays and users have caches. In contrast to existing schemes, in which the packets transmitted from the server are independent of the cached contents of the relays, we propose a novel caching scheme that creates an additional coded caching gain for the load transmitted from the server with the help of the contents cached in the relays. We also show that the proposed scheme outperforms the state-of-the-art approaches.
    Comment: 7 pages, 2 figures, WSA 201
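    The combination-network topology itself is simple to enumerate. A small sketch with illustrative parameters $H=4$, $r=2$ (my own choice, not taken from the paper):

```python
from itertools import combinations
from math import comb

H, r = 4, 2
relays = range(H)

# each user is identified with the r-subset of relays it connects to
users = list(combinations(relays, r))
assert len(users) == comb(H, r)            # K = C(H, r) = 6

# relay-side view: relay h serves every user whose subset contains h,
# so each relay serves C(H-1, r-1) users
served = {h: [u for u in users if h in u] for h in relays}
assert all(len(v) == comb(H - 1, r - 1) for v in served.values())
```

    This symmetric connectivity is what lets cached content at a relay be reused across all $\binom{H-1}{r-1}$ users it serves.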

    Multi-access Coded Caching Schemes From Cross Resolvable Designs

    We present a novel caching and coded delivery scheme for a multi-access network where multiple users can have access to the same cache (shared cache) and any cache can assist multiple users. This scheme is obtained from resolvable designs satisfying certain conditions, which we call {\it cross resolvable designs}. To compare multi-access coded schemes with different numbers of users, we normalize the rate of each scheme by the number of users served. Based on this per-user rate, we show that our scheme performs better than the well-known Maddah-Ali and Niesen (MaN) scheme and the recently proposed SPE scheme ("Multi-access coded caching: gains beyond cache-redundancy" by Serbetci, Parrinello and Elia). It is shown that the resolvable designs arising from affine planes are cross resolvable designs, and our scheme based on them performs better than the MaN scheme for large memory sizes. The exact size beyond which our performance is better is also presented. The SPE scheme considers only the cases where the product of the number of users and the normalized cache size is 2, whereas the proposed scheme allows different choices depending on the choice of the cross resolvable design.
    Comment: 14 pages, 7 figures and 9 tables. In this version one subsection in Section IV and a new Section V have been added.
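    Resolvability of the affine-plane design is easy to verify directly. A minimal sketch of my own for a prime order $q$ (the additional cross-resolvability conditions of the paper are not checked here, only plain resolvability):

```python
q = 3                                   # affine plane AG(2, q), q prime
points = [(x, y) for x in range(q) for y in range(q)]

# q + 1 parallel classes of lines: slopes 0..q-1 plus the vertical class
classes = [[[(x, (m * x + c) % q) for x in range(q)] for c in range(q)]
           for m in range(q)]
classes.append([[(c, y) for y in range(q)] for c in range(q)])

# resolvable: every parallel class partitions the q**2 points
assert len(classes) == q + 1
for cls in classes:
    assert sorted(p for line in cls for p in line) == sorted(points)
```

    In the design-theoretic view, points play the role of packet indices and blocks the role of cache contents, with the parallel classes giving the "resolutions" exploited by the delivery phase.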