8 research outputs found

    Multi-access Coded Caching with Optimal Rate and Linear Subpacketization under PDA and Consecutive Cyclic Placement

    This work considers the multi-access caching system proposed by Hachem et al., where each user has access to L neighboring caches in a cyclic wrap-around fashion. We first propose a placement strategy called consecutive cyclic placement, which achieves the maximal local caching gain. Then, under the consecutive cyclic placement, we derive the optimal coded caching gain from the perspective of the Placement Delivery Array (PDA), thereby obtaining a lower bound on the rate of PDA. Finally, under the consecutive cyclic placement, we construct a class of PDAs, leading to a multi-access coded caching scheme with linear subpacketization that achieves the derived lower bound for some parameters; for the remaining parameters, the achieved coded caching gain is only 1 less than the optimal one. Analytical and numerical comparisons of the proposed scheme with existing schemes are provided to validate its performance. Comment: 30 pages, 7 figures
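
    To make the access model concrete, the following minimal Python sketch enumerates the cyclic wrap-around access sets described in the abstract: K users and K caches, with user k reading caches k, k+1, ..., k+L-1 modulo K. The function name and the toy parameters are illustrative assumptions; the paper's placement and delivery design is not reproduced here.

    def cyclic_access_sets(K: int, L: int) -> list[list[int]]:
        # For each of the K users, list the indices of the L consecutive caches
        # it can access under the cyclic wrap-around topology.
        return [[(k + i) % K for i in range(L)] for k in range(K)]

    # Example: K = 6 users/caches, each user accesses L = 2 consecutive caches.
    for user, caches in enumerate(cyclic_access_sets(6, 2)):
        print(f"user {user} -> caches {caches}")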

    Coded Caching Schemes for Multiaccess Topologies via Combinatorial Design

    This paper studies multiaccess coded caching (MACC) in which the connectivity topology between the users and the caches can be described by a class of combinatorial designs. Our model includes as special cases several MACC topologies considered in previous works. The considered MACC network includes a server containing N files, Γ cache nodes, and K cacheless users, where each user can access L cache nodes. The server is connected to the users via an error-free shared link, and the users can directly access the content in their connected cache nodes. Our goal is to minimise the worst-case transmission load on the shared link in the delivery phase. The main limitation of existing MACC works is that only some specific access topologies are considered, so the number of users K must be either linear or exponential in Γ. We overcome this limitation by formulating a new access topology derived from two classical combinatorial structures, referred to as the t-design and the t-group divisible design. In these topologies, K scales linearly, polynomially, or even exponentially with Γ. By leveraging the properties of these combinatorial structures, we propose two classes of coded caching schemes for a flexible number of users, where the number of users can scale linearly, polynomially, or exponentially with the number of cache nodes. In addition, our schemes can unify most schemes for the shared-link network and many schemes for the multi-access network, except for the cyclic wrap-around topology. Comment: 48 pages
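
    As a rough illustration of how the number of users can grow polynomially with the number of cache nodes, the sketch below builds one simple access topology in which each user is identified with a distinct L-subset of the Γ cache nodes, so K = C(Γ, L). This is an assumed toy topology for intuition only; the t-design and t-group-divisible-design constructions of the paper are more general.

    from itertools import combinations
    from math import comb

    def subset_topology(num_caches: int, L: int) -> list[tuple[int, ...]]:
        # Each returned tuple lists the cache nodes accessed by one user.
        return list(combinations(range(num_caches), L))

    gamma, L = 5, 2
    users = subset_topology(gamma, L)
    assert len(users) == comb(gamma, L)  # K = C(Γ, L) users, polynomial in Γ for fixed L
    print(f"Γ = {gamma}, L = {L}: K = {len(users)} users")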

    Hierarchical Cache-Aided Linear Function Retrieval with Security and Privacy Constraints

    The hierarchical caching system, in which a server connects with multiple mirror sites, each connecting with a distinct set of users, and both the mirror sites and the users are equipped with caching memories, has been widely studied. However, all existing works focus on single-file retrieval, i.e., each user requests one file, and ignore the security and privacy threats in communications. In this paper we investigate the linear function retrieval problem for hierarchical caching systems with content security and demand privacy, i.e., each user requests a linear combination of files, while the files in the library are protected against wiretappers and users' demands are kept unknown to other users and to unconnected mirror sites. First, we propose a new combinatorial structure named the hierarchical placement delivery array (HPDA), which characterizes the data placement and delivery strategy of a coded caching scheme. Then we construct two classes of HPDAs. Consequently, two classes of schemes, with and without security and privacy, are obtained: the first is dedicated to minimizing the transmission load of the first hop and achieves the optimal first-hop load when the security and privacy constraints are ignored; the second has more flexible memory-size parameters and a lower subpacketization than the first, and achieves a tradeoff between subpacketization and transmission load. Comment: arXiv admin note: substantial text overlap with arXiv:2205.0023
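
    The phrase "linear function retrieval" can be illustrated with a small assumed example: over GF(2), a requested linear combination of files is simply the bitwise XOR of the files with coefficient 1. The helper below is a toy sketch of that demand model only, not of the HPDA construction or of its security and privacy mechanisms.

    def xor_combination(files: list[bytes], coefficients: list[int]) -> bytes:
        # GF(2) linear combination: XOR together the files whose coefficient is 1.
        assert len(files) == len(coefficients)
        result = bytearray(len(files[0]))
        for data, c in zip(files, coefficients):
            if c % 2:
                for i, byte in enumerate(data):
                    result[i] ^= byte
        return bytes(result)

    # A user demanding W1 + W3 (over GF(2)) sends the coefficient vector (1, 0, 1).
    library = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
    print(xor_combination(library, [1, 0, 1]).hex())  # -> fe02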