79 research outputs found

    Centralized Coded Caching with User Cooperation

    In this paper, we consider the coded-caching broadcast network with user cooperation, where a server connects with multiple users and the users can cooperate with each other through a cooperation network. We propose a centralized coded caching scheme based on a new deterministic placement strategy and a parallel delivery strategy. It is shown that the new scheme optimally allocates the communication loads between the server and the users, obtaining a cooperation gain and a parallel gain that greatly reduce the transmission delay. Furthermore, we show that the number of users who send information in parallel should decrease as the users' cache size increases; in other words, letting more users send information in parallel can be harmful. Finally, we derive a constant multiplicative gap between the lower and upper bounds on the transmission delay, which proves that our scheme is order optimal.
    Comment: 9 pages, submitted to ITW201
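The scheme above builds on centralized coded caching in the Maddah-Ali and Niesen sense. As background, a minimal sketch of the classic centralized placement it extends (not the paper's cooperative variant; the function name and parameters are illustrative): each file is split into C(K, t) subfiles with t = KM/N, and user k caches every subfile whose index set contains k.

```python
from itertools import combinations

def mn_placement(K, M, N):
    # Classic centralized placement: split each file into C(K, t)
    # subfiles, t = K*M/N (assumed an integer); user k caches the
    # subfiles whose index set contains k.
    t = K * M // N
    subsets = list(combinations(range(K), t))
    return {k: [s for s in subsets if k in s] for k in range(K)}

cache = mn_placement(K=4, M=2, N=4)  # t = 2, so 6 subfiles per file
# each user stores C(K-1, t-1) = 3 of the 6 subfiles, a fraction M/N
```

Each user thus fills exactly its cache budget of M/N of the library, which is what enables the multicast coding gain in delivery.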

    Multi-access Coded Caching with Optimal Rate and Linear Subpacketization under PDA and Consecutive Cyclic Placement

    This work considers the multi-access caching system proposed by Hachem et al., where each user has access to L neighboring caches in a cyclic wrap-around fashion. We first propose a placement strategy, called consecutive cyclic placement, which achieves the maximal local caching gain. Then, under the consecutive cyclic placement, we derive the optimal coded caching gain from the perspective of the placement delivery array (PDA), thus obtaining a lower bound on the rate of PDAs. Finally, under the consecutive cyclic placement, we construct a class of PDAs, leading to a multi-access coded caching scheme with linear subpacketization that achieves our derived lower bound for some parameters; for the remaining parameters, the achieved coded caching gain is only 1 less than the optimal one. Analytical and numerical comparisons of the proposed scheme with existing schemes are provided to validate its performance.
    Comment: 30 pages, 7 figures
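In the Hachem et al. multi-access model referenced above, user k reads the L consecutive caches k, k+1, ..., k+L-1 with cyclic wrap-around, so each user sees a fraction LM/N of every file (the maximal local caching gain). A minimal sketch of the access rule (function name is illustrative):

```python
def accessible_caches(k, L, K):
    # User k accesses L neighboring caches in cyclic wrap-around order.
    return [(k + i) % K for i in range(L)]

print(accessible_caches(k=5, L=3, K=6))  # [5, 0, 1]
```

The consecutive cyclic placement exploits exactly this wrap-around structure so that the L caches a user sees hold disjoint parts of each file.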

    Hierarchical Cache-Aided Linear Function Retrieval with Security and Privacy Constraints

    The hierarchical caching system, where a server connects with multiple mirror sites, each connecting with a distinct set of users, and both the mirror sites and the users are equipped with caching memories, has been widely studied. However, all existing works focus on single-file retrieval, i.e., each user requests one file, and ignore the security and privacy threats in communications. In this paper we investigate the linear function retrieval problem for hierarchical caching systems with content security and demand privacy, i.e., each user requests a linear combination of files, while the files in the library are protected against wiretappers and users' demands are kept unknown to other users and to unconnected mirror sites. First, we propose a new combinatorial structure named the hierarchical placement delivery array (HPDA), which characterizes the data placement and delivery strategy of a coded caching scheme. Then we construct two classes of HPDAs. Consequently, we obtain two classes of schemes, with or without security and privacy: the first is dedicated to minimizing the transmission load of the first hop and achieves the optimal first-hop transmission load when the security and privacy constraints are ignored; the second has more flexible memory-size parameters and a lower subpacketization than the first, and achieves a tradeoff between subpacketization and transmission loads.
    Comment: arXiv admin note: substantial text overlap with arXiv:2205.0023

    DPFormer: Learning Differentially Private Transformer on Long-Tailed Data

    The Transformer has emerged as a versatile and effective architecture with broad applications. However, it remains an open problem how to efficiently train a Transformer model of high utility with differential privacy guarantees. In this paper, we identify two key challenges in learning differentially private Transformers: the heavy computational overhead of per-sample gradient clipping, and unintentional attention distraction within the attention mechanism. In response, we propose DPFormer, equipped with Phantom Clipping and a Re-Attention Mechanism, to address these challenges. Our theoretical analysis shows that DPFormer reduces the computational cost of gradient clipping and effectively mitigates attention distraction (which would otherwise obstruct the training process and cause a significant performance drop, especially in the presence of long-tailed data). This analysis is further corroborated by empirical results on two real-world datasets, demonstrating the efficiency and effectiveness of the proposed DPFormer.
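The per-sample gradient clipping whose overhead DPFormer targets is the standard DP-SGD step: clip each example's gradient to L2 norm at most C, sum, and add Gaussian noise calibrated to C. A minimal sketch of that baseline step (not DPFormer's Phantom Clipping itself; names and the small epsilon guard are illustrative):

```python
import numpy as np

def dp_sgd_step(per_sample_grads, C, sigma, rng):
    # Clip each per-sample gradient to L2 norm at most C, sum them,
    # add Gaussian noise with standard deviation sigma*C, and average.
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, sigma * C, size=total.shape)
    return (total + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
g = dp_sgd_step(grads, C=1.0, sigma=0.0, rng=rng)  # sigma=0: no noise
```

Materializing one gradient per example is what makes this step memory- and compute-heavy for large Transformers, which is the cost Phantom Clipping aims to avoid.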

    One-Bit Byzantine-Tolerant Distributed Learning via Over-the-Air Computation

    Distributed learning has become a promising computational parallelism paradigm enabling a wide range of intelligent applications, from the Internet of Things (IoT) to autonomous driving and the healthcare industry. This paper studies distributed learning in wireless data center networks, which contain a central edge server and multiple edge workers that collaboratively train a shared global model and benefit from parallel computing. However, the distributed nature of the learning process makes it vulnerable to faults and adversarial attacks from Byzantine edge workers, and the periodic information exchange induces severe communication and computation overhead. To achieve fast and reliable model aggregation in the presence of Byzantine attacks, we develop a signed stochastic gradient descent (SignSGD)-based hierarchical vote framework via over-the-air computation (AirComp), where one voting process is performed locally at the wireless edge by taking advantage of Bernoulli coding, while the other is carried out over the air at the central edge server by exploiting the waveform superposition property of multiple-access channels. We comprehensively analyze the impact of Byzantine attacks and of the wireless environment (channel fading and receiver noise) on the proposed framework, and characterize its convergence behavior under non-convex settings. Simulation results validate our theoretical analysis and demonstrate the robustness of the proposed framework in the presence of Byzantine attacks and receiver noise.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
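At its core, SignSGD with majority vote has each worker transmit only the sign of its gradient (one bit per coordinate), and the server updates along the coordinate-wise majority sign; this is the one-bit primitive the hierarchical AirComp vote above builds on. A minimal sketch, omitting the paper's Bernoulli coding, Byzantine handling, and over-the-air analog aggregation (names are illustrative):

```python
import numpy as np

def majority_vote(worker_signs):
    # worker_signs: (num_workers, dim) array of +/-1 votes per coordinate.
    # The global update direction is the coordinate-wise majority sign.
    return np.sign(worker_signs.sum(axis=0))

votes = np.array([[+1, -1],
                  [+1, +1],
                  [+1, -1]])      # worker 2 disagrees on coordinate 1
update = majority_vote(votes)    # [ 1., -1.]
```

Because a Byzantine worker can only flip its own one-bit vote per coordinate, the majority remains correct as long as honest workers outnumber adversarial ones on that coordinate, which is the intuition behind the robustness analysis.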

    Coded Caching Schemes for Two-dimensional Caching-aided Ultra-Dense Networks

    Coded caching is an efficient technique for reducing the transmission load in networks and has been studied in heterogeneous network settings in recent years. In this paper, we consider a new widespread caching system, the (K1, K2, U, r, M, N) two-dimensional (2D) caching-aided ultra-dense network (UDN), with a server containing N files, K1K2 cache nodes arranged on a grid with K1 rows and K2 columns, and U cache-less users randomly distributed around the cache nodes. Each cache node can cache at most M ≤ N files and has a service region determined by Euclidean distance. The server connects to the users through an error-free shared link, and a user in the service region of a cache node can freely retrieve all contents cached at that node. We aim to design a coded caching scheme for 2D caching-aided UDN systems that reduces the worst-case transmission load while meeting all possible user demands. First, we divide all possible users into four classes according to their geographical locations. Then we propose our first order-optimal scheme based on the Maddah-Ali and Niesen scheme. Furthermore, by compressing the transmitted signals of the first scheme with a Maximum Distance Separable (MDS) code, we obtain an improved order-optimal scheme with a smaller transmission load.
    Comment: 44 pages
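The geographic setup above can be made concrete: with cache nodes on a K1 x K2 integer grid, a user retrieves from every node within Euclidean distance r of its position. A minimal sketch of this service-region rule (coordinates and function name are illustrative; the paper's four user classes are determined by which and how many nodes serve a user):

```python
import math

def serving_nodes(user_xy, K1, K2, r):
    # Cache nodes sit at integer grid points (i, j) with 0 <= i < K1,
    # 0 <= j < K2; a user is served by every node within distance r.
    ux, uy = user_xy
    return [(i, j) for i in range(K1) for j in range(K2)
            if math.hypot(ux - i, uy - j) <= r]

print(serving_nodes((0.5, 0.0), K1=2, K2=2, r=0.6))  # [(0, 0), (1, 0)]
```

Users covered by several nodes effectively enjoy a larger aggregate cache, which is what the class-by-location delivery strategy exploits.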