
    Reduced complexity multicast beamforming and group assignment schemes for multi-antenna coded caching

    Abstract. In spite of recent advancements in wireless communication technologies and data delivery networks, it is unlikely that the speeds supported by these networks will keep up with the exponentially increasing demand caused by the widespread adoption of high-speed, data-intensive applications. One appealing idea proposed to address this issue is coded caching, an innovative data delivery technique that exploits the network's aggregate cache rather than the individual memory available to each user. Coded caching boosts data rates by distributing cache content throughout the network and delivering independent content to many users at a time. Despite the original theoretical promise of large caching gains, in practice coded caching suffers from severe bottlenecks that dramatically limit these gains. These bottlenecks include the need for complex successive interference cancellation (SIC) at the receiver, an exponential increase in subpacketization, applicability restricted to a limited range of input parameters, and performance losses in the low- and mid-SNR (signal-to-noise ratio) regimes. In this study, we present a novel coded caching scheme based on user grouping for cache-aided multi-input single-output (MISO) networks. One special property of this new scheme is its applicability to every set of input values for the user count (K), the transmitter-side antenna count (L), and the global coded caching gain (t). Moreover, for a fixed t, the scheme achieves theoretical sum-DoF optimality with no limitations. The strategy yields superior performance in terms of subpacketization when the input parameters satisfy (t+L)/(t+1) ∈ ℕ. This performance boost is enabled by the underlying user grouping structure during data delivery. However, when the input parameters do not satisfy (t+L)/(t+1) ∈ ℕ, the multicast and unicast messages must be constructed using a tree diagram in order to guarantee the symmetry of the scheme and optimal DoF, resulting in excess subpacketization and transmission count. Nevertheless, the simple receiver structure without the SIC requirement not only reduces implementation complexity but also enables the use of state-of-the-art methods to readily design optimized transmit beamformers that maximize the achievable symmetric rate. Finally, we use numerical analysis to compare the proposed scheme with well-known coded caching schemes in the literature.
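As a quick sanity check of the parameter condition mentioned in the abstract, the short Python sketch below tests whether (t+L)/(t+1) is an integer for a few example triples (K, L, t) and reports the sum-DoF of t + L that multi-antenna coded caching targets; the helper name and example values are illustrative only and are not taken from the paper.

```python
# Illustrative sketch (not from the paper): check whether (t + L) / (t + 1)
# is an integer, the case in which the user-grouping delivery avoids the
# tree-based construction, and report the targeted sum-DoF of t + L.
def grouping_friendly(L: int, t: int) -> bool:
    """True when (t + L) / (t + 1) is an integer."""
    return (t + L) % (t + 1) == 0

if __name__ == "__main__":
    for K, L, t in [(8, 4, 2), (8, 3, 2), (10, 5, 4)]:
        sum_dof = t + L  # degrees of freedom targeted by multi-antenna coded caching
        print(f"K={K}, L={L}, t={t}: sum-DoF={sum_dof}, "
              f"grouping-friendly={grouping_friendly(L, t)}")
```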

    The Role of Caching in Future Communication Systems and Networks

    This paper has the following ambitious goal: to convince the reader that content caching is an exciting research topic for future communication systems and networks. Caching has been studied for more than 40 years and has recently received increased attention from industry and academia. Novel caching techniques promise to push network performance to unprecedented limits, but they also pose significant technical challenges. This tutorial provides a brief overview of existing caching solutions, discusses seminal papers that open new directions in caching, and presents the contributions of this special issue. We analyze the challenges that caching needs to address today, also considering an industry perspective, and identify bottleneck issues that must be resolved to unleash the full potential of this promising technique.

    A White Paper on Broadband Connectivity in 6G

    Executive Summary. This white paper explores the road to implementing broadband connectivity in future 6G wireless systems. Different categories of use cases are considered, from extreme capacity with peak data rates up to 1 Tbps, to raising the typical data rates by orders of magnitude, to supporting broadband connectivity at railway speeds up to 1000 km/h. To achieve these goals, not only will the terrestrial networks be evolved, but they will also be integrated with satellite networks, all facilitating autonomous systems and various interconnected structures. We believe that several categories of enablers at the infrastructure, spectrum, and protocol/algorithmic levels are required to realize the intended broadband connectivity goals in 6G. At the infrastructure level, we consider ultra-massive MIMO technology (possibly implemented using holographic radio), intelligent reflecting surfaces, user-centric and scalable cell-free networking, integrated access and backhaul, and integrated space and terrestrial networks. At the spectrum level, the network must seamlessly utilize sub-6 GHz bands for coverage and spatial multiplexing of many devices, while higher bands will be used for pushing the peak rates of point-to-point links. The latter path will lead to THz communications, complemented by visible light communications in specific scenarios. At the protocol/algorithmic level, the enablers include improved coding, modulation, and waveforms to achieve lower latencies, higher reliability, and reduced complexity. Different options will be needed to optimally support different use cases. Resource efficiency can be further improved by using various combinations of full-duplex radios, interference management based on rate-splitting, machine-learning-based optimization, coded caching, and broadcasting. Finally, the three levels of enablers must be utilized not only to deliver better broadband services in urban areas, but also to provide full-coverage broadband connectivity, which must be one of the key outcomes of 6G.

    Fundamental Limits of Caching: Symmetry Structure and Coded Placement Schemes

    Caching is a technique to reduce the communication load in peak hours by prefetching content during off-peak hours. In 2014, Maddah-Ali and Niesen introduced a framework for coded caching and showed that significant improvement can be obtained compared to uncoded caching. Considerable effort has been devoted to identifying the precise information-theoretic fundamental limit of such systems; however, the difficulty of this task has also become clear. One reason for this difficulty is that the original coded caching setting allows multiple demand types during delivery, which introduces tension in the coding strategy to accommodate all of them. We seek to develop a better understanding of the fundamental limit of coded caching.

In order to characterize the fundamental tradeoff between the amount of cache memory and the delivery transmission rate of multiuser caching systems, various coding schemes have been proposed in the literature. These schemes can largely be categorized into two classes, namely uncoded prefetching schemes and coded prefetching schemes. While uncoded prefetching schemes in general offer order-wise optimal performance, coded prefetching schemes often perform better in the low cache memory regime. At first sight it seems impossible to connect these two different types of coding schemes, yet finding a unified coding scheme that achieves the optimal memory-rate tradeoff is an important and interesting problem. We take a first step in this direction and provide a connection between the uncoded prefetching scheme proposed by Maddah-Ali and Niesen (and its improved version by Yu et al.) and the coded prefetching scheme proposed by Tian and Chen. The intermediate operating points of this general scheme in fact provide new memory-rate tradeoff points not previously known to be achievable in the literature. This new general coding scheme is then presented and analyzed rigorously, yielding a new inner bound to the memory-rate tradeoff for the caching problem.

While studying the general case can be difficult, we found that studying single-demand-type systems provides important insights. Motivated by these findings, we focus on systems where the number of users and the number of files are the same and the demand type is the one in which all files are requested. A novel coding scheme is proposed, which provides several optimal memory-transmission operating points. Outer bounds for this class of systems are also considered, and their relation with existing bounds is discussed.

Outer-bounding the fundamental limits of the coded caching problem is difficult, not only because there is a vast number of information inequalities and problem-specific equalities to choose from, but also because identifying a useful (and often quite small) subset of them, and determining how to combine them to produce an improved outer bound, is a hard problem. Information inequalities can be used to derive the fundamental limits of information systems. Many information inequalities and problem-specific constraints are linear equalities or inequalities of joint entropies, and thus outer bounding the fundamental limits can be viewed as, and in principle computed through, linear programming. However, for many practical engineering problems, the resultant linear program (LP) is very large, rendering such a computational approach almost completely inapplicable in practice.
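For orientation on the memory-rate tradeoff discussed above, the hedged Python sketch below evaluates the well-known Maddah-Ali and Niesen delivery rate (K - t)/(1 + t) at the uncoded-placement corner points M = tN/K and contrasts it with conventional uncoded caching; the function names and example parameters are illustrative and do not come from the dissertation.

```python
# Illustrative sketch: delivery rate (in units of a file) at the corner points
# M = t*N/K of the Maddah-Ali--Niesen uncoded-prefetching scheme, compared with
# conventional uncoded caching, which only benefits from the local cache.
def mn_rate(K: int, t: int) -> float:
    """Maddah-Ali--Niesen coded caching rate at cache size M = t*N/K."""
    return (K - t) / (1 + t)

def uncoded_rate(K: int, N: int, M: float) -> float:
    """Uncoded caching: each of K users still requests the uncached 1 - M/N fraction."""
    return K * (1 - M / N)

if __name__ == "__main__":
    K = N = 10
    for t in range(K + 1):
        M = t * N / K
        print(f"M={M:4.1f}: coded rate={mn_rate(K, t):5.2f}, "
              f"uncoded rate={uncoded_rate(K, N, M):5.2f}")
```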
The symmetry inherent in the caching problem can, however, be exploited to greatly reduce the size of the LP described above. We provide a method to pinpoint this reduction by counting the number of orbits induced by the symmetry on the set of LP variables and on the set of LP constraints, respectively. We propose a generic three-layer decomposition of the group structures for this purpose. This general approach can also be applied to various other problems, such as extremal pairwise cyclically symmetric entropy inequalities and the regenerating code problem.

Decentralized coded caching is applicable in scenarios where the server is uninformed of the number of active users and their identities, as in a wireless or mobile environment. We propose a decentralized coded prefetching strategy in which both the prefetching and the delivery are coded. The proposed strategy indeed outperforms the existing decentralized uncoded caching strategy in regimes of small cache size when the number of files is less than the number of users. Methods to manage the coding overhead are further suggested.
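As a generic illustration of the orbit-counting idea (not the dissertation's three-layer decomposition), the Python sketch below enumerates joint-entropy variables H(S) over nonempty subsets S of a small exchangeable problem and counts the orbits induced by permuting the indices; subsets in the same orbit can share a single LP variable, which is the source of the reduction.

```python
# Illustrative sketch: count orbits of joint-entropy LP variables under a
# full index-permutation symmetry group (a toy stand-in for problem symmetry).
from itertools import combinations, permutations

def count_orbits(num_vars: int):
    """LP variables are joint entropies H(S) over nonempty subsets S of
    {1, ..., num_vars}; the symmetry group is all permutations of the indices."""
    group = list(permutations(range(1, num_vars + 1)))
    subsets = [s for r in range(1, num_vars + 1)
               for s in combinations(range(1, num_vars + 1), r)]
    canonical = set()
    for s in subsets:
        # canonical representative: lexicographically smallest image under the group
        canonical.add(min(tuple(sorted(g[i - 1] for i in s)) for g in group))
    return len(subsets), len(canonical)

if __name__ == "__main__":
    total, orbits = count_orbits(4)
    print(f"{total} joint-entropy variables collapse to {orbits} orbits")  # 15 -> 4
```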