15 research outputs found

    Delivery Time Minimization in Edge Caching: Synergistic Benefits of Subspace Alignment and Zero Forcing

    An emerging trend in next-generation communication systems is to equip network edges with additional capabilities, such as storage resources in the form of caches, to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, M transceivers and K receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with M ∈ {1, 2} and K ∈ {1, 2, 3} that satisfy M + K ≤ 4, we establish the optimal tradeoff between cache storage and latency. This is facilitated by establishing a novel converse (for arbitrary M and K) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment. Comment: submitted to ICC 2018; fixed some typos
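
    For reference, the NDT used throughout these works is typically defined by normalizing a scheme's delivery time against an interference-free baseline. In the display below, T(μ, F, P) denotes the worst-case delivery time for files of F bits at SNR P with fractional cache size μ; these symbols are our own shorthand rather than notation taken from the abstract:

    \[
    \delta(\mu) \;=\; \lim_{P \to \infty} \lim_{F \to \infty} \frac{T(\mu, F, P)}{F / \log P},
    \]

    where F / log P approximates the time an interference-free point-to-point link needs to deliver one file at high SNR, so δ(μ) measures the multiplicative latency overhead of the cache-aided network.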

    How Much Can D2D Communication Reduce Content Delivery Latency in Fog Networks with Edge Caching?

    A Fog-Radio Access Network (F-RAN) is studied in which cache-enabled Edge Nodes (ENs) with dedicated fronthaul connections to the cloud aim at delivering contents to mobile users. Using an information-theoretic approach, this work tackles the problem of quantifying the potential latency reduction that can be obtained by enabling Device-to-Device (D2D) communication over out-of-band broadcast links. Following prior work, the Normalized Delivery Time (NDT) --- a metric that captures the high signal-to-noise ratio worst-case latency --- is adopted as the performance criterion of interest. Joint edge caching, downlink transmission, and D2D communication policies based on compress-and-forward are proposed that are shown to be information-theoretically optimal to within a constant multiplicative factor of two for all values of the problem parameters, and to achieve the minimum NDT for a number of special cases. The analysis provides insights on the role of D2D cooperation in improving the delivery latency. Comment: Submitted to the IEEE Transactions on Communications
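
    As a compact restatement of the factor-of-two guarantee above: writing δ_ach for the NDT achieved by the proposed caching, fronthaul, downlink, and D2D policies and δ* for the minimum NDT (our shorthand, not the paper's notation), the claim is that, for all values of the problem parameters,

    \[
    \delta^{*} \;\le\; \delta_{\mathrm{ach}} \;\le\; 2\, \delta^{*} .
    \]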

    Fundamental Limits of Cloud and Cache-Aided Interference Management with Multi-Antenna Edge Nodes

    In fog-aided cellular systems, content delivery latency can be minimized by jointly optimizing edge caching and transmission strategies. In order to account for the cache capacity limitations at the Edge Nodes (ENs), transmission generally involves both fronthaul transfer from a cloud processor with access to the content library to the ENs and wireless delivery from the ENs to the users. In this paper, the resulting problem is studied from an information-theoretic viewpoint by making the following practically relevant assumptions: 1) the ENs have multiple antennas; 2) only uncoded fractional caching is allowed; 3) the fronthaul links are used to send fractions of contents; and 4) the ENs are constrained to use one-shot linear precoding on the wireless channel. Assuming offline proactive caching and focusing on a high signal-to-noise ratio (SNR) latency metric, the optimal information-theoretic performance is investigated under both serial and pipelined fronthaul-edge transmission modes. The analysis characterizes the minimum high-SNR latency in terms of Normalized Delivery Time (NDT) for worst-case users' demands. The characterization is exact for a subset of system parameters, and is otherwise optimal within a multiplicative factor of 3/2 for the serial case and of 2 for the pipelined case. The results bring insights into the optimal interplay between edge and cloud processing in fog-aided wireless networks as a function of system resources, including the number of antennas at the ENs, the ENs' cache capacity and the fronthaul capacity. Comment: 34 pages, 15 figures, submitted
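
    As an informal reading aid, in our own shorthand rather than the paper's notation: if δ_F and δ_E denote the NDT contributions of the fronthaul and wireless edge phases of a given delivery scheme, serial operation pays for the two phases back to back, whereas pipelining lets them overlap so that only the slower phase dominates:

    \[
    \delta_{\mathrm{serial}} \;=\; \delta_F + \delta_E ,
    \qquad
    \delta_{\mathrm{pipelined}} \;\approx\; \max(\delta_F, \delta_E) .
    \]

    This rough accounting suggests why pipelining can only reduce latency and why the two modes come with separate optimality gaps (3/2 and 2, respectively).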

    Cache-aided combination networks with interference

    Centralized coded caching and delivery is studied for a radio access combination network (RACN), whereby a set of H edge nodes (ENs), connected to a cloud server via orthogonal fronthaul links with limited capacity, serve a total of K user equipments (UEs) over wireless links. The cloud server is assumed to hold a library of N files, each of size F bits; and each user, equipped with a cache of size μ_R N F bits, is connected to a distinct set of r ENs, each of which is equipped with a cache of size μ_T N F bits, where μ_T, μ_R ∈ [0, 1] are the fractional cache capacities of the ENs and the UEs, respectively. The objective is to minimize the normalized delivery time (NDT), which refers to the worst-case delivery latency when each user requests a single distinct file from the library. Three coded caching and transmission schemes are considered, namely the MDS-IA, soft-transfer and zero-forcing (ZF) schemes. MDS-IA utilizes maximum distance separable (MDS) codes in the placement phase and real interference alignment (IA) in the delivery phase. The achievable NDT for this scheme is presented for r = 2 and arbitrary fractional cache sizes μ_T and μ_R, and also for an arbitrary value of r and fractional cache size μ_T when the cache capacity of the UE is above a certain threshold. The soft-transfer scheme utilizes soft transfer of coded symbols to the ENs, which implement ZF over the edge links. The achievable NDT for this scheme is presented for arbitrary r and arbitrary fractional cache sizes μ_T and μ_R. The last scheme utilizes ZF between the ENs and the UEs without the participation of the cloud server in the delivery phase. The achievable NDT for this scheme is presented for an arbitrary value of r when the total cache size at a pair of UE and EN is sufficient to store the whole library, i.e., μ_T + μ_R ≥ 1. The results indicate that the fronthaul capacity determines which scheme achieves a better performance in terms of the NDT, and the soft-transfer scheme becomes favorable as the fronthaul capacity increases.
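
    The threshold quoted for the last scheme follows from a simple storage count: the combined cache size at a UE (μ_R N F bits) and at one of its serving ENs (μ_T N F bits) is at least the library size of N F bits precisely when

    \[
    \mu_T N F + \mu_R N F \;\ge\; N F
    \quad\Longleftrightarrow\quad
    \mu_T + \mu_R \;\ge\; 1 ,
    \]

    which is the regime in which the delivery phase can proceed without any participation of the cloud server.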

    Spectrum Sharing, Latency, and Security in 5G Networks with Application to IoT and Smart Grid

    The surge of mobile devices, such as smartphones and tablets, demands additional capacity. On the other hand, the Internet-of-Things (IoT) and the smart grid, which connect numerous sensors, devices, and machines, require ubiquitous connectivity and data security. Additionally, some use cases, such as automated manufacturing processes, automated transportation, and the smart grid, require latency as low as 1 ms and reliability as high as 99.99%. To enhance throughput and support massive connectivity, sharing of the unlicensed spectrum (3.5 GHz, 5 GHz, and mmWave) is a potential solution. On the other hand, to address the latency requirements, drastic changes in the network architecture are required. The fifth generation (5G) cellular networks will embrace spectrum sharing and network architecture modifications to address throughput enhancement, massive connectivity, and low latency. To utilize the unlicensed spectrum, we propose a fixed duty-cycle based coexistence of LTE and WiFi, in which the duty cycle of LTE transmission can be adjusted based on the amount of data. In the second approach, a multi-armed bandit learning based coexistence of LTE and WiFi has been developed, in which the duty cycle of transmission and the downlink power are adapted through exploration and exploitation. This approach improves the aggregated capacity by 33%, along with cell-edge and energy-efficiency enhancements. We also investigate the performance of LTE and ZigBee coexistence using the smart grid as a scenario. Regarding low latency, we summarize the existing works into three domains in the context of 5G networks: core, radio, and caching networks. Along with this, fundamental constraints for achieving low latency are identified, followed by a general overview of exemplary 5G networks. Besides that, a loop-free, low-latency, local-decision based routing protocol is derived in the context of the smart grid. This approach ensures low-latency and reliable data communication for stationary devices. To address data security in wireless communication, we introduce geo-location based data encryption, along with node authentication by the k-nearest neighbor algorithm. In the second approach, node authentication by a support vector machine, along with public-private key management, is proposed. Both approaches ensure data security without increasing the packet overhead compared to the existing approaches.
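
    To make the bandit-based coexistence idea concrete, the sketch below implements a generic epsilon-greedy multi-armed bandit that repeatedly picks an LTE duty cycle and updates its value estimates from an observed throughput reward. The arm set, the epsilon value, and the synthetic reward function are illustrative assumptions chosen for demonstration only; they do not reproduce the learning algorithm, reward definition, or parameters used in the work summarized above.

    import random

    # Candidate LTE duty cycles (fraction of each coexistence frame used by LTE).
    # The arm set is an illustrative assumption, not taken from the original work.
    DUTY_CYCLES = [0.2, 0.4, 0.6, 0.8]

    class EpsilonGreedyBandit:
        """Epsilon-greedy multi-armed bandit over a finite set of arms."""

        def __init__(self, arms, epsilon=0.1):
            self.arms = arms
            self.epsilon = epsilon
            self.counts = [0] * len(arms)    # how often each arm was played
            self.values = [0.0] * len(arms)  # running mean reward per arm

        def select_arm(self):
            # Explore with probability epsilon, otherwise exploit the best estimate.
            if random.random() < self.epsilon:
                return random.randrange(len(self.arms))
            return max(range(len(self.arms)), key=lambda i: self.values[i])

        def update(self, arm, reward):
            # Incremental update of the mean reward for the chosen arm.
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

    def measure_aggregate_throughput(duty_cycle):
        """Synthetic stand-in for the aggregated LTE + WiFi throughput that a real
        deployment would measure after transmitting with the chosen duty cycle."""
        lte = 10.0 * duty_cycle                     # LTE gains with more airtime
        wifi = 8.0 * (1.0 - duty_cycle) ** 1.5      # WiFi degrades as LTE airtime grows
        return lte + wifi + random.gauss(0.0, 0.5)  # noisy observation

    if __name__ == "__main__":
        bandit = EpsilonGreedyBandit(DUTY_CYCLES, epsilon=0.1)
        for _ in range(2000):
            arm = bandit.select_arm()
            reward = measure_aggregate_throughput(DUTY_CYCLES[arm])
            bandit.update(arm, reward)
        best = max(range(len(DUTY_CYCLES)), key=lambda i: bandit.values[i])
        print("learned duty cycle:", DUTY_CYCLES[best])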