
    When Backpressure Meets Predictive Scheduling

    Motivated by the increasing popularity of learning and predicting human user behavior in communication and computing systems, in this paper, we investigate the fundamental benefit of predictive scheduling, i.e., predicting and pre-serving arrivals, in controlled queueing systems. Based on a lookahead window prediction model, we first establish a novel equivalence between the predictive queueing system with a \emph{fully-efficient} scheduling scheme and an equivalent queueing system without prediction. This connection allows us to analytically demonstrate that predictive scheduling necessarily improves system delay performance and can drive it to zero with increasing prediction power. We then propose the \textsf{Predictive Backpressure (PBP)} algorithm for achieving optimal utility performance in such predictive systems. \textsf{PBP} efficiently incorporates prediction into stochastic system control and avoids the great complication due to the exponential state space growth in the prediction window size. We show that \textsf{PBP} can achieve a utility performance that is within $O(\epsilon)$ of the optimal, for any $\epsilon > 0$, while guaranteeing that the system delay distribution is a \emph{shifted-to-the-left} version of that under the original Backpressure algorithm. Hence, the average packet delay under \textsf{PBP} is strictly better than that under Backpressure, and vanishes with increasing prediction window size. This implies that the resulting utility-delay tradeoff with predictive scheduling beats the known optimal $[O(\epsilon), O(\log(1/\epsilon))]$ tradeoff for systems without prediction.
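
    The \emph{shifted-to-the-left} delay claim is easy to visualize with a toy experiment. The sketch below only illustrates the idea of pre-serving arrivals under a lookahead window; it is not the paper's PBP algorithm, and the single-queue model and traffic parameters are assumptions made for brevity.

```python
import random

def simulate(arrival_prob=0.5, service_prob=0.6, slots=100_000, window=0, seed=0):
    """Toy single-server queue with Bernoulli arrivals and Bernoulli service.

    With a lookahead window `window`, a packet arriving at slot t is assumed to
    become visible to the scheduler `window` slots early, so its effective delay
    is reduced by up to `window` slots (never below zero). This is a caricature
    of pre-serving arrivals, not the PBP algorithm from the paper.
    """
    rng = random.Random(seed)
    queue = []      # arrival slots of waiting packets (FIFO)
    delays = []
    for t in range(slots):
        if rng.random() < arrival_prob:
            queue.append(t)
        if queue and rng.random() < service_prob:
            arrived = queue.pop(0)
            delays.append(max(0, (t - arrived) - window))
    return sum(delays) / len(delays)

if __name__ == "__main__":
    for w in (0, 1, 5, 10):
        print(f"window={w:2d}  average delay ~ {simulate(window=w):.2f}")
```

    Running the sketch shows the average delay shrinking toward zero as the window grows, which is the qualitative behavior the abstract describes.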

    Enabling RAN Slicing Through Carrier Aggregation in mmWave Cellular Networks

    The ever-increasing number of connected devices and of new and heterogeneous mobile use cases implies that 5G cellular systems will face demanding technical challenges. For example, Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) scenarios present orthogonal Quality of Service (QoS) requirements that 5G aims to satisfy with a unified Radio Access Network (RAN) design. Network slicing and mmWave communications have been identified as possible enablers for 5G. They provide, respectively, the scalability and flexibility needed to adapt the network to each specific use case, and low-latency, multi-gigabit-per-second wireless links that tap into a vast, currently unused portion of the spectrum. The optimization and integration of these technologies is still an open research challenge, which requires innovations at different layers of the protocol stack. This paper proposes to combine them in a RAN slicing framework for mmWaves, based on carrier aggregation. Notably, we introduce MilliSlice, a cross-carrier scheduling policy that exploits the diversity of the carriers and maximizes their utilization, thus simultaneously guaranteeing high throughput for the eMBB slices and low latency and high reliability for the URLLC flows.
    Comment: 8 pages, 8 figures. Proc. of the 18th Mediterranean Communication and Computer Networking Conference (MedComNet 2020), Arona, Italy, 2020.
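
    As a rough illustration of cross-carrier, slice-aware scheduling (not the MilliSlice policy itself), the sketch below grants URLLC backlog first on the carrier with the best instantaneous rate and lets eMBB traffic absorb the leftover capacity; the data structures, names, and priority rule are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    slice_type: str   # "URLLC" or "eMBB"
    backlog: int      # bytes waiting to be sent

def schedule_tti(flows, carrier_rates):
    """Toy cross-carrier allocator for one TTI.

    URLLC backlog is mapped first to the carrier with the highest instantaneous
    rate; eMBB flows then share whatever capacity is left. Illustrative only.
    """
    remaining = dict(carrier_rates)        # carrier -> bytes servable this TTI
    grants = []                            # (flow name, carrier, bytes granted)
    # Latency-critical traffic is served first (False sorts before True).
    for f in sorted(flows, key=lambda f: f.slice_type != "URLLC"):
        for carrier in sorted(remaining, key=remaining.get, reverse=True):
            if f.backlog == 0 or remaining[carrier] == 0:
                continue
            served = min(f.backlog, remaining[carrier])
            grants.append((f.name, carrier, served))
            f.backlog -= served
            remaining[carrier] -= served
    return grants

if __name__ == "__main__":
    flows = [Flow("urllc-1", "URLLC", 2_000), Flow("embb-1", "eMBB", 50_000)]
    print(schedule_tti(flows, {"cc0": 10_000, "cc1": 30_000}))
```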

    A low complexity resource allocation algorithm for multicast service delivery in OFDMA networks

    Allocating and managing radio resources for multicast transmissions in Orthogonal Frequency-Division Multiple Access (OFDMA) systems is the challenging research issue addressed by this paper. A subgrouping technique, which divides the subscribers into subgroups according to the experienced channel quality, is considered to overcome the throughput limitations of conventional multicast data delivery schemes. A low-complexity algorithm, designed to work with different resource allocation strategies, is also proposed to reduce the computational complexity of the subgroup formation problem. Simulation results, obtained for the OFDMA-based Long Term Evolution (LTE) system, demonstrate the effectiveness of the proposed solution, which achieves near-optimal performance with a limited computational load for the system.
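
    The subgrouping idea can be sketched with a toy split of users into two multicast subgroups by channel quality; each subgroup is served at the lowest rate among its members. The exhaustive split search below is illustrative only and is not the paper's low-complexity algorithm.

```python
def best_two_subgroups(user_rates):
    """Toy subgroup formation for multicast.

    Users are sorted by the rate their channel supports, and every split point
    into two subgroups is tested; the split maximizing aggregate throughput is
    returned as (throughput, split index). Split index 0 is the conventional
    single-group scheme, limited by the worst user.
    """
    rates = sorted(user_rates)
    best = (len(rates) * rates[0], 0)          # single-group baseline
    for k in range(1, len(rates)):
        low, high = rates[:k], rates[k:]
        total = len(low) * low[0] + len(high) * high[0]
        best = max(best, (total, k))
    return best

if __name__ == "__main__":
    # Two users with poor channels, three with good ones: splitting them wins.
    print(best_two_subgroups([2.0, 2.5, 6.0, 7.5, 8.0]))
```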

    On Asymptotic Optimality of Dual Scheduling Algorithm In A Generalized Switch

    The generalized switch is a model of a queueing system in which parallel servers are interdependent and have time-varying service capabilities. This paper considers the dual scheduling algorithm, which uses rate control and queue-length-based scheduling to allocate resources in a generalized switch. We consider a saturated system in which each user has an infinite amount of data to be served. We prove the asymptotic optimality of the dual scheduling algorithm for such a system, i.e., the vector of average service rates under the scheduling algorithm maximizes an aggregate concave utility function. Since fairness objectives can be achieved by appropriately choosing the utility functions, this asymptotic optimality establishes the fairness properties of the dual scheduling algorithm. The dual scheduling algorithm motivates a new scheduling architecture, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Future work includes scheduling with Quality-of-Service guarantees under the dual scheduler, and its application and implementation in various versions of the generalized switch model.
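
    A minimal sketch of the dual structure described above, assuming two saturated users, logarithmic utilities, and a simple three-state channel: the interface queue is filled by the rate controller and drained by a queue-length-weighted (MaxWeight-style) scheduler. This is not the paper's exact algorithm or its optimality proof.

```python
import random

def dual_scheduler(slots=50_000, V=50.0, r_max=2.0, seed=0):
    """Toy dual algorithm for a two-user time-varying server (saturated users).

    Each slot: (1) rate control injects r_i = argmax V*log(r) - q_i*r, i.e.
    r_i = V / q_i capped at r_max, into an interface queue q_i; (2) the
    scheduler serves the user maximizing q_i * channel_rate_i. Returns the
    long-run average service rate of each user.
    """
    rng = random.Random(seed)
    q = [0.0, 0.0]              # interface queues between users and the server
    served = [0.0, 0.0]
    for _ in range(slots):
        # Rate control step (closed-form maximizer of V*log(r) - q*r).
        for i in range(2):
            r = r_max if q[i] == 0 else min(r_max, V / q[i])
            q[i] += r
        # Time-varying service capabilities; MaxWeight picks the larger q_i*c_i.
        c = [rng.choice([0.0, 1.0, 2.0]), rng.choice([0.0, 1.0, 3.0])]
        i = 0 if q[0] * c[0] >= q[1] * c[1] else 1
        amount = min(q[i], c[i])
        q[i] -= amount
        served[i] += amount
    return [s / slots for s in served]

if __name__ == "__main__":
    print("average service rates ~", dual_scheduler())
```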

    An Efficient Uplink Multi-Connectivity Scheme for 5G mmWave Control Plane Applications

    The millimeter wave (mmWave) frequencies offer the potential of orders-of-magnitude increases in capacity for next-generation cellular systems. However, links in mmWave networks are susceptible to blockage and may suffer from rapid variations in quality. Connectivity to multiple cells - at mmWave and/or traditional frequencies - is considered essential for robust communication. One of the challenges in supporting multi-connectivity at mmWaves is the requirement for the network to track the direction of each link in addition to its power and timing. To address this challenge, we implement a novel uplink measurement system that, with the joint help of a local coordinator operating in the legacy band, guarantees continuous monitoring of the channel propagation conditions and allows for the design of efficient control plane applications, including handover, beam tracking and initial access. We show that an uplink-based multi-connectivity approach enables more efficient, better performing, faster and more stable cell selection and scheduling decisions compared with a traditional downlink-based standalone scheme. Moreover, we argue that the presented framework guarantees (i) efficient tracking of the user in the presence of the channel dynamics expected at mmWaves, and (ii) fast reaction to situations in which the primary propagation path is blocked or unavailable.
    Comment: Submitted for publication in IEEE Transactions on Wireless Communications (TWC).
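
    A minimal sketch of uplink-report-driven cell selection with a hysteresis margin, in the spirit of the coordinator-assisted approach described above; the report format, margin rule, and cell names are illustrative assumptions, not the paper's measurement framework.

```python
def select_cell(uplink_reports, serving, hysteresis_db=3.0):
    """Toy uplink-based cell selection.

    Each mmWave cell reports the SNR (dB) it measured on the user's uplink
    sounding signal to a coordinator, which switches the serving cell only if
    another cell is better by a hysteresis margin. The margin avoids ping-pong
    handovers caused by small channel fluctuations.
    """
    best = max(uplink_reports, key=uplink_reports.get)
    current = uplink_reports.get(serving, float("-inf"))
    if best != serving and uplink_reports[best] >= current + hysteresis_db:
        return best     # trigger handover to the clearly stronger cell
    return serving      # otherwise stay on the current cell

if __name__ == "__main__":
    reports = {"gnb-1": 12.0, "gnb-2": 18.5, "gnb-3": 9.0}
    print(select_cell(reports, serving="gnb-1"))   # -> "gnb-2"
```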