
    Enabling RAN Slicing Through Carrier Aggregation in mmWave Cellular Networks

    The ever-increasing number of connected devices and the emergence of new, heterogeneous mobile use cases imply that 5G cellular systems will face demanding technical challenges. For example, Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) scenarios present orthogonal Quality of Service (QoS) requirements that 5G aims to satisfy with a unified Radio Access Network (RAN) design. Network slicing and mmWave communications have been identified as possible enablers for 5G. They provide, respectively, the scalability and flexibility needed to adapt the network to each specific use case, and low-latency, multi-gigabit-per-second wireless links that tap into a vast, currently unused portion of the spectrum. The optimization and integration of these technologies is still an open research challenge, which requires innovations at different layers of the protocol stack. This paper proposes to combine them in a RAN slicing framework for mmWaves based on carrier aggregation. Notably, we introduce MilliSlice, a cross-carrier scheduling policy that exploits the diversity of the carriers and maximizes their utilization, thus simultaneously guaranteeing high throughput for the eMBB slices and low latency and high reliability for the URLLC flows. Comment: 8 pages, 8 figures. Proc. of the 18th Mediterranean Communication and Computer Networking Conference (MedComNet 2020), Arona, Italy, 2020
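
    A minimal sketch of what a cross-carrier slicing policy of this kind could look like is given below. The carrier model, the "URLLC first on the best carrier, eMBB on the remaining capacity" rule, and all names are illustrative assumptions, not the paper's actual MilliSlice algorithm.

        # Illustrative cross-carrier slicing sketch (assumed model, not MilliSlice):
        # URLLC flows are placed first on the carrier with the best instantaneous
        # SINR, then eMBB flows fill whatever capacity remains on every carrier.
        from dataclasses import dataclass, field

        @dataclass
        class Carrier:
            cc_id: int
            capacity_rbs: int                  # resource blocks free in this slot
            sinr_db: float                     # instantaneous channel quality
            allocation: list = field(default_factory=list)

        def schedule_slot(carriers, urllc_flows, embb_flows):
            # URLLC: pick the carrier with the highest SINR that still has room.
            for flow in urllc_flows:
                best = max((c for c in carriers if c.capacity_rbs > 0),
                           key=lambda c: c.sinr_db, default=None)
                if best is not None:
                    best.allocation.append(("URLLC", flow))
                    best.capacity_rbs -= 1
            # eMBB: round-robin over the capacity left across all carriers.
            remaining = [c for c in carriers if c.capacity_rbs > 0]
            for i, flow in enumerate(embb_flows):
                if not remaining:
                    break
                c = remaining[i % len(remaining)]
                if c.capacity_rbs > 0:
                    c.allocation.append(("eMBB", flow))
                    c.capacity_rbs -= 1
            return carriers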

    Novel packet scheduling algorithm based on cross component carrier in LTE-advanced network with carrier aggregation

    LTE-Advanced provides considerably higher data rates than the early releases of LTE. The carrier aggregation (CA) technology allows a scalable expansion of the effective bandwidth provided to user equipment (UE) through the simultaneous utilization of radio resources across multiple carriers. In this paper we propose a new packet scheduling (PS) criterion that satisfies fairness among the different kinds of UEs by adding a weighting factor to proportional fair (PF) packet scheduling algorithms, while enhancing their throughput performance. The proposed PS algorithm is implemented and validated in a PS module for LTE/LTE-Advanced via system-level simulations. Results show that the modified PS algorithms achieve higher throughput for both LTE and LTE-Advanced UEs.
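
    A minimal sketch of a weighted proportional fair metric of the kind the abstract describes follows; the exact weighting rule (e.g. down-weighting CA-capable UEs to protect single-carrier UEs) is an assumption for illustration, not the paper's formula.

        # Weighted PF score for UE k on resource block n: w_k * r_k(n) / R_k,
        # where r_k(n) is the instantaneous achievable rate, R_k the averaged
        # served rate, and w_k a per-UE weighting factor (assumed here).
        def pf_metric(inst_rate, avg_rate, weight=1.0, eps=1e-9):
            return weight * inst_rate / (avg_rate + eps)

        def update_avg_rate(avg_rate, served_rate, beta=0.05):
            # Exponential moving average used by the classic PF scheduler.
            return (1.0 - beta) * avg_rate + beta * served_rate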

    Resource and power management in next generation networks

    The limits of today’s cellular communication systems are constantly being tested by the exponential increase in mobile data traffic, a trend which is poised to continue well into the next decade. Densification of cellular networks, by overlaying smaller cells, i.e., micro, pico and femtocells, over the traditional macrocell, is seen as an inevitable step in enabling future networks to support the expected increases in data rate demand. Next generation networks will most certainly be more heterogeneous, as services will be offered via various types of points of access (PoAs). Indeed, besides the traditional macro base station, it is expected that users will also be able to access the network through a wide range of other PoAs: WiFi access points, remote radio heads (RRHs), small cell (i.e., micro, pico and femto) base stations or even other users, when device-to-device (D2D) communications are supported, thus creating a multi-tiered network architecture. This approach is expected to enhance the capacity of current cellular networks, while patching up potential coverage gaps. However, since available radio resources will be fully shared, the inter-cell interference as well as the interference between the different tiers will pose a significant challenge. To avoid severe degradation of network performance, properly managing the interference is essential. In particular, techniques that mitigate interference, such as Inter-Cell Interference Coordination (ICIC) and enhanced ICIC (eICIC), have been proposed in the literature to address the issue. In this thesis, we argue that interference may also be addressed during radio resource scheduling, by enabling the network to make interference-aware resource allocation decisions.

    Carrier aggregation technology, which allows the simultaneous use of several component carriers, targets the lack of sufficiently large portions of frequency spectrum, a problem that severely limits the capacity of wireless networks. The aggregated carriers may, in general, belong to different frequency bands and have different bandwidths, so they may also have very different signal propagation characteristics. Integration of carrier aggregation in the network introduces additional tasks and further complicates interference management, but it also opens up a range of possibilities for improving spectrum efficiency in addition to enhancing capacity, which we aim to exploit.

    In this thesis, we first look at the resource allocation problem in dense multi-tiered networks with support for advanced features such as carrier aggregation and device-to-device communications. For two-tiered networks with D2D support, we propose a centralised, near-optimal algorithm, based on dynamic programming principles, that allows a central scheduler to make interference- and traffic-aware scheduling decisions, while taking into consideration the short-lived nature of D2D links. As the complexity of the central scheduler increases exponentially with the number of component carriers, we further propose a distributed heuristic algorithm to tackle the resource allocation problem in carrier aggregation enabled dense networks. We show that the solutions we propose perform significantly better than standard solutions adopted in cellular networks, such as eICIC coupled with Proportional Fair scheduling, in several key metrics such as user throughput, timely delivery of content, and spectrum and energy efficiency, while ensuring fairness for backward-compatible devices.
    Next, we investigate the potential to enhance network performance by enabling the different nodes of the network to reduce and dynamically adjust the transmit power of the different carriers to mitigate interference. Considering that the different carriers may have different coverage areas, we propose to leverage this diversity to obtain high-performing network configurations. Thus, we model the problem of carrier downlink transmit power setting as a competitive game between teams of PoAs, which enables us to derive distributed dynamic power setting algorithms. Using these algorithms we reach stable configurations in the network, known as Nash equilibria, which we show perform significantly better than fixed power strategies coupled with eICIC.
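
    A minimal sketch of distributed best-response dynamics for per-carrier downlink power setting is shown below; the discrete power levels, the generic utility callback, and the update order are assumptions, not the thesis' actual game formulation.

        # Each PoA repeatedly picks, from a discrete set of power levels, the one
        # maximizing its own utility given the others' current choices; a fixed
        # point of this loop is a pure-strategy Nash equilibrium (if one exists).
        def best_response_power(poas, power_levels, utility, max_rounds=100):
            # poas: PoA ids; utility(poa, profile) -> float; profile: {poa: level}
            profile = {p: power_levels[-1] for p in poas}      # start at full power
            for _ in range(max_rounds):
                changed = False
                for p in poas:
                    best = max(power_levels,
                               key=lambda lvl: utility(p, {**profile, p: lvl}))
                    if best != profile[p]:
                        profile[p] = best
                        changed = True
                if not changed:                                # stable configuration
                    return profile
            return profile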

    A Study on Cross-Carrier Scheduler for Carrier Aggregation in Beyond 5G Networks

    Carrier Aggregation (CA) allows the network and User Equipment (UE) to aggregate carrier frequencies in licensed, unlicensed, or Shared Access (SA) portions of the same or different spectrum bands to boost the achieved data rates. This work aims to provide a detailed study on CA techniques for 5G New Radio (5G NR) networks while elaborating on CA deployment scenarios, CA-enabled 5G networks, and radio resource management and scheduling techniques. We analyze cross-carrier scheduling schemes in CA-enabled 5G networks for Downlink (DL) resource allocation. The requirements, challenges, and opportunities in allocating Resource Blocks (RBs) and Component Carriers (CCs) are addressed. Various multi-band scheduling techniques are studied and analyzed under the constraint that high throughput and reduced power usage must be achieved at the UE. Finally, we present CA as a critical enabler of advanced systems, discussing how it meets current demands and holds the potential to support beyond-5G networks, followed by a discussion of open issues in resource allocation and scheduling techniques. This work was supported by FCT/MCTES through national funds and, when applicable, co-funded by EU funds under the project UIDB/50008/2020, ORCIP (22141-01/SAICT/2016), COST CA 20120 INTERACT, SNF Scientific Exchange - AISpectrum (project 205842) and TeamUp5G. TeamUp5G has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ETN TeamUp5G, grant agreement No. 813391.
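
    As a concrete illustration of the two coupled decisions the study discusses (which CCs a UE aggregates, and which UE gets each RB on a CC), here is a minimal two-stage sketch; the least-load CC assignment and the best-CQI RB rule are illustrative assumptions, not schemes taken from the surveyed literature.

        # Stage 1: assign component carriers (CCs) to UEs by least-load balancing.
        # Stage 2: on each CC, grant every RB to the attached UE with the best CQI.
        def assign_ccs(ues, ccs, max_ccs_per_ue=2):
            load = {cc: 0 for cc in ccs}
            assignment = {}
            for ue in ues:
                chosen = sorted(ccs, key=lambda cc: load[cc])[:max_ccs_per_ue]
                for cc in chosen:
                    load[cc] += 1
                assignment[ue] = chosen
            return assignment

        def allocate_rbs(rbs, attached_ues, cqi):
            # cqi[(ue, rb)] -> reported channel quality for that UE on that RB.
            return {rb: max(attached_ues, key=lambda ue: cqi[(ue, rb)]) for rb in rbs}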

    Packet Scheduling Algorithms in LTE/LTE-A cellular Networks: Multi-agent Q-learning Approach

    Spectrum utilization is vital for mobile operators. It ensures an efficient use of spectrum bands, especially when obtaining their license is highly expensive. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) spectrum band licenses were auctioned by the Federal Communications Commission (FCC) to mobile operators for hundreds of millions of dollars. In the first part of this dissertation, we study, analyze, and compare the QoS performance of QoS-aware/channel-aware packet scheduling algorithms while using CA over LTE and LTE-A heterogeneous cellular networks. This included a detailed study of the LTE/LTE-A cellular network and its features, and the modification of an open-source LTE simulator in order to perform these QoS performance tests. In the second part of this dissertation, we aim to mitigate spectrum underutilization by proposing, implementing, and testing two novel multi-agent Q-learning-based packet scheduling algorithms for LTE cellular networks: the Collaborative-Competitive scheduling algorithm and the Competitive-Competitive scheduling algorithm. These algorithms schedule licensed users over the available radio resources and unlicensed users over spectrum holes. In conclusion, our results show that the spectrum band can be better utilized by deploying efficient packet scheduling algorithms for licensed users, and further utilized by allowing unlicensed users to be scheduled on spectrum holes whenever they occur.
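
    A minimal sketch of the tabular Q-learning update such a scheduling agent could run is given below; the state/action abstraction (e.g. state = occupancy of a spectrum hole, action = schedule or skip a user) and the hyperparameters are assumptions, not the dissertation's exact design.

        import random
        from collections import defaultdict

        class QAgent:
            def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
                self.q = defaultdict(float)            # (state, action) -> value
                self.actions = actions
                self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

            def act(self, state):
                if random.random() < self.epsilon:     # epsilon-greedy exploration
                    return random.choice(self.actions)
                return max(self.actions, key=lambda a: self.q[(state, a)])

            def learn(self, state, action, reward, next_state):
                best_next = max(self.q[(next_state, a)] for a in self.actions)
                td_target = reward + self.gamma * best_next
                self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])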

    Scheduling for Multi-Camera Surveillance in LTE Networks

    Wireless surveillance in cellular networks has become increasingly important, and commercial LTE surveillance cameras are now available. Nevertheless, most scheduling algorithms in the literature are throughput-, fairness-, or profit-based approaches, which are not suitable for wireless surveillance. In this paper, therefore, we explore the resource allocation problem for a multi-camera surveillance system in 3GPP Long Term Evolution (LTE) uplink (UL) networks. We minimize the number of allocated resource blocks (RBs) while guaranteeing the coverage requirement for surveillance systems in LTE UL networks. Specifically, we formulate the Camera Set Resource Allocation Problem (CSRAP) and prove that the problem is NP-hard. We then propose an Integer Linear Programming formulation for general cases to find the optimal solution. Moreover, we present a baseline algorithm and devise an approximation algorithm to solve the problem. Simulation results based on a real surveillance map and synthetic datasets show that the number of allocated RBs can be effectively reduced compared to the existing approach for LTE networks. Comment: 9 pages, 10 figures
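
    To make the flavor of such a formulation concrete, here is a minimal set-cover-style ILP that minimizes allocated RBs subject to a coverage constraint, written with the PuLP solver; the decision variables and the coverage model are simplified assumptions, not the paper's exact CSRAP formulation.

        # Binary x[c] selects camera set c; each set costs rb_cost[c] RBs and
        # covers the targets in covers[c]; every target must be covered at least once.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

        def solve_coverage(camera_sets, rb_cost, covers, targets):
            prob = LpProblem("min_rb_coverage", LpMinimize)
            x = {c: LpVariable(f"x_{c}", cat=LpBinary) for c in camera_sets}
            prob += lpSum(rb_cost[c] * x[c] for c in camera_sets)        # objective
            for t in targets:                                            # coverage
                prob += lpSum(x[c] for c in camera_sets if t in covers[c]) >= 1
            prob.solve()
            return [c for c in camera_sets if x[c].value() == 1]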

    Design And Analysis Of Modified-Proportional Fair Scheduler For LTE/LTE-Advanced

    Nowadays, Long Term Evolution-Advanced (LTE-Advanced) is well known as a cellular network technology that can support very high data rates in diverse traffic conditions. Radio Resource Management (RRM), one of the key components of an Orthogonal Frequency-Division Multiple Access (OFDMA) system, is critical in achieving the desired performance by managing key components of both the PHY and MAC layers. This is achieved through packet scheduling, the key RRM scheme for LTE traffic processing, whose function is to allocate resources in both the frequency and time dimensions. Packet scheduling for LTE-Advanced has been an active research area in recent years, because the increasing demand for data services and the growing number of users are likely to cause explosive growth in LTE traffic. However, existing scheduling schemes become increasingly congested as the number of users grows, and a new scheduling scheme is required to ensure more efficient data transmission. In the LTE system, the Round Robin (RR) scheduler has a problem in providing a high data rate to User Equipments (UEs), because resources are wasted when they are scheduled to UEs that are suffering from severe deep fading and whose channel quality is below the required threshold. Meanwhile, for the Proportional Fair (PF) scheduler, a pure rate-maximizing scheme would be very unfair, and a UE experiencing bad channel quality conditions could be starved; the mechanism applied in the PF scheduler is therefore to weight the current data rate achievable by a UE by the average rate received by that UE. The main contribution of this study is the design of a new scheduling scheme whose performance is compared with the PF and RR downlink schedulers for LTE using the LTE Downlink System Level Simulator. The proposed scheduling algorithm, namely the Modified-PF scheduler, divides a single sub-frame into multiple time slots and allocates resource blocks (RBs) to the targeted UE in all time slots of each sub-frame based on the instantaneous Channel Quality Indicator (CQI) feedback received from the UEs. Besides, the proposed scheduler is also capable of reallocating RBs cyclically in turn to target UEs within a time slot in order to distribute packet data consistently. The simulation results showed that the Modified-PF scheduler provided the best throughput performance, with improvements of up to 90%, and an almost 40% increase in spectral efficiency, with comparable fairness compared to the PF and RR schedulers. Although the PF scheduler had the best fairness index, the Modified-PF scheduler provided a better compromise between throughput/spectral efficiency and fairness. This showed that the newly proposed scheme improved LTE performance while maintaining the minimal required fairness among the UEs.
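
    A minimal sketch of a Modified-PF style allocation in this spirit is shown below: each sub-frame is split into several time slots, and in every slot each RB goes to the UE with the highest instantaneous-rate-to-average-rate ratio, so RBs rotate across UEs within one sub-frame. The slot count, the averaging constant, and all names are assumptions for illustration, not the thesis' exact algorithm.

        def modified_pf_subframe(rbs, ues, inst_rate, avg_rate, n_slots=4, eps=1e-9):
            # inst_rate[(ue, rb)]: rate implied by the UE's CQI report on that RB;
            # avg_rate[ue]: exponentially averaged served rate, updated per grant.
            grants = []                                  # (slot, rb, ue) triples
            for slot in range(n_slots):
                for rb in rbs:
                    ue = max(ues, key=lambda u: inst_rate[(u, rb)] / (avg_rate[u] + eps))
                    grants.append((slot, rb, ue))
                    avg_rate[ue] = 0.95 * avg_rate[ue] + 0.05 * inst_rate[(ue, rb)]
            return grants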