
    Performance analysis of carrier aggregation for various mobile network implementation scenarios based on spectrum allocated

    Carrier Aggregation (CA) is one of the Long Term Evolution-Advanced (LTE-A) features that allows mobile network operators (MNOs) to combine multiple component carriers (CCs) across the available spectrum into a wider bandwidth channel, increasing network data throughput and overall capacity. CA has the potential to enhance data rates and network performance in the downlink, the uplink, or both, and it supports aggregation of frequency division duplexing (FDD) as well as time division duplexing (TDD) carriers. The technique enables an MNO to exploit fragmented spectrum allocations and can also be used to aggregate licensed and unlicensed carrier spectrum. This paper analyzes the performance gains and the complexity that arise from aggregating three inter-band component carriers (3CC) compared with aggregating 2CC, using the Vienna LTE System Level Simulator. The results show a considerable growth in average cell throughput when 3CC aggregation is used instead of 2CC aggregation, at the expense of a reduction in the fairness index. The reduced fairness index implies that the scheduler's resource-allocation task becomes harder with the added component carrier, and compensating for this loss in fairness could increase scheduler design complexity. The proposed scheme can be adopted to combine various component carriers, increasing the bandwidth and hence the data rates. (Comment: 13 pages)
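    As a rough illustration of the trade-off described above, the following sketch (not from the paper; the component-carrier bandwidths and per-user throughputs are invented for illustration) aggregates CC bandwidths and computes Jain's fairness index, a common way to quantify the fairness the abstract refers to:

```python
# Illustrative sketch: aggregate component-carrier bandwidth and compute
# Jain's fairness index over per-user throughputs. The carrier bandwidths
# and throughput samples below are assumptions, not results from the paper.

def aggregated_bandwidth_mhz(component_carriers):
    """Total channel bandwidth obtained by aggregating component carriers."""
    return sum(component_carriers)

def jain_fairness(throughputs):
    """Jain's fairness index: 1.0 = perfectly fair, 1/n = maximally unfair."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(t * t for t in throughputs))

# Hypothetical inter-band CC bandwidths (MHz)
cc_2 = [20, 10]          # 2CC aggregation -> 30 MHz channel
cc_3 = [20, 10, 10]      # 3CC aggregation -> 40 MHz channel

# Hypothetical per-user throughputs (Mbps) under each configuration
users_2cc = [12.0, 10.5, 11.2, 9.8]
users_3cc = [22.0, 9.0, 16.5, 7.5]   # higher average, less evenly shared

print(aggregated_bandwidth_mhz(cc_2), aggregated_bandwidth_mhz(cc_3))
print(round(jain_fairness(users_2cc), 3), round(jain_fairness(users_3cc), 3))
```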

    Next Generation High Throughput Satellite System

    This paper presents an overview of the state of the art in High Throughput Satellite (HTS) systems for Fixed Satellite Services (FSS) and High Density-FSS. Promising techniques and innovative strategies that can enhance system performance are reviewed and analyzed, showing what to expect from next-generation ultra-high-capacity satellite systems. Potential air-interface evolutions, efficient frequency plans, feeder link dimensioning strategies and interference cancellation techniques are presented to show how the Terabit/s satellite myth may soon turn into reality.
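    As a hedged, back-of-the-envelope illustration of the kind of feeder link dimensioning mentioned above (all beam counts, bandwidths and spectral efficiencies below are assumptions, not figures from the paper), the number of gateways can be estimated from the target system capacity:

```python
# Rough dimensioning sketch for a multi-beam HTS system. All numbers are
# illustrative assumptions, not values from the paper.
import math

num_user_beams = 200            # assumed number of user-link spot beams
bw_per_beam_mhz = 500.0         # assumed user-link bandwidth per beam
spectral_eff_bps_hz = 3.0       # assumed average user-link spectral efficiency

# Total forward-link system capacity (Gbps)
capacity_gbps = num_user_beams * bw_per_beam_mhz * 1e6 * spectral_eff_bps_hz / 1e9

# Feeder-link dimensioning: each gateway is assumed to carry a fixed feeder
# bandwidth at a given spectral efficiency.
feeder_bw_per_gw_mhz = 2500.0   # assumed feeder bandwidth per gateway
feeder_se_bps_hz = 4.0          # assumed feeder-link spectral efficiency
gw_capacity_gbps = feeder_bw_per_gw_mhz * 1e6 * feeder_se_bps_hz / 1e9

num_gateways = math.ceil(capacity_gbps / gw_capacity_gbps)
print(f"system capacity ~ {capacity_gbps:.0f} Gbps, gateways needed ~ {num_gateways}")
```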

    Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks

    Soaring capacity and coverage demands dictate that future cellular networks must soon migrate towards ultra-dense networks. However, network densification comes with a host of challenges, including compromised energy efficiency, complex interference management, cumbersome mobility management, burdensome signaling overheads and higher backhaul costs. Interestingly, most of the problems that beleaguer network densification stem from one common feature of legacy networks: tight coupling between the control and data planes, regardless of their degree of heterogeneity and cell density. Consequently, in the wake of 5G, the control and data planes separation architecture (SARC) has recently been conceived as a promising paradigm with the potential to address most of the aforementioned challenges. In this article, we review the various proposals presented in the literature so far to enable SARC. More specifically, we analyze how, and to what degree, various SARC proposals address the four main challenges of network densification: energy efficiency, system-level capacity maximization, interference management and mobility management. We then focus on two salient features of future cellular networks that have not yet been adopted at wide scale in legacy networks and thus remain a hallmark of 5G: coordinated multipoint (CoMP) and device-to-device (D2D) communications. After providing the necessary background on CoMP and D2D, we analyze how SARC can act as a major enabler for CoMP and D2D in the context of 5G. This article thus serves both as a tutorial and as an up-to-date survey of SARC, CoMP and D2D. Most importantly, it provides an extensive outlook on the challenges and opportunities that lie at the crossroads of these three mutually entangled emerging technologies. (Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201)
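    A minimal sketch of the control/data plane split that SARC proposes, assuming a simple rule in which the control plane anchors at the best macro cell while the data plane follows the strongest usable small cell (all cell names and RSRP values are hypothetical):

```python
# Illustrative sketch of control/data plane separation (SARC): the control
# plane stays on an always-on macro cell for coverage and mobility signalling,
# while the data plane is served by whichever small cell offers the best link.
# Cells and RSRP values below are hypothetical.

macro_cells = {"macro-A": -95.0}                      # RSRP in dBm
small_cells = {"small-1": -80.0, "small-2": -72.0, "small-3": -101.0}

def associate(macro_cells, small_cells, small_cell_threshold_dbm=-100.0):
    # Control plane: best macro cell (ubiquitous coverage layer).
    control_anchor = max(macro_cells, key=macro_cells.get)
    # Data plane: best small cell, if any is usable; otherwise fall back
    # to the macro cell so the UE is never left without a data bearer.
    usable = {c: r for c, r in small_cells.items() if r >= small_cell_threshold_dbm}
    data_anchor = max(usable, key=usable.get) if usable else control_anchor
    return control_anchor, data_anchor

print(associate(macro_cells, small_cells))   # ('macro-A', 'small-2')
```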

    Design and Analysis of Modified-Proportional Fair Scheduler for LTE/LTE-Advanced

    Nowadays, Long Term Evolution-Advanced (LTE-Advanced) is well known as a cellular network technology that can support very high data rates under diverse traffic conditions. Radio Resource Management (RRM), one of the key components of Orthogonal Frequency-Division Multiple Access (OFDMA), is critical to achieving the desired performance by managing key functions of both the PHY and MAC layers. The central RRM scheme for LTE traffic processing is packet scheduling, whose function is to allocate resources in both the frequency and time dimensions. Packet scheduling for LTE-Advanced has been an active research area in recent years, because the growing demand for data services and the increasing number of users are expected to cause explosive growth in LTE system traffic. Existing scheduling schemes become increasingly congested as the number of users grows, so new schemes are needed to ensure more efficient data transmission. In the LTE system, the Round Robin (RR) scheduler has difficulty providing high data rates to User Equipments (UEs): some resources are wasted because it schedules resources to UEs even while those UEs suffer from severe deep fading and their channel quality is below the required threshold. Meanwhile, for the Proportional Fair (PF) scheduler, pure data-rate maximization could be very unfair, and a UE experiencing bad channel quality conditions can be starved; the mechanism applied in the PF scheduler is therefore to weight the data rate currently achievable by a UE by the average rate that UE has received. The main contribution of this study is the design of a new scheduling scheme, whose performance is compared with the PF and RR downlink schedulers for LTE using the LTE Downlink System Level Simulator. The proposed algorithm, namely the Modified-PF scheduler, divides a single sub-frame into multiple time slots and allocates resource blocks (RBs) to the targeted UE in all time slots of each sub-frame based on the instantaneous Channel Quality Indicator (CQI) feedback received from UEs. The proposed scheduler is also capable of reallocating RBs cyclically, in turn, to target UEs within a time slot, ensuring that packet data are distributed consistently. The simulation results show that the Modified-PF scheduler provides the best throughput performance, with improvements of up to 90%, and an almost 40% increment in spectral efficiency, with comparable fairness relative to the PF and RR schedulers. Although the PF scheduler has the best fairness index, the Modified-PF scheduler provides a better compromise between throughput/spectral efficiency and fairness. This shows that the newly proposed scheme improves LTE performance while maintaining the minimum required fairness among the UEs.
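    The PF weighting described above (instantaneous achievable rate divided by the UE's historical average rate) can be sketched as follows; this is a generic PF metric, not the Modified-PF scheduler itself, and the rates and smoothing window are illustrative assumptions:

```python
# Illustrative Proportional Fair scheduling metric for one resource block (RB):
# allocate the RB to the UE maximising instantaneous_rate / average_rate, then
# update each UE's average rate with an exponential moving average. The rate
# values and smoothing window below are assumptions for illustration.

def pf_schedule_rb(inst_rates, avg_rates, t_c=100.0):
    """Pick the UE for one RB and update per-UE average rates (EMA, window t_c)."""
    # PF metric: current achievable rate weighted by the historical average rate
    metrics = {ue: inst_rates[ue] / max(avg_rates[ue], 1e-9) for ue in inst_rates}
    selected = max(metrics, key=metrics.get)
    for ue in avg_rates:
        served = inst_rates[ue] if ue == selected else 0.0
        avg_rates[ue] = (1.0 - 1.0 / t_c) * avg_rates[ue] + (1.0 / t_c) * served
    return selected, avg_rates

# Hypothetical per-TTI achievable rates (Mbps) derived from CQI feedback
inst = {"UE1": 8.0, "UE2": 3.0, "UE3": 5.5}
avg = {"UE1": 4.0, "UE2": 0.5, "UE3": 5.0}
print(pf_schedule_rb(inst, avg))   # UE2 wins: its poor history boosts its PF metric
```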

    Meeting IMT 2030 Performance Targets: The Potential of OTFDM Waveform and Structural MIMO Technologies

    This white paper focuses on several candidate technologies that could play a crucial role in the development of 6G systems. Two of the key technologies explored in detail are the Orthogonal Time Frequency Division Multiplexing (OTFDM) waveform and Structural MIMO (S-MIMO).