
    Channel Estimation and Correction Methods for OFDMA-Based LTE Downlink System

    In the present era, cellular communication plays a vital role in long-distance communication, and the number of mobile subscribers is growing rapidly. 3GPP LTE is the evolution of UMTS in response to ever-increasing demand for high-quality multimedia services that meet users' expectations; average data consumption already exceeds hundreds of megabytes per subscriber per month. One of the main objectives of this thesis is to introduce and summarize this new LTE technology. Of the downlink and uplink, the downlink is considered the more important factor for the coverage and capacity of a cellular system. Orthogonal Frequency Division Multiple Access (OFDMA) and Multiple Input Multiple Output (MIMO) are the new technologies that enhance downlink performance over traditional wireless communication. In this thesis, we consider channel estimation for the downlink using different algorithms and interpolation methods. Channel estimation algorithms such as Least Squares Estimation (LSE) and Minimum Mean Square Error (MMSE) are evaluated for different channel models, using linear, piecewise-constant, averaged, and pilot-averaged interpolation. The performance of these algorithms is measured in terms of Bit Error Rate (BER) and Symbol Error Rate (SER), and the results are presented to illustrate the salient concepts of the LTE communication system
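    The pilot-based estimation the abstract describes can be sketched in a few lines: an LS estimate is formed at the pilot subcarriers and then interpolated across the full grid. This is a minimal illustrative sketch, not the thesis's code; the function name, the flat toy channel, and the pilot spacing are all assumptions for demonstration.

```python
import numpy as np

def ls_estimate_with_interpolation(rx_pilots, tx_pilots, pilot_idx, n_subcarriers):
    """LS channel estimate at pilot positions, linearly interpolated to all subcarriers."""
    h_pilot = rx_pilots / tx_pilots          # least squares at pilots: H_ls = Y / X
    k = np.arange(n_subcarriers)
    # interpolate real and imaginary parts separately across the subcarrier grid
    h_real = np.interp(k, pilot_idx, h_pilot.real)
    h_imag = np.interp(k, pilot_idx, h_pilot.imag)
    return h_real + 1j * h_imag

# toy example: flat channel h = 0.8 - 0.3j, pilots on every 4th of 16 subcarriers
n = 16
pilot_idx = np.arange(0, n, 4)
tx = np.ones(len(pilot_idx), dtype=complex)   # unit-power pilot symbols
h_true = 0.8 - 0.3j
rx = h_true * tx                              # noiseless received pilots
h_hat = ls_estimate_with_interpolation(rx, tx, pilot_idx, n)
```

    In the noiseless flat-channel toy case the interpolated estimate recovers the true channel exactly; with noise, this is where MMSE estimation gains over LS by weighting in the channel and noise statistics.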

    Spatial Frequency Scheduling for Uplink SC-FDMA based Linearly Precoded LTE Multiuser MIMO Systems

    This paper investigates the performance of uplink single-carrier (SC) frequency division multiple access (FDMA) based linearly precoded multiuser multiple input multiple output (MIMO) systems with frequency-domain packet scheduling. A mathematical expression for the received signal-to-interference-plus-noise ratio (SINR) of the studied systems is derived, and a utility-function-based spatial frequency packet scheduling algorithm is investigated. The schedulers are shown to exploit the available multiuser diversity in the time, frequency, and spatial domains
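    The paper's exact utility function is not reproduced in the abstract, but the general idea of utility-based frequency-domain scheduling can be illustrated with a standard proportional-fair metric: each resource block goes to the user maximizing instantaneous rate divided by average rate. The shapes and rates below are assumed toy values, not the paper's setup.

```python
import numpy as np

def pf_schedule(inst_rate, avg_rate):
    """Assign each resource block (RB) to the user maximizing the PF metric r/R."""
    metric = inst_rate / avg_rate[:, None]   # shape: users x RBs
    return np.argmax(metric, axis=0)         # winning user index per RB

rng = np.random.default_rng(0)
inst = rng.rayleigh(1.0, size=(4, 8))        # 4 users, 8 RBs, fading rates
avg = np.array([1.0, 0.5, 2.0, 1.0])         # long-term average rates
alloc = pf_schedule(inst, avg)               # per-RB user allocation
```

    Dividing by the average rate is what exposes multiuser diversity: users on a temporary fading peak relative to their own mean win the block, rather than the users with the best absolute channel.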

    Air Interface for Next Generation Mobile Communication Networks: Physical Layer Design: An LTE-A Uplink Case Study


    Channel Estimation in Uplink of Long Term Evolution

    Long Term Evolution (LTE) is considered the fastest-spreading communication standard in the world. To live up to ever-increasing demands for higher data rates and richer multimedia services, the existing UMTS system was upgraded to LTE. To meet these requirements, novel technologies are employed in the downlink and uplink, namely Orthogonal Frequency Division Multiple Access (OFDMA) and Single Carrier Frequency Division Multiple Access (SC-FDMA). For the receiver to perform properly it must recover the transmitted data accurately, and this is done through channel estimation. Channel estimation in LTE relies on coherent detection, which requires prior knowledge of the channel, often known as Channel State Information (CSI). This thesis studies the channel estimation methods used in LTE and evaluates their performance in the multipath models specified by the ITU, such as the Pedestrian and Vehicular models. The most commonly used channel estimation algorithms are the Least Squares (LS) and Minimum Mean Square Error (MMSE) algorithms. The performance of these estimators is evaluated in both the uplink and the downlink in terms of Bit Error Rate (BER): first for OFDMA, then for SC-FDMA, and further in SC-FDMA both without subcarrier mapping and with subcarrier mapping schemes, namely Interleaved SC-FDMA (IFDMA) and Localized SC-FDMA (LFDMA). The results show that the MMSE estimator outperforms the LS estimator in both environments, and that IFDMA has a lower PAPR than LFDMA while LFDMA has better BER performance
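    The IFDMA/LFDMA distinction the abstract ends on is easy to demonstrate numerically: DFT-spread data is mapped either onto a contiguous block (localized) or onto evenly interleaved subcarriers, and the PAPR of the resulting time-domain symbol is compared. This is an illustrative sketch under simplified assumptions (no pulse shaping or oversampling, a single toy QPSK symbol), not the thesis's simulator.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def sc_fdma_symbol(data, n_fft, mapping):
    """DFT-spread the data, map it onto subcarriers, and IFFT back to time domain."""
    m = len(data)
    spread = np.fft.fft(data) / np.sqrt(m)      # DFT precoding (the SC in SC-FDMA)
    grid = np.zeros(n_fft, dtype=complex)
    if mapping == "localized":                  # LFDMA: one contiguous block
        grid[:m] = spread
    else:                                       # IFDMA: every (n_fft // m)-th bin
        grid[:: n_fft // m] = spread
    return np.fft.ifft(grid) * np.sqrt(n_fft)

rng = np.random.default_rng(1)
qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
papr_ifdma = papr_db(sc_fdma_symbol(qpsk, 256, "interleaved"))
papr_lfdma = papr_db(sc_fdma_symbol(qpsk, 256, "localized"))
```

    With constant-modulus QPSK input, the interleaved mapping yields a time signal that is just a repeated copy of the data, so its PAPR is 0 dB, while the localized mapping interpolates between symbols and produces the higher PAPR the thesis reports.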

    Datacenter Design for Future Cloud Radio Access Network.

    Cloud radio access network (C-RAN), an emerging cloud service that combines the traditional radio access network (RAN) with cloud computing technology, has been proposed as a solution to the growing energy consumption and cost of the traditional RAN. By aggregating baseband units (BBUs) in a centralized cloud datacenter, C-RAN reduces energy and cost, and improves wireless throughput and quality of service. However, designing a datacenter for C-RAN has not yet been studied. In this dissertation, I investigate how a datacenter for C-RAN BBUs should be built on commodity servers. I first design WiBench, an open-source benchmark suite containing the key signal processing kernels of many mainstream wireless protocols, and study its characteristics. The characterization study shows that there is abundant data-level parallelism (DLP) and thread-level parallelism (TLP). Based on this result, I then develop high-performance software implementations of C-RAN BBU kernels in C++ and CUDA for both CPUs and GPUs. In addition, I generalize the GPU parallelization techniques of the Turbo decoder to the trellis algorithms, an important family of algorithms widely used in data compression and channel coding. I then evaluate the performance of commodity CPU servers and GPU servers. The study shows that a datacenter with GPU servers can meet the LTE standard throughput with 4× to 16× fewer machines than with CPU servers. A further energy and cost analysis shows that GPU servers save on average 13× the energy and 6× the cost. Thus, I propose that the C-RAN datacenter be built using GPUs as the server platform. Next I study resource management techniques to handle the temporal and spatial traffic imbalance in a C-RAN datacenter. I propose a "hill-climbing" power management that combines powering off GPUs and DVFS to match the temporal C-RAN traffic pattern. Under a practical traffic model, this technique saves 40% of the BBU energy in a GPU-based C-RAN datacenter. For spatial traffic imbalance, I propose three workload distribution techniques to improve load balance and throughput. Among all three techniques, pipelining packets yields the largest throughput improvement, at 10% and 16% for balanced and unbalanced loads, respectively.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/120825/1/qizheng_1.pd
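    The "hill-climbing" idea of combining GPU power-off with DVFS can be sketched as a greedy search: first shed whole GPUs, then step the frequency down, as long as remaining capacity still covers the current traffic. This is only a toy model of the approach, not the dissertation's algorithm: the linear capacity-vs-frequency scaling, the function name, and all numbers are assumptions.

```python
def hill_climb_power(load, n_gpus, freqs, cap_per_gpu_at_max):
    """Greedy sketch: power off GPUs, then lower the DVFS frequency, while the
    remaining capacity still covers the traffic load. Capacity is assumed to
    scale linearly with frequency (a simplifying assumption)."""
    f_max = max(freqs)
    active, f = n_gpus, f_max
    # step 1: power off whole GPUs while the rest can still carry the load
    while active > 1 and (active - 1) * cap_per_gpu_at_max >= load:
        active -= 1
    # step 2: take the lowest frequency whose scaled capacity still suffices
    for cand in sorted(freqs):
        if active * cap_per_gpu_at_max * cand / f_max >= load:
            f = cand
            break
    return active, f
```

    For example, with 8 GPUs of unit capacity, available frequencies {0.6, 0.8, 1.0}, and a load of 1.5, the sketch keeps 2 GPUs at 0.8 of maximum frequency; as the diurnal traffic pattern falls, fewer, slower GPUs stay on, which is where the reported energy savings come from.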

    Improving LTE Network Performance After the Migration from CDMA2000 to LTE

    CDMA2000 technology has been widely used in the 450 MHz band. Recently, equipment availability and the improved performance offered by LTE have driven operators to migrate their networks from CDMA2000 to LTE. The migration may leave network performance in a suboptimal state. This thesis presents four methods to improve LTE network performance after a CDMA2000-to-LTE migration, especially in the 450 MHz band, and evaluates three of them in a live network: cyclic prefix length, handover parameter optimization, and uplink coordinated multipoint (CoMP) transmission. The objective was to determine the effectiveness of each method; the research methods included field measurements and network KPI collection. The results show that the normal cyclic prefix length is sufficient for LTE450 even though the cell radius may be up to 50 km. Only special cases require an extended cyclic prefix, and operators should solve such problems individually instead of deploying the extended cyclic prefix network-wide. Handover parameter optimization turned out to be an important point of attention after the migration: if the handover parameters are neglected, a significant number of unnecessary handovers may occur. In the initial situation, about 50% of the handovers in the network were estimated to be unnecessary; by adjusting the handover parameter values, 47.28% of the handovers per user were eliminated with no negative effects detected. Coordinated multipoint transmission is widely discussed as an effective way to improve LTE network performance, especially at the cell edges, but many challenges must be overcome before it can be applied to the downlink, and implementing it between cells belonging to different eNBs also involves challenges. Thus, only intra-site uplink CoMP transmission was tested. The results show that the performance improvements at the cell edges were significant, as theory predicted
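    The handover parameter optimization discussed above centers on the standard LTE A3 event: a handover is triggered only when a neighbor cell exceeds the serving cell by a hysteresis margin for a full time-to-trigger window, which is exactly what suppresses unnecessary ping-pong handovers. The sketch below illustrates that mechanism; the parameter values and sample-based time-to-trigger are illustrative assumptions, not the thesis's measured settings.

```python
def a3_handover_decision(serving_rsrp, neighbor_rsrp, hysteresis_db=3.0, ttt_samples=4):
    """LTE A3-style trigger: hand over only after the neighbor's RSRP exceeds the
    serving cell's by the hysteresis margin for ttt_samples consecutive samples."""
    streak = 0
    for s, n in zip(serving_rsrp, neighbor_rsrp):
        if n > s + hysteresis_db:
            streak += 1                 # condition held, keep counting
            if streak >= ttt_samples:
                return True             # time-to-trigger satisfied: hand over
        else:
            streak = 0                  # condition broken: restart the window
    return False
```

    A single measurement dip in the neighbor's signal resets the window, so brief fading spikes no longer cause a handover; raising the hysteresis or time-to-trigger trades handover delay for fewer unnecessary handovers, which is the tuning the thesis evaluates.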

    Impact of Adaptive Modulation and Coding Schemes on Bit Error Rate for System Performance in the Uplink LTE System

    Long Term Evolution (LTE) is a cellular network technology that aims to deliver enriched data services to users at lower latency and higher (multi-megabit) throughput. The higher system throughput with more reliable transmission is achieved through Adaptive Modulation and Coding (AMC) schemes, scheduling algorithms, multi-antenna techniques, etc. AMC schemes substantially increase system throughput by reducing the Bit Error Rate (BER) and by adjusting the transmission parameters based on link quality. Scheduling algorithms also enhance the throughput of individual users, as well as the cell throughput, by allocating resources among the active users. Hence, in this paper, an attempt has been made to study and evaluate the effects of AMC schemes such as QPSK, 16-QAM, and 64-QAM on uplink LTE system performance under the Proportional Fair (PF) and Round Robin (RR) scheduling algorithms, using the QualNet 7.1 network simulator. The performance metrics considered in the simulation studies are BER, cell throughput, average delay, and average jitter
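    The core of AMC is a link-quality-to-modulation mapping: higher SNR admits a denser constellation (more bits per symbol) at an acceptable BER. A minimal sketch of that selection logic is below; the SNR thresholds are illustrative assumptions, not values from 3GPP CQI tables or the QualNet simulator.

```python
def select_mcs(snr_db):
    """Pick a modulation scheme from the reported SNR.
    Thresholds are illustrative, not 3GPP-specified values."""
    if snr_db >= 18.0:
        return "64-QAM", 6   # 6 bits per symbol, needs a clean link
    if snr_db >= 10.0:
        return "16-QAM", 4   # 4 bits per symbol, moderate link quality
    return "QPSK", 2         # 2 bits per symbol, robust at cell edge
```

    A user at the cell edge thus falls back to QPSK to keep BER in check, while a user near the eNB climbs to 64-QAM, which is the throughput-vs-BER trade-off the paper measures under PF and RR scheduling.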