    A Robust Maximum Likelihood Scheme for PSS Detection and Integer Frequency Offset Recovery in LTE Systems

    Before establishing a communication link in a cellular network, the user terminal must activate a synchronization procedure called initial cell search in order to acquire specific information about the serving base station. To accomplish this task, the primary synchronization signal (PSS) and secondary synchronization signal (SSS) are periodically transmitted in the downlink of a long term evolution (LTE) network. Since SSS detection can be performed only after successful identification of the primary signal, in this work we present a novel algorithm for joint PSS detection, sector index identification, and integer frequency offset (IFO) recovery in an LTE system. The proposed scheme relies on the maximum likelihood (ML) estimation criterion and exploits a suitable reduced-rank representation of the channel frequency response, which proves robust against multipath distortions and residual timing errors. We show that a number of PSS detection methods that were originally introduced through heuristic reasoning can be derived from our ML framework by simply selecting an appropriate model for the channel gains over the PSS subcarriers. Numerical simulations indicate that the proposed scheme can be effectively applied in the presence of severe multipath propagation, where existing alternatives provide unsatisfactory performance.
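The matched-filter core of PSS detection can be sketched in a few lines: the three LTE PSS candidates are length-63 Zadoff-Chu sequences (roots 25, 29, and 34), and the sector index is found by correlating the received samples against each candidate. The sketch below is a minimal, frequency-flat illustration of that correlation step only, not the paper's full ML detector (no channel model, IFO search, or reduced-rank projection); the noise level is invented.

```python
import cmath, math, random

def zadoff_chu(u, N=63):
    # LTE PSS: length-63 Zadoff-Chu sequence (the middle sample is
    # punctured in the standard, hence the two index ranges).
    seq = []
    for n in range(31):
        seq.append(cmath.exp(-1j * math.pi * u * n * (n + 1) / N))
    for n in range(31, 62):
        seq.append(cmath.exp(-1j * math.pi * u * (n + 1) * (n + 2) / N))
    return seq

def detect_pss(rx, roots=(25, 29, 34)):
    # Pick the root whose matched filter yields the largest |correlation|.
    best_u, best_metric = None, -1.0
    for u in roots:
        ref = zadoff_chu(u)
        corr = sum(r * c.conjugate() for r, c in zip(rx, ref))
        if abs(corr) > best_metric:
            best_metric, best_u = abs(corr), u
    return best_u

# Transmit root 29, add complex Gaussian noise, and detect.
random.seed(0)
tx = zadoff_chu(29)
rx = [s + 0.3 * complex(random.gauss(0, 1), random.gauss(0, 1)) for s in tx]
print(detect_pss(rx))  # with this seed the detector recovers root 29
```

The low cross-correlation between Zadoff-Chu sequences of different roots is what makes this simple maximum-of-correlations rule work.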

    Timing and Carrier Synchronization in Wireless Communication Systems: A Survey and Classification of Research in the Last 5 Years

    Timing and carrier synchronization is a fundamental requirement for any wireless communication system to work properly. Timing synchronization is the process by which a receiver node determines the correct instants of time at which to sample the incoming signal. Carrier synchronization is the process by which a receiver aligns the frequency and phase of its local carrier oscillator with those of the received signal. In this paper, we survey the literature of the last five years (2010–2014) and present a comprehensive review and classification of recent research progress in achieving timing and carrier synchronization in single-input single-output (SISO), multiple-input multiple-output (MIMO), cooperative relaying, and multiuser/multicell interference networks. Considering both single-carrier and multi-carrier communication systems, we survey and categorize the timing and carrier synchronization techniques proposed for the different communication systems, focusing on the system model assumptions for synchronization, the synchronization challenges, and the state-of-the-art synchronization solutions and their limitations. Finally, we envision some future research directions.

    Cellular-Enabled Machine Type Communications: Recent Technologies and Cognitive Radio Approaches

    The scarcity of bandwidth has always been the main obstacle to providing reliable high-data-rate wireless links, which are in great demand to accommodate current and near-future wireless applications. In addition, recent reports have shown inefficient usage and under-utilization of the available bandwidth. Cognitive radio (CR) has recently emerged as a promising solution to enhance spectrum utilization, as it allows unlicensed users to access the licensed spectrum opportunistically. Opportunistic spectrum access, the main concept of the interweave network model, improves overall spectrum utilization, and it requires cognitive radio networks (CRNs) to treat spectrum sensing and monitoring as an essential enabling process. Machine-to-machine (M2M) communication, the basic enabler of the Internet of Things (IoT), has emerged as a key element in future networks. Machines are expected to communicate with each other, exchanging information and data without human intervention. The ultimate objective of M2M communications is to construct comprehensive connections among all machines distributed over an extensive coverage area. Due to the radical growth in the number of users, the network has to carefully utilize the available resources in order to maintain a reasonable quality of service (QoS). Generally, one of the most important resources in wireless communications is the frequency spectrum. To utilize the frequency spectrum in an IoT environment, the cognitive radio concept is a possible solution from both cost and performance perspectives. Thus, supporting a large number of machines is possible by employing dual-mode base stations that apply the cognitive radio concept in addition to legacy licensed frequency assignment.
In this thesis, a detailed review of the state of the art on the application of spectrum sensing in CR communications is provided. We present the latest advances in the implementation of the legacy spectrum sensing approaches, and we address the implementation challenges for cognitive radios in spectrum sensing and monitoring. We propose a novel algorithm to address the throughput reduction caused by scheduled spectrum sensing and monitoring. Further, two new architectures are considered that significantly reduce the power consumption required by the CR to enable wideband sensing; both rely on 1-bit quantization at the receiver side. The system performance is investigated analytically and by simulation, and the complexity and power consumption are also studied. Furthermore, we address the challenges expected from the next-generation M2M network as an integral part of the future IoT. This mainly includes the design of low-power, low-cost machines with reduced bandwidth; the trade-off between cost, feasibility, and performance is also discussed. Because of the relaxed frequency and spatial diversity, in addition to the extended coverage mode, initial synchronization and cell search face new challenges in cellular-enabled M2M systems. We study conventional solutions with their pros and cons, including timing acquisition, cell detection, and frequency offset estimation algorithms, and we provide a technique to enhance performance in the harsh detection environments faced by LTE-based machines. Furthermore, we present a frequency tracking algorithm for cellular M2M systems that utilizes the new repetitive feature of the broadcast channel symbols in next-generation Long Term Evolution (LTE) systems. In the direction of narrowband IoT support, we propose a cell search and initial synchronization algorithm that utilizes the new set of narrowband synchronization signals. The proposed algorithms have been simulated at very low signal-to-noise ratios and in different fading environments.
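The spectrum-sensing step at the heart of the interweave model is often a simple energy detector: average the received power over a sensing window and compare it to a threshold. Below is a minimal sketch; the noise variance and the hand-tuned 20%-above-noise-floor threshold are invented for illustration (a real detector would calibrate the threshold from a target false-alarm probability).

```python
import math, random

def energy_detect(samples, threshold):
    # Energy detector: declare the band occupied when the average
    # sample power exceeds a noise-calibrated threshold.
    avg_power = sum(abs(s) ** 2 for s in samples) / len(samples)
    return avg_power > threshold

# Invented setup: unit-variance noise, threshold 20% above the noise floor.
random.seed(2)
M, noise_var = 1000, 1.0
threshold = 1.2 * noise_var

idle_band = [random.gauss(0, math.sqrt(noise_var)) for _ in range(M)]
busy_band = [x + random.gauss(0, 1.0) for x in idle_band]  # primary user present

print(energy_detect(idle_band, threshold), energy_detect(busy_band, threshold))
```

The appeal of energy detection for low-cost M2M devices is that it needs no knowledge of the primary signal; its weakness, as the sensing literature surveyed in the thesis discusses, is sensitivity to noise-floor uncertainty when setting the threshold.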

    Cell measurement in 5G unlicensed spectrum

    Abstract. The objective of this thesis is to implement firmware for cell measurement in an unlicensed spectrum. As part of the thesis, the theory of the downlink physical layer and its resources is reported insofar as it relates to the implementation. A short introduction to New Radio cell measurement in licensed and unlicensed spectrum is also presented, and the main differences between those radio access technologies are shown. The biggest, and also most challenging, differences between licensed and unlicensed spectrum measurements are listen-before-talk and the expanded quasi-co-location assumption. Listen-before-talk is used to evaluate the state of the channel, and expanding the quasi-co-location assumption to all synchronization signal blocks makes a time shift of a block possible. The performance of the implementation was measured in two ways. The first was to track how data and program memory usage behave; the second was to measure cycle usage to see how the central processing unit load behaves in comparison to New Radio. The results clearly show that data memory usage increases linearly as a function of the number of candidate locations, while the program memory size increases by only about 5%, indicating that the implementation reuses much of the New Radio code and that the new access technology does not grow the program memory significantly. Similar behaviour can be seen in the cycle-usage measurements. When only one candidate location in the unlicensed spectrum was measured, the cycle usage increased by about 10% compared to New Radio. However, the cycle usage did not increase linearly when more candidate locations were measured: the more candidate locations were measured, the fewer cycles one candidate-location measurement consumed on average.
The observation supports the conclusion that the implementation reuses a lot of New Radio code and that a large amount of the code is common.
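The cycle-usage observation, a fixed common-code cost amortized over however many candidate locations are measured plus a linear per-candidate cost, can be captured by a toy cost model. The constants below are invented for illustration and are not the thesis's measured figures.

```python
def avg_cycles_per_candidate(n, common_cycles=9000, per_candidate_cycles=1000):
    # Toy model (invented constants): a fixed common-code cost shared by
    # the whole measurement run plus a linear per-candidate cost. The
    # average cost per candidate falls as more locations are measured.
    return (common_cycles + n * per_candidate_cycles) / n

for n in (1, 2, 4, 8):
    print(n, avg_cycles_per_candidate(n))
```

This is the same amortization argument the thesis draws from its measurements: the per-candidate average decreases with the number of candidates precisely because the shared code executes only once.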

    Algorithm-Architecture Co-Design for Digital Front-Ends in Mobile Receivers

    The methodology behind this work has been to use the concept of algorithm-hardware co-design to achieve efficient solutions for the digital front-end in mobile receivers. It has been shown that, by looking at algorithms and hardware architectures together, more efficient solutions can be found; i.e., efficient with respect to some design measure. In this thesis the main focus has been placed on two such measures: first, reduced-complexity algorithms that lower energy consumption at limited performance degradation; second, handling the increasing number of wireless standards that should preferably run on the same hardware platform. To perform this task it is crucial to understand both sides of the table, i.e., both the algorithms and concepts of wireless communication as well as the implications that arise for the hardware architecture. It is easier to handle the high complexity by separating those disciplines through layered abstraction. However, this representation is imperfect, since many interconnected "details" belonging to different layers are lost in the attempt to handle the complexity. This results in poor implementations, and the design of mobile terminals is no exception. Wireless communication standards are often designed based on mathematical algorithms with theoretical boundaries, with little consideration of actual implementation constraints such as energy consumption and silicon area. This thesis does not try to remove the layer abstraction model, given its undeniable advantages, but rather uses those cross-layer "details" that went missing during the abstraction. This is done in three manners. In the first part, the cross-layer optimization is carried out from the algorithm perspective. Important circuit design parameters, such as quantization, are taken into consideration when designing the algorithms for OFDM symbol timing, CFO, and SNR estimation with a single bit, namely, the sign bit.
Proof-of-concept circuits were fabricated and showed high potential for low-end receivers. In the second part, the cross-layer optimization is accomplished from the opposite side, i.e., the hardware-architectural side. A software-defined radio (SDR) architecture is known for its flexibility and scalability over many applications. In this work a filtering application is mapped onto software instructions in the SDR architecture in order to make filtering-specific modules redundant and thus save silicon area. In the third and last part, the optimization is done from an intermediate point within the algorithm-architecture spectrum. Here, a heterogeneous architecture with a combination of highly efficient and highly flexible modules is used to accomplish initial synchronization in at least two concurrent OFDM standards. A demonstrator was built, capable of performing synchronization in any two standards, including LTE, WiFi, and DVB-H.
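The surprising point of sign-bit processing is that 1-bit I/Q samples retain enough phase information for synchronization tasks. The sketch below illustrates the general idea with a cross-correlation timing search against a known preamble; the preamble length, position, and signal model are invented for the sketch and are not the thesis's fabricated circuit design.

```python
import random

def sign_bit(x):
    # One-bit quantizer: keep only the signs of the I and Q components.
    return complex(1.0 if x.real >= 0 else -1.0, 1.0 if x.imag >= 0 else -1.0)

def locate_preamble(rx, preamble):
    # Cross-correlate the 1-bit stream against the known full-precision
    # preamble; return the lag with the largest correlation magnitude.
    L = len(preamble)
    best_d, best_m = 0, -1.0
    for d in range(len(rx) - L + 1):
        c = sum(preamble[n].conjugate() * rx[d + n] for n in range(L))
        if abs(c) > best_m:
            best_m, best_d = abs(c), d
    return best_d

# Invented scenario: a 64-sample preamble buried at offset 50 in random data,
# with the receiver seeing only the sign bits of each sample.
random.seed(4)
gauss = lambda: complex(random.gauss(0, 1), random.gauss(0, 1))
preamble = [gauss() for _ in range(64)]
offset = 50
sig = [gauss() for _ in range(offset)] + preamble + [gauss() for _ in range(50)]
rx = [sign_bit(s) for s in sig]
print(locate_preamble(rx, preamble))  # recovers the preamble position
```

Quantizing each sample to one of four quadrant values costs some correlation gain, but the peak at the true offset still dominates, which is why a sign-bit front end can be attractive for low-power, low-cost receivers.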