66 research outputs found

    Carrier frequency offset recovery for zero-IF OFDM receivers

    As trends in broadband wireless communications demand faster development cycles, smaller sizes, lower costs, and ever-increasing data rates, engineers continually seek new ways to harness evolving technology. The zero intermediate frequency (zero-IF) receiver architecture has become popular because it offers both economic and size advantages over the traditional superheterodyne architecture. Orthogonal Frequency Division Multiplexing (OFDM) is a popular multi-carrier modulation technique able to provide high data rates over echo-laden channels, with excellent robustness to multipath impairments, including frequency-selective fading. Unfortunately, OFDM is very sensitive to the carrier frequency offset (CFO) introduced by the downconversion process. The objective of this thesis is to develop and analyze a blind CFO recovery algorithm suitable for use with a practical zero-IF OFDM telecommunications system. A blind CFO recovery algorithm based on characteristics of the received signal's power spectrum is proposed. The algorithm's error performance is analyzed mathematically, and simulations confirm that its performance agrees with the theoretical results. The proposed algorithm is compared with a number of other CFO recovery techniques; it performs well in comparison and avoids many of the disadvantages of existing blind CFO recovery techniques. Most notably, its performance is not significantly degraded by noisy, frequency-selective channels.
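    The abstract does not spell out which spectral characteristic the proposed estimator exploits, so the following is only a minimal sketch of one common power-spectrum-based blind CFO approach: choosing the trial offset that minimizes the energy falling into the OFDM signal's null (virtual) subcarriers. The subcarrier layout, the search grid, and the helper `estimate_cfo_null_subcarriers` are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def estimate_cfo_null_subcarriers(rx, n_fft, null_bins, trial_offsets):
    """Illustrative blind CFO estimator: pick the trial offset (in units of
    subcarrier spacing) that minimises the power landing in known null
    subcarriers after de-rotating the received time-domain samples."""
    n = np.arange(len(rx))
    costs = []
    for eps in trial_offsets:
        derotated = rx * np.exp(-2j * np.pi * eps * n / n_fft)  # undo trial CFO
        spectrum = np.fft.fft(derotated[:n_fft], n_fft)         # one OFDM symbol
        costs.append(np.sum(np.abs(spectrum[null_bins]) ** 2))  # null-bin power
    return trial_offsets[int(np.argmin(costs))]

# Toy usage: one OFDM symbol with an assumed guard band and a true CFO of 0.3
n_fft = 64
null_bins = np.r_[0, 27:38]                      # DC plus a guard band (assumed layout)
data_bins = np.setdiff1d(np.arange(n_fft), null_bins)
symbols = np.zeros(n_fft, complex)
symbols[data_bins] = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], data_bins.size)
tx = np.fft.ifft(symbols, n_fft)
true_cfo = 0.3                                   # in units of subcarrier spacing
rx = tx * np.exp(2j * np.pi * true_cfo * np.arange(n_fft) / n_fft)
trial_offsets = np.linspace(-0.5, 0.5, 201)
print(estimate_cfo_null_subcarriers(rx, n_fft, null_bins, trial_offsets))  # ~0.3
```

    In a noiseless toy run the cost reaches its minimum when the trial offset matches the true fractional CFO, since de-rotation then restores subcarrier orthogonality and the null bins carry no power.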

    Engineering evaluations and studies. Volume 3: Exhibit C

    High-rate multiplexer asymmetry and jitter, data-dependent amplitude variations, and transition density are discussed.

    Machine learning enabled millimeter wave cellular system and beyond

    Millimeter-wave (mmWave) communication, with its advantages of abundant bandwidth and immunity to interference, has been deemed a promising technology for the next generation network and beyond. With the help of mmWave, the requirements envisioned for the future mobile network could be met, such as the massive growth required in coverage, capacity, and traffic; a better quality of service and experience for users; ultra-high data rates and reliability; and ultra-low latency. However, characteristics of mmWave such as short transmission distance, high sensitivity to blockage, and large propagation path loss pose challenges for mmWave cellular network design. In this context, to enjoy the benefits of mmWave networks, the architecture of the next generation cellular network will be more complex, and with a more complex network come more complex problems. The plethora of possibilities makes planning and managing such a network system more difficult. Specifically, to provide better Quality of Service and Quality of Experience for users in such a network, efficient and effective handover for mobile users is important. The probability of triggering a handover will increase significantly in the next generation network due to dense small-cell deployment, and since the resources at the base station (BS) are limited, handover management will be a great challenge. Further, to achieve the maximum transmission rate for users, the line-of-sight (LOS) channel would be the main transmission channel. However, due to the characteristics of mmWave and the complexity of the environment, the LOS channel is not always available, so non-line-of-sight (NLOS) channels should be explored and used as backup links to serve users. With the problems becoming complex and nonlinear and data traffic increasing dramatically, conventional methods are no longer effective or efficient, so solving these problems efficiently becomes important and new concepts, as well as novel technologies, need to be explored. One promising solution is the use of machine learning (ML) in the mmWave cellular network. On the one hand, with the aid of ML approaches, the network can learn from mobile data and adopt adaptable strategies while avoiding unnecessary human intervention. On the other hand, when ML is integrated into the network, complexity and workload can be reduced while the huge number of devices and volume of data are managed efficiently. Therefore, in this thesis, different ML techniques that assist in optimizing different areas of the mmWave cellular network are explored, in terms of NLOS beam tracking, handover management, and beam management. Specifically, first, a procedure is proposed to predict the angle of arrival (AOA) and angle of departure (AOD), in both azimuth and elevation, for NLOS mmWave communications based on a deep neural network. Moreover, along with the AOA and AOD prediction, a trajectory prediction based on the dynamic window approach (DWA) is employed. The simulation scenario is built with ray-tracing technology and used to generate data. Based on the generated data, two deep neural networks (DNNs) are trained to predict the AOA/AOD in azimuth (AAOA/AAOD) and the AOA/AOD in elevation (EAOA/EAOD).
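    As an illustration of the DNN-based angle prediction described above, here is a minimal sketch assuming a small feed-forward regressor that maps UE position/trajectory features to the four angles (AAOA, AAOD, EAOA, EAOD). The feature layout, layer sizes, and the random placeholder data standing in for ray-tracing samples are assumptions, not the thesis's actual architecture or dataset.

```python
import torch
import torch.nn as nn

class AngleRegressor(nn.Module):
    """Illustrative feed-forward DNN mapping UE features to four angles."""
    def __init__(self, n_features: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),            # AAOA, AAOD, EAOA, EAOD
        )

    def forward(self, x):
        return self.net(x)

# Placeholder training loop on random data standing in for ray-tracing samples.
model = AngleRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
features = torch.randn(1024, 6)           # e.g. UE (x, y, z) plus recent trajectory
angles = torch.randn(1024, 4)             # ground-truth angles from ray tracing
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(features), angles)
    loss.backward()
    optimizer.step()
```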
Furthermore, under the assumption that the user equipment (UE) mobility pattern and precise location are unknown, the UE trajectory is predicted and fed into the trained DNNs as a parameter to predict the AAOA/AAOD and EAOA/EAOD, showing the performance under a realistic assumption. The robustness of both procedures is evaluated in the presence of errors, and the results indicate that a DNN is a promising tool for predicting AOA and AOD in an NLOS scenario. Second, a novel handover scheme is designed to optimize the overall system throughput and total system delay while guaranteeing the quality of service (QoS) of each UE. Specifically, the proposed handover scheme, called O-MAPPO, integrates a reinforcement learning (RL) algorithm with optimization theory. An RL algorithm known as multi-agent proximal policy optimization (MAPPO) determines the handover trigger conditions, and an optimization problem is formulated in conjunction with MAPPO to select the target base station and determine the beam selection; it aims to evaluate and optimize the total throughput and delay while guaranteeing the QoS of each UE after the handover decision is made. Third, a multi-agent RL-based beam management scheme is proposed, where multi-agent deep deterministic policy gradient (MADDPG) is applied at each small-cell base station (SCBS) to maximize the system throughput while guaranteeing the quality of service. With MADDPG, smart beam management methods can serve the UEs more efficiently and accurately. Specifically, the mobility of UEs causes dynamic changes in the network environment; the MADDPG algorithm learns from these changes, and the beam management at each SCBS is optimized according to the reward or penalty received when serving different UEs. This approach can improve the overall system throughput and delay performance compared with traditional beam management methods. The work presented in this thesis demonstrates the potential of ML for addressing problems in the mmWave cellular network and provides specific solutions for optimizing NLOS beam tracking, handover management, and beam management. For the NLOS beam-tracking part, simulation results show that the prediction errors of the AOA and AOD can be kept within an acceptable range of ±2°. For the handover optimization part, the numerical results show that the system throughput and delay are improved by 10% and 25%, respectively, compared with two typical RL algorithms, Deep Deterministic Policy Gradient (DDPG) and Deep Q-learning (DQL). Lastly, for the intelligent beam management part, numerical results show the convergence of MADDPG and its superiority in improving system throughput compared with other typical RL algorithms and the traditional beam management method.
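
    The abstract does not give implementation details of the MADDPG-based beam management, so the following is only a highly simplified sketch of the underlying centralized-critic, decentralized-actor idea: each SCBS has its own actor, while a single critic sees the joint observations and actions during training. The dimensions, network sizes, and the random batch standing in for replayed transitions are illustrative assumptions, and the critic's own temporal-difference update is omitted for brevity.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 3   # e.g. act = (beam azimuth, beam width); sizes assumed

class Actor(nn.Module):
    """Per-SCBS policy: local observation -> continuous beam-control action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralised critic: joint observations and actions -> value estimate."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
actor_opts = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]

# One illustrative actor update per agent on a random batch standing in for
# replayed observations from the beam-management environment.
batch_obs = torch.randn(32, N_AGENTS, OBS_DIM)
for i, (actor_i, opt_i) in enumerate(zip(actors, actor_opts)):
    joint_act = torch.stack(
        [a(batch_obs[:, j]) if j == i else a(batch_obs[:, j]).detach()
         for j, a in enumerate(actors)], dim=1)      # grad only through agent i
    q = critic(batch_obs.flatten(1), joint_act.flatten(1))
    loss = -q.mean()                                  # ascend the centralised value
    opt_i.zero_grad()
    loss.backward()
    opt_i.step()
```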

    Form vs. Function: Theory and Models for Neuronal Substrates

    The quest for endowing form with function represents the fundamental motivation behind all neural network modeling. In this thesis, we discuss various functional neuronal architectures and their implementation in silico, both on conventional computer systems and on neuromorphic devices. Necessarily, such casting to a particular substrate will constrain their form, either by requiring a simplified description of neuronal dynamics and interactions or by imposing physical limitations on important characteristics such as network connectivity or parameter precision. While our main focus lies on the computational properties of the studied models, we augment our discussion with rigorous mathematical formalism. We start by investigating the behavior of point neurons under synaptic bombardment and provide analytical predictions of single-unit and ensemble statistics. These considerations later become useful when moving to the functional network level, where we study the effects of an imperfect physical substrate on the computational properties of several cortical networks. Finally, we return to the single-neuron level to discuss a novel interpretation of spiking activity in the context of probabilistic inference through sampling. We provide analytical derivations for the translation of this "neural sampling" framework to networks of biologically plausible and hardware-compatible neurons, and later take this concept beyond the realm of brain science when we discuss applications in machine learning and analogies to solid-state systems.
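
    As a rough illustration of the sampling idea referred to above (not the thesis's spiking-neuron formulation), the following sketch runs plain Gibbs sampling over binary stochastic units whose joint distribution is a Boltzmann distribution; the network size, weights, and biases are arbitrary placeholders.

```python
import numpy as np

# Binary stochastic units with joint distribution p(z) proportional to
# exp(0.5 * z^T W z + b^T z). Neural sampling maps this kind of sampling onto
# spiking neuron dynamics; here we simply run Gibbs sampling as an illustration.
rng = np.random.default_rng(0)
n_units = 5
W = rng.normal(0, 0.5, (n_units, n_units))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)              # no self-coupling
b = rng.normal(0, 0.5, n_units)

z = rng.integers(0, 2, n_units).astype(float)
samples = []
for step in range(20000):
    k = step % n_units                # update one unit per step
    u = W[k] @ z + b[k]               # "membrane potential" of unit k
    z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))  # logistic firing probability
    samples.append(z.copy())

# Empirical marginal "firing" probabilities after a burn-in period
print(np.mean(samples[5000:], axis=0))
```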