Software Defined Radio Implementation of Carrier and Timing Synchronization for Distributed Arrays
The communication range of wireless networks can be greatly improved by using
distributed beamforming from a set of independent radio nodes. One of the key
challenges in establishing a beamformed communication link from separate radios
is achieving carrier frequency and sample timing synchronization. This paper
describes an implementation that addresses both carrier frequency and sample
timing synchronization simultaneously using RF signaling between designated
master and slave nodes. By using a pilot signal transmitted by the master node,
each slave estimates and tracks the frequency and timing offset and digitally
compensates for them. A real-time implementation of the proposed system was
developed in GNU Radio and tested with Ettus USRP N210 software defined radios.
The measurements show that the distributed array can reach a residual frequency
error of 5 Hz and a residual timing offset of 1/16 the sample duration for 70
percent of the time. This performance enables distributed beamforming for range
extension applications. Comment: Submitted to 2019 IEEE Aerospace Conference.
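The pilot-based offset tracking described above can be illustrated with a minimal sketch: a carrier frequency offset is estimated from the average phase drift of a received pilot tone via a lag-autocorrelation. The function name, lag, and sample rate below are illustrative assumptions, not the paper's GNU Radio implementation.

```python
import numpy as np

def estimate_cfo(x, lag, fs):
    """Estimate a carrier frequency offset (Hz) from the average
    phase rotation of a pilot signal over `lag` samples."""
    r = np.sum(np.conj(x[:-lag]) * x[lag:])      # lag-autocorrelation
    return np.angle(r) * fs / (2 * np.pi * lag)  # rad/sample -> Hz

fs = 1e6        # sample rate (Hz), assumed
f0 = 5000.0     # true offset, inside the unambiguous range fs / (2 * lag)
n = np.arange(4096)
pilot = np.exp(2j * np.pi * f0 * n / fs)         # noise-free pilot tone
f_hat = estimate_cfo(pilot, lag=16, fs=fs)
```

A slave node would derotate its samples by the negated estimate; a longer lag trades unambiguous range for finer resolution.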
Low-resolution ADC receiver design, MIMO interference cancellation prototyping, and PHY secrecy analysis.
This dissertation studies three independent research topics in the general field of wireless communications. The first topic focuses on new receiver designs with low-resolution analog-to-digital converters (ADCs). In future massive multiple-input multiple-output (MIMO) systems, multiple high-speed high-resolution ADCs will become a bottleneck for practical applications because of their hardware complexity and power consumption. One solution to this problem is to adopt low-cost low-precision ADCs instead. In Chapter II, MU-MIMO-OFDM systems equipped only with low-precision ADCs are considered, and a new turbo receiver structure is proposed to improve overall system performance. Meanwhile, ultra-low-cost communication devices can enable massive deployment of disposable wireless relays. In Chapter III, the feasibility of using a one-bit relay cluster to help a power-constrained transmitter achieve distant communication is investigated, and nonlinear estimators are applied to enable effective decoding.

The second topic focuses on prototyping and verification of an LTE and WiFi coexistence system, where the operation of LTE in unlicensed spectrum (LTE-U) is discussed. LTE-U extends the benefits of LTE and LTE Advanced to unlicensed spectrum, enabling mobile operators to offload data traffic onto unlicensed frequencies more efficiently and effectively. With LTE-U, operators can offer consumers a more robust and seamless mobile broadband experience with better coverage and higher download speeds. Because the coexistence leads to considerable performance instability in both LTE and WiFi transmissions, LTE and WiFi receivers with a MIMO interference canceller are designed and prototyped in Chapter IV to support the coexistence.

The third topic focuses on theoretical analysis of physical-layer secrecy with finite blocklength. Unlike upper-layer security approaches, physical-layer communication security can guarantee information-theoretic secrecy.
Current studies on physical-layer secrecy are all based on infinite blocklength. These asymptotic studies are unrealistic, however, and the finite-blocklength effect is crucial for practical secrecy communication. In Chapter V, a practical analysis of secure lattice codes is provided.
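As a toy illustration of the low-resolution theme (not the dissertation's turbo receiver), a one-bit ADC can be modeled as keeping only the sign of each I/Q rail. QPSK, whose information lies entirely in those signs, survives the quantization at moderate noise levels; the modulation, noise level, and detector below are simplifying assumptions.

```python
import numpy as np

def one_bit_adc(x):
    """Model a one-bit ADC applied per I/Q rail: only the sign survives."""
    return np.sign(x.real) + 1j * np.sign(x.imag)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(1000, 2))                 # 2 bits per symbol
syms = (1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])   # QPSK mapping
noise = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) * 0.3
y = one_bit_adc(syms + noise)                             # quantized receive
det = np.stack([(y.real < 0).astype(int),
                (y.imag < 0).astype(int)], axis=1)        # sign detector
ber = np.mean(det != bits)
```

At this SNR the bit error rate stays well below one percent even though each sample carries only two bits of information.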
Automatic transmit power control for power efficient communications in UAS
Nowadays, unmanned aerial vehicles (UAVs) have become one of the most popular tools for commercial, scientific, agricultural, and military applications. As drones become faster, smaller, and cheaper, with the ability to carry payloads, their usage is versatile. In most cases, unmanned aerial systems (UAS) are equipped with a wireless communication system to establish a link with the ground control station for transferring control commands, video streams, and payload data. However, with the limited onboard computing resources of the UAS and the growing size and volume of the payload data, computationally complex signal processing such as deep learning cannot easily be done on the drone. Hence, in many drone applications the UAS is just a tool for capturing and storing data, and the data is post-processed off-line on a more powerful computing device. The other solution is to stream payload data to the ground control station (GCS) and let the powerful computer at the ground station handle the data in real time. With the development of communication techniques such as orthogonal frequency-division multiplexing (OFDM) and multiple-input multiple-output (MIMO) transmission, it is possible to increase the spectral efficiency over large bandwidths and consequently achieve high transmission rates. However, the drone and the communication system are usually designed separately, which means that regardless of the situation of the drone, the communication system works independently to provide the data link. Consequently, by taking the position of the drone into account, the communication system has room to optimize the link budget efficiency. In this master's thesis, a power-efficient wireless communication downlink for UAS has been designed. It is achieved by developing an automatic transmit power control system and a custom OFDM communication system.
The work has been divided into three parts: research on drone communication systems, an optimized communication system design, and finally FPGA implementation. In the first part, an overview of commercial drone communication schemes is presented and discussed; their advantages and disadvantages are the source of inspiration for improvement, and with these ideas an optimized scheme is presented. In the second part, an automatic transmit power control system for UAV wireless communication and a power-efficient OFDM downlink scheme are proposed. The automatic transmit power control system estimates the required power level from the relative position between the drone and the GCS and then informs the system to adjust the power amplifier (PA) gain and power supply settings. To obtain high power efficiency at different output power levels, a search strategy has been applied to the PA testbed to find the best voltage supply and gain configurations. In addition, the OFDM signal generation developed in Python can encode data bytes into the baseband signal for testing purposes. Digital predistortion (DPD) linearization has been included in the transmitter's design to guarantee signal linearity. In the third part, two core algorithms, IFFT and LUT-based DPD, have been implemented on the FPGA platform to meet the real-time and high-speed I/O requirements. By using the high-level synthesis design process provided by Xilinx, the algorithms are implemented as reusable IP blocks. The conclusion of the project is given at the end, including a summary of the proposed drone communication system and possible future lines of research.
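The idea of estimating the required power level from the drone-GCS geometry can be sketched with a free-space link budget. The function, sensitivity, and fade margin below are hypothetical illustrations, not the thesis design.

```python
import math

def required_tx_power_dbm(distance_m, freq_hz, rx_sensitivity_dbm,
                          margin_db=10.0):
    """Transmit power needed to reach the receiver sensitivity plus a
    fade margin under a free-space path-loss model (hypothetical)."""
    # FSPL(dB) = 20 log10(d) + 20 log10(f) - 147.55, d in m, f in Hz
    fspl_db = 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55
    return rx_sensitivity_dbm + margin_db + fspl_db

# Drone 100 m from the GCS on a 2.4 GHz link, -90 dBm sensitivity:
p_dbm = required_tx_power_dbm(100.0, 2.4e9, -90.0)
```

Doubling the distance adds 6 dB of path loss, so the controller would step the PA gain and supply settings up accordingly as the drone flies away.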
An FPGA-Based MIMO and Space-Time Processing Platform
Faced with the need to develop a research unit capable of up to twelve 20 MHz bandwidth channels of real-time, space-time, and MIMO processing, the authors developed the STAR (space-time array research) platform. Analysis indicated that the possible degree of processing complexity required in the platform was beyond that available from contemporary digital signal processors, and thus a novel approach was required toward the provision of baseband signal processing. This paper follows the analysis and the consequential development of a flexible FPGA-based processing system. It describes the STAR platform and its use through several novel implementations performed with it. Various pitfalls associated with the implementation of MIMO algorithms in real time are highlighted, and finally, the development requirements for this FPGA-based solution are given to aid comparison with traditional DSP development.
Algorithm-Architecture Co-Design for Digital Front-Ends in Mobile Receivers
The methodology behind this work has been to use the concept of algorithm-hardware co-design to achieve efficient solutions related to the digital front-end in mobile receivers. It has been shown that, by looking at algorithms and hardware architectures together, more efficient solutions can be found, i.e., efficient with respect to some design measure. In this thesis the main focus has been placed on two such parameters: first, reduced-complexity algorithms that lower energy consumption at limited performance degradation; second, handling the increasing number of wireless standards that should preferably run on the same hardware platform. To perform this task it is crucial to understand both sides of the table, i.e., both the algorithms and concepts of wireless communication and the implications they have on the hardware architecture. It is easier to handle the high complexity by separating these disciplines into layers of abstraction. However, this representation is imperfect, since many interconnected "details" belonging to different layers are lost in the attempt to handle the complexity. This results in poor implementations, and the design of mobile terminals is no exception. Wireless communication standards are often designed based on mathematical algorithms with theoretical boundaries, with little consideration of actual implementation constraints such as energy consumption, silicon area, etc. This thesis does not try to remove the layered abstraction model, given its undeniable advantages, but rather uses those cross-layer "details" that went missing during the abstraction. This is done in three ways. In the first part, the cross-layer optimization is carried out from the algorithm perspective. Important circuit design parameters, such as quantization, are taken into consideration when designing the algorithm for OFDM symbol timing, CFO, and SNR estimation with a single bit, namely the Sign-Bit.
Proof-of-concept circuits were fabricated and showed high potential for low-end receivers. In the second part, the cross-layer optimization is approached from the opposite side, i.e., the hardware-architectural side. An SDR architecture is known for its flexibility and scalability across many applications. In this work a filtering application is mapped onto software instructions in the SDR architecture in order to make filtering-specific modules redundant and thus save silicon area. In the third and last part, the optimization is done from an intermediate point within the algorithm-architecture spectrum. Here, a heterogeneous architecture with a combination of highly efficient and highly flexible modules is used to accomplish initial synchronization in at least two concurrent OFDM standards. A demonstrator was built that is capable of performing synchronization in any two standards, including LTE, WiFi, and DVB-H.
Opportunistic error correction: when does it work best for OFDM systems?
The water-filling algorithm enables an energy-efficient OFDM-based transmitter by maximizing the capacity of a frequency-selective fading channel. However, this optimal strategy requires perfect channel state information at the transmitter, which is not realistic in wireless applications. In this paper, we propose opportunistic error correction to maximize the data rate of OFDM systems without this limitation. The key point of this approach is to reduce the dynamic range of the channel by discarding the part of the channel in deep fading. Instead of decoding the information from all the sub-channels, we only recover the data via the strong sub-channels. Just like the water-filling principle, we increase the data rate over the stronger sub-channels by sacrificing the weaker sub-channels. In such a case, the total data rate over a frequency-selective fading channel can be increased. Correspondingly, the noise floor can be raised while still achieving a certain data rate compared to the traditional coding scheme. This leads to an energy-efficient receiver. However, it is not clear whether this method has advantages over the joint coding scheme in narrow-band wireless systems (e.g., channels with a low dynamic range), which is investigated in this paper.
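The sacrifice-the-weak-sub-channels intuition can be checked numerically on a toy channel. The gains, power budget, and equal-power split below are illustrative assumptions, not the paper's system model.

```python
import numpy as np

# Toy frequency-selective channel: 60 strong sub-carriers, 4 in deep fade.
gains = np.concatenate([np.full(60, 4.0), np.full(4, 0.01)])
P, noise = 64.0, 1.0      # total transmit power, per-sub-carrier noise

def sum_rate(g, p_per_sc):
    """Aggregate Shannon rate with power p_per_sc on each used sub-carrier."""
    return float(np.sum(np.log2(1 + g * p_per_sc / noise)))

rate_all = sum_rate(gains, P / gains.size)   # code across every sub-carrier
rate_strong = sum_rate(gains[:60], P / 60)   # discard the faded ones,
                                             # reuse their power budget
```

Dropping the deeply faded sub-carriers and reallocating their power to the strong ones raises the total rate, mirroring the water-filling intuition without transmitter-side channel knowledge.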
Array Architectures and Physical Layer Design for Millimeter-Wave Communications Beyond 5G
Ever-increasing demands on mobile data rates have resulted in the exploration of millimeter-wave (mmW) frequencies for next-generation (5G) wireless networks. Communication at mmW frequencies presents two key challenges. First, high propagation loss requires base stations (BSs) and user equipment (UEs) to use a large number of antennas and narrow beams to close the link with sufficient received signal power. Consequently, communication using narrow beams creates a new challenge in channel estimation and link establishment based on fine angular probing. Current mmW systems use analog phased arrays that can probe only one angle at a time, which results in high latency during link establishment and channel tracking. It is desirable to design low-latency beam training by exploring both physical-layer designs and array architectures that could replace current 5G approaches and pave the way to communications in the higher mmW bands and the sub-THz region, where larger antenna arrays and larger communication bandwidths can be exploited. To this end, we propose novel signal processing techniques that exploit unique properties of the mmW channel, and demonstrate their advantages over conventional approaches theoretically, in simulation, and in experiments. Second, we explore different array architecture designs and analyze their trade-offs among spectral efficiency, power consumption, and area. For a comprehensive comparison, we have developed a methodology for the optimal design of system parameters for different array architecture candidates based on a spectral efficiency target, and use these parameters to estimate array area and power consumption based on circuits reported in the literature.
We show that hybrid analog and digital architectures have severe scalability concerns in radio-frequency signal distribution as array size and spatial multiplexing levels increase, while fully digital array architectures have the best performance and power/area trade-offs. The developed approaches are based on cross-disciplinary research that combines innovation in model-based signal processing, machine learning, and radio hardware. This work is the first to apply compressive sensing (CS), a signal processing tool that exploits the sparsity of the mmW channel model, to accelerate beam training in mmW cellular systems. The algorithm is designed to address practical issues, including the requirement of cell discovery and synchronization, which involves estimating the angular channel together with carrier frequency and timing offsets. We have analyzed the algorithm's performance in 5G-compliant simulation and shown that an order-of-magnitude saving is achieved in initial access latency for the desired channel estimation accuracy. Moreover, we are the first to develop and implement a neural-network-assisted compressive beam alignment to deal with hardware impairments in mmW radios. We have used a 60 GHz mmW testbed to perform experiments and show that the neural network approach enhances the alignment rate compared to CS. To further accelerate beam training, we propose novel frequency-selective probing beams using the true-time-delay (TTD) analog array architecture. Our approach uses different subcarriers to scan different directions and achieves single-shot beam alignment, the fastest approach reported to date. Our comprehensive analysis of different array architectures and exploration of emerging architectures enabled us to develop approaches for initial access and channel estimation in mmW systems that are an order of magnitude faster and more energy efficient.
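The frequency-selective probing idea can be sketched for a uniform linear array: with a per-element true-time delay, the array-factor peak moves with subcarrier frequency, so different subcarriers illuminate different angles in a single shot. The array size, delay value, and subcarrier spacing below are illustrative assumptions, not the dissertation's design.

```python
import numpy as np

c = 3e8           # speed of light (m/s)
fc = 60e9         # carrier frequency (Hz)
M = 32            # number of array elements
d = c / fc / 2    # half-wavelength spacing at the carrier
tau = 1.0 / fc    # per-element true-time delay (one carrier period)

def beam_gain(f, theta):
    """Normalized |array factor| of a TTD-weighted ULA at frequency f
    toward angle theta: peaks where f * (d*sin(theta)/c - tau) is integer."""
    m = np.arange(M)
    steer = np.exp(-2j * np.pi * f * m * d * np.sin(theta) / c)
    w = np.exp(-2j * np.pi * f * m * tau)       # TTD weights
    return np.abs(np.conj(w) @ steer) / M

thetas = np.linspace(-np.pi / 2, np.pi / 2, 2001)
f0, f1 = fc - 1e9, fc + 1e9                     # two subcarriers, 2 GHz apart
peak0 = thetas[np.argmax([beam_gain(f0, t) for t in thetas])]
peak1 = thetas[np.argmax([beam_gain(f1, t) for t in thetas])]
```

The two subcarriers form beams on opposite sides of broadside, so sweeping the OFDM band sweeps the angular sector without retuning the analog front end.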