
    Adaptive Baseband Processing and Configurable Hardware for Wireless Communication

    The world of information is literally at one's fingertips, allowing access to previously unimaginable amounts of data, thanks to advances in wireless communication. The growing demand for high-speed data has necessitated the use of wider bandwidths, and wireless technologies such as Multiple-Input Multiple-Output (MIMO) have been adopted to increase spectral efficiency. These advanced communication technologies require sophisticated signal processing, often leading to higher power consumption and reduced battery life. Therefore, increasing the energy efficiency of baseband hardware for MIMO signal processing has become extremely vital. High Quality of Service (QoS) requirements invariably lead to a larger number of computations and a higher power dissipation. However, recognizing the dynamic nature of the wireless communication medium, in which only some channel scenarios require complex signal processing and not all situations call for high data rates, allows the use of an adaptive channel-aware signal processing strategy to provide a desired QoS. Information such as interference conditions, coherence bandwidth and Signal to Noise Ratio (SNR) can be used to reduce algorithmic computations in favorable channels. Hardware circuits which run these algorithms need flexibility and easy reconfigurability to switch between multiple designs for different parameters. These parameters can be used to tune the operation of different components in a receiver based on feedback from the digital baseband. This dissertation focuses on the optimization of the digital baseband circuitry of receivers which use feedback to trade power and performance. A co-optimization approach, where designs are optimized starting from the algorithmic stage through the hardware architectural stage to the final circuit implementation, is adopted to realize energy-efficient digital baseband hardware for mobile 4G devices. These concepts are also extended to next-generation 5G systems, where the energy efficiency of the base station is improved. This work includes six papers that examine digital circuits in MIMO wireless receivers. Several key blocks in these receivers include analog circuits that have residual non-linearities, leading to signal intermodulation and distortion. Paper-I introduces a digital technique to detect such non-linearities and calibrate analog circuits to improve signal quality. The concept of a digital non-linearity tuning system developed in Paper-I is implemented and demonstrated in hardware. The performance of this implementation is tested with an analog channel select filter, and results are presented in Paper-II. MIMO systems, such as the ones used in 4G, may employ QR Decomposition (QRD) processors to simplify the implementation of tree-search-based signal detectors. However, the small form factor of the mobile device increases spatial correlation, which is detrimental to signal multiplexing. Consequently, a QRD processor capable of handling high spatial correlation is presented in Paper-III. The algorithm and hardware implementation are optimized for carrier aggregation, which increases requirements on signal processing throughput, leading to higher power dissipation. Paper-IV presents a method to perform channel-aware processing with a simple interpolation strategy to adaptively reduce the QRD computation count. Channel properties such as coherence bandwidth and SNR are used to reduce multiplications by 40% to 80%. These concepts are extended to use time-domain correlation properties, and a full QRD processor for 4G systems fabricated in 28 nm FD-SOI technology is presented in Paper-V. The design is implemented with a configurable architecture, and measurements show that circuit tuning results in a highly energy-efficient processor, requiring 0.2 nJ to 1.3 nJ for each QRD. Finally, these adaptive channel-aware signal processing concepts are examined in the scope of the next generation of communication systems. Massive MIMO systems increase spectral efficiency by using a large number of antennas at the base station. Consequently, the signal processing at the base station has a high computational count. Paper-VI presents a configurable detection scheme which reduces this complexity by using techniques such as selective user detection and interpolation-based signal processing. The hardware is optimized for resource sharing, resulting in a highly reconfigurable and energy-efficient uplink signal detector.
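
    To make Paper-IV's interpolation strategy concrete, the sketch below computes exact QR decompositions only on a coarse subcarrier grid and linearly interpolates R in between, re-deriving Q algebraically; the function name, array layout and grid-spacing rule are illustrative assumptions, not the paper's algorithm.

        import numpy as np

        def channel_aware_qrd(H, step):
            """Sketch of interpolation-based QRD. H: (n_subcarriers, nr, nt)
            per-subcarrier channel estimates; `step` would be derived from
            coherence bandwidth and SNR in a channel-aware design."""
            n_sc, nr, nt = H.shape
            grid = sorted(set(range(0, n_sc, step)) | {n_sc - 1})
            Q = np.empty((n_sc, nr, nt), dtype=H.dtype)
            R = np.empty((n_sc, nt, nt), dtype=H.dtype)
            for g in grid:                      # exact QRDs only on the grid
                Q[g], R[g] = np.linalg.qr(H[g])
            for lo, hi in zip(grid, grid[1:]):  # interpolate the rest
                for k in range(lo + 1, hi):
                    t = (k - lo) / (hi - lo)
                    R[k] = (1 - t) * R[lo] + t * R[hi]
                    Q[k] = H[k] @ np.linalg.inv(R[k])  # cheaper than a full QRD
            return Q, R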

    Realizing In-Memory Baseband Processing for Ultra-Fast and Energy-Efficient 6G

    To support emerging applications ranging from holographic communications to extended reality, next-generation mobile wireless communication systems require ultra-fast and energy-efficient baseband processors. Traditional complementary metal-oxide-semiconductor (CMOS)-based baseband processors face two challenges: transistor scaling and the von Neumann bottleneck. To address these challenges, in-memory computing-based baseband processors using resistive random-access memory (RRAM) present an attractive solution. In this paper, we propose and demonstrate RRAM-implemented in-memory baseband processing for the widely adopted multiple-input-multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) air interface. Its key feature is to execute the key operations, including the discrete Fourier transform (DFT) and MIMO detection using linear minimum mean square error (L-MMSE) and zero forcing (ZF), in one step. In addition, an RRAM-based channel estimation module is proposed and discussed. Through prototyping and simulations, we demonstrate the feasibility of an RRAM-based full-fledged communication system in hardware, and large-scale simulations reveal that it can outperform state-of-the-art baseband processors with a gain of 91.2× in latency and 671× in energy efficiency. Our results pave a potential pathway for RRAM-based in-memory computing to be implemented in the era of sixth generation (6G) mobile communications.
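
    As a numerical illustration of the one-step execution claimed above, the sketch below emulates an RRAM crossbar performing the DFT as a single analog matrix-vector product, with the matrix stored as quantized conductances; the 4-bit conductance depth and the real/imaginary plane split are illustrative assumptions (physical arrays would also need differential conductance pairs for negative weights).

        import numpy as np

        def crossbar_mvm(G, v, bits=4):
            """Emulated one-step in-memory matrix-vector multiply: G is stored
            as quantized conductances and the product is formed in a single
            analog read-out, replacing O(N^2) digital MACs."""
            levels = 2 ** bits
            scale = np.abs(G).max()
            Gq = np.round(G / scale * (levels - 1)) / (levels - 1) * scale
            return Gq @ v

        # 64-point DFT mapped onto the crossbar, real and imaginary planes split
        N = 64
        F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
        x = np.random.randn(N) + 1j * np.random.randn(N)
        X = crossbar_mvm(F.real, x) + 1j * crossbar_mvm(F.imag, x)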

    Efficient DSP and Circuit Architectures for Massive MIMO: State-of-the-Art and Future Directions

    Massive MIMO is a compelling wireless access concept that relies on the use of an excess number of base-station antennas relative to the number of active terminals. This technology is a main component of 5G New Radio (NR) and addresses all important requirements of future wireless standards: a great capacity increase, the support of many simultaneous users, and improvement in energy efficiency. Massive MIMO requires the simultaneous processing of signals from many antenna chains, and computational operations on large matrices. The complexity of the digital processing has been viewed as a fundamental obstacle to the feasibility of Massive MIMO in the past. Recent advances in system-algorithm-hardware co-design have led to extremely energy-efficient implementations. These exploit opportunities in deeply-scaled silicon technologies and perform partly distributed processing to cope with the bottlenecks encountered in the interconnection of many signals. For example, prototype ASIC implementations have demonstrated zero-forcing precoding in real time at a 55 mW power consumption (20 MHz bandwidth, 128 antennas, multiplexing of 8 terminals). Coarse and even error-prone digital processing in the antenna paths permits a reduction of consumption by a factor of 2 to 5. This article summarizes the fundamental technical contributions to efficient digital signal processing for Massive MIMO. The opportunities and constraints of operating on low-complexity RF and analog hardware chains are clarified. It illustrates how terminals can benefit from improved energy efficiency. The status of technology and real-life prototypes is discussed. Open challenges and directions for future research are suggested.
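
    For reference, the zero-forcing precoding demonstrated by the cited ASIC prototype computes, per subcarrier, the standard channel pseudo-inverse (textbook form; the prototype's exact normalization is not specified here):

        W = H^{\mathsf{H}} \bigl( H H^{\mathsf{H}} \bigr)^{-1}, \qquad x = \beta \, W s

    where H is the K x M channel between M base-station antennas and K terminals, s the symbol vector, and beta a power-normalization scalar; with M >> K, the K x K Gram matrix H H^H stays small and well conditioned, which is what keeps the inversion tractable at such low power levels.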

    Low Complexity Wireless Communication Digital Baseband Design

    This thesis addresses two problems in the digital baseband design of wireless communication systems, namely, those in Internet of Things (IoT) terminals that support long-range communications and those in full-duplex systems that are designed for high spectral efficiency. IoT terminals for long-range communications are typically based on Orthogonal Frequency-Division Multiple Access (OFDMA) and spread spectrum technologies. In order to design an efficient baseband architecture for such terminals, the workload profiles of both systems are analyzed. Since the frame detection unit has by far the highest computational load, a simple architecture that uses only a scalar datapath is proposed. To optimize for low energy consumption, application-specific instructions that minimize register accesses and address generation units for streamlined memory access are introduced. Two parameters, namely, correlation window size and threshold value, affect the detection probability, the false alarm probability and hence energy consumption. Next, energy-optimal operation settings for correlation window size and threshold value are derived for different channel conditions. For both good and bad channel conditions, if the target signal detection probability is greater than 0.9, the baseband processor has the lowest energy when the frame detection algorithm uses the longest correlation window and the highest threshold value. A full-duplex system has high spectral efficiency but suffers from self-interference. Part of the interference can be cancelled digitally using equalization techniques. The cancellation performance and computational complexity of the competing equalization algorithms, namely, Least Mean Square (LMS), Normalized LMS (NLMS), Recursive Least Square (RLS) and feedback equalizers based on LMS, NLMS and RLS, are analyzed, and a trade-off between performance and complexity is established. The NLMS linear equalizer is found to be suitable for resource-constrained mobile devices, and the NLMS decision feedback equalizer is more appropriate for base stations that are not energy constrained.
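
    As a reference point for the equalizer comparison above, the following is a minimal textbook NLMS canceller; the tap count, step size and signal roles are illustrative assumptions rather than the thesis's configuration.

        import numpy as np

        def nlms_cancel(x, d, n_taps=16, mu=0.5, eps=1e-8):
            """Standard NLMS adaptive filter: x is the known self-interference
            reference, d the received signal; the returned error e is the
            signal after digital interference cancellation."""
            w = np.zeros(n_taps, dtype=complex)
            e = np.zeros(len(x), dtype=complex)
            for n in range(n_taps, len(x)):
                u = x[n - n_taps:n][::-1]           # newest sample first
                y = np.vdot(w, u)                   # filter output w^H u
                e[n] = d[n] - y                     # cancellation residual
                w += (mu / (eps + np.vdot(u, u).real)) * np.conj(e[n]) * u
            return e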

    Resource Allocation in 4G and 5G Networks: A Review

    The advent of 4G and 5G broadband wireless networks brings several challenges with respect to resource allocation in the networks. In an interconnected network, wireless users and devices all compete for scarce resources, which further emphasizes the need for fair and efficient allocation of those resources for the proper functioning of the networks. The purpose of this study is to discover the different factors that are involved in resource allocation in 4G and 5G networks. The methodology used was an empirical study using qualitative techniques: performing literature reviews on the state of the art in 4G and 5G networks, analyzing their respective architectures and resource allocation mechanisms, discovering parameters and criteria, and providing recommendations. It was observed that resource allocation is primarily done with radio resources in 4G and 5G networks, owing to their wireless nature, and resource allocation is measured in terms of delay, fairness, packet loss ratio, spectral efficiency, and throughput. Minimal consideration is given to other resources along the end-to-end 4G and 5G network architectures. This paper defines more types of resources, such as electrical energy, processor cycles and memory space, along end-to-end architectures, whose allocation processes need to be emphasized owing to the inclusion of software-defined networking and network function virtualization in 5G network architectures. Thus, more criteria to evaluate resource allocation, such as electrical energy usage, processor cycles, and memory, have been proposed. Finally, ten recommendations have been made to enhance resource allocation along the whole 5G network architecture.

    Design and Implementation of an RF Front-End for Software Defined Radios

    Software Defined Radios have brought a major reformation in the design standards for radios, in which a large portion of the functionality is implemented through programmable signal processing devices, giving the radio the ability to change its operating parameters to accommodate new features and capabilities. A software radio approach reduces the content of radio frequency and other analog components of traditional radios and emphasizes digital signal processing to enhance overall receiver flexibility. Field Programmable Gate Arrays (FPGA) are a suitable technology for the hardware platform as they offer the potential of hardware-like performance coupled with software-like programmability. Software defined radio is a very broad field, encompassing the design of various technologies all the way from the antenna to RF, IF, and baseband digital design. The RF section primarily consists of analog hardware modules, while the IF and baseband sections are primarily digital. The general process of the radio is to convert the incoming signal from RF to IF and then from IF to baseband for better signal processing. In this thesis, some of the major building blocks of a Software defined radio are designed and implemented using FPGAs. The design of a Digital front end, which provides the bridge between the baseband and analog RF portions of a wireless receiver, is synthesized. The Digital front end receiver consists of a digital down converter (DDC), which in turn comprises a direct digital frequency synthesizer (DDFS), a phase accumulator and a low pass filter. The signal processing block of the DDFS is executed using the Co-ordinate Rotation Digital Computer (CORDIC) algorithm. Cascaded-Integrator-Comb (CIC) filters are implemented for changing the sample rate of the incoming data. Applications of a DDC include software radios, multicarrier and multimode digital receivers, micro and pico cell systems, broadband data applications, instrumentation and test equipment, and in-building wireless telephony. Also, in this thesis, interfaces for connecting Texas Instruments high speed and high resolution Analog-to-Digital converters (ADC) and Digital-to-Analog converters (DAC) with Xilinx Virtex-5 FPGAs are implemented and demonstrated.
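
    To illustrate the DDFS phase-to-amplitude stage mentioned above, here is a behavioural sketch of CORDIC in rotation mode, written in floating point for clarity; a hardware DDFS would use fixed-point shift-and-add datapaths and a small arctangent ROM.

        import numpy as np

        def cordic_sin_cos(theta, n_iters=16):
            """CORDIC rotation mode: returns (~cos(theta), ~sin(theta)) using
            only shifts, adds and a table of arctangents. Valid for theta in
            [-pi/2, pi/2]; a real design folds the other quadrants first."""
            K = np.prod(1 / np.sqrt(1 + 4.0 ** -np.arange(n_iters)))  # gain
            x, y, z = K, 0.0, theta
            for i in range(n_iters):
                d = 1.0 if z >= 0 else -1.0
                x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
                z -= d * np.arctan(2.0 ** -i)   # micro-rotation angle (ROM)
            return x, y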

    Timing-Error Tolerance Techniques for Low-Power DSP: Filters and Transforms

    Low-power Digital Signal Processing (DSP) circuits are critical to commercial System-on-Chip design for battery-powered devices. Dynamic Voltage Scaling (DVS) of digital circuits can reclaim worst-case supply voltage margins for delay variation, reducing power consumption. However, removing static margins without compromising robustness is tremendously challenging, especially in an era of escalating reliability concerns due to continued process scaling. The Razor DVS scheme addresses these concerns by ensuring robustness using explicit timing-error detection and correction circuits. Nonetheless, the design of low-complexity and low-power error correction is often challenging. In this thesis, the Razor framework is applied to fixed-precision DSP filters and transforms. The inherent error tolerance of many DSP algorithms is exploited to achieve very low-overhead error correction. Novel error correction schemes for DSP datapaths are proposed, with very low-overhead circuit realisations. Two new approximate error correction approaches are proposed. The first is based on an adapted sum-of-products form that prevents errors in intermediate results from reaching the output, while the second forces errors to occur only in the less significant bits of each result by shaping the critical path distribution. A third approach is described that achieves exact error correction using time borrowing techniques on critical paths. Unlike previously published approaches, all three proposed approaches are suitable for high clock frequency implementations, as demonstrated with fully placed and routed FIR, FFT and DCT implementations in 90 nm and 32 nm CMOS. Design issues and theoretical modelling are presented for each approach, along with SPICE simulation results demonstrating power savings of 21–29%. Finally, the design of a baseband transmitter in 32 nm CMOS for the Spectrally Efficient FDM (SEFDM) system is presented. SEFDM systems offer bandwidth savings compared to Orthogonal FDM (OFDM), at the cost of increased complexity and power consumption, which is quantified with the first VLSI architecture.
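
    A behavioural model of the second approach's key property may help: if the critical-path distribution is shaped so that a timing upset can only corrupt the k least significant bits of each fixed-point result, the output error is bounded by a few ulps rather than half the full scale. The random single-bit-flip error model below is an illustrative assumption, not the thesis's circuit behaviour.

        import numpy as np

        def fir_with_lsb_errors(x, h, err_bits=4, p_err=0.01, seed=0):
            """Fixed-point FIR whose outputs occasionally take a timing upset
            confined to the `err_bits` least significant bits, so each hit
            perturbs the result by at most 2**(err_bits-1) ulp."""
            rng = np.random.default_rng(seed)
            y = np.convolve(x, h).astype(np.int64)   # ideal fixed-point output
            hit = rng.random(y.shape) < p_err        # cycles with an upset
            flips = 1 << rng.integers(0, err_bits, y.shape)  # one LSB flips
            return np.where(hit, y ^ flips, y)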

    Automatic transmit power control for power efficient communications in UAS

    Nowadays, unmanned aerial vehicles (UAV) have become one of the most popular tools for commercial, scientific, agricultural and military applications. As drones become faster, smaller and cheaper, with the ability to add payloads, their usage can be versatile. In most cases, unmanned aerial systems (UAS) are equipped with a wireless communication system to establish a link with the ground control station to transfer control commands, video streams, and payload data. However, with the limited onboard computing resources of the UAS and the growing size and volume of the payload data, computationally complex signal processing such as deep learning cannot easily be done on the drone. Hence, in many drone applications, the UAS is just a tool for capturing and storing data, and the data is post-processed off-line on a more powerful computing device. The other solution is to stream payload data to the ground control station (GCS) and let the powerful computer at the ground station handle these data in real time. With the development of communication techniques such as orthogonal frequency-division multiplexing (OFDM) and multiple-input multiple-output (MIMO) transmissions, it is possible to increase the spectral efficiency over large bandwidths and consequently achieve high transmission rates. However, the drone and the communication system are usually designed separately, which means that regardless of the situation of the drone, the communication system works independently to provide the data link. Consequently, by taking into account the position of the drone, the communication system has some room to optimize the link budget efficiency. In this master thesis, a power-efficient wireless communication downlink for UAS has been designed. It is achieved by developing an automatic transmit power control system and a custom OFDM communication system. The work has been divided into three parts: research on drone communication systems, an optimized communication system design and, finally, FPGA implementation. In the first part, an overview of commercial drone communication schemes is presented and discussed. The advantages and disadvantages shown are a source of inspiration for improvement. With these ideas, an optimized scheme is presented. In the second part, an automatic transmit power control system for UAV wireless communication and a power-efficient OFDM downlink scheme are proposed. The automatic transmit power control system can estimate the required power level from the relative position between the drone and the GCS and then inform the system to adjust the power amplifier (PA) gain and power supply settings. To obtain high power efficiency for different output power levels, a search strategy has been applied to the PA testbed to find the best voltage supply and gain configurations. Besides, the OFDM signal generation developed in Python can encode data bytes into the baseband signal for testing purposes. Digital predistortion (DPD) linearization has been included in the transmitter's design to guarantee signal linearity. In the third part, two core algorithms, IFFT and LUT-based DPD, have been implemented on the FPGA platform to meet the real-time and high-speed I/O requirements. By using the high-level synthesis design process provided by Xilinx, the algorithms are implemented as reusable IP blocks. The conclusion of the project is given at the end, including a summary of the proposed drone communication system and possible future lines of research.
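
    The core of such an automatic transmit power control loop can be sketched with a standard free-space link budget: choose the lowest PA output that still closes the link at the current drone-to-GCS distance. The receiver sensitivity, fade margin and antenna gains below are illustrative placeholders, not the thesis's measured values.

        import numpy as np

        def required_tx_power_dbm(distance_m, freq_hz,
                                  rx_sensitivity_dbm=-90.0, margin_db=10.0,
                                  ant_gains_db=0.0):
            """Link-budget-based transmit power estimate using free-space
            path loss, FSPL(dB) = 20*log10(4*pi*d*f/c)."""
            fspl_db = 20 * np.log10(4 * np.pi * distance_m * freq_hz / 3e8)
            return rx_sensitivity_dbm + margin_db + fspl_db - ant_gains_db

        # Example: a 2 km link at 2.4 GHz needs about 26 dBm with these settings
        print(required_tx_power_dbm(2000, 2.4e9))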

    A 2x2 bit multiplier using hybrid 13T full adder with Vedic mathematics method

    Various arithmetic circuits such as multipliers require a full adder (FA) as their main building block, and speed and energy consumption are vital considerations in the design of a low-power adder. In this paper, a 2x2 bit Vedic multiplier using a hybrid full adder (HFA) with 13 transistors (13T) has been designed. The design was simulated using Synopsys Custom Tools in a General Purpose Design Kit (GPDK) 90 nm CMOS technology process. In this design, four AND gates and two hybrid FAs (HFAs) are cascaded together, and each HFA is constructed from three modules. The cascaded modules are arranged according to the Vedic mathematics algorithm. This algorithm satisfies the requirement of a fast multiplication operation because the vertical and crosswise architecture of the Urdhva Tiryakbhyam Sutra reduces the number of partial products compared to the conventional multiplication algorithm. With the combination of the hybrid full adder and Vedic mathematics, a new multiplier with low power and low delay is produced. Performance parameters such as power consumption and delay were compared with some existing designs. With a 1 V voltage supply, the average power consumption of the proposed multiplier was found to be 22.96 µW and its delay 161 ps.
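
    A bit-level model of the vertical-and-crosswise scheme described above is given below; half adders stand in for the paper's two hybrid 13T full adders, since the 2x2 case never needs a carry input.

        def vedic_2x2(a, b):
            """2x2-bit Urdhva Tiryakbhyam multiplier: four AND partial
            products combined by two half adders. a, b are 2-bit integers."""
            a0, a1 = a & 1, (a >> 1) & 1
            b0, b1 = b & 1, (b >> 1) & 1
            p0 = a0 & b0                  # vertical product of the LSBs
            x1, x2 = a1 & b0, a0 & b1     # crosswise products
            p1, c1 = x1 ^ x2, x1 & x2     # half adder 1
            v = a1 & b1                   # vertical product of the MSBs
            p2, p3 = v ^ c1, v & c1       # half adder 2
            return p0 | (p1 << 1) | (p2 << 2) | (p3 << 3)

        # Exhaustive check against ordinary multiplication
        assert all(vedic_2x2(a, b) == a * b for a in range(4) for b in range(4))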