Estimation and detection techniques for doubly-selective channels in wireless communications
A fundamental problem in communications is the estimation of the channel. The signal transmitted through a communications channel undergoes distortions, so that it often arrives at the receiver in an unrecognizable form. The receiver must expend significant signal-processing effort to decode the transmitted signal from the received signal, and this processing requires knowledge of how the channel distorts the transmitted signal, i.e. channel knowledge. To maintain a reliable link, the channel must be estimated and tracked by the receiver.

Channel estimation at the receiver often proceeds by transmission of a signal called the 'pilot', which is known a priori to the receiver. The receiver forms its estimate of the channel based on how this known signal is distorted, i.e. it estimates the channel from the received signal and the pilot. The design of the pilot is a function of the modulation, the type of training, and the channel. [Continues.]
Optics for AI and AI for Optics
Artificial intelligence is deeply embedded in our daily lives, reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face power-consumption bottlenecks in both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today's telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each can benefit from the other. Optics facilitates on-chip neural networks through fast optical computing and energy-efficient interconnects and communications. In turn, AI provides efficient tools to address the challenges of today's optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers in both academia and industry to discuss the challenges and solutions in each of these fields.
Scalable Front End Designs for Communication and Learning
In this work we provide three examples of estimation/detection problems for which customizing the front end to the specific application makes the system more efficient and scalable. The three problems we consider are all classical, but face new scalability challenges. These challenges introduce additional constraints, and accounting for them results in front-end designs that are very distinct from conventional approaches. The first two case studies pertain to the canonical problems of synchronization and equalization for communication links. As system bandwidths scale, challenges arise due to the limited resolution of analog-to-digital converters (ADCs). We discuss system designs that react to this bottleneck by drastically relaxing the precision requirements of the front end and correspondingly modifying the back-end algorithms using Bayesian principles. The third problem belongs to the field of computer vision. Inspired by neuroscience research on the mammalian visual system, we redesign the front end of a machine-vision system to be neuro-mimetic, followed by layers of unsupervised learning using simple k-means clustering. This results in a framework that is intuitive, more computationally efficient than supervised deep networks, and amenable to the increasing availability of large amounts of unlabeled data.

We first consider the problem of blind carrier phase and frequency synchronization, in order to obtain insight into the performance limitations imposed by severe quantization constraints. We adopt a mixed-signal analog front end that coarsely quantizes the phase and employs a digitally controlled feedback loop that applies a phase shift prior to the ADC; this acts as a controllable dither signal and aids the estimation process. We propose a control policy for the feedback and show that, combined with blind Bayesian algorithms, it yields excellent performance, close to that of an unquantized system.

Next, we take up the problem of channel equalization with severe limits on the number of slicers available for the ADC. We find that the standard flash ADC architecture can be highly sub-optimal in the presence of such constraints. Hence we explore a "space-time" generalization of the flash architecture by allowing a fixed number of slicers to be dispersed in time (sampling phase) as well as space (i.e., amplitude). We show that optimizing the slicer locations, conditioned on the channel, yields significant gains in bit error rate (BER) performance.

Finally, we explore alternative ways of learning convolutional nets for machine vision, making them easier to interpret and simpler to implement than the purely supervised nets in current use. In particular, we investigate a framework that combines a neuro-mimetic front end (designed in collaboration with neuroscientists from the psychology department at UCSB) with unsupervised feature extraction based on clustering. Supervised classification, using a generic support vector machine (SVM), is applied at the end. We obtain competitive classification results on standard image databases, beating the state of the art for NORB (uniform-normalized) and approaching it for MNIST.
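To make the first case study concrete, here is a minimal sketch of grid-based Bayesian phase estimation from coarsely quantized phase observations with a controllable dither applied before quantization. This is the idea only, not the thesis's mixed-signal design: modulation is stripped out, the 2-bit quantizer and noise level are assumptions, and the dither here is plain uniform randomization rather than an optimized control policy.

```python
import numpy as np

rng = np.random.default_rng(1)

theta_true = 0.7                        # unknown phase offset (radians)
levels = 4                              # 2-bit phase quantizer
sector = 2 * np.pi / levels

def quantize(phi):
    """Index of the phase sector containing phi."""
    return int(np.mod(phi, 2 * np.pi) // sector)

grid = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # candidate phases
post = np.ones_like(grid) / grid.size                   # uniform prior
p_hit = 0.9                             # assumed sector-correct probability

for _ in range(300):
    d = rng.uniform(0, 2 * np.pi)                        # dither shift
    obs = quantize(theta_true + d + 0.1 * rng.normal())  # coarse observation

    # Bayesian update: boost candidates whose dithered phase lands in the
    # observed sector, penalize the rest, and renormalize.
    match = np.array([quantize(g + d) == obs for g in grid])
    post *= np.where(match, p_hit, (1 - p_hit) / (levels - 1))
    post /= post.sum()

print(f"true phase {theta_true:.2f} rad, MAP estimate {grid[np.argmax(post)]:.2f} rad")
```

Because the dither sweeps the quantizer boundaries across all phases, repeated coarse observations are enough for the posterior to concentrate near the true phase, which is the intuition behind the dithered feedback in the abstract.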
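The third case study's pipeline, unsupervised k-means feature extraction followed by a generic SVM, can be sketched with off-the-shelf tools. The patch size, cluster count, and random stand-in data below are assumptions, and the neuro-mimetic front end is omitted entirely:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
images = rng.random((200, 16, 16))      # toy stand-in for a real dataset
labels = rng.integers(0, 2, size=200)   # toy binary labels

def extract_patches(img, size=4):
    """Slide a size x size window over the image and flatten each patch."""
    return np.array([
        img[r : r + size, c : c + size].ravel()
        for r in range(img.shape[0] - size + 1)
        for c in range(img.shape[1] - size + 1)
    ])

# Unsupervised stage: cluster patches from all images with k-means.
all_patches = np.vstack([extract_patches(im) for im in images])
kmeans = KMeans(n_clusters=32, n_init=4, random_state=0).fit(all_patches)

# Encode each image as a histogram of its patches' cluster assignments.
def encode(img):
    ids = kmeans.predict(extract_patches(img))
    return np.bincount(ids, minlength=32).astype(float)

features = np.array([encode(im) for im in images])

# Supervised stage: a generic linear SVM on the clustered features.
clf = LinearSVC().fit(features[:150], labels[:150])
print("held-out accuracy:", clf.score(features[150:], labels[150:]))
```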
Efficient channel equalization algorithms for multicarrier communication systems
A blind adaptive algorithm that updates time-domain equalizer (TEQ) coefficients by Adjacent Lag Auto-correlation Minimization (ALAM) is proposed to shorten the channel for multicarrier modulation (MCM) systems. ALAM is an addition to the family of existing correlation-based algorithms, achieving similar or better performance with lower complexity. This is accomplished by designing a cost function without the sum-square term and exploiting the symmetric-TEQ property to halve the complexity of the TEQ adaptation. Furthermore, to avoid the limitations of an unstable, lower bit rate and high complexity, an adaptive TEQ using equal-taps constraints (ETC) is introduced to maximize the bit rate with the lowest complexity.

An IP core is developed for the low-complexity ALAM (LALAM) algorithm to be implemented on an FPGA. This implementation is extended to include the moving-average (MA) estimate for the ALAM algorithm, referred to as ALAM-MA. A unit-tap constraint (UTC) is used instead of a unit-norm constraint (UNC) while updating the adaptive algorithm, to avoid the all-zero solution for the TEQ taps. The IP core is implemented on a Xilinx Virtex-II Pro XC2VP7-FF672-5 for ADSL receivers, and gate-level simulation guaranteed successful operation at maximum frequencies of 27 MHz and 38 MHz for the ALAM-MA and LALAM algorithms, respectively.

After channel shortening with the TEQ, a frequency-domain equalizer (FEQ) is used to recover QAM signals distorted by channel effects. A new analytical learning-based framework is proposed to jointly solve the equalization and symbol-detection problems in orthogonal frequency division multiplexing (OFDM) systems with QAM signals. The framework utilizes an extreme learning machine (ELM) to achieve fast training, high performance, and low error rates. The proposed framework operates in the real domain by transforming a complex signal into a single 2-tuple real-valued vector. Such a transformation offers equalization in the real domain with minimal computational load and high accuracy. Simulation results show that the proposed framework outperforms other learning-based equalizers in terms of symbol error rates and training speeds.
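As a rough illustration of the ELM idea in the last paragraph — complex samples mapped to real 2-tuples, a fixed random hidden layer, and output weights trained by a single least-squares solve — here is a minimal sketch. The toy channel, window length, hidden-layer size, and noise level are assumptions, and the full OFDM chain is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed toy setup: QPSK through a short FIR channel with additive noise.
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
symbols = rng.choice(qpsk, size=2000)
h = np.array([1.0, 0.4 - 0.2j])                      # toy 2-tap channel
received = np.convolve(symbols, h)[: len(symbols)]
received += 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))

# Complex -> real transform: a sliding window of 3 samples becomes a
# 6-dimensional real feature vector of (real, imag) 2-tuples.
pad = np.concatenate([[0.0], received, [0.0]])
X = np.column_stack([pad[:-2].real, pad[:-2].imag,
                     pad[1:-1].real, pad[1:-1].imag,
                     pad[2:].real, pad[2:].imag])
Y = np.column_stack([symbols.real, symbols.imag])    # real-valued targets

# ELM: the random hidden layer is fixed after initialization; training
# reduces to one linear least-squares solve for the output weights.
n_hidden = 64
W = rng.normal(size=(6, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)
beta, *_ = np.linalg.lstsq(H, Y, rcond=None)

# Equalize, map back to complex, and detect the nearest QPSK symbol.
est = (H @ beta) @ np.array([1.0, 1.0j])
detected = qpsk[np.argmin(np.abs(est[:, None] - qpsk[None, :]), axis=1)]
print("symbol error rate:", np.mean(detected != symbols))
```

Because only the output layer is trained, and by one linear solve rather than iterative backpropagation, training is fast, which is the property the abstract highlights.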