
    A survey of the state of the art and focused research in range systems, task 2

    Many communication, control, and information processing subsystems are modeled by linear systems incorporating tapped delay lines (TDL). Such optimized subsystems result in full-precision multiplications in the TDL. In order to reduce complexity and cost in a microprocessor implementation, these multiplications can be replaced by single-shift instructions, which are equivalent to powers-of-two multiplications. Since the obvious operation of rounding the infinite-precision TDL coefficients to the nearest powers of two usually yields quite poor system performance, the optimum powers-of-two coefficient solution was considered. Detailed explanations of the use of branch-and-bound algorithms for finding the optimum powers-of-two solutions are given. A specific demonstration of this methodology in the design of a linear data equalizer, and its implementation in assembly language on an 8080 microprocessor with a 12-bit A/D converter, is reported. This simple microprocessor implementation with optimized TDL coefficients achieves system performance comparable to optimum linear equalization with full-precision multiplications for an input data rate of 300 baud. The philosophy demonstrated in this implementation is fully applicable to many other microprocessor-controlled information processing systems.
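
    As a rough illustration (not the report's branch-and-bound search), the sketch below shows the naive step the abstract warns about: each full-precision TDL tap is rounded to the nearest power of two so that every multiplication can be replaced by a shift. The tap values and test signal are invented for the example.

        import numpy as np

        def round_to_power_of_two(c):
            """Round a coefficient to the nearest (signed) power of two; zero stays zero."""
            if c == 0:
                return 0.0
            return np.sign(c) * 2.0 ** np.round(np.log2(abs(c)))

        def tdl_filter(x, taps):
            """Tapped-delay-line (FIR) filter: y[n] = sum_k taps[k] * x[n-k]."""
            return np.convolve(x, taps)[: len(x)]

        # Hypothetical full-precision equalizer taps and their power-of-two approximation.
        taps = np.array([0.93, -0.27, 0.11, -0.04])
        p2_taps = np.array([round_to_power_of_two(c) for c in taps])  # [1, -0.25, 0.125, -0.03125]

        x = np.random.randn(256)
        y_full = tdl_filter(x, taps)
        y_p2 = tdl_filter(x, p2_taps)
        print("power-of-two taps:", p2_taps)
        print("max output deviation:", np.max(np.abs(y_full - y_p2)))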

    Quantization of Neural Network Equalizers in Optical Fiber Transmission Experiments

    The quantization of neural networks for the mitigation of nonlinear and component distortions in dual-polarization optical fiber transmission is studied. Two low-complexity neural network equalizers are applied in three 16-QAM 34.4 GBaud transmission experiments with different representative fibers. A number of post-training quantization and quantization-aware training algorithms are compared for casting the weights and activations of the neural network in few bits, combined with uniform, additive power-of-two, and companding quantization. For quantization in the large bit-width regime of ≥ 5 bits, quantization-aware training with straight-through estimation incurs a Q-factor penalty of less than 0.5 dB compared to the unquantized neural network. For quantization in the low bit-width regime, an algorithm dubbed companding successive alpha-blending quantization is suggested. This method compensates for the quantization error aggressively by successive grouping and retraining of the parameters, as well as an incremental transition from the floating-point representations to the quantized values within each group. The activations can be quantized at 8 bits and the weights on average at 1.75 bits, with a penalty of ≤ 0.5 dB. If the activations are quantized at 6 bits, the weights can be quantized at 3.75 bits with minimal penalty. The computational complexity and required storage of the neural networks are drastically reduced, typically by over 90%. The results indicate that low-complexity neural networks can mitigate nonlinearities in optical fiber transmission. Comment: 15 pages, 9 figures, 5 tables
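
    The straight-through estimation mentioned above can be sketched in a few lines. The example below is a generic uniform-quantization quantization-aware training step in PyTorch, not the paper's companding successive alpha-blending scheme; the toy linear "equalizer", data, bit-width and learning rate are assumptions made for illustration.

        import torch

        def quantize_ste(w, n_bits=5):
            """Uniform symmetric quantization with a straight-through estimator:
            the forward pass uses quantized weights, the backward pass acts as identity."""
            qmax = 2 ** (n_bits - 1) - 1
            scale = w.abs().max() / qmax
            w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
            return w + (w_q - w).detach()  # straight-through: gradient bypasses the rounding

        torch.manual_seed(0)
        w = torch.randn(8, requires_grad=True)          # toy equalizer taps
        x = torch.randn(64, 8)                          # toy received samples
        target = x @ torch.randn(8)                     # toy desired output

        opt = torch.optim.SGD([w], lr=0.05)
        for _ in range(200):
            opt.zero_grad()
            loss = torch.mean((x @ quantize_ste(w, n_bits=5) - target) ** 2)
            loss.backward()
            opt.step()
        print("final MSE with 5-bit quantized taps:", loss.item())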

    Adaptive Bit Allocation With Reduced Feedback for Wireless Multicarrier Transceivers

    With the increasing demand for wireless mobile applications came a growing need to transmit information quickly and accurately while consuming more and more bandwidth. To address this need, communication engineers started employing multicarrier modulation in their designs, which is suitable for high data rate transmission. Multicarrier modulation reduces the system's susceptibility to the frequency-selective fading channel by transforming it into a collection of approximately flat subchannels; as a result, it is easier to compensate for the distortion introduced by the channel. This thesis concentrates on techniques for saving bandwidth when employing adaptive multicarrier modulation, where subcarrier parameters (bit and energy allocations) are adapted based on the channel state information feedback obtained from the previous burst. Although bit and energy allocation can substantially increase the error robustness and throughput of the system, the feedback information required at both ends of the transceiver can be large. The objective of this work is to compare different feedback compression techniques that could reduce the amount of feedback information required to perform adaptive bit and energy allocation in multicarrier transceivers. This thesis employs an approach for reducing the number of feedback transmissions by exploiting the time-correlation properties of a wireless channel and placing a threshold check on bit error rate (BER) values. Using quantization and source coding techniques, such as Huffman coding, run-length encoding and LZW algorithms, the amount of feedback information has been compressed. These calculations have been done for different quantization levels to understand the relationship between quantization level and system performance. These techniques have been applied to both OFDM and MIMO-OFDM systems.
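
    A minimal sketch of just the run-length step, applied to a quantized per-subcarrier bit allocation; the 16-subcarrier allocation below is hypothetical, and the thesis's Huffman coding, LZW stage and BER threshold check are not shown.

        def run_length_encode(symbols):
            """Compress a sequence into (value, run length) pairs -- effective when
            adjacent subcarriers receive the same bit allocation."""
            if not symbols:
                return []
            runs, current, count = [], symbols[0], 1
            for s in symbols[1:]:
                if s == current:
                    count += 1
                else:
                    runs.append((current, count))
                    current, count = s, 1
            runs.append((current, count))
            return runs

        # Hypothetical bit allocation for a 16-subcarrier OFDM burst.
        bit_alloc = [2, 2, 2, 4, 4, 4, 4, 6, 6, 6, 4, 4, 2, 2, 0, 0]
        rle = run_length_encode(bit_alloc)
        print(len(bit_alloc), "symbols ->", len(rle), "RLE pairs:", rle)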

    Performance Evaluation of Adaptive Equalizer in a Communication System

    This project deals with the study of various kinds of interference in a communication channel, viz. intersymbol interference, multipath interference and additive interference, and with the design of an adaptive equalizer. The idea of the equalizer is to build (another) filter in the receiver that counteracts the effect of the channel. In essence, the equalizer must “unscatter” the impulse response. This can be stated as the goal of designing the equalizer E so that the impulse response of the combined channel and equalizer CE has a single spike. This can be solved using different techniques. In this project, we have implemented an adaptive equalizer using four different algorithms in Matlab and have suggested different ways to decide the coefficients of the equalizer. The first procedure (least squares algorithm) minimizes the square of the symbol recovery error over a block of data, which can be done by using matrix pseudo-inversion. The second method (least mean squares algorithm) minimizes the square of the error between the received data values and the transmitted values, which is achieved via an adaptive element. The third method (decision-directed algorithm) and the fourth method (dispersion-minimizing algorithm) are used when there is no training sequence and other performance functions are appropriate. In addition, we have undertaken a study and realization of the bit error rate of a communication system using VisSim software.
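
    The second method is compact enough to sketch. The Python fragment below is a stand-in for the project's Matlab implementation of the least mean squares update; the channel, BPSK training data, step size and tap count are invented for the example.

        import numpy as np

        def lms_equalizer(received, training, n_taps=7, mu=0.01):
            """Adapt a linear equalizer with the LMS rule: w <- w + mu * e * x."""
            w = np.zeros(n_taps)
            x_buf = np.zeros(n_taps)
            sq_err = np.empty(len(training))
            for n in range(len(training)):
                x_buf = np.roll(x_buf, 1)
                x_buf[0] = received[n]
                y = w @ x_buf                 # equalizer output
                e = training[n] - y           # symbol recovery error
                w += mu * e * x_buf           # stochastic-gradient update
                sq_err[n] = e ** 2
            return w, sq_err

        rng = np.random.default_rng(0)
        symbols = rng.choice([-1.0, 1.0], size=2000)               # BPSK training sequence
        channel = np.array([1.0, 0.5, 0.2])                        # assumed channel impulse response
        received = np.convolve(symbols, channel)[: len(symbols)]
        received += 0.05 * rng.standard_normal(len(received))      # additive noise
        w, sq_err = lms_equalizer(received, symbols)
        print("mean squared error over the last 200 symbols:", sq_err[-200:].mean())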

    Design and analysis of short word length DSP systems for mobile communication

    Recently, many general-purpose DSP applications, such as Least Mean Squares-like single-bit adaptive filter algorithms, have been developed using the Short Word Length (SWL) technique and have been shown to achieve performance similar to multi-bit systems. A key function in SWL systems is sigma-delta modulation (ΣΔM), which operates at an oversampling ratio (OSR), in contrast to the Nyquist-rate sampling typically used in conventional multi-bit systems. To date, the analysis of SWL (or single-bit) DSP systems has tended to be performed using high-level tools such as MATLAB, with little work reported relating to their hardware implementation, particularly in Field Programmable Gate Arrays (FPGAs). This thesis explores the hardware implementation of single-bit systems in FPGAs, using the design and implementation in VHDL of a single-bit ternary FIR-like filter as an illustrative example. The impact of varying the OSR and bit-width of the SWL filter has been determined, and a comparison undertaken between the area-performance-power characteristics of the SWL FIR filter and those of its equivalent multi-bit filter. In these experiments, it was found that the single-bit FIR-like filter consistently outperforms the multi-bit technique in terms of area, performance and power, except at the highest filter orders analysed in this work. At higher orders, the ΣΔ approach retains its power and performance advantages but exhibits slightly higher chip area. In the second stage of the thesis, three encoding techniques, canonical signed digit (CSD), 2's complement, and redundant binary signed digit (RBSD), were designed and investigated on the basis of area and performance in FPGA at varying OSR. Simulation results show that, in contrast to the multi-bit domain, the CSD encoding technique does not offer any significant improvement over 2's complement, whereas RBSD occupies double the chip area of the other two techniques and has poorer performance. The stability of the single-bit FIR-like filter depends mainly upon the IIR remodulator due to its recursive nature. Thus, we have investigated the stability of the IIR remodulator and propose a new model, based on linear analysis and a root-locus approach, that takes into account the widely accepted second-order sigma-delta modulator state-variable upper bounds. Using the proposed model, we have found new limits on the feedback parameter, which is key to single-bit IIR remodulator stability analysis. Further, an analysis of single-bit adaptive channel equalization has been performed in MATLAB, intended to support the design and development of efficient algorithms for single-bit channel equalization. A new mathematical model has been derived with all inputs, coefficients and outputs in the single-bit domain. The model was simulated using narrowband signals in MATLAB and investigated on the basis of symbol error rate (SER), signal-to-noise ratio (SNR) and minimum mean squared error (MMSE). The results indicate that single-bit adaptive channel equalization is achievable with narrowband signals, but that the harsh quantization noise has a great impact on convergence.
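
    To make the single-bit ΣΔM front end concrete, here is a first-order modulator sketch in Python; the thesis analyses a second-order modulator and a VHDL hardware design, so the order, OSR, test sine and crude moving-average decimator below are illustrative assumptions only.

        import numpy as np

        def first_order_sigma_delta(x):
            """First-order sigma-delta modulator: integrate the error between the
            oversampled input and the fed-back single-bit output, then quantize to +/-1."""
            integrator = 0.0
            bits = np.empty(len(x))
            for n, sample in enumerate(x):
                integrator += sample - (bits[n - 1] if n else 0.0)
                bits[n] = 1.0 if integrator >= 0 else -1.0
            return bits

        osr = 64                                           # oversampling ratio
        t = np.arange(4096)
        x = 0.5 * np.sin(2 * np.pi * t / (4 * osr))        # slowly varying in-band input
        bits = first_order_sigma_delta(x)
        # Crude decimation filter: moving average over one OSR window.
        recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")
        print("max reconstruction error:", np.max(np.abs(recovered - x)))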

    Linear MMSE-Optimal Turbo Equalization Using Context Trees

    Formulations of the turbo equalization approach to iterative equalization and decoding vary greatly when channel knowledge is either partially or completely unknown. Maximum a posteriori probability (MAP) and minimum mean square error (MMSE) approaches leverage channel knowledge to make explicit use of soft information (priors over the transmitted data bits) in a manner that is distinctly nonlinear, appearing either in a trellis formulation (MAP) or inside an inverted matrix (MMSE). To date, nearly all adaptive turbo equalization methods either estimate the channel or use a direct adaptation equalizer in which estimates of the transmitted data are formed from an expressly linear function of the received data and soft information, with this latter formulation being most common. We study a class of direct adaptation turbo equalizers that are both adaptive and nonlinear functions of the soft information from the decoder. We introduce piecewise linear models based on context trees that can adaptively approximate the nonlinear dependence of the equalizer on the soft information, such that the equalizer can choose both the partition regions and the locally linear equalizer coefficients in each region independently, with computational complexity that remains of the order of a traditional direct adaptive linear equalizer. This approach is guaranteed to asymptotically achieve the performance of the best piecewise linear equalizer, and we quantify the MSE performance of the resulting algorithm and the convergence of its MSE to that of the linear minimum MSE estimator as the depth of the context tree and the data length increase. Comment: Submitted to the IEEE Transactions on Signal Processing
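
    As a loose illustration of the piecewise-linear idea only (not the paper's context-tree algorithm, which adapts the partition itself), the sketch below fixes a partition of the soft-information axis and runs an independent LMS-adapted linear filter in each region; the channel, the stand-in "LLR" priors and all parameters are invented.

        import numpy as np

        def piecewise_linear_equalizer(received, priors, training, n_taps=5, mu=0.01, n_regions=4):
            """Toy piecewise-linear direct-adaptation equalizer: the decoder's soft
            information selects a region, and each region keeps its own LMS-adapted
            linear filter over [received taps, soft information]."""
            edges = np.linspace(priors.min(), priors.max(), n_regions + 1)[1:-1]
            weights = np.zeros((n_regions, n_taps + 1))    # +1 for the soft-information input
            x_buf = np.zeros(n_taps)
            out = np.empty(len(training))
            for n in range(len(training)):
                x_buf = np.roll(x_buf, 1)
                x_buf[0] = received[n]
                region = int(np.searchsorted(edges, priors[n]))
                u = np.append(x_buf, priors[n])
                out[n] = weights[region] @ u
                weights[region] += mu * (training[n] - out[n]) * u   # region-local LMS update
            return out, weights

        rng = np.random.default_rng(1)
        sym = rng.choice([-1.0, 1.0], size=5000)                                 # BPSK data
        rx = np.convolve(sym, [1.0, 0.4])[: len(sym)] + 0.1 * rng.standard_normal(len(sym))
        llr = 2.0 * sym + rng.standard_normal(len(sym))                          # stand-in decoder soft output
        y, _ = piecewise_linear_equalizer(rx, llr, sym)
        print("MSE over the last 1000 symbols:", np.mean((y[-1000:] - sym[-1000:]) ** 2))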