
    Adaptive Background Compensation of Frequency Interleaved DACs with Application to Coherent Optical Transceivers

    Digital-to-analog converters (DACs) with bandwidths larger than 70 GHz and sampling rates in excess of 170 GS/s will soon be required in ultra-high-speed communication applications such as coherent optical transceivers operating at symbol rates of 140 GBd and beyond. Frequency interleaving has been proposed as a way to break the bandwidth bottleneck in such applications. Splitting the input signal into multiple frequency bands reduces the required bandwidth per interleaved DAC and thereby enables the synthesis of signals of greater bandwidth in the reconstructed output. Elaborate digital signal processing (DSP) is required to seamlessly stitch together the sub-bands and to compensate the errors of the analog signal path, which would otherwise severely degrade the performance of the communication system. Adaptive DSP techniques are required to automatically compensate errors caused by process, voltage, and temperature variations in the technology (e.g., CMOS, SiGe) implementations of the data converters, and thereby ensure high manufacturing yield. These techniques must operate in background mode to avoid interfering with the normal operation of the communication system. This work introduces an adaptive background compensation scheme for frequency interleaved DACs (FI-DACs). The primary application example is a 128 GBd QAM16 coherent optical transceiver. However, the technique is applicable to other types of communication transceivers, and it can be generalized to arbitrary signals, as long as they are stationary or quasi-stationary and have a wideband continuous spectrum. The key elements of the proposed technique are a MIMO equalizer and the backpropagation algorithm. Numerical simulation results for the aforementioned application example show that the signal-to-noise-and-distortion ratio (SNDR) of the FI-DAC is boosted by more than 25 dB when the proposed compensation technique is applied in the presence of typical analog mismatches. Furthermore, the optical signal-to-noise ratio penalty of the optical transceiver is reduced from 6 dB to 0.1 dB.
    Fil: Galetto, Agustín C. Fundación Fulgor; Argentina. Fil: Reyes, Benjamín Tomás. Fundación Fulgor; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina. Fil: Morero, Damián Alfonso. Universidad Nacional de Córdoba; Argentina. Fil: Hueda, Mario Rafael. Consejo Nacional de Investigaciones Científicas y Técnicas. Centro Científico Tecnológico Conicet - Córdoba. Instituto de Estudios Avanzados en Ingeniería y Tecnología. Universidad Nacional de Córdoba. Facultad de Ciencias Exactas Físicas y Naturales. Instituto de Estudios Avanzados en Ingeniería y Tecnología; Argentina
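
    The abstract names a MIMO equalizer adapted with the backpropagation algorithm as the key elements. As a deliberately simplified illustration of background adaptation of per-branch equalizers (not the paper's algorithm), the sketch below uses a filtered-x LMS style update on a toy two-branch model: each branch drive signal passes through an equalizer FIR and then an assumed mismatched analog response, the branch outputs sum into the reconstructed signal, and the output error adapts the equalizer taps. The branch responses, tap counts, and step size are invented for illustration, and the frequency translation and band splitting of a real FI-DAC are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-branch model: each branch drive signal passes through an equalizer FIR
# and then an assumed (mismatched) analog response; the branch outputs sum into
# the reconstructed signal. All responses and sizes below are illustrative.
n_taps, n_samps, mu = 9, 20000, 2e-3
g = [np.array([1.0, 0.25, -0.10]),        # estimated analog response, branch 0
     np.array([0.85, 0.40, 0.05])]        # estimated analog response, branch 1
u = [rng.standard_normal(n_samps) for _ in range(2)]   # branch drive signals
d = u[0] + u[1]                                        # ideal reconstruction

# Filtered-x signals: branch inputs filtered by the estimated analog responses,
# so the output error can be mapped back onto the equalizer taps.
xf = [np.convolve(ui, gi)[:n_samps] for ui, gi in zip(u, g)]

w = [np.zeros(n_taps) for _ in range(2)]   # one equalizer FIR per branch
w[0][0] = w[1][0] = 1.0                    # start from a pass-through equalizer

err = np.zeros(n_samps)
for n in range(n_taps, n_samps):
    frames = [x[n - n_taps + 1:n + 1][::-1] for x in xf]
    y = sum(wk @ fk for wk, fk in zip(w, frames))   # slow-adaptation approximation
    e = d[n] - y
    err[n] = e
    for wk, fk in zip(w, frames):
        wk += mu * e * fk                           # LMS-style tap update

print("residual error power (last 1000 samples):", np.mean(err[-1000:] ** 2))
```

    In true background operation the error would have to be derived from an observation of the reconstructed output rather than from a known reference, but the structure of the tap update would be the same.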

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    Complex-valued Adaptive Digital Signal Enhancement For Applications In Wireless Communication Systems

    In recent decades, the wireless communication industry has attracted a great deal of research effort to satisfy rigorous performance requirements while preserving high spectral efficiency. Along with this trend, I/Q modulation is frequently applied in modern wireless communications to develop high-performance, high-data-rate systems. This has created the need for efficient complex-valued signal processing techniques in highly integrated, multi-standard receiver devices. In this dissertation, novel techniques for complex-valued digital signal enhancement are presented and analyzed for various applications in wireless communications. The first technique is a unified block processing approach to generate complex-valued conjugate gradient Least Mean Square (LMS) techniques with optimal adaptations. The proposed algorithms exploit the concept of complex conjugate gradients to find the orthogonal directions for updating the adaptive filter coefficients at each iteration. Along each orthogonal direction, the presented algorithms employ the complex Taylor series expansion to calculate time-varying convergence factors tailored to the adaptive filter coefficients. The performance of the developed technique is tested in the applications of channel estimation, channel equalization, and adaptive array beamforming. Compared with state-of-the-art methods, the proposed techniques demonstrate improved performance and exhibit desirable characteristics for practical use. The second complex-valued signal processing technique is a novel Optimal Block Adaptive algorithm based on Circularity, OBA-C. The proposed OBA-C method compensates for a complex imbalanced signal by restoring its circularity. In addition, by utilizing the complex Taylor series expansion, the OBA-C method optimally updates the adaptive filter coefficients at each iteration. This algorithm can be applied to mitigate frequency-dependent I/Q mismatch effects in the analog front-end. Simulation results indicate that, compared with existing methods, OBA-C exhibits superior convergence speed while maintaining excellent accuracy. The third technique addresses interference rejection in communication systems. Research on both LMS and Independent Component Analysis (ICA) based techniques continues to receive significant attention in the area of interference cancellation. The performance of the LMS- and ICA-based approaches is studied for signals with different probabilistic distributions. Our research indicates that the ICA-based approach works better for super-Gaussian signals, while the LMS-based method is preferable for sub-Gaussian signals. Therefore, an appropriate choice of interference suppression algorithm can be made to satisfy the ever-increasing demand for better performance in modern receiver design.
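
    The baseline that the conjugate-gradient variants above build on is the standard complex-valued LMS update w ← w + μ x e*, with e = d − wᴴx. The sketch below shows only this baseline on a toy QPSK channel-equalization problem; the channel, tap count, step size, and training delay are illustrative assumptions, and the dissertation's conjugate-gradient directions and Taylor-series step sizes are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy complex baseband setup: QPSK symbols through an assumed multipath channel,
# equalized by a complex LMS adaptive FIR trained on the known symbols.
n_taps, n_syms, mu, delay = 7, 5000, 0.01, 2
h = np.array([1.0, 0.4 + 0.3j, -0.1j])                    # assumed channel
sym = rng.integers(0, 2, (2, n_syms)) * 2 - 1
qpsk = (sym[0] + 1j * sym[1]) / np.sqrt(2)                 # unit-power QPSK
x = np.convolve(qpsk, h)[:n_syms]
x += 0.03 * (rng.standard_normal(n_syms) + 1j * rng.standard_normal(n_syms))

w = np.zeros(n_taps, dtype=complex)
mse = np.zeros(n_syms)
for n in range(n_taps, n_syms):
    frame = x[n - n_taps + 1:n + 1][::-1]   # most recent received samples
    y = np.vdot(w, frame)                   # y = w^H frame
    e = qpsk[n - delay] - y                 # error against the known training symbol
    w += mu * frame * np.conj(e)            # complex LMS update
    mse[n] = abs(e) ** 2

print("mean squared error over the last 500 symbols:", mse[-500:].mean())
```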

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have achieved significant synergistic progress in the last decade. Powerful neural network models rooted in machine learning have been proposed as models for neuroscience and for applications in neuromorphic engineering. However, the aspect of robustness is often neglected in these models. Both biological and engineered substrates show diverse imperfections that degrade the performance of computational models or even prohibit their implementation. This thesis describes three projects that aim to implement robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip, quantifying the speed and energy consumption of the system compared to a software simulation and showing how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware. The model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules. It shows how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
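
    "Backpropagation combined with neuromodulation" refers to learning rules in which each synapse combines only locally available quantities with a global, reward-like signal. As a deliberately minimal stand-in (not the thesis' model), the sketch below applies a node-perturbation, three-factor update to a single linear unit: presynaptic activity × output perturbation × reward advantage. All constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Node-perturbation-style three-factor rule on a single linear unit: the weight
# change is (presynaptic activity) x (output perturbation) x (reward advantage).
# Everything here is an illustrative assumption, not the thesis' network.
n_in, n_trials, eta, sigma = 5, 20000, 0.05, 0.1
w_target = rng.standard_normal(n_in)      # hidden mapping the unit should learn
w = np.zeros(n_in)
baseline = 0.0                            # running reward baseline ("expected reward")

for t in range(n_trials):
    x = rng.standard_normal(n_in)         # presynaptic activity
    xi = sigma * rng.standard_normal()    # exploratory perturbation of the output
    y = w @ x + xi                        # postsynaptic response
    r = -(y - w_target @ x) ** 2          # scalar reward: negative squared error
    w += eta * (r - baseline) * xi * x    # local update gated by the reward signal
    baseline += 0.01 * (r - baseline)     # slow tracking of the expected reward

print("remaining weight error:", np.linalg.norm(w - w_target))
```

    In expectation this update is proportional to the gradient of the expected reward, which is the sense in which such local, reward-gated rules can approximate gradient-based credit assignment in simple settings.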

    Low Noise, Jitter Tolerant Continuous-Time Sigma-Delta Modulator

    The demand for higher data rates in receivers with carrier aggregation (CA), such as LTE, increases the effort to integrate a large number of wireless services into a single receiving path, which requires digitizing the signal at intermediate or high frequencies. This relaxes most of the front-end blocks but makes the ADC design very challenging. Solving the bottleneck associated with the ADC in the receiver architecture is a major focus of much ongoing research. Recently, continuous-time Sigma-Delta analog-to-digital converters (ADCs) have been getting more attention due to their inherent filtering properties, lower power consumption and wider input bandwidth. However, they suffer from several non-idealities, such as clock jitter and excess loop delay (ELD), which degrade ADC performance. This dissertation presents two projects that address CT-ΣΔ modulator non-idealities. The first is a CT-ΣΔ modulator achieving 10.9 effective number of bits (ENOB) with a gradient descent (GD) based calibration technique. The GD algorithm is used to extract the loop gain transfer function coefficients. A quantization noise reduction technique is then employed to improve the signal-to-quantization-noise ratio (SQNR) of the modulator using a 7-bit embedded quantizer. An analog fast-path feedback topology is proposed which uses an analog differentiator to compensate excess loop delay. This approach relaxes the requirements of the amplifier placed in front of the quantizer. The modulator is implemented using a third-order loop filter with feed-forward compensation paths and a 3-bit quantizer in the feedback loop. In order to save power and improve loop linearity, a two-stage class-AB amplifier is developed. The prototype modulator is implemented in 0.13 μm CMOS technology and achieves a peak signal-to-noise-and-distortion ratio (SNDR) of 67.5 dB while consuming 8.5 mW from a 1.2 V supply, with an oversampling ratio of 10 at a 300 MHz sampling frequency. The prototype achieves a Walden figure of merit (FoM) of 146 fJ/step. The second project addresses clock jitter in CT-ΣΔ modulators, which suffer from performance degradation due to timing uncertainty of the clock at the feedback digital-to-analog converter (DAC). This thesis proposes splitting the loop filter into an analog part and a digital part to reduce the sensitivity of the feedback DAC to clock jitter. By using a digital first-order filter after the quantizer, the effect of clock jitter is reduced without changing the signal transfer function (STF). Moreover, as one pole of the loop filter is implemented digitally, power and area are reduced by minimizing the active analog elements, and having more digital blocks in the loop of the CT-ΣΔM makes it less sensitive to process, voltage, and temperature variations. We also propose the use of a single DAC with a current divider to implement the feedback coefficients instead of two DACs, decreasing area and clock routing. The prototype is implemented in TSMC 40 nm technology and occupies 0.06 mm^2 of area; the proposed solution consumes 6.9 mW and operates at 500 MS/s. In a 10 MHz bandwidth, the measured dynamic range (DR), peak signal-to-noise ratio (SNR), and peak signal-to-noise-and-distortion ratio (SNDR) in the presence of 4.5 ps RMS clock jitter (0.22% of the clock period) are 75 dB, 68 dB, and 67 dB, respectively. The proposed structure is 10 dB more tolerant to clock jitter than a conventional ΣΔM design with a similar loop filter.
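
    The quoted 146 fJ/step can be cross-checked from the reported power, peak SNDR, sampling rate, and oversampling ratio, assuming the usual conventions ENOB = (SNDR − 1.76)/6.02, BW = fs/(2·OSR), and the Walden FoM P/(2^ENOB · 2·BW):

```python
# Cross-check of the quoted Walden figure of merit from the reported numbers,
# assuming ENOB = (SNDR - 1.76) / 6.02, BW = fs / (2 * OSR), and
# FoM = P / (2**ENOB * 2 * BW).
P, sndr, fs, osr = 8.5e-3, 67.5, 300e6, 10   # W, dB, Hz, unitless (reported values)

enob = (sndr - 1.76) / 6.02                  # ~10.9 effective bits
bw = fs / (2 * osr)                          # 15 MHz signal bandwidth
fom = P / (2 ** enob * 2 * bw)
print(f"ENOB = {enob:.2f} bits, FoM = {fom * 1e15:.0f} fJ/step")   # ~146 fJ/step
```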

    Energy-efficient systems for information transfer and processing

    Machine learning (ML) systems are finding excellent utility in tackling the data deluge of the big data era thanks to the exponential increase in computing power. Current ML systems adopt either centralized cloud computing or distributed edge computing. In both, the challenge of energy efficiency has been drawing increased attention. In cloud computing, data transfer due to inter-chip, inter-board, inter-shelf and inter-rack communications (the I/O interface) within data centers is one of the dominant energy costs. This will intensify with the growing demand for increased I/O bandwidth of high-performance computing in data centers. In edge computing, on the other hand, energy efficiency is the primary design challenge, as mobile devices have limited energy, computation and storage resources. This challenge is being exacerbated by the need to embed ML algorithms such as convolutional neural networks (CNNs) for enabling local on-device inference capabilities. In this dissertation, we investigate techniques to address these challenges. To address the energy efficiency challenge in data centers, this dissertation focuses on reducing the energy consumption of the I/O interface. Specifically, in the emerging analog-to-digital converter (ADC) based multi-Gb/s serial link receivers, the power dissipation is dominated by the ADC. ADCs in serial links employ the signal-to-noise-and-distortion ratio (SNDR) and effective number of bits (ENOB) as performance metrics because these are the standard for generic ADC design. This dissertation presents the use of information-based metrics such as the bit error rate (BER) to design a BER-optimal ADC (BOA) for serial links. First, theoretical analysis is developed to show when the benefits of a BOA over a conventional uniform ADC (CUA) in a serial link receiver are substantial. Second, a 4 GS/s, 4-bit on-chip ADC in a 90 nm CMOS process is designed and integrated into a 4 Gb/s serial link receiver to verify the aforementioned analysis. Specifically, measured results demonstrate that a 3-bit BOA receiver outperforms a 4-bit CUA receiver at a BER < 10^-12 and provides 50% power savings in the ADC. In the process, it is demonstrated conclusively that BER, as opposed to ENOB, is the better metric when designing ADCs for serial links. For the problem of resource-constrained computing at the edge, this dissertation tackles the issue of energy-efficient implementation of ML algorithms, particularly CNNs, which have recently gained considerable interest due to their record-breaking performance in many recognition tasks. However, their implementation complexity hinders their deployment on power-constrained embedded platforms. This dissertation develops two techniques for energy-efficient CNN design. The first technique is a predictive CNN (PredictiveNet), which makes use of the high sparsity in well-trained CNNs to bypass a large fraction of the power-dominant convolutions at runtime without modifying the CNN structure. Analysis supported by simulations is provided to justify PredictiveNet's effectiveness. When applied to both the MNIST and CIFAR-10 datasets, simulation results show that PredictiveNet achieves 7.2× and 4.4× reductions in the computational and representational costs, respectively, compared with a conventional CNN. It is further shown that PredictiveNet enables computational and representational cost reductions of 2.5× and 1.7×, respectively, compared to a state-of-the-art CNN, while incurring only 0.02 classification accuracy loss. The second technique is a variation-tolerant architecture for CNNs capable of operating in the near-threshold voltage (NTV) regime for aggressive energy efficiency. It is well known that NTV computing can achieve up to 10× energy savings but is sensitive to process, voltage, and temperature (PVT) variations, which can lead to timing errors. To leverage the great potential of NTV for energy efficiency, this dissertation develops a new statistical error compensation (SEC) technique referred to as rank-decomposed SEC (RD-SEC). RD-SEC makes use of the inherent redundancy in CNNs to handle timing errors due to NTV computing. When evaluated in CNNs for both the MNIST and CIFAR-10 datasets, simulation results in 45 nm CMOS show that RD-SEC enables robust CNNs operating in the NTV regime. Specifically, the proposed RD-SEC can achieve up to 11× improvement in variation tolerance and enable up to 113× reduction in the standard deviation of classification accuracy while incurring marginal degradation in the median classification accuracy.
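
    The idea behind PredictiveNet, as described above, is to use a cheap, low-precision computation to predict which ReLU outputs will be zero and to spend full-precision effort only on the rest. The sketch below is a loose reading of that idea, not the dissertation's implementation; the bit split, threshold, and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sketch of output-sparsity prediction: a cheap, coarsely quantized
# dot product predicts whether the ReLU output will be zero, and the
# full-precision computation is bypassed for outputs predicted to be zero.
n_out, n_in = 512, 256
w = np.round(rng.standard_normal((n_out, n_in)) * 8).astype(int)        # ~8-bit weights
x = np.round(np.maximum(rng.standard_normal(n_in), 0) * 16).astype(int)  # ReLU-sparse inputs

shift = 3
w_coarse = np.round(w / 2 ** shift).astype(int) * 2 ** shift   # keep only the weight MSBs

y_pred = w_coarse @ x                       # cheap predictor (MSB-only arithmetic)
active = y_pred > 0                         # outputs predicted to survive the ReLU
y_exact = np.maximum(w @ x, 0)              # reference full-precision result

# A hardware version would compute the full product only for `active` outputs;
# here the bypass is emulated by zeroing the skipped ones.
y = np.where(active, y_exact, 0)

skipped = 1.0 - active.mean()
err = np.abs(y - y_exact).mean() / (np.abs(y_exact).mean() + 1e-12)
print(f"bypassed {skipped:.0%} of full-precision dot products, "
      f"mean relative error {err:.3f}")
```

    The savings come from evaluating only the coarse arithmetic for the bypassed outputs; the residual error stems from outputs mispredicted as zero, which is the accuracy cost such a scheme trades for skipped work.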

    Statistical Learning in Chip (SLIC) (Invited Paper)

    Despite best efforts, integrated systems are "born" (manufactured) with a unique 'personality' that stems from our inability to precisely fabricate their underlying circuits, and to create software a priori for controlling the resulting uncertainty. It is possible to use sophisticated test methods to identify the best-performing systems, but this would result in unacceptable yields and correspondingly high costs. The system personality is further shaped by its environment (e.g., temperature, noise and supply voltage) and usage (i.e., the frequency and type of applications executed), and since both can fluctuate over time, so can the system's personality. Systems also "grow old" and degrade due to various wear-out mechanisms (e.g., negative-bias temperature instability), and unexpectedly due to various early-life failure sources. These "nature and nurture" influences make it extremely difficult to design a system that will operate optimally for all possible personalities. To address this challenge, we propose to develop statistical learning in-chip (SLIC). SLIC is a holistic approach to integrated system design based on continuously learning key personality traits on-line, for self-evolving a system to a state that optimizes performance hierarchically across the circuit, platform, and application levels. SLIC will not only optimize integrated-system performance but also reduce costs through yield enhancement, since systems that would before have been deemed to have weak personalities (unreliable, faulty, etc.) can now be recovered through the use of SLIC.

    Optics for AI and AI for Optics

    Artificial intelligence is deeply involved in our daily lives by reinforcing the digital transformation of modern economies and infrastructure. It relies on powerful computing clusters, which face bottlenecks of power consumption for both data transmission and intensive computing. Meanwhile, optics (especially optical communications, which underpin today’s telecommunications) is penetrating short-reach connections down to the chip level, thus meeting AI technology and creating numerous opportunities. This book is about the marriage of optics and AI and how each field can benefit from the other. Optics facilitates on-chip neural networks based on fast optical computing and energy-efficient interconnects and communications. In turn, AI provides efficient tools to address the challenges of today’s optical communication networks, which behave in an increasingly complex manner. The book collects contributions from pioneering researchers from both academia and industry to discuss the challenges and solutions in each of the respective fields.

    Process Control Applications in Microbial Fuel Cells (MFC)

    Microbial fuel cells (MFC) use micro-organisms called anode-respiring bacteria (ARB) to convert chemical energy into electrical energy. This process can not only treat wastewater but also produce the useful byproduct hydrogen peroxide (H2O2). Process variables like anode potential and pH play an important role in MFC operation, and the focus of this dissertation is on the pH and potential control problems. Most adaptive pH control solutions use signal-based norms as cost functions, but their strong dependency on the properties of the excitation signal makes them sensitive to noise, disturbances, and modeling errors. System-norm-based (H-infinity) cost functions provide a viable alternative for the adaptation, as they are less susceptible to the signal properties. Two variants of adaptive pH control algorithms that use approximate H-infinity frequency loop-shaping (FLS) cost metrics are proposed in this dissertation. A pH neutralization process with high retention time is studied using lab-scale experiments, and the experimental setup is used as a basis to develop a first-principles model. The analysis of such a model shows that only the gain of the process varies significantly with operating conditions and with buffering capacity. Consequently, the adaptation of the controller gain (a single parameter) is sufficient to compensate for the variation in process gain, and the focus of the proposed algorithms is the adaptation of the PI controller gain. Computer simulations and lab-scale experiments are used to study the tracking, disturbance rejection and adaptation performance of these algorithms under different excitation conditions. Results show the proposed algorithms produce an optimum that is less dependent on the excitation than a commonly used L2-cost-based algorithm, and track set-points reasonably well under practical conditions. The proposed direct pH control algorithm is integrated with the combined activated sludge anaerobic digestion model (CASADM) of an MFC, and it is shown that pH control improves its performance. Analytical-grade potentiostats are commonly used in MFC potential control, but their high cost (>$6000) and large size make them nonviable for field usage. This dissertation proposes an alternate low-cost ($200) portable potentiostat solution. This potentiostat is tested using a ferricyanide reactor, and results show it produces performance close to an analytical-grade potentiostat.
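
    Since only a single controller parameter is adapted, the mechanism can be pictured with a deliberately simple stand-in: estimate the process gain online and rescale the PI gain to keep the loop gain constant. This is not the dissertation's FLS-cost adaptation; the first-order process, tuning values, and estimator below are illustrative assumptions.

```python
import numpy as np

# Simple stand-in for single-parameter PI gain adaptation on a process whose
# gain varies (e.g., with buffering capacity): the process gain is estimated
# online with a normalized-LMS update on a first-order model, and the PI gain
# is rescaled to hold the loop gain constant. All values are illustrative.
a, kp_true = 0.9, 1.0          # first-order process pole and (time-varying) gain
kc_nom, ti = 0.5, 10.0         # nominal PI tuning for unit process gain
mu = 0.5                       # step size of the gain estimator

y = i_term = 0.0
kp_hat, sp = 1.0, 1.0          # gain estimate and setpoint
hist = []

for k in range(600):
    if k == 300:
        kp_true = 3.0                          # abrupt change in process gain

    e = sp - y
    kc = kc_nom / max(kp_hat, 0.2)             # rescale the controller gain
    i_term += (kc / ti) * e
    u = kc * e + i_term                        # PI control action

    y_next = a * y + (1 - a) * kp_true * u     # "true" process step

    # Normalized-LMS estimate of the process gain from the prediction error.
    phi = (1 - a) * u
    err = y_next - (a * y + phi * kp_hat)
    kp_hat += mu * phi * err / (1e-6 + phi * phi)

    y = y_next
    hist.append((y, kp_hat))

print("final output and estimated gain:", hist[-1])
```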

    Enabling low cost test and tuning of difficult-to-measure device specifications: application to DC-DC converters and high speed devices

    Low-cost test and tuning methods for difficult-to-measure specifications are presented in this research from the following perspectives: 1) "Safe" test and self-tuning for power converters: To avoid the risk of damage to the device under test (DUT) during conventional load/line regulation measurement on a power converter, a "safe" alternate test structure is developed in which the power converter (boost/buck converter) is placed in a different mode of operation during the alternative test (light switching load) as opposed to the standard test (heavy switching load), to prevent damage to the DUT during manufacturing test. Based on the alternative test structure, self-tuning methods for both boost and buck converters are also developed in this thesis. In addition, to make these test structures suitable for on-chip built-in self-test (BIST) applications, a special sensing circuit has been designed and implemented. Stability analysis filters and appropriate models are also implemented to predict the DUT’s electrical stability condition during test and to further predict the values of the tuning knobs needed for the tuning process. 2) High-bandwidth RF signal generation: Up-conversion has been widely used in high-frequency RF signal generation, but mixer nonlinearity results in signal distortion that is difficult to eliminate with such methods. To address this problem, a framework for low-cost, high-fidelity wideband RF signal generation is developed in this thesis. Depending on the band-limited target waveform, the input data for a system of two interleaved DACs (digital-to-analog converters) is optimized by a matrix-model-based algorithm in such a way that it minimizes the distortion between one of its image replicas in the frequency domain and the target RF waveform within a specified signal bandwidth. The approach is used to demonstrate how interferers with specified frequency characteristics can be synthesized at low cost for interference testing of RF communication systems. The frameworks presented in this thesis have a significant impact in enabling low-cost test and tuning of difficult-to-measure device specifications for power converters and high-speed devices.
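
    One way to read the matrix-model-based optimization is as a linear least-squares problem: model the interleaved-DAC system as a linear operator acting on the input samples, and solve for the samples that best reproduce the target spectrum on the in-band frequency bins. The sketch below sets this up for a toy two-way interleaved system at baseband; the per-path responses, band, and signal sizes are illustrative assumptions, not the thesis' model, and the image-replica selection of the real system is not modeled.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy two-way interleaved DAC model: even samples pass through response h0,
# odd samples through h1 (a simple mismatch model). The system is written as a
# linear operator y = H x, and the input samples x are found by least squares
# so the output matches the target spectrum on the in-band DFT bins.
N = 256
h = [np.array([1.0, 0.2, -0.05]), np.array([0.9, 0.3, 0.05])]

H = np.zeros((N, N))
for n in range(N):
    for j, tap in enumerate(h[n % 2]):
        if n - j >= 0:
            H[n, n - j] = tap

# Band-limited target waveform (multitone) and the DFT bins it occupies.
bins = np.arange(21, 60)
t = sum(np.cos(2 * np.pi * k * np.arange(N) / N + rng.uniform(0, 2 * np.pi))
        for k in bins)
band = np.zeros(N, dtype=bool)
band[bins] = True
band[N - bins] = True                       # include the conjugate bins

F = np.fft.fft(np.eye(N))                   # DFT matrix
A = F[band] @ H                             # input samples -> in-band output spectrum
b = F[band] @ t                             # desired in-band spectrum

x, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm least-squares input
x = np.real(x)                              # imaginary part is numerically negligible

resid = np.linalg.norm(F[band] @ (H @ x) - b) / np.linalg.norm(b)
print(f"relative in-band spectral error: {resid:.2e}")
```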