35 research outputs found

    Design of event-driven automatic gain control and high-speed data path for multichannel optical receiver arrays

    The internet has become the ubiquitous tool that has transformed the lives of all of us. New broadband applications in the fields of entertainment, commerce, industry, healthcare and social interaction demand increasingly high data rates and quality from the networks and ICT infrastructure. In addition, high-definition video streaming and cloud services will continue to push the demand for bandwidth. These applications are reshaping the internet into a content-centric network. The challenge is to transform the telecom optical networks and data centers such that they can be scaled efficiently and at low cost. Furthermore, from both an environmental and an economic perspective, this scaling should go hand in hand with reduced power consumption, driven by the desire to reduce CO2 emissions and network operating costs while offering the same service level as today.

    In the current architecture of the internet, end-users connect to the public network through the access network of an internet service provider (ISP). Today, this access network either reuses the legacy copper or coaxial network or uses passive optical network (PON) technologies, among which the PON is the most energy efficient and provides the highest data rates. Traffic from the access network is aggregated with Ethernet switches and routed to the core network through the provider edge routers, with broadband network gateways (BNGs) to regulate access and usage. These regional links are collectively called the metro network. Data centers connect to the core network using their own dedicated gateway routers.

    The problem of increasing data rates while reducing the economic and environmental impact has attracted considerable attention. The research described in this work has been performed in the context of two projects within the European Union Seventh Framework Programme (FP7), which both aim for higher data rates and tight integration while keeping power consumption low. Mirage targets data center applications, while C3PO focuses on medium-reach networks such as the metro network. Specifically, this research considers two aspects of the high-speed optical receivers used in these communication networks: increasing the dynamic range of a linear receiver for multilevel modulation through automatic gain control (AGC), and integrating multiple channels on a single chip with a small area footprint.

    The data centers of today are high-density computing facilities that provide storage, processing and software as a service to the end-user. They consist of gateway routers, a local area network, servers and storage, all organized in racks; the largest units contain over 100 000 servers. The major challenges for data centers are scalability and keeping up with increasing amounts of traffic while reducing power consumption (of the devices as well as the associated cooling) and keeping costs minimal. Presently, racks are primarily interconnected with active optical cables (AOCs), which employ signal rates up to 25 Gb/s per lane with non-return-to-zero (NRZ) modulation. A number of technological developments can be employed in the AOCs of the future to provide terabit-capacity optical interconnects over longer distances. One such innovation is the use of multilevel modulation formats, which are more bandwidth-efficient than traditional NRZ modulation. Multilevel modulation requires a linear amplifier as the front-end of the optical receiver.
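    As a back-of-the-envelope illustration of why multilevel formats are more bandwidth-efficient, the Python sketch below maps a bit stream onto four-level (PAM-4) symbols: two bits per symbol halve the symbol rate relative to NRZ at the same bit rate. The Gray mapping and normalized levels are illustrative assumptions, not taken from the dissertation.

```python
# Illustrative sketch: PAM-4 carries 2 bits/symbol, so at a fixed bit rate the
# symbol (baud) rate is half that of NRZ. Levels and mapping are assumptions.

# Gray-coded mapping of bit pairs to four equally spaced, normalized levels.
PAM4_LEVELS = {(0, 0): -1.0, (0, 1): -1/3, (1, 1): 1/3, (1, 0): 1.0}

def pam4_modulate(bits):
    """Map an even-length bit sequence to PAM-4 symbol amplitudes."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_modulate(bits)
print(symbols)                       # 4 symbols carry 8 bits
print(len(bits) / len(symbols))      # 2.0 bits/symbol -> half the NRZ baud rate
```

    Because the four levels must stay equally spaced at the receiver, the front-end amplifier must be linear, which is what motivates the AGC work summarized below.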
    The greater part of this dissertation discusses the design and implementation of an AGC system for the data path of a linear transimpedance amplifier (TIA).

    The metro network is the intermediate regional network between the access and core networks of the internet architecture, with link lengths up to 500 km. It is estimated that metro traffic will increase massively in the near future. This growth is attributed mainly to increasing traffic from content delivery networks (CDNs) and data centers, which bypass the core network and connect directly to the metro network; internet video growth is the major driver of this increase. This evolution demands increasingly high data rates. Today, dense wavelength division multiplexing (DWDM) is widely recognized as necessary to provide data capacity scaling for future optical networks, as it allows much higher combined data rates over a single fiber. At the receiver, each wavelength of the demultiplexed incoming light is coupled to a photo diode in a photo diode array, which is connected to a dedicated lane of a multichannel receiver. The high number of channels requires small physical channel spacing and tight integration of the diode array with the receiver. In addition, active cooling should be avoided, so power consumption per receiver lane must be kept low to stay within thermal operating limits. The second component of this work presents the development of an integrated four-channel receiver, targeting a 4 × 25 Gb/s data rate, with low power consumption and a small footprint to support tight integration with a p-i-n photo diode array with a 250 µm channel pitch.

    Chapter 1 discusses the impact of increasing data rates and the desire to reduce power consumption on the design of the optical receiver component, in wide metropolitan area networks as well as in short-reach point-to-point links in data centers. In addition, some aspects of integrated analog circuit design are highlighted: the design flow, transistor hand models and a software design tool. An overview of the process technology is also given. Chapter 2 provides essential optical receiver concepts, which are required to understand the remainder of the work. Fundamentals of feedback AGC systems are discussed in the first part of Chapter 3. A basic system model is presented in the continuous-time domain, in which the variable gain amplifier (VGA) constitutes the multistage data path of a linear optical receiver. To enable reliable reception of multilevel modulation formats, the VGA requires a controlled frequency response and, in particular, limited time-domain overshoot across the gain range. It is argued that this control is hard to achieve with fully analog building blocks. Therefore, an event-driven approach is proposed as an extension of the continuous-time system. Both the structural and behavioral aspects are discussed. The result is a system model of a quantized AGC loop, upon which the system-level design, presented in Chapter 4, is based. In turn, Chapter 5 discusses the detailed implementation of the various building blocks at the circuit level and presents experimental results that confirm the feasibility of the proposed approach. Chapter 6 discusses the design and implementation of a 4 × 25 Gb/s optical receiver array for NRZ modulation with a small area footprint. The focus lies on the input stages, and techniques to extend bandwidth and dynamic range are presented.
    Measurement results for NRZ and optical duobinary (ODB) modulation are presented, as well as the influence of crosstalk on the performance. Finally, Chapter 7 provides an overview of the foremost conclusions of the presented research and includes suggestions for future research. Two appendices are included. Appendix A gives an overview of the general network theorem (GNT), which is used throughout this work and which has been implemented numerically. The results from Appendix B, the analysis of a two-stage opamp compensated with capacitance multipliers, were used to design a building block for the AGC system.
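    The quantized, event-driven AGC loop developed in Chapters 3 through 5 lends itself to a compact behavioral model. The Python sketch below is a minimal discrete-time rendering of such a loop, assuming a peak detector, a window comparator and a gain word that is only updated when the detected output amplitude leaves a tolerance window; the structure, names and thresholds are illustrative assumptions rather than the dissertation's implementation.

```python
# Minimal behavioral sketch of an event-driven (quantized) AGC loop.
# Assumed structure: peak detector -> window comparator -> up/down gain word.
# Target, tolerance and step size are illustrative, not from the thesis.

def event_driven_agc(input_peaks, target=1.0, tol=0.1, step_db=0.5,
                     gain_db=0.0, gain_range_db=(0.0, 30.0)):
    """Update a quantized gain word only on 'events', i.e. when the VGA
    output peak leaves the tolerance window; return the gain trajectory."""
    trajectory = []
    for peak in input_peaks:
        out = peak * 10 ** (gain_db / 20)        # VGA output amplitude
        if out > target * (1 + tol):             # event: output too large
            gain_db = max(gain_range_db[0], gain_db - step_db)
        elif out < target * (1 - tol):           # event: output too small
            gain_db = min(gain_range_db[1], gain_db + step_db)
        # no event: gain word (and thus frequency response) stays frozen
        trajectory.append(gain_db)
    return trajectory

# A step decrease in optical input power: the loop walks the gain word up.
print(event_driven_agc([0.5] * 5 + [0.05] * 20))
```

    Freezing the gain between events is what keeps the frequency response, and hence the time-domain overshoot, bounded across the gain range, which is exactly what a fully analog loop struggles to guarantee.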

    Design and debugging of multi-step analog to digital converters

    With the fast advancement of CMOS fabrication technology, more and more signal-processing functions are implemented in the digital domain for lower cost, lower power consumption, higher yield and higher re-configurability. The trend of increasing integration for integrated circuits has forced the A/D converter interface to reside on the same silicon in complex mixed-signal ICs containing mostly digital blocks for DSP and control. However, specifications of the converters in various applications emphasize high dynamic range and low spurious spectral performance. It is nontrivial to achieve this level of linearity in a monolithic environment, where post-fabrication component trimming or calibration is cumbersome to implement for certain applications and/or for cost and manufacturability reasons. Additionally, as CMOS integrated circuits reach unprecedented integration levels, potential problems associated with device scaling - the short-channel effects - loom large as technology strides into the deep-submicron regime.

    The A/D conversion process involves sampling the applied analog input signal and quantizing it to its digital representation by comparing it to reference voltages before further signal processing in subsequent digital systems. Depending on how these functions are combined, different A/D converter architectures can be implemented, with different requirements on each function. Practical realizations show that, to first order, converter power is directly proportional to sampling rate. However, the required power dissipation becomes nonlinear as the speed capabilities of a process technology are pushed to the limit. Pipeline and two-step/multi-step converters tend to be the most efficient at achieving a given resolution and sampling rate specification.

    This thesis is in a sense unique work, as it covers the whole spectrum of design, test, debugging and calibration of multi-step A/D converters; it incorporates the development of circuit techniques and algorithms to enhance the resolution and attainable sample rate of an A/D converter, and to enhance testing and debugging potential to detect errors dynamically, to isolate and confine faults, and to recover and compensate for the errors continuously. The power efficiency attainable at high resolution in a multi-step converter, by combining parallelism and calibration and exploiting low-voltage circuit techniques, is demonstrated with a 1.8 V, 12-bit, 80 MS/s, 100 mW analog-to-digital converter fabricated in a five-metal-layer 0.18 µm CMOS process.

    Lower power supply voltages significantly reduce noise margins and increase variations in process, device and design parameters. Consequently, it is steadily more difficult to control the fabrication process precisely enough to maintain uniformity. Microscopic particles present in the manufacturing environment and slight variations in the parameters of manufacturing steps can all cause the geometrical and electrical properties of an IC to deviate from those generated at the end of the design process. These defects can cause various types of malfunction, depending on the IC topology and the nature of the defect. To relieve the burden placed on IC design and manufacturing by the ever-increasing costs associated with testing and debugging of complex mixed-signal electronic systems, several circuit techniques and algorithms are developed and incorporated in the proposed ATPG, DfT and BIST methodologies.
    Process variation cannot be solved by improving manufacturing tolerances; variability must be reduced by new device technology or managed by design in order for scaling to continue. Similarly, within-die performance variation imposes new challenges for test methods. With the use of dedicated sensors, which exploit knowledge of the circuit structure and the specific defect mechanisms, the method described in this thesis facilitates early and fast identification of excessive process parameter variation effects. The expectation-maximization algorithm makes the estimation problem more tractable and also yields good parameter estimates for small sample sizes. To guide testing with the information obtained by monitoring process variations, an adjusted support vector machine classifier is implemented that simultaneously minimizes the empirical classification error and maximizes the geometric margin.

    On a positive note, the use of digital enhancement and calibration techniques reduces the need for expensive technologies with special fabrication steps. Indeed, the extra cost of digital processing is normally affordable, as the use of submicron mixed-signal technologies allows efficient usage of silicon area even for relatively complex algorithms. The adaptive filtering algorithm employed for error estimation requires a small number of operations per iteration and requires neither correlation-function calculations nor matrix inversions. The presented foreground calibration algorithm does not need any dedicated test signal and does not take up part of the conversion time; it works continuously and with every signal applied to the A/D converter. The feasibility of the method for on-line and off-line debugging and calibration has been verified by experimental measurements on a silicon prototype fabricated in a standard single-poly, six-metal 0.09 µm CMOS process.
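    The adaptive filtering algorithm described above, with few operations per iteration and neither correlation functions nor matrix inversions, is characteristic of the LMS family. The sketch below shows a plain LMS loop estimating the gain error of a converter stage from its residue; the signal model, the step size and the availability of an ideal reference sample are simplifying assumptions for illustration, not the thesis's actual calibration path.

```python
# Illustrative LMS-style error estimation: O(1) operations per sample and no
# correlation-matrix computation or inversion. The stage model, step size and
# ideal reference are assumptions made for this toy example.
import random

true_gain_error = 0.03      # unknown interstage gain error to be estimated
mu = 0.01                   # LMS step size
estimate = 0.0

for _ in range(5000):
    x = random.uniform(-1.0, 1.0)           # ideal sample (reference path)
    observed = (1 + true_gain_error) * x    # stage output with gain error
    corrected = observed - estimate * x     # digital correction attempt
    e = corrected - x                       # residual error signal
    estimate += mu * e * x                  # LMS update

print(f"estimated gain error: {estimate:.4f}")   # converges near 0.03
```

    In a real converter the reference sample would come from a redundant or slower reference path rather than being known exactly, but the update rule and its per-sample cost are the same.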

    High-Performance FPAA Design for Hierarchical Implementation of Analog and Mixed-Signal Systems

    The design complexity of today's ICs has increased dramatically due to the high integration allowed by advanced CMOS VLSI processes. A key to managing the increased design complexity while meeting the shortening time-to-market is design automation. In the digital world, field-programmable gate arrays (FPGAs) have evolved to play a very important role by providing ASIC-compatible design methodologies that include design-for-testability, design optimization and rapid prototyping. On the analog side, the drive towards shorter design cycles has demanded the development of high-performance analog circuits that are configurable and suitable for CAD methodologies. Field-programmable analog arrays (FPAAs) are intended to bring to analog system design the benefits that FPGAs have brought to the digital field.

    Despite the obvious advantages of hierarchical analog design, namely short time-to-market and low non-recurring engineering (NRE) costs, this approach has some apparent disadvantages. The redundant devices and routing resources for programmability require extra chip area, while switch and interconnect parasitics cause considerable performance degradation. To deliver a high-performance FPAA, effective methodologies must be developed to minimize these adverse effects. In this dissertation, three important aspects of FPAA design are studied to achieve that goal: the programming technology, the configurable analog block (CAB) design and the routing architecture design. Enabled by the Laser Makelink™ technology, which provides nearly ideal programmable switches, channel segmentation algorithms are developed to improve channel routability and reduce interconnect parasitics. Segmented routing is studied, and performance metrics accounting for interconnect parasitics are proposed for performance-driven analog routing. For large-scale arrays, buffer insertion is considered to further reduce interconnect delay and cross-coupling noise. A high-performance, highly flexible CAB is developed to realize both continuous-mode and switched-capacitor circuits. Finally, the implementation of an 8-bit, 50 MSPS pipelined A/D converter using the proposed FPAA is presented as an example of the hierarchical analog design approach, with its key performance specifications discussed.
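    One way to see how switch parasitics motivate channel segmentation is through a first-order Elmore-delay estimate of a routed net: every programmable switch adds series resistance, and every segment adds capacitance. The sketch below computes that estimate for a chain of routing sections; the RC values are made-up illustrative numbers, and the metric is the generic Elmore formulation rather than the dissertation's exact performance metric.

```python
# First-order Elmore delay of a routed net modeled as a chain of
# (switch resistance, segment capacitance) sections. Values are illustrative.

def elmore_delay(sections):
    """sections: list of (R_ohms, C_farads) per routing section, driver first.
    Elmore delay = sum over nodes of (total upstream R) * (node C)."""
    delay, r_upstream = 0.0, 0.0
    for r, c in sections:
        r_upstream += r            # resistance accumulated from the driver
        delay += r_upstream * c    # each capacitance sees all upstream R
    return delay

# Fine-grained routing: 4 short segments -> 4 series switches in the path.
many_short = [(500.0, 20e-15)] * 4
# Segmented routing: 1 long segment -> 1 switch but more wire capacitance.
one_long = [(500.0, 90e-15)]

print(f"4 short segments: {elmore_delay(many_short) * 1e12:.0f} ps")
print(f"1 long segment  : {elmore_delay(one_long) * 1e12:.0f} ps")
```

    Under this kind of metric, fewer but longer segments trade extra wire capacitance against fewer series switches, which is the tradeoff the channel segmentation algorithms optimize.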

    Architecture, Modeling, and Analysis of a Plasma Impedance Probe

    Variations in ionospheric plasma density can cause large amplitude and phase changes in radio waves passing through this region. Ionospheric weather can have detrimental effects on several communication systems, including radars, navigation systems such as the Global Positioning System (GPS), and high-frequency communications. As a result, creating models of the ionospheric density is of paramount interest to scientists working in the field of satellite communication. Numerous empirical and theoretical models have been developed to study upper-atmosphere climatology and weather. Multiple measurements of plasma density over a region are of marked importance when creating these models. The lack of spatially distributed observations in the upper atmosphere is currently a major limitation in space weather research. A constellation of CubeSat platforms would be ideal for taking such distributed measurements, and the use of miniaturized instruments that can be accommodated on small satellites, such as CubeSats, is key to achieving these science goals for space weather.

    The accepted instrumentation techniques for measuring electron density are Langmuir probes and the Plasma Impedance Probe (PIP). While Langmuir probes provide higher-resolution measurements of relative electron density, Plasma Impedance Probes provide absolute electron density measurements irrespective of spacecraft charging. The central goal of this dissertation is to develop an integrated architecture for the PIP that will enable space weather research from CubeSat platforms. The proposed PIP chip integrates all of the major analog and mixed-signal components needed to perform swept-frequency impedance measurements. The design's primary innovation is the integration of matched analog-to-digital converters (ADCs) on a single chip for sampling the probe's current and voltage signals. A Fast Fourier Transform (FFT) is performed by an off-chip Field-Programmable Gate Array (FPGA) to compute the probe's impedance. This provides a robust solution for determining the plasma impedance accurately.

    The major analog errors and parametric variations affecting the PIP instrument, and their effect on the accuracy and precision of the impedance measurement, are also studied. The system clock is optimized in order to obtain a high-performance ADC. In this research, an alternative clock generation scheme using C-elements is described that reduces the timing jitter and reference spurs in phase-locked loops. While the jitter performance and reference spur reduction are comparable with prior state-of-the-art work, the proposed Phase-Locked Loop (PLL) consumes less power and occupies a smaller area than previous designs.
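    The measurement principle behind the PIP chip, sampling the probe's voltage and current with matched ADCs and computing the impedance in an FPGA via an FFT, reduces to Z(f) = V(f)/I(f) at each excitation frequency. The NumPy sketch below illustrates that computation on synthetic data; the R-C test load, sample rate and excitation frequency are assumptions chosen so the tone lands exactly on an FFT bin.

```python
# Sketch of one point of a swept-frequency impedance measurement: sample v(t)
# and i(t), FFT both, and take Z = V(f)/I(f) at the excitation bin.
# The synthetic R-C load and sampling parameters are illustrative assumptions.
import numpy as np

fs, n = 1_024_000.0, 4096            # sample rate (Hz) and FFT length
f_exc = 50_000.0                     # excitation tone (exactly on bin 200)
t = np.arange(n) / fs

r, c = 1_000.0, 1e-9                 # assumed load: R in series with C
z_true = r + 1 / (1j * 2 * np.pi * f_exc * c)

i_t = 1e-3 * np.cos(2 * np.pi * f_exc * t)                         # current
v_t = np.real(z_true * 1e-3 * np.exp(1j * 2 * np.pi * f_exc * t))  # voltage

k = round(f_exc * n / fs)            # FFT bin index of the excitation
z_est = np.fft.rfft(v_t)[k] / np.fft.rfft(i_t)[k]
print(f"|Z| = {abs(z_est):.0f} ohm, phase = {np.degrees(np.angle(z_est)):.1f} deg")
```

    Because voltage and current are divided bin by bin, matched, simultaneously sampled ADCs matter: any gain or timing mismatch between the two channels appears directly as impedance error, which is why the analog errors of the instrument are studied in detail.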

    Concepts for smart AD and DA converters

    This thesis studies the 'smart' concept for application to analog-to-digital and digital-to-analog converters. The smart concept aims at improving performance - in a wide sense - of AD/DA converters by adding on-chip intelligence to extract imperfections and to correct for them. As the smart concept can correct for certain imperfections, it can also enable the use of more efficient architectures, thus yielding an additional performance boost.

    Chapter 2 studies trends and expectations in converter design with respect to applications, circuit design and technology evolution. Problems and opportunities are identified, and an overview of performance criteria is given. Chapter 3 introduces the smart concept that takes advantage of the expected opportunities (described in chapter 2) in order to solve the anticipated problems. Chapter 4 applies the smart concept to digital-to-analog converters. In the discussed example, the concept is applied to reduce the area of the analog core of a current-steering DAC. It is shown that a sub-binary variable-radix approach reduces the area of the current-source elements substantially (10x compared to the state of the art), while maintaining accuracy through a self-measurement and digital pre-correction scheme. Chapter 5 describes the chip implementation of the sub-binary variable-radix DAC and discusses the experimental results. The results confirm that the sub-binary variable-radix design achieves the smallest published current-source-array area for the given accuracy (12 bit).

    Chapter 6 applies the smart concept to analog-to-digital converters, with the main goal of improving overall performance in terms of a widely used figure of merit. Open-loop circuitry and time interleaving are shown to be key to achieving high-speed, low-power solutions. It is suggested to apply a smart approach to reduce the effect of the imperfections unintentionally caused by these key factors. At a high level, a global picture of the smart solution is proposed that can solve the problems while still maintaining power efficiency. Chapter 7 deals with the design of a 500 MS/s open-loop track-and-hold circuit. This circuit is used as a test case to demonstrate the proposed smart approaches. Experimental results are presented and compared against prior art. Though there are several limitations in the design and the measurement setup, the measured performance is comparable to the existing state of the art. Chapter 8 introduces the first calibration method, which counteracts the accuracy issues of the open-loop track-and-hold. A description of the method is given, and the implementation of the detection algorithm and correction circuitry is discussed. The chapter concludes with experimental measurement results. Chapter 9 introduces the second calibration method, which targets the accuracy issues of time-interleaved circuits, in this case a 2-channel version of the implemented track-and-hold. The detection method, processing algorithm and correction circuitry are analyzed and their implementation is explained. Experimental results verify the usefulness of the method.
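    The sub-binary variable-radix idea can be made concrete with a small sketch: when element weights grow by a radix below 2, adjacent codes overlap, so once the actual element values are known from self-measurement, a digital encoder can still hit every level despite mismatch. The greedy encoder below is a generic illustration of such digital pre-correction; the radix, mismatch model and code width are assumptions, not the implemented chip's parameters.

```python
# Generic sketch of sub-binary (radix < 2) DAC encoding with digital
# pre-correction: overlapping weights give redundancy, so the *measured*
# element values can represent any target despite mismatch.
# Radix, mismatch sigma and width are illustrative assumptions.
import random

RADIX, N = 1.8, 12
nominal = [RADIX ** i for i in range(N)]                         # sub-binary weights
measured = [w * (1 + random.gauss(0.0, 0.02)) for w in nominal]  # from self-test

def precorrect(target, weights):
    """Greedy MSB-first selection using the measured weights."""
    code, remaining = 0, target
    for i in reversed(range(len(weights))):
        if weights[i] <= remaining:
            code |= 1 << i
            remaining -= weights[i]
    return code, remaining           # remaining = residual encoding error

full_scale = sum(measured)
code, err = precorrect(0.4321 * full_scale, measured)
print(f"code = {code:0{N}b}, residual = {err / full_scale:.2e} of full scale")
```

    The redundancy is what buys the area saving: the current sources no longer need binary matching accuracy, because the encoder absorbs their measured errors.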

    Virtual Prototyping Methodology for Power Automation Cyber-Physical-Systems

    In this thesis, the author proposes a circular system development model that considers all the stages in a typical development process for industrial systems. In particular, the present work shows that the use of virtual prototyping at early stages of system development may reduce the overall design and verification effort by allowing exploration of the complete system architecture and uncovering integration issues early on. The modeling techniques of this research are based on VHDL-AMS, while also supporting other modeling languages such as C/C++, SPICE and Verilog-AMS, together with integrated simulation tools. In contrast with conventional approaches, it is shown that the proposed methodology is adapted to small-scale Cyber-Physical System (CPS) design and verification thanks to the modularity and scalability of the modeling approach. The proposed modeling techniques seamlessly enable CPS design together with the implementation of their subsystems. In particular, the contribution of this work improves the virtual prototyping approach that has been successfully used during the development of smart electrical sensors and monitoring equipment for high- and medium-voltage applications. The design of the measurement and self-calibration circuits of a medium-voltage current sensor based on the Rogowski coil transducer is presented as an example.

    The proposed small-scale CPS design methodology based on virtual prototyping, namely the VP-based design methodology, uses important theoretical concepts from layered design, component-based design and platform-based design. These foundations are the basis of a modeling methodology that provides a vehicle for improving system verification towards correct-by-design systems. The main contributions of this research are: the re-definition of the system development lifecycle through a virtual prototyping methodology; the design and implementation of a model library that maximizes the reuse of computational models and their related IP; and a set of VHDL-AMS modeling guidelines established with the purpose of improving the modularity and scalability of virtual prototypes. These elements are key to supporting the introduction of virtual prototyping into industrial companies that can thoroughly profit from this approach, but cannot commit a specific team to the creation, support and maintenance of computational models and their dedicated infrastructure. Thanks to the progressive nature of the proposed methodology, virtual prototypes can be introduced with relatively low initial effort and enhanced over time. The presented methodology and its infrastructure may grow into a bidirectional communication medium between non-expert system designers (i.e. system architects and virtual integrators) and domain specialists such as mechanical designers, power electrical designers, embedded-electronics designers and software designers. The proposed design methodology advocates reducing CPS design complexity through a meet-in-the-middle approach to system-level modeling. In this direction, the modeling techniques introduced in this work facilitate architectural design space exploration, critical cross-domain variable analysis (especially important at component interfaces), and system-level optimization and verification.
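    As a flavor of the behavioral models such a virtual prototype is built from, consider the Rogowski coil transducer mentioned above: it outputs a voltage proportional to the derivative of the measured current, so the sensor electronics must integrate that voltage to recover the current waveform. The Python sketch below is a minimal discrete-time model of that chain; the mutual inductance, sample rate and ideal integrator are illustrative assumptions, not the product's actual signal path (which the thesis models in VHDL-AMS).

```python
# Minimal behavioral model of a Rogowski coil measurement chain:
# v_coil(t) = -M * di/dt, so integrating v_coil and scaling by -1/M
# recovers the current. M, fs and the test current are assumptions.
import math

M, fs = 0.3e-6, 100_000.0            # mutual inductance (H), sample rate (Hz)
dt = 1.0 / fs
i_true = [100.0 * math.sin(2 * math.pi * 50.0 * k * dt) for k in range(2000)]

# Coil output: discrete derivative of the primary current, scaled by -M.
v_coil = [-M * (i_true[k + 1] - i_true[k]) / dt for k in range(len(i_true) - 1)]

# Reconstruction: running integration, then scaling by -1/M.
i_rec, acc = [], 0.0
for v in v_coil:
    acc += v * dt
    i_rec.append(-acc / M)

print(f"peak true: {max(i_true):.1f} A, reconstructed: {max(i_rec):.1f} A")
```

    In the real sensor the integrator drifts and the coil has parasitics, which is precisely the kind of cross-domain behavior the virtual prototype is meant to expose before silicon and hardware are committed.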

    Hardware Learning in Analogue VLSI Neural Networks


    Validation and optimization of analog circuits using randomized search algorithms

    Analog circuits represent a large percentage of the chips used in mobile computing, communication devices, electric vehicles and portable medical equipment today. Rapid scaling and shrinking chip geometries introduce challenging new problems in the verification, validation and optimization of analog circuits. These problems include test generation and compression, runtime monitoring, and analysis of worst-case behaviors. State-of-the-art Monte Carlo techniques are unable to address these problems effectively. Consequently, an efficient and scalable CAD algorithm to address such problems is highly desirable.

    In this thesis, we introduce Duplex, a methodology for search and optimization. Duplex supports optimizing nonconvex nonlinear functions and functionals, and we use it to solve problems in analog validation and machine learning. Duplex uses random tree data structures; it is based on partitioning the problem space into multiple smaller spaces, such as the input, state and function spaces, and it simultaneously controls, biases and monitors the growth of the random trees in the partitioned spaces. We have used the Duplex framework to solve practical problems in analog and mixed-signal validation, including directed input stimulus generation, compression of analog stress tests, worst-case eye diagram analysis, performance optimization, machine learning, and monitoring of the runtime behaviors of analog circuits.

    We used Duplex for the validation and optimization of analog circuits. Duplex automatically generates input stimuli that expose bugs and improve coverage. It automatically finds input corners that result in worst-case eye diagrams, and it simultaneously explores the parameter and performance spaces of analog circuits to optimize a circuit for best performance. We monitored the random trees and circuit execution against specification properties described in formal languages. We formulated many challenging problems in analog circuits, such as test compression and eye diagram analysis, as functional optimization problems and use Duplex to solve them.

    We propose the Duplex algorithm as a general optimization algorithm, positioning the framework for other domains. Duplex can address nonlinear and functional optimization problems in continuous and discrete spaces, such as design-space exploration and supervised and unsupervised machine learning. The advantages of the Duplex framework are efficiency, scalability and versatility. We consistently show orders-of-magnitude speedups over the state of the art while objectively improving the quality of results. For generating input stimuli, Duplex is the first technique that simultaneously performs directed input stimulus generation and increases test coverage; we show over two orders of magnitude speedup over Monte Carlo simulations. For runtime monitoring, we check a large scalable circuit against a very expressive set of formal properties that were not possible to monitor before. For generating worst-case eye diagrams, we show at least 20× speedup and better quality of results in comparison to the state of the art. Duplex is the first work to provide transient test compression for analog circuits; we compress stress tests by up to 96%. We optimize analog circuits using Duplex and show speedups and improved results with respect to the state of the art. We use Duplex to train supervised and unsupervised models and show improved accuracy in all cases.
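    Duplex's core mechanism, growing and biasing random trees over partitioned spaces, belongs to the same family as rapidly-exploring random trees (RRTs). The sketch below is a generic RRT-style search used to hunt for the worst-case (maximum) response of a black-box circuit model over a 2-D input space; the objective function, step size and bounds are illustrative assumptions, and the sketch is a family resemblance, not the Duplex algorithm itself.

```python
# Generic RRT-style randomized tree search for a worst-case (maximum)
# response of a black-box model. Illustrates the family of methods Duplex
# builds on, not Duplex itself; objective and parameters are assumptions.
import math
import random

def response(x, y):
    """Stand-in for a simulated circuit metric (e.g. eye closure vs. inputs)."""
    return math.sin(3 * x) * math.cos(2 * y) + 0.1 * x

def rrt_maximize(iters=2000, step=0.1, lo=-2.0, hi=2.0):
    tree = [(0.0, 0.0)]                              # root of the random tree
    best = (response(0.0, 0.0), (0.0, 0.0))
    for _ in range(iters):
        xr, yr = random.uniform(lo, hi), random.uniform(lo, hi)  # random goal
        # grow from the nearest tree node a small step toward the random goal
        nx, ny = min(tree, key=lambda p: (p[0] - xr) ** 2 + (p[1] - yr) ** 2)
        d = math.hypot(xr - nx, yr - ny) or 1.0
        new = (nx + step * (xr - nx) / d, ny + step * (yr - ny) / d)
        tree.append(new)
        best = max(best, (response(*new), new))
    return best

score, point = rrt_maximize()
print(f"worst-case response {score:.3f} at inputs ({point[0]:.2f}, {point[1]:.2f})")
```

    The tree-based exploration covers the space far more systematically than independent Monte Carlo samples, which is one intuition behind the speedups reported above.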

    Low-power Wearable Healthcare Sensors

    Advances in technology have produced a range of on-body sensors and smartwatches that can be used to monitor a wearer’s health with the objective of keeping the user healthy. However, the real potential of such devices lies not only in monitoring but also in interactive communication with expert-system-based cloud services to offer personalized and real-time healthcare advice that will enable users to manage their health and, over time, reduce expensive hospital admissions. To meet this goal, the research challenges for the next generation of wearable healthcare devices include the need to offer a wide range of sensing, computing, communication, and human–computer interaction methods, all within a tiny device with limited resources and electrical power. This Special Issue presents a collection of six papers on a wide range of research developments that highlight the specific challenges in creating the next generation of low-power wearable healthcare sensors.