
    Transmitter and Receiver Equalizers Optimization Methodologies for High-Speed Links in Industrial Computer Platforms Post-Silicon Validation

    As microprocessor designs scale to nanometric technologies, traditional post-silicon validation techniques are insufficient to achieve full system functional coverage. Physical complexity and extreme process variations make it challenging to guarantee performance over process, voltage, and temperature conditions. In addition, microprocessors contain an increasingly large number of mixed-signal circuits, many of which correspond to high-speed input/output (HSIO) links. Improvements in signaling methods, circuits, and process technology have allowed HSIO data rates to scale beyond 10 Gb/s, where undesired effects can create multiple signal integrity problems. All of these elements make post-silicon validation of HSIO links difficult and time-consuming. One of the major challenges in the electrical validation of HSIO links lies in the physical layer (PHY) tuning process, where equalization techniques are used to cancel these undesired effects. Typical current industrial practices for PHY tuning require massive lab measurements, since they are based on exhaustive enumeration methods. In this work, direct and surrogate-based optimization methods, including space mapping, are proposed with suitable objective functions to efficiently tune the transmitter and receiver equalizers. The proposed methodologies are evaluated by lab measurements on realistic industrial post-silicon validation platforms, confirming a dramatic speed-up in PHY tuning and substantial performance improvement.
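The contrast between exhaustive enumeration and a direct optimization method can be sketched in a few lines. Everything below is illustrative: the two-tap search space and the synthetic eye-height objective are invented stand-ins for a real lab measurement.

```python
# Illustrative sketch only: compare exhaustive enumeration of two equalizer
# taps against a simple coordinate-descent search on a synthetic objective.
# A real flow would replace eye_height() with a lab measurement.

def eye_height(pre, post):
    """Stand-in objective: peaks at (pre, post) = (3, 5) on a 0..15 grid."""
    return 100.0 - (pre - 3) ** 2 - (post - 5) ** 2

def exhaustive(grid=range(16)):
    evals = 0
    best = (None, float("-inf"))
    for pre in grid:
        for post in grid:
            evals += 1
            h = eye_height(pre, post)
            if h > best[1]:
                best = ((pre, post), h)
    return best[0], evals

def coordinate_descent(start=(0, 0), grid=range(16)):
    pre, post = start
    evals = 0                       # counts candidate settings tried
    improved = True
    while improved:
        improved = False
        for axis in (0, 1):         # sweep one tap at a time
            for cand in grid:
                evals += 1
                trial = (cand, post) if axis == 0 else (pre, cand)
                if eye_height(*trial) > eye_height(pre, post):
                    pre, post = trial
                    improved = True
    return (pre, post), evals

best_full, n_full = exhaustive()
best_cd, n_cd = coordinate_descent()
print(best_full, n_full)   # (3, 5) after 256 "measurements"
print(best_cd, n_cd)       # (3, 5) after far fewer "measurements"
```

The same optimum is found with a fraction of the evaluations, which is the practical point when each evaluation is a lab measurement.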

    PCIe Gen5 Physical Layer Equalization Tuning by Using K-means Clustering and Gaussian Process Regression Modeling in Industrial Post-silicon Validation

    Peripheral component interconnect express (PCIe) is a high-performance interconnect architecture widely adopted in the computer industry. The continuously increasing bandwidth demand from new applications has led to the development of PCIe Gen5, reaching data rates of 32 GT/s. To mitigate undesired channel effects at such high data rates, the PCIe specification defines an equalization process at the transmitter (Tx) and the receiver (Rx). Current post-silicon validation practices consist of finding an optimal subset of Tx and Rx coefficients by measuring the eye diagrams across different channels. However, these experiments are very time-consuming since they require massive lab measurements. In this paper, we use a K-means approach to cluster all available post-silicon data from different channels and feed those clusters to a Gaussian process regression (GPR)-based metamodel for each channel. We then perform a surrogate-based optimization to obtain the optimal tuning settings for each specific channel. Our methodology is validated by measurements of the functional eye diagram of an industrial computer platform.
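The surrogate idea can be illustrated in miniature (skipping the K-means clustering step): fit a small hand-rolled GPR model to a handful of synthetic (setting, eye-height) measurements, then let the surrogate, rather than the lab, rank the remaining candidate settings. All names and numbers here are invented.

```python
import numpy as np

# Hedged sketch, not the paper's production flow: a one-dimensional GPR
# surrogate (RBF kernel, interpolation of centered targets) trained on
# five "measured" settings, then queried over the full candidate grid.

def rbf(a, b, ls=2.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

x_train = np.array([0.0, 3.0, 5.0, 8.0, 10.0])   # settings already measured
y_train = 100.0 - (x_train - 6.0) ** 2           # synthetic eye heights, peak at 6

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train - y_train.mean())

x_cand = np.linspace(0.0, 10.0, 101)             # every candidate setting
y_pred = rbf(x_cand, x_train) @ alpha + y_train.mean()  # GP posterior mean
best = float(x_cand[np.argmax(y_pred)])
print(best)   # surrogate's pick, close to the synthetic optimum
```

Only five "lab measurements" were spent, yet the surrogate ranks all 101 candidates; in the paper this role is played by a per-channel GPR metamodel fed with clustered post-silicon data.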

    Machine learning techniques and space mapping approaches to enhance signal and power integrity in high-speed links and power delivery networks

    Enhancing signal integrity (SI) and reliability in modern computer platforms heavily depends on the post-silicon validation of high-speed input/output (HSIO) links, which implies a physical layer (PHY) tuning process where equalization techniques are employed. At the same time, the interaction between SI and power delivery networks (PDN) is becoming crucial in the computer industry, imposing the need for computationally expensive models to also ensure power integrity (PI). In this paper, surrogate-based optimization (SBO) methods, including space mapping (SM), are applied to efficiently tune equalizers in HSIO links using lab measurements on industrial post-silicon validation platforms, speeding up the PHY tuning process while enhancing eye diagram margins. Two HSIO interfaces illustrate the proposed SBO/SM techniques: USB3 Gen 1 and SATA Gen 3. Additionally, a methodology based on parameter extraction is described to develop fast PDN lumped models for low-cost SI-PI co-simulation; a double data rate (DDR) memory sub-system illustrates this methodology. Finally, we describe a surrogate modeling methodology for efficient PDN optimization, comparing several machine learning techniques; a PDN voltage regulator with dual power rail remote sensing illustrates this last methodology.
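As a toy illustration of what a PDN lumped model computes, the snippet below sweeps the input impedance of a single series R-L-C decoupling branch and locates its series resonance. The component values are invented, not extracted from any real platform.

```python
import numpy as np

# Hypothetical one-branch PDN lumped model: package/board series parasitics
# (R_s, L_s) in series with one decoupling capacitor (C with ESR and ESL).
# |Z(f)| dips to R_s + ESR at the series resonance 1/(2*pi*sqrt(L_tot*C)).

R_s, L_s = 1e-3, 1e-9              # series path: 1 mohm, 1 nH (assumed)
C, esr, esl = 100e-6, 2e-3, 0.5e-9 # decap: 100 uF, 2 mohm ESR, 0.5 nH ESL

f = np.logspace(3, 9, 601)         # 1 kHz .. 1 GHz sweep
w = 2 * np.pi * f
z = (R_s + esr) + 1j * (w * (L_s + esl) - 1.0 / (w * C))
zmag = np.abs(z)

f_min = float(f[np.argmin(zmag)])
print(f_min, float(zmag.min()))    # resonance near 411 kHz, |Z| near 3 mohm
```

A fast lumped model like this (extended to many branches) is what makes SI-PI co-simulation cheap compared with a full-wave PDN model.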

    PAM4 Transmitter and Receiver Equalizers Optimization for High-Speed Serial Links

    As the telecommunications market evolves, the demand for faster data transfer and processing continues to increase. To meet this demand, the peripheral component interconnect express (PCIe) standard has increased its data rates from PCIe Gen 1 (4 Gb/s) to PCIe Gen 5 (32 Gb/s). This evolution has brought new challenges due to high-speed interconnect effects, which can cause data loss and intersymbol interference. Under these conditions, the traditional non-return-to-zero (NRZ) modulation scheme became a bottleneck due to bandwidth limitations in the high-speed interconnects. The 4-level pulse amplitude modulation (PAM4) scheme is being implemented in the next generation of PCIe (PCIe6), doubling the data rate without increasing the channel bandwidth. However, while PAM4 solves the bandwidth problem, it also brings new challenges in post-silicon equalization. Tuning the transmitter (Tx) and receiver (Rx) across different interconnect channels can be a very time-consuming task due to the multiple equalizers implemented in the serializer/deserializer (SerDes). Typical current industrial practices for SerDes equalizer tuning require massive lab measurements, since they are based on exhaustive enumeration methods, making the equalization process too lengthy and practically prohibitive under current silicon time-to-market commitments. In this master's dissertation, a numerical method is proposed to optimize the transmitter and receiver equalizers of a PCIe6 link. The experimental results, obtained in a MATLAB simulation environment, demonstrate the effectiveness of the proposed approach by delivering optimal PAM4 eye diagram margins while significantly reducing jitter.
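The basic PAM4 idea from the abstract can be shown directly: two Gray-coded bits map onto one of four amplitude levels, doubling the bits per symbol relative to NRZ at the same symbol rate. The mapping below follows a common Gray assignment, and the levels are normalized, not the PCIe 6.0-specified voltages.

```python
# Illustrative PAM4 modulator: two bits per symbol, Gray-coded so that
# adjacent amplitude levels differ by a single bit (limits bit errors when
# a symbol is mis-sliced into a neighboring level).

GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def pam4_modulate(bits):
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
symbols = pam4_modulate(bits)
print(symbols)   # [-3, -1, 1, 3]
```

Eight bits become four symbols, which is why PAM4 doubles throughput in the same channel bandwidth, at the cost of three stacked, smaller eyes that make equalizer tuning harder.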

    Applications of Broyden-based input space mapping to modeling and design optimization in high-tech companies in Mexico

    One of the most powerful and computationally efficient optimization approaches in RF and microwave engineering is the space mapping (SM) approach to design. SM optimization methods belong to the general class of surrogate-based optimization algorithms and specialize in the efficient optimization of computationally expensive models. This paper reviews the Broyden-based input SM algorithm, better known as aggressive space mapping (ASM), which is perhaps the SM variant with the most industrial applications. The two main characteristics that explain its popularity in industry and academia are emphasized in this paper: simplicity and efficiency. The fundamentals behind the Broyden-based input SM algorithm are described, highlighting key steps for its successful implementation, as well as situations where it may fail. Recent applications of the Broyden-based input space mapping algorithm in high-tech industries located in Mexico are briefly described, including application areas such as signal integrity and high-speed interconnect design, as well as post-silicon validation of high-performance computer platforms. Emerging applications in multi-physics interconnect design and power-integrity design optimization are also mentioned.
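A minimal sketch of the Broyden-based input SM (ASM) loop, assuming toy one-dimensional models in which the "fine" model is simply a shifted copy of the "coarse" model; the model functions, shift, and starting point are all invented for illustration:

```python
import math

def coarse(x):
    return (x - 2.0) ** 2          # cheap model, optimum at x = 2.0

def fine(x):
    return (x - 2.7) ** 2          # "expensive" model, true optimum at 2.7

x_star = 2.0                        # coarse-model optimum, found cheaply

def extract(fx):
    # Parameter extraction: the coarse input whose response equals the
    # fine response (branch x_c >= x_star keeps the inverse unique here).
    return x_star + math.sqrt(fx)

x, B = x_star, 1.0                  # start at the coarse optimum; B0 = identity
F = extract(fine(x)) - x_star       # misalignment residual x_c - x*
fine_evals = 1
while abs(F) > 1e-9:
    h = -F / B                      # quasi-Newton step: solve B*h = -F
    x_new = x + h
    F_new = extract(fine(x_new)) - x_star
    fine_evals += 1
    B = (F_new - F) / h             # 1-D Broyden (secant) update of B
    x, F = x_new, F_new

print(x, fine_evals)                # ~2.7 after only a few fine evaluations
```

Driving the parameter-extraction residual to zero with a cheap Broyden update, instead of optimizing the fine model directly, is exactly why ASM needs so few expensive evaluations.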

    System-Level Measurement-Based Design Optimization by Space Mapping Technology

    Space mapping arose from the need to implement fast and accurate design optimization of microwave structures using full-wave EM simulators. Space mapping optimization later proved effective in disciplines well beyond RF and microwave engineering. The underlying coarse and fine models of the optimized structures have been implemented using a variety of EDA tools. More recently, measurement-based physical platforms have also been employed as “fine models.” Most space-mapping-based optimization cases have been demonstrated at the device-, component-, or circuit-level. However, the application of space mapping to high-fidelity system-level design optimization is just emerging. Optimizing highly accurate systems based on physical measurements is particularly challenging, since they are typically subject to statistical fluctuations and varying operating or environmental conditions. Here, we illustrate emerging demonstrations of space mapping system-level measurement-based design optimization in the area of signal integrity for high-speed computer platforms. Other measurement-based space mapping cases are also considered. Unresolved challenges are highlighted and potential general solutions are ventured.

    A Holistic Formulation for System Margining and Jitter Tolerance Optimization in Industrial Post-Silicon Validation

    There is an increasingly large number of mixed-signal circuits within microprocessors and systems on chip (SoC). A significant portion of them corresponds to high-speed input/output (HSIO) links. Post-silicon validation of HSIO links can be critical for making a product release qualification decision under aggressive launch schedules. The optimization of receiver analog circuitry in modern HSIO links is a very time-consuming post-silicon validation process. Current industrial practices are based on exhaustive enumeration methods to improve either the system margins or the jitter tolerance compliance test. In this paper, these two requirements are addressed in a holistic optimization-based approach. We propose a novel objective function based on these two metrics. Our method employs Kriging to build a surrogate model based on system margining and jitter tolerance measurements. The proposed method, tested with three different realistic server HSIO links, is able to deliver optimal system margins and guarantee jitter tolerance compliance while substantially decreasing the typical post-silicon validation time.
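The holistic-objective idea can be sketched as a single scalar figure of merit that rewards margin and penalizes jitter-tolerance failure; the spec threshold, penalty weight, and measurements below are all invented for illustration, not the paper's actual formulation.

```python
# Hypothetical composite objective: one scalar lets a single optimizer trade
# off system margin against jitter-tolerance (jtol) compliance, instead of
# running two separate exhaustive sweeps.

JTOL_SPEC = 0.30   # assumed minimum tolerated jitter (UI) to pass compliance

def holistic_objective(margin_ui, jtol_ui):
    """Higher is better: reward margin, heavily penalize jtol shortfalls."""
    shortfall = max(0.0, JTOL_SPEC - jtol_ui)
    return margin_ui - 10.0 * shortfall

settings = {                # made-up (margin, jtol) measurements per setting
    "A": (0.42, 0.25),      # largest margin, but fails jitter tolerance
    "B": (0.35, 0.33),      # slightly less margin, compliant
    "C": (0.28, 0.40),      # compliant, smaller margin
}
best = max(settings, key=lambda s: holistic_objective(*settings[s]))
print(best)                 # "B": best margin among compliant settings
```

In the paper this scalar is what the Kriging surrogate models, so one optimization run satisfies both validation requirements at once.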

    Machine Learning Techniques To Mitigate Nonlinear Impairments In Optical Fiber System

    The upcoming deployment of 5G/6G networks, online services like 4k/8k HDTV, streaming, and online games, the development of the Internet of Things concept connecting billions of active devices, as well as high-speed optical access networks, impose progressively higher requirements on the underlying optical network infrastructure. With current network infrastructures approaching almost unsustainable levels of bandwidth utilization and data traffic rates, and the electrical power consumption of communications systems becoming a serious concern in view of global carbon footprint targets, network operators and system suppliers are now looking for ways to respond to these demands while also maximizing the returns on their investments. The search for a solution to this predicted "capacity crunch" led to a renewed interest in alternative approaches to system design, including the usage of high-order modulation formats and high symbol rates enabled by coherent detection, the development of wideband transmission tools, new fiber types (such as multi-mode and multi-core fibers), and, finally, the implementation of advanced digital signal processing (DSP) elements to mitigate optical channel nonlinearities and improve the received SNR. All the aforementioned options are intended to boost the available optical systems' capacity to fulfill the new traffic demands. This thesis focuses on the last of these possible solutions to the "capacity crunch," answering the question: "How can machine learning improve existing optical communications by minimizing quality penalties introduced by transceiver components and fiber media nonlinearity?" Ultimately, by identifying a proper machine learning solution (or a bevy of solutions) to act as a nonlinear channel equalizer for optical transmissions, we can improve the system's throughput and even reduce the signal processing complexity, which means we can transmit more using the already built optical infrastructure.
This problem was broken into four parts in this thesis: i) the development of new machine learning architectures to achieve appealing levels of performance; ii) the correct assessment of computational complexity and hardware realization; iii) the application of AI techniques to achieve fast reconfigurable solutions; and iv) the creation of a theoretical foundation with studies demonstrating the caveats and pitfalls of machine learning methods used for optical channel equalization. Common measures such as bit error rate, quality factor, and mutual information are considered in scrutinizing the systems studied in this thesis. Based on simulation and experimental results, we conclude that neural-network-based equalization can, in fact, improve the channel quality of transmission while having a computational complexity close to that of other classic DSP algorithms.
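As a toy illustration of neural-network channel equalization (not the thesis' actual architectures or optical channel models), the sketch below trains a one-hidden-layer network to invert a memoryless cubic nonlinearity and compares the mean-squared error before and after equalization. The channel model, network size, and training settings are all invented.

```python
import numpy as np

# Hedged sketch: a tiny MLP post-equalizer for a synthetic nonlinear channel
# y = x + 0.3*x**3 + noise, trained with plain full-batch gradient descent.

rng = np.random.default_rng(0)
x = rng.choice([-1.0, -1 / 3, 1 / 3, 1.0], size=4000)      # PAM4-like symbols
y = x + 0.3 * x**3 + 0.02 * rng.standard_normal(x.size)    # nonlinear channel

X, Y = x[None, :], y[None, :]
W1 = 0.5 * rng.standard_normal((16, 1)); b1 = np.zeros((16, 1))
W2 = 0.5 * rng.standard_normal((1, 16)); b2 = np.zeros((1, 1))
lr = 0.1
for _ in range(3000):
    H = np.tanh(W1 @ Y + b1)                 # hidden layer
    out = W2 @ H + b2                        # equalized estimate of x
    err = out - X
    gW2 = err @ H.T / x.size
    gb2 = err.mean(axis=1, keepdims=True)
    dH = (W2.T @ err) * (1.0 - H**2)         # backprop through tanh
    gW1 = dH @ Y.T / x.size
    gb1 = dH.mean(axis=1, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

out = (W2 @ np.tanh(W1 @ Y + b1) + b2).ravel()
mse_raw = float(np.mean((y - x) ** 2))       # error if y is sliced directly
mse_eq = float(np.mean((out - x) ** 2))      # error after NN equalization
print(mse_raw, mse_eq)                       # equalized MSE well below raw MSE
```

The equalized error drops well below the raw channel error, which is the miniature version of the thesis' claim; the complexity question then becomes how small such a network can be while keeping that gain.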