
    Post-silicon Receiver Equalization Metamodeling by Artificial Neural Networks

    As microprocessor design scales to the 10 nm technology node and beyond, traditional pre- and post-silicon validation techniques are unsuitable for achieving full system functional coverage. Physical complexity and extreme technology process variations severely limit the effectiveness and reliability of pre-silicon validation techniques. This scenario imposes the need for sophisticated post-silicon validation approaches that consider the complex electromagnetic phenomena and large manufacturing fluctuations observed in actual physical platforms. One of the major challenges in electrical validation of high-speed input/output (HSIO) links in modern computer platforms lies in the physical layer (PHY) tuning process, where equalization techniques are used to cancel undesired effects induced by the channels. Current industrial practices for PHY tuning in HSIO links are very time-consuming since they require massive lab measurements. An alternative is to use machine learning techniques to model the PHY and then perform equalization using the resulting surrogate model. In this paper, a metamodeling approach based on neural networks is proposed to efficiently simulate the effects of receiver equalizer PHY tuning settings. We use several design of experiments techniques to find a neural model capable of approximating the real system behavior without requiring a large number of actual measurements. We evaluate the models' performance by comparing them with measured responses on a real server HSIO link.
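    A minimal sketch of the idea described above, assuming hypothetical equalizer knobs (a CTLE gain code, a DFE tap code, and a reference-voltage code) and a synthetic stand-in for the measured eye height; the paper's actual DOE plans, knob names, and network architecture are not reproduced here.

    ```python
    # Sketch: neural-network metamodel of a receiver-equalizer response, trained
    # on a design-of-experiments sample of the tuning space. The knobs and the
    # synthetic "eye height" response are illustrative placeholders.
    import numpy as np
    from scipy.stats import qmc
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # 1) Design of experiments: Latin hypercube over three equalizer settings.
    sampler = qmc.LatinHypercube(d=3, seed=0)
    X = qmc.scale(sampler.random(n=80), l_bounds=[0, 0, 0], u_bounds=[15, 7, 31])

    # 2) Stand-in for lab measurements of eye height (mV) at each setting.
    def measured_eye_height(x):
        g, t, v = x
        return (220 - 0.8 * (g - 9) ** 2 - 1.5 * (t - 4) ** 2
                - 0.2 * (v - 16) ** 2 + rng.normal(0, 2))

    y = np.array([measured_eye_height(x) for x in X])

    # 3) Fit the neural metamodel and check it on held-out settings.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))
    ```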

    Applications of Broyden-based input space mapping to modeling and design optimization in high-tech companies in Mexico

    One of the most powerful and computationally efficient optimization approaches in RF and microwave engineering is the space mapping (SM) approach to design. SM optimization methods belong to the general class of surrogate-based optimization algorithms and specialize in the efficient optimization of computationally expensive models. This paper reviews the Broyden-based input SM algorithm, better known as aggressive space mapping (ASM), which is perhaps the SM variation with the most industrial applications. The two main characteristics that explain its popularity in industry and academia are emphasized in this paper: simplicity and efficiency. The fundamentals behind the Broyden-based input SM algorithm are described, highlighting key steps for its successful implementation, as well as situations where it may fail. Recent applications of the Broyden-based input space mapping algorithm in high-tech industries located in Mexico are briefly described, including application areas such as signal integrity and high-speed interconnect design, as well as post-silicon validation of high-performance computer platforms, among others. Emerging applications in multi-physics interconnect design and power-integrity design optimization are also mentioned.
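    A minimal sketch of the Broyden-based input SM (ASM) loop on a toy problem: a parameter-extraction step maps the current fine-model design onto the coarse space, and a Broyden rank-1 update drives that mapped point toward the coarse optimum. The analytic coarse and fine models below are illustrative stand-ins; in the reviewed applications the fine model is an expensive simulation or a physical measurement.

    ```python
    # Sketch of the ASM iteration with toy analytic models.
    import numpy as np
    from scipy.optimize import minimize

    def R_coarse(z):   # cheap coarse model (toy: identity response)
        return np.asarray(z, dtype=float)

    def R_fine(x):     # "expensive" fine model (toy: shifted, mildly nonlinear)
        return np.array([x[0] - 0.3 + 0.05 * x[1] ** 2, 1.1 * x[1] + 0.4])

    x_c_star = np.array([1.0, 0.0])   # coarse-model optimal design (given)

    def extract(x_f):
        """Parameter extraction: coarse design whose response matches R_fine(x_f)."""
        r_f = R_fine(x_f)
        obj = lambda z: np.sum((R_coarse(z) - r_f) ** 2)
        return minimize(obj, x_f, method="Nelder-Mead").x

    x = x_c_star.copy()            # ASM starts at the coarse optimum
    B = np.eye(2)                  # initial Broyden matrix (identity)
    f = extract(x) - x_c_star      # misalignment of the mapped design
    for it in range(20):
        if np.linalg.norm(f) < 1e-3:
            break
        h = np.linalg.solve(B, -f)            # quasi-Newton step
        x = x + h
        f = extract(x) - x_c_star
        B = B + np.outer(f, h) / (h @ h)      # Broyden rank-1 update
    print(f"converged after {it} iterations to x_f = {np.round(x, 4)}")
    ```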

    Transmitter and Receiver Equalizers Optimization Methodologies for High-Speed Links in Industrial Computer Platforms Post-Silicon Validation

    As microprocessor design scales to nanometric technology, traditional post-silicon validation techniques are inappropriate for achieving full system functional coverage. Physical complexity and extreme technology process variations introduce design challenges to guarantee performance over process, voltage, and temperature conditions. In addition, there is an increasingly large number of mixed-signal circuits within microprocessors, many of which correspond to high-speed input/output (HSIO) links. Improvements in signaling methods, circuits, and process technology have allowed HSIO data rates to scale beyond 10 Gb/s, where undesired effects can create multiple signal integrity problems. With all of these elements, post-silicon validation of HSIO links is difficult and time-consuming. One of the major challenges in electrical validation of HSIO links lies in the physical layer (PHY) tuning process, where equalization techniques are used to cancel these undesired effects. Typical current industrial practices for PHY tuning require massive lab measurements, since they are based on exhaustive enumeration methods. In this work, direct and surrogate-based optimization methods, including space mapping, are proposed based on suitable objective functions to efficiently tune the transmitter and receiver equalizers. The proposed methodologies are evaluated by lab measurements on realistic industrial post-silicon validation platforms, confirming a dramatic speed-up in PHY tuning and substantial performance improvement.
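    A minimal sketch of the direct-optimization side of the idea: a coordinate search over discrete Tx/Rx knob settings driving an eye-margin objective. The eye_margin() function is a synthetic placeholder for the lab measurement (eye height and width) used in the work; the knob names, ranges, and weighting are illustrative assumptions, not the paper's objective functions.

    ```python
    # Sketch: direct equalizer tuning by coordinate search over discrete settings.
    RANGES = {"tx_pre": range(0, 16), "tx_post": range(0, 16), "rx_ctle": range(0, 8)}

    def eye_margin(s):
        """Placeholder 'measurement': weighted eye height + width (bigger is better)."""
        h = 200 - 3 * (s["tx_pre"] - 6) ** 2 - 2 * (s["tx_post"] - 9) ** 2   # mV
        w = 0.6 - 0.01 * (s["rx_ctle"] - 4) ** 2                             # UI
        return 0.7 * h + 0.3 * (w * 1000)   # fold both metrics into one score

    def coordinate_search(start, max_sweeps=5):
        best, best_val = dict(start), eye_margin(start)
        for _ in range(max_sweeps):
            improved = False
            for knob, rng in RANGES.items():
                for v in rng:                        # sweep one knob, keep the rest fixed
                    cand = dict(best, **{knob: v})
                    val = eye_margin(cand)
                    if val > best_val:
                        best, best_val, improved = cand, val, True
            if not improved:
                break
        return best, best_val

    best, score = coordinate_search({"tx_pre": 0, "tx_post": 0, "rx_ctle": 0})
    print(best, round(score, 1))
    ```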

    System-Level Measurement-Based Design Optimization by Space Mapping Technology

    Space mapping arose from the need to implement fast and accurate design optimization of microwave structures using full-wave EM simulators. Space mapping optimization later proved effective in disciplines well beyond RF and microwave engineering. The underlying coarse and fine models of the optimized structures have been implemented using a variety of EDA tools. More recently, measurement-based physical platforms have also been employed as “fine models.” Most space-mapping-based optimization cases have been demonstrated at the device, component, or circuit level. However, the application of space mapping to high-fidelity system-level design optimization is just emerging. Optimizing highly accurate systems based on physical measurements is particularly challenging, since they are typically subject to statistical fluctuations and varying operating or environmental conditions. Here, we illustrate emerging demonstrations of space mapping system-level measurement-based design optimization in the area of signal integrity for high-speed computer platforms. Other measurement-based space mapping cases are also considered. Unresolved challenges are highlighted and potential general solutions are ventured.
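    A minimal sketch of one common way to cope with the statistical fluctuations mentioned above when the "fine model" is a physical measurement: average several repeated measurements inside each fine-model evaluation before handing the value to the optimizer. The noisy_measurement() function is a synthetic stand-in, not the authors' platform, and the averaging strategy is a generic mitigation rather than the article's specific solution.

    ```python
    # Sketch: wrapping a noisy measurement-based fine model with repeat averaging.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)

    def noisy_measurement(x):
        """Stand-in for one lab measurement of a system-level figure of merit."""
        true = (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
        return true + rng.normal(0.0, 0.05)

    def averaged_fine_model(x, repeats=8):
        """Fine-model evaluation: average several repeated measurements."""
        return np.mean([noisy_measurement(x) for _ in range(repeats)])

    result = minimize(averaged_fine_model, x0=[0.0, 0.0], method="Nelder-Mead")
    print("optimized settings (approx.):", np.round(result.x, 2))
    ```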

    PCIe Gen5 Physical Layer Equalization Tuning by Using K-means Clustering and Gaussian Process Regression Modeling in Industrial Post-silicon Validation

    Peripheral component interconnect express (PCIe) is a high-performance interconnect architecture widely adopted in the computer industry. The continuously increasing bandwidth demand from new applications has led to the development of PCIe Gen5, reaching data rates of 32 GT/s. To mitigate undesired channel effects at such high speeds, the PCIe specification defines an equalization process at the transmitter (Tx) and the receiver (Rx). Current post-silicon validation practices consist of finding an optimal subset of Tx and Rx coefficients by measuring the eye diagrams across different channels. However, these experiments are very time-consuming since they require massive lab measurements. In this paper, we use a K-means approach to cluster all available post-silicon data from different channels and feed those clusters to a Gaussian process regression (GPR)-based metamodel for each channel. We then perform a surrogate-based optimization to obtain the optimal tuning settings for the specific channels. Our methodology is validated by measurements of the functional eye diagram of an industrial computer platform.
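    A minimal sketch of the K-means plus GPR flow: cluster the available measured operating points, fit one GPR metamodel per cluster (standing in for the paper's per-channel metamodels), and pick the setting with the best predicted eye metric. The synthetic data and the eye-width response are illustrative placeholders, not PCIe Gen5 lab data.

    ```python
    # Sketch: cluster post-silicon data with K-means, then fit GPR metamodels.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)

    # Tx/Rx equalizer settings (3 knobs, normalized) measured across channels.
    X = rng.uniform(0, 1, size=(300, 3))
    eye_width = (0.55 - 0.3 * (X[:, 0] - 0.6) ** 2 - 0.2 * (X[:, 1] - 0.4) ** 2
                 + 0.05 * X[:, 2] + rng.normal(0, 0.01, 300))

    # 1) Cluster the measured operating points (e.g., grouping similar channels).
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # 2) Fit one GPR metamodel per cluster.
    models = {}
    for k in range(3):
        idx = labels == k
        kernel = ConstantKernel(1.0) * RBF(length_scale=0.3)
        gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, normalize_y=True)
        models[k] = gpr.fit(X[idx], eye_width[idx])

    # 3) Surrogate-based search: setting with the best predicted eye width.
    candidates = rng.uniform(0, 1, size=(2000, 3))
    preds = np.column_stack([m.predict(candidates) for m in models.values()])
    best = candidates[np.argmax(preds.max(axis=1))]
    print("suggested settings:", np.round(best, 3))
    ```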

    High-Speed Links Receiver Optimization in Post-Silicon Validation Exploiting Broyden-based Input Space Mapping

    One of the major challenges in high-speed input/output (HSIO) link electrical validation is the physical layer (PHY) tuning process. Equalization techniques are employed to cancel any undesired effects. Typical industrial practices require massive lab measurements, making the equalization process very time-consuming. In this paper, we exploit the Broyden-based input space mapping (SM) algorithm to efficiently optimize the PHY tuning receiver (Rx) equalizer settings for a SATA Gen 3 channel topology. We use a good-enough surrogate model as the coarse model and an industrial post-silicon validation physical platform as the fine model. A map between the coarse and fine model Rx equalizer settings is implicitly built, yielding an accelerated SM-based optimization of the PHY tuning process.
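    A minimal sketch of one practical detail implied when the fine model is a physical platform: the continuous space-mapping iterate has to be clamped and rounded to the discrete Rx equalizer register codes before each lab evaluation. The register names and ranges below are illustrative assumptions, not SATA Gen 3 specifics.

    ```python
    # Sketch: mapping a continuous SM iterate onto valid integer Rx register codes.
    import numpy as np

    RX_CODE_RANGES = {"ctle_peaking": (0, 15), "dfe_tap1": (0, 63)}

    def to_register_codes(x_continuous):
        """Clamp and round a continuous iterate to the discrete Rx register codes."""
        codes = {}
        for value, (name, (lo, hi)) in zip(x_continuous, RX_CODE_RANGES.items()):
            codes[name] = int(np.clip(round(value), lo, hi))
        return codes

    print(to_register_codes([7.6, 70.2]))   # -> {'ctle_peaking': 8, 'dfe_tap1': 63}
    ```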

    System Margining Surrogate-Based Optimization in Post-Silicon Validation

    There is an increasingly large number of mixed-signal circuits within microprocessors, a significant portion of which corresponds to high-speed input/output (HSIO) links. Post-silicon validation of HSIO links is critical to provide a release qualification decision. One of the major challenges in HSIO electrical validation is the physical layer (PHY) tuning process, where equalization techniques are typically used to cancel any undesired effects. Current industrial practices for PHY tuning in HSIO links are very time-consuming since they require massive lab measurements. On the other hand, surrogate modeling techniques make it possible to develop an approximation of a system response within a design space of interest. In this paper, we analyze several surrogate modeling methods and design of experiments techniques to identify the best approach to efficiently optimize a receiver equalizer. We evaluate the models' performance by comparing them with actual measured responses on a real server HSIO link. We then perform a surrogate-based optimization on the best model to obtain the optimal PHY tuning settings of an HSIO link. Our methodology is validated by measuring the real functional eye diagram of the physical system using the optimal surrogate model solution.
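    A minimal sketch of the comparison-then-optimize flow: rank a few candidate surrogate families by cross-validation, fit the best one, and run a surrogate-based optimization over it. The synthetic margin response and the shortlist of models are illustrative assumptions; the paper's actual model set, DOE plans, and metrics are not reproduced here.

    ```python
    # Sketch: select the best surrogate by cross-validation, then optimize over it.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.neural_network import MLPRegressor
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 1, size=(120, 3))                  # normalized Rx settings
    y = 1.0 - np.sum((X - [0.6, 0.3, 0.5]) ** 2, axis=1) + rng.normal(0, 0.02, 120)

    candidates = {
        "gpr": GaussianProcessRegressor(normalize_y=True),
        "forest": RandomForestRegressor(n_estimators=200, random_state=0),
        "mlp": MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0),
    }
    scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
              for name, m in candidates.items()}
    best_name = max(scores, key=scores.get)
    best_model = candidates[best_name].fit(X, y)
    print("best surrogate:", best_name, {k: round(v, 3) for k, v in scores.items()})

    # Surrogate-based optimization: maximize the predicted margin.
    res = differential_evolution(lambda x: -best_model.predict([x])[0],
                                 bounds=[(0, 1)] * 3, seed=0)
    print("suggested Rx settings:", np.round(res.x, 3))
    ```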

    PAM4 Transmitter and Receiver Equalizers Optimization for High-Speed Serial Links

    As the telecommunications market evolves, the demand for faster data transfer and processing continues to increase. To meet this demand, peripheral component interconnect express (PCIe) data rates have increased from PCIe Gen 1 (2.5 GT/s) to PCIe Gen 5 (32 GT/s). This evolution has brought new challenges due to high-speed interconnect effects, which can cause data loss and intersymbol interference. Under these conditions, the traditional non-return-to-zero (NRZ) modulation scheme becomes a bottleneck due to bandwidth limitations in the high-speed interconnects. The pulse amplitude modulation 4-level (PAM4) scheme is being implemented in the next generation of PCIe (PCIe 6), doubling the data rate without increasing the channel bandwidth. However, while PAM4 solves the bandwidth problem, it also brings new challenges in post-silicon equalization. Tuning the transmitter (Tx) and receiver (Rx) across different interconnect channels can be a very time-consuming task due to the multiple equalizers implemented in the serializer/deserializer (SerDes). Typical current industrial practices for SerDes equalizer tuning require massive lab measurements, since they are based on exhaustive enumeration methods, making the equalization process too lengthy and practically prohibitive under current silicon time-to-market commitments. In this master's dissertation, a numerical method is proposed to optimize the transmitter and receiver equalizers of a PCIe 6 link. The experimental results, obtained in a MATLAB simulation environment, demonstrate the effectiveness of the proposed approach by delivering optimal PAM4 eye diagram margins while significantly reducing the jitter.
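    A minimal sketch of the optimization idea for PAM4: maximize the worst of the three PAM4 eye openings over the Tx/Rx equalizer settings with a numerical optimizer. The pam4_eye_heights() function is a synthetic stand-in for the MATLAB link simulation used in the dissertation; the knob names, the response shape, and the choice of differential evolution are illustrative assumptions.

    ```python
    # Sketch: maximize the smallest of the three PAM4 eyes over equalizer settings.
    import numpy as np
    from scipy.optimize import differential_evolution

    def pam4_eye_heights(x):
        """Toy model of the three PAM4 eye heights vs. (tx_pre, tx_post, rx_ctle)."""
        tx_pre, tx_post, rx_ctle = x
        base = 0.30 - 0.10 * (tx_pre - 0.2) ** 2 - 0.12 * (tx_post - 0.35) ** 2
        tilt = 0.03 * (rx_ctle - 0.5)
        return np.array([base - tilt, base, base + tilt])   # lower, middle, upper eye

    def objective(x):
        return -np.min(pam4_eye_heights(x))    # maximize the worst eye opening

    res = differential_evolution(objective, bounds=[(0, 1)] * 3, seed=0)
    print("optimal settings:", np.round(res.x, 3),
          "worst-eye height:", round(-res.fun, 3))
    ```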

    Machine Learning Techniques for Electrical Validation Enhancement Processes

    Post-silicon system margin validation consumes a significant amount of time and resources. To overcome this, a reduced validation plan for derivative products has previously been used. However, a certain amount of validation is still needed to avoid escapes, and the process is prone to subjective bias when the validation engineer compares a reduced set of derivative validation data against the base product data. Machine learning techniques make it possible to perform automatic decisions and predictions based on already available historical data. In this work, we present an efficient methodology implemented with machine learning to make an automatic risk assessment decision and to estimate eye margin measurements for derivative products, considering a large set of parameters obtained from the base product. The proposed methodology yields high performance on both the risk assessment decision and the regression-based estimation, which translates into a significant reduction in time, effort, and resources.
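    A minimal sketch of the two machine learning tasks described: a classifier for the automatic risk assessment decision and a regressor for eye-margin estimation, both trained on historical base-product data. The synthetic features, the 75 mV risk threshold, and the gradient-boosting models are illustrative assumptions; the real parameter set and model choices are not reproduced here.

    ```python
    # Sketch: risk-assessment classifier plus eye-margin regressor on historical data.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
    from sklearn.metrics import accuracy_score, mean_absolute_error

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 12))                                     # base-product parameters
    margin = 80 + 10 * X[:, 0] - 6 * X[:, 1] + rng.normal(0, 3, 500)   # eye margin (mV)
    risk = (margin < 75).astype(int)                                   # 1 = at-risk configuration

    X_tr, X_te, m_tr, m_te, r_tr, r_te = train_test_split(
        X, margin, risk, test_size=0.3, random_state=0)

    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, r_tr)
    reg = GradientBoostingRegressor(random_state=0).fit(X_tr, m_tr)

    print("risk-decision accuracy:", round(accuracy_score(r_te, clf.predict(X_te)), 3))
    print("margin MAE (mV):", round(mean_absolute_error(m_te, reg.predict(X_te)), 2))
    ```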