2 research outputs found

    Artificial Neural Network-Based Compact Modeling Methodology for Advanced Transistors

    The artificial neural network (ANN)-based compact modeling methodology is evaluated in the context of advanced field-effect transistor (FET) modeling for design-technology co-optimization (DTCO) and pathfinding activities. An ANN model architecture for FETs is introduced, and the results clearly show that by carefully choosing the conversion functions (i.e., from ANN outputs to device terminal currents or charges) and the loss functions for ANN training, ANN models can reproduce the current-voltage and charge-voltage characteristics of advanced FETs with excellent accuracy. A few key techniques are introduced in this work to enhance the capabilities of ANN models (e.g., model retargeting, variability modeling) and to improve ANN training efficiency and SPICE simulation turn-around time (TAT). A systematic study of the impact of ANN size on model accuracy and SPICE simulation TAT is conducted, and an automated flow for generating optimum ANN models is proposed. The findings in this work suggest that the ANN-based methodology can be a promising compact modeling solution for advanced DTCO and pathfinding activities.
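    The role of the conversion function can be illustrated with a minimal sketch (the network size, activation, and conversion choices below are illustrative assumptions, not the paper's exact architecture): a small MLP maps terminal biases to a raw output, and a conversion function maps that output to a drain current. An exponential conversion is one common choice, since it guarantees a positive current and lets the network fit I-V behavior spanning many decades on a log scale.

    ```python
    import numpy as np

    # Hypothetical 2-16-16-1 MLP: inputs are terminal biases (Vgs, Vds).
    # Weights are random here; a real model would be trained on TCAD or
    # silicon I-V data with a log-scale-aware loss function.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 2)) * 0.5, np.zeros(16)
    W2, b2 = rng.normal(size=(16, 16)) * 0.5, np.zeros(16)
    W3, b3 = rng.normal(size=(1, 16)) * 0.5, np.zeros(1)

    def ann_output(vgs, vds):
        """Raw ANN output y for the given biases (before conversion)."""
        x = np.array([vgs, vds])
        h = np.tanh(W1 @ x + b1)
        h = np.tanh(W2 @ h + b2)
        return (W3 @ h + b3)[0]

    def drain_current(vgs, vds):
        """Conversion function y -> Id (an assumed form): exp() keeps Id
        positive, and the tanh(Vds) factor forces Id = 0 at Vds = 0,
        a basic physical constraint on the model."""
        return np.exp(ann_output(vgs, vds)) * np.tanh(vds)
    ```

    With smooth activations such as tanh, the resulting model and its derivatives are continuous, which matters for convergence when the model is evaluated inside a SPICE simulator.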

    PAC-Net: A Model Pruning Approach to Inductive Transfer Learning

    Inductive transfer learning aims to learn from a small amount of training data for the target task by utilizing a pre-trained model from the source task. Most strategies that involve large-scale deep learning models adopt initialization with the pre-trained model and fine-tuning for the target task. However, when using over-parameterized models, we can often prune the model without sacrificing the accuracy of the source task. This motivates us to adopt model pruning for transfer learning with deep learning models. In this paper, we propose PAC-Net, a simple yet effective approach for transfer learning based on pruning. PAC-Net consists of three steps: Prune, Allocate, and Calibrate (PAC). The main idea behind these steps is to identify essential weights for the source task, fine-tune on the source task by updating the essential weights, and then calibrate on the target task by updating the remaining redundant weights. In an extensive set of inductive transfer learning experiments, we show that our method achieves state-of-the-art performance by a large margin.
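    The Prune-Allocate-Calibrate steps can be sketched on a toy linear model (the paper targets large deep networks; the tasks, magnitude-pruning criterion, and optimizer here are all illustrative assumptions): prune small-magnitude weights to identify the essential set, fine-tune only the essential weights on the source task, then calibrate only the pruned, redundant weights on the target task while the source-task weights stay frozen.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=8) * 0.1   # toy "pretrained" weights
    w[0] = 2.0                     # pretend source training found this

    # Step 1: Prune -- keep the top-k weights by magnitude as essential,
    # zero out the rest (the redundant set).
    k = 1
    essential = np.zeros_like(w, dtype=bool)
    essential[np.argsort(np.abs(w))[-k:]] = True
    redundant = ~essential
    w[redundant] = 0.0

    def masked_gd(w, mask, X, y, lr=0.1, steps=200):
        """Full-batch gradient descent that updates only masked weights."""
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad * mask
        return w

    # Step 2: Allocate -- fine-tune essential weights on the source task
    # (here, source targets depend only on feature 0).
    Xs = rng.normal(size=(64, 8)); ys = 2.0 * Xs[:, 0]
    w = masked_gd(w, essential, Xs, ys)

    # Step 3: Calibrate -- update only redundant weights on the target
    # task; the essential (source) weights remain frozen, so source-task
    # knowledge is preserved by construction.
    Xt = rng.normal(size=(64, 8)); yt = 2.0 * Xt[:, 0] + 0.5 * Xt[:, 1]
    w = masked_gd(w, redundant, Xt, yt)
    ```

    Because the two updates touch disjoint weight subsets, the calibrated model fits the target task while its source-task behavior is untouched, which is the mitigation of catastrophic forgetting that motivates the pruning-based design.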