2 research outputs found

    Bayesian Optimization Approach for Analog Circuit Synthesis Using Neural Network

    Bayesian optimization with a Gaussian process as the surrogate model has been successfully applied to analog circuit synthesis. In the traditional Gaussian process regression model, the kernel functions are defined explicitly. The computational complexity of training is O(N^3), and that of prediction is O(N^2), where N is the number of training data. The Gaussian process model can also be derived from a weight-space view, in which the original data are mapped to a feature space and the kernel function is defined as the inner product of nonlinear features. In this paper, we propose a Bayesian optimization approach for analog circuit synthesis using a neural network. We use a deep neural network to extract good feature representations and then define the Gaussian process over the extracted features. A model averaging method is applied to improve the quality of uncertainty prediction. Compared to Gaussian process models with explicitly defined kernel functions, the neural-network-based Gaussian process model can automatically learn a kernel function from data, which makes it possible to provide more accurate predictions and thus accelerate the follow-up optimization procedure. In addition, the neural-network-based model has O(N) training time and constant prediction time. The efficiency of the proposed method has been verified on two real-world analog circuits.
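    The weight-space view mentioned in the abstract can be illustrated with a minimal sketch: a feature map phi(x) (here a tiny fixed-weight network standing in for the paper's learned deep features) followed by Bayesian linear regression over those features, which gives GP-style mean and variance predictions with training cost linear in N and prediction cost independent of N. All names, the toy data, and the feature map are illustrative assumptions; the paper's actual architecture, training procedure, and model averaging are not reproduced here.

```python
# Sketch of a GP from the weight-space view: Bayesian linear regression
# on nonlinear features phi(x). Training is O(N m^2) (linear in N),
# prediction is O(m^2) (constant in N), with m the feature dimension.
import numpy as np

def phi(X, W1, b1, W2, b2):
    """Two-layer tanh feature extractor: R^d -> R^m (illustrative only)."""
    H = np.tanh(X @ W1 + b1)
    return np.tanh(H @ W2 + b2)

def fit_bayes_linear(Phi, y, sigma_w=1.0, sigma_n=0.1):
    """Posterior over linear weights on the features."""
    m = Phi.shape[1]
    A = Phi.T @ Phi / sigma_n**2 + np.eye(m) / sigma_w**2
    A_inv = np.linalg.inv(A)                  # m x m, independent of N
    mean_w = A_inv @ Phi.T @ y / sigma_n**2
    return mean_w, A_inv

def predict(phi_x, mean_w, A_inv, sigma_n=0.1):
    """Predictive mean and variance at one point (constant in N)."""
    mu = phi_x @ mean_w
    var = phi_x @ A_inv @ phi_x + sigma_n**2
    return mu, var

rng = np.random.default_rng(0)
d, m, N = 4, 32, 200                          # design variables, features, samples
W1, b1 = rng.normal(size=(d, m)), rng.normal(size=m)
W2, b2 = rng.normal(size=(m, m)), rng.normal(size=m)

X = rng.uniform(-1, 1, size=(N, d))           # toy candidate circuit sizings
y = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=N)   # toy "performance"

Phi = phi(X, W1, b1, W2, b2)
mean_w, A_inv = fit_bayes_linear(Phi, y)
mu, var = predict(phi(X[:1], W1, b1, W2, b2)[0], mean_w, A_inv)
print(f"predictive mean {mu:.3f}, variance {var:.3f}")
```

    In a Bayesian optimization loop, the predictive mean and variance above would feed an acquisition function; learning the feature map from data (rather than fixing it, as here) is what lets the model effectively learn its own kernel.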

    An efficient analog circuit sizing method based on machine learning assisted global optimization

    Machine-learning-assisted global optimization methods for speeding up analog integrated circuit sizing are attracting much attention. However, most relevant research considers only a few typical analog IC design specifications. When the complete set of specifications is considered, two main challenges remain: (1) the prediction error for some performances may be large, and errors accumulate across many performances; this may mislead the optimization and cause the sizing to fail, especially when the specifications are stringent. (2) The machine learning cost can be high given the number of specifications, considerably canceling out the time saved. A new method, called Efficient Surrogate Model-assisted Sizing Method for High-performance Analog Building Blocks (ESSAB), is proposed in this paper to address these challenges. The key innovations include a new candidate design ranking method and a new artificial neural network model construction method for analog circuit performances. Experiments on two amplifiers and a comparator with a complete set of stringent design specifications show the advantages of ESSAB.
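    The general workflow the abstract refers to (not the ESSAB ranking or model-construction method itself, whose details are not given here) can be sketched as a loop that trains an ANN surrogate over all specifications, ranks unsimulated candidates by predicted constraint violation, and simulates only the most promising one. The simulate function, the specification values, and all parameters below are hypothetical placeholders.

```python
# Generic sketch of surrogate-model-assisted sizing with multiple
# specifications; the ranking rule here is a simple stand-in, not ESSAB's.
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate(x):
    """Placeholder for a circuit simulation returning performance metrics."""
    return np.array([np.sin(x).sum(), np.cos(x).sum(), np.abs(x).sum()])

specs = np.array([0.5, 0.5, 2.0])            # required lower bounds (illustrative)

rng = np.random.default_rng(1)
dim, n_init, n_iter, n_cand = 6, 20, 10, 500

X = rng.uniform(-1, 1, size=(n_init, dim))   # initial sized designs
Y = np.array([simulate(x) for x in X])       # their simulated performances

for it in range(n_iter):
    # Multi-output ANN surrogate over all specifications at once.
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                         random_state=0).fit(X, Y)
    cand = rng.uniform(-1, 1, size=(n_cand, dim))
    pred = model.predict(cand)
    # Rank candidates by total predicted constraint violation.
    violation = np.clip(specs - pred, 0.0, None).sum(axis=1)
    best = cand[np.argmin(violation)]
    X = np.vstack([X, best])                 # simulate only the top candidate
    Y = np.vstack([Y, simulate(best)])

print("best total violation:",
      np.clip(specs - Y, 0.0, None).sum(axis=1).min())
```

    The challenges named in the abstract show up directly in this sketch: prediction errors on any of the outputs distort the violation-based ranking, and retraining the surrogate every iteration is where the machine learning cost accrues.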