In the logic synthesis stage, the structural transformations provided by the synthesis tool are combined into optimization sequences and applied to the circuit to meet the specified area and delay constraints. However, running logic synthesis optimization sequences is time-consuming, so predicting the quality of results (QoR) of a synthesis optimization sequence applied to a circuit can help engineers find better optimization sequences faster. In this work, we propose a deep learning method to predict the QoR of unseen circuit-optimization sequence pairs. Specifically, the structural transformations are translated into vectors by embedding methods, and a Transformer, an advanced natural language processing (NLP) model, is used to extract features of the optimization sequences. In addition, to let the model's predictions generalize from circuit to circuit, each circuit is represented as a graph, encoded by an adjacency matrix and a node-feature matrix, and graph neural networks (GNNs) are used to extract the structural features of the circuits. For this problem, the Transformer and three typical GNNs are used.
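As an illustration of the GNN side, the following pure-Python sketch shows one GraphSAGE-style mean-aggregation step over an adjacency matrix and a feature matrix. The learned weight matrices and nonlinearity of a real GNN layer are omitted, and all names and values are invented for illustration, not taken from the paper's implementation.

```python
def sage_layer(adj, feats):
    """One GraphSAGE-style mean-aggregation step: each node's new
    feature is its own feature vector concatenated with the mean of
    its neighbours' feature vectors (weights and activation omitted)."""
    n = len(adj)
    dim = len(feats[0])
    out = []
    for i in range(n):
        neighbours = [j for j in range(n) if adj[i][j]]
        if neighbours:
            mean = [sum(feats[j][d] for j in neighbours) / len(neighbours)
                    for d in range(dim)]
        else:
            mean = [0.0] * dim  # isolated node: zero neighbour summary
        out.append(feats[i] + mean)  # concatenate self and neighbour mean
    return out

# A 3-node circuit graph: node 0 is connected to nodes 1 and 2.
adj = [[0, 1, 1],
       [1, 0, 0],
       [1, 0, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
print(sage_layer(adj, feats))
```

Stacking several such steps lets each node's representation summarize a growing neighbourhood of the circuit graph, which is what allows the extracted features to transfer across circuits of different sizes.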
Furthermore, the Transformer and the GNNs are combined in a joint learning scheme for QoR prediction on unseen circuit-optimization sequence pairs, and the resulting Transformer-GNN combinations are benchmarked. The experimental results show that joint learning of the Transformer and GraphSAGE gives the best results, with a Mean Absolute Error (MAE) of 0.412 on the predicted QoR.
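To make the joint pipeline concrete, here is a minimal pure-Python sketch in which mean pooling stands in for both the Transformer encoder and the GNN readout, and a linear head maps the concatenated sequence and circuit features to a QoR estimate. The vocabulary of transformations, the embedding table, and the weights are all invented for illustration and do not come from the paper.

```python
# Hypothetical vocabulary of synthesis transformations and a toy
# 2-dimensional embedding table (one row per transformation).
VOCAB = {"rewrite": 0, "balance": 1, "refactor": 2}
EMBED = [[0.2, 0.1], [0.0, 0.3], [0.5, 0.4]]

def encode_sequence(seq):
    """Mean-pool token embeddings (placeholder for the Transformer encoder)."""
    vecs = [EMBED[VOCAB[t]] for t in seq]
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(2)]

def encode_graph(node_feats):
    """Mean-pool node features (placeholder for the GNN readout)."""
    return [sum(f[d] for f in node_feats) / len(node_feats) for d in range(2)]

def predict_qor(seq, node_feats, w, b):
    """Linear head over the concatenated sequence and circuit features."""
    x = encode_sequence(seq) + encode_graph(node_feats)
    return sum(wi * xi for wi, xi in zip(w, x)) + b

qor = predict_qor(["rewrite", "balance", "rewrite"],
                  [[1.0, 0.0], [0.0, 1.0]],
                  w=[0.5, 0.5, 0.5, 0.5], b=0.1)
print(round(qor, 3))
```

In the actual model, the two pooling functions are replaced by learned Transformer and GNN encoders trained end to end, so the sequence and circuit representations adapt jointly to the QoR regression target.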