Formal Analysis of Arithmetic Circuits using Computer Algebra - Verification, Abstraction and Reverse Engineering
Despite considerable progress in the verification and abstraction of random and control logic, advances in formal verification of arithmetic designs have been lagging. This can be attributed mostly to the difficulty of efficiently modeling arithmetic circuits and datapaths without resorting to computationally expensive Boolean methods, such as Binary Decision Diagrams (BDDs) and Boolean Satisfiability (SAT), that require “bit blasting”, i.e., flattening the design to a bit-level netlist. Approaches that rely on computer algebra and Satisfiability Modulo Theories (SMT) methods are either too abstract to handle the bit-level nature of arithmetic designs or require solving computationally expensive decision or satisfiability problems. The work proposed in this thesis aims at overcoming these limitations in analyzing arithmetic circuits, specifically at the post-synthesis phase. It addresses the verification, abstraction, and reverse engineering of arithmetic circuits at an algebraic level, treating an arithmetic circuit and its specification as a properly constructed algebraic system. The proposed technique solves these problems by function extraction, i.e., by deriving the arithmetic function computed by the circuit from its low-level implementation using a computer algebra rewriting technique. The proposed techniques scale to large integer and finite field arithmetic circuits, up to 512 bits wide and containing millions of logic gates.
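The backward-rewriting idea is easy to illustrate on a half adder. Below is a minimal SymPy sketch (an illustration of the general technique, not the thesis' tool): each gate output is replaced by its standard polynomial model over the integers, and the output signature 2C + S rewrites into a polynomial in the primary inputs.

    # Minimal sketch of algebraic function extraction (backward rewriting)
    # on a half adder, using the standard polynomial gate models:
    #   XOR(a, b) -> a + b - 2ab,  AND(a, b) -> ab
    from sympy import symbols, expand

    a, b, S, C = symbols('a b S C')

    # Output signature of a half adder: the weighted sum of its outputs.
    signature = 2*C + S

    # Rewrite backward: substitute each gate's polynomial model for its output.
    gate_models = {
        S: a + b - 2*a*b,   # sum bit   (XOR gate)
        C: a*b,             # carry bit (AND gate)
    }
    extracted = expand(signature.subs(gate_models))

    print(extracted)  # -> a + b, i.e., the circuit computes integer addition

On a real netlist the same substitution is applied gate by gate in reverse topological order, with monomial cancellations (as in the 2ab terms above) keeping the intermediate polynomial small.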
Performance Estimation of Synthesis Flows cross Technologies using LSTMs and Transfer Learning
Due to the increasing complexity of Integrated Circuits (ICs) and Systems-on-Chip (SoCs), developing high-quality synthesis flows within a short time-to-market is increasingly challenging. We propose a general approach that precisely estimates the Quality-of-Result (QoR), such as delay and area, of unseen synthesis flows for specific designs. The main idea is to train a Recurrent Neural Network (RNN) regressor, where the flows are inputs and the QoRs are ground truth. The RNN regressor is constructed with Long Short-Term Memory (LSTM) and fully-connected layers. This approach is demonstrated with 1.2 million data points collected using 14nm, 7nm regular-voltage (RVT), and 7nm low-voltage (LVT) FinFET technologies with twelve IC designs. The accuracy of predicting the QoRs (delay and area) within one technology is 98.0% over 240,000 test points. To enable accurate predictions across different technologies and different IC designs, we propose a transfer-learning approach that utilizes the model pre-trained on the 14nm datasets. Our transfer-learning approach obtains 96.3% estimation accuracy over 960,000 test points, using only 100 data points for training.
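The described architecture maps directly onto a few lines of PyTorch. The sketch below assumes a synthesis flow is a sequence of transformation IDs (indices into a synthesis-command vocabulary) and the targets are (delay, area); the vocabulary size and layer widths are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch of an LSTM-based QoR regressor: flow in, (delay, area) out.
    import torch
    import torch.nn as nn

    class FlowQoRRegressor(nn.Module):
        def __init__(self, vocab_size=30, embed_dim=32, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.head = nn.Sequential(
                nn.Linear(hidden_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 2),      # predicts (delay, area)
            )

        def forward(self, flows):      # flows: (batch, flow_length) int IDs
            x = self.embed(flows)      # (batch, flow_length, embed_dim)
            _, (h_n, _) = self.lstm(x) # final hidden state summarizes the flow
            return self.head(h_n[-1])  # (batch, 2)

    # Toy usage: a batch of 4 flows, each a sequence of 20 transformations.
    model = FlowQoRRegressor()
    flows = torch.randint(0, 30, (4, 20))
    qor = model(flows)                 # predicted (delay, area) per flow

Under the paper's transfer-learning scheme, a model of this shape pre-trained on one technology would be fine-tuned on a small sample (100 points) from the target technology.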
Rubik's Optical Neural Networks: Multi-task Learning with Physics-aware Rotation Architecture
Recently, there have been increasing efforts to advance optical neural networks (ONNs), which offer significant advantages for machine learning (ML) in terms of power efficiency, parallelism, and computational speed. Given these benefits in computation speed and energy efficiency, there is significant interest in leveraging ONNs for medical sensing, security screening, drug detection, and autonomous driving. However, because reconfigurability is challenging to implement, deploying multi-task learning (MTL) algorithms on ONNs requires re-building and duplicating the physical diffractive systems, which significantly degrades energy and cost efficiency in practical application scenarios. This work presents a novel ONN architecture, namely RubikONNs, which utilizes the physical properties of optical systems to encode multiple feed-forward functions by physically rotating the hardware, similarly to rotating a Rubik's Cube. To optimize MTL performance on RubikONNs, two domain-specific physics-aware training algorithms, RotAgg and RotSeq, are proposed. Our experimental results demonstrate more than 4× improvements in energy and cost efficiency with marginal accuracy degradation compared to the state-of-the-art approaches.
Comment: To appear at the 32nd International Joint Conference on Artificial Intelligence (IJCAI'23).
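The core mechanism can be sketched in a few lines of NumPy: a single stack of diffractive phase masks is shared across tasks, and each task corresponds to a rotation state of the masks. The mask sizes, the simple FFT propagation step, and the intensity readout below are illustrative assumptions, not the paper's optical model or its RotAgg/RotSeq training algorithms.

    # Minimal sketch of the RubikONNs rotation idea: one physical mask stack,
    # multiple functions selected by 90-degree rotations of the masks.
    import numpy as np

    rng = np.random.default_rng(0)
    masks = [rng.uniform(0, 2 * np.pi, (28, 28)) for _ in range(2)]  # shared layers

    def forward(field, rotations):
        """Propagate a complex field through the stack; rotations[i] is the
        number of 90-degree turns applied to mask i for the selected task."""
        for mask, k in zip(masks, rotations):
            field = field * np.exp(1j * np.rot90(mask, k))  # rotated phase layer
            field = np.fft.fft2(field)                      # crude free-space step
        return np.abs(field) ** 2                           # detector intensity

    image = rng.uniform(0, 1, (28, 28)).astype(complex)
    task_a = forward(image, rotations=(0, 0))  # one rotation state per task
    task_b = forward(image, rotations=(1, 3))  # same hardware, different function

The point of the rotation encoding is that no weights are re-fabricated per task: the same trained masks realize different functions purely through their physical orientation.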
Verilog-to-PyG -- A Framework for Graph Learning and Augmentation on RTL Designs
The complexity of modern hardware designs necessitates advanced methodologies for optimizing and analyzing digital systems. In recent times, machine learning (ML) methodologies have emerged as potent instruments for assessing design quality-of-results at the Register-Transfer Level (RTL) or Boolean level, aiming to expedite design exploration of advanced RTL configurations. In this presentation, we introduce an innovative open-source framework that translates RTL designs into graph representations, which can be seamlessly integrated with the PyTorch Geometric graph learning platform. Furthermore, the Verilog-to-PyG (V2PYG) framework is compatible with the open-source Electronic Design Automation (EDA) toolchain OpenROAD, facilitating the collection of labeled datasets in a fully open-source manner. Additionally, we will present novel RTL data augmentation methods (incorporated in our framework) that enable functionally equivalent design augmentation for the construction of an extensive graph-based RTL design database. Lastly, we will showcase several use cases of V2PYG with detailed scripting examples. V2PYG can be found at https://yu-maryland.github.io/Verilog-to-PyG/.
Comment: 8 pages, International Conference on Computer-Aided Design (ICCAD'23).
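To make the target representation concrete, here is a minimal sketch of encoding a tiny netlist as a PyTorch Geometric graph, in the spirit of V2PYG. The one-hot operator encoding and the example netlist are illustrative assumptions; consult the V2PYG documentation for the framework's actual node/edge schema.

    # Minimal sketch: y = (a & b) | c as a PyTorch Geometric graph.
    import torch
    from torch_geometric.data import Data

    # Nodes: a, b, c, AND, OR, with one-hot operator-type features.
    node_types = ['input', 'input', 'input', 'and', 'or']
    type_vocab = {'input': 0, 'and': 1, 'or': 2}
    x = torch.nn.functional.one_hot(
        torch.tensor([type_vocab[t] for t in node_types]),
        num_classes=len(type_vocab),
    ).float()                                    # node features: (5, 3)

    # Directed edges follow signal flow: a->AND, b->AND, AND->OR, c->OR.
    edge_index = torch.tensor([[0, 1, 3, 2],
                               [3, 3, 4, 4]])    # shape (2, num_edges)

    graph = Data(x=x, edge_index=edge_index)
    print(graph)  # Data(x=[5, 3], edge_index=[2, 4])

A graph in this form can be fed directly to any PyTorch Geometric model (e.g., a GNN predicting design quality-of-results), which is what makes the RTL-to-graph translation the natural interface between EDA tools and graph learning.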