
    A Physical Unclonable Function Based on Inter-Metal Layer Resistance Variations and an Evaluation of its Temperature and Voltage Stability

    Keying material for encryption is stored as digital bitstrings in non-volatile memory (NVM) on FPGAs and ASICs in current technologies. However, secrets stored this way are not secure against a determined adversary, who can use probing attacks to steal the secret. Physical Unclonable Functions (PUFs) have emerged as an alternative. PUFs leverage random manufacturing variations as the source of entropy for generating random bitstrings, and incorporate an on-chip infrastructure for measuring and digitizing the corresponding variations in key electrical parameters, such as delay or voltage. PUFs are designed to reproduce a bitstring on demand and therefore eliminate the need for on-chip storage. In this dissertation, I propose a PUF that measures resistance variations in the inter-metal layers that define the power grid of the chip, and I evaluate its temperature and voltage stability. First, I introduce two implementations of a power grid-based PUF (PG-PUF). Then, I analyze the quality of bitstrings generated, without considering environmental variations, from PG-PUFs that leverage resistance variations in: 1) the power grid metal wires of 60 copies of a 90 nm chip and 2) the power grid metal wires of 58 copies of a 65 nm chip. Next, I carry out a series of experiments on a set of 63 chips in IBM's 90 nm technology at 9 TV corners, i.e., over all combinations of three temperatures (-40°C, 25°C, and 85°C) and three voltages (nominal and ±10% of the nominal supply voltage). The randomness, uniqueness, and stability characteristics of bitstrings generated from the PG-PUFs are evaluated. The stability of the PG-PUF and of an on-chip voltage-to-digital converter (VDC) is also evaluated at the 9 temperature-voltage corners. I introduce several techniques that have not been previously described, including a mechanism to eliminate voltage trends or 'bias' in the power grid voltage measurements, as well as a voltage threshold, triple-module redundancy (TMR), and a majority-voting scheme to identify and exclude unstable bits.
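As a rough illustration of the stability filtering described above, the sketch below derives each bit from the sign of a pairwise voltage comparison, discards bits whose margin falls below a threshold, and majority-votes across redundant TMR-style measurements. All function names and the threshold value are illustrative assumptions; the dissertation's actual scheme is implemented on-chip.

```python
def generate_bit(v_a, v_b, threshold):
    """Derive one PUF bit from a pair of power-grid voltage measurements.
    Returns None when the margin is below the threshold, flagging the
    bit as unstable so it can be excluded (illustrative only)."""
    margin = v_a - v_b
    if abs(margin) < threshold:
        return None          # too close to the decision boundary
    return 1 if margin > 0 else 0

def tmr_majority(samples, threshold):
    """Majority vote over an odd number of redundant (TMR-style)
    measurements of the same comparison."""
    bits = [generate_bit(a, b, threshold) for a, b in samples]
    if any(b is None for b in bits):
        return None          # any weak copy disqualifies the bit
    return 1 if sum(bits) > len(bits) // 2 else 0

# Example: three redundant measurements of one comparison (volts).
samples = [(1.02, 0.98), (1.03, 0.97), (1.01, 0.99)]
print(tmr_majority(samples, threshold=0.01))  # -> 1
```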

    FAST PHYSICAL OBJECT IDENTIFICATION BASED ON UNCLONABLE FEATURES AND SOFT FINGERPRINTING

    In this paper we advocate a new technique for the fast identification of physical objects based on their physical unclonable features (surface microstructures). The proposed identification method is based on soft fingerprinting and consists of two stages: at the first stage, a list of possible candidates is estimated from the most reliable bits of a soft fingerprint; at the second stage, traditional maximum likelihood decoding is applied to the obtained list to find the single best match. The soft fingerprint is computed from random projections with a sign-magnitude decomposition of the projected coefficients, and the estimate of a bit's reliability is deduced directly from the observed coefficients. We investigate different decoding strategies for estimating the list of candidates that minimize the probability that the correct index is missing from the list. The obtained results show the flexibility of the proposed identification method in providing a performance-complexity trade-off. Index Terms — Identification, computational complexity, fingerprint, Physical Unclonable Function (PUF)
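A minimal sketch of the two-stage scheme, under stated assumptions: enrolled objects are rows of a feature matrix, the soft fingerprint is the sign/magnitude decomposition of random projections, the candidate list is formed by matching only the most reliable bits, and the final decision minimizes Hamming distance over the list as a simple stand-in for the paper's maximum likelihood decoding. All names, list sizes, and parameters are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def soft_fingerprint(x, P):
    """Random projections with sign-magnitude decomposition: signs give
    the binary fingerprint, magnitudes give per-bit reliabilities."""
    y = P @ x
    return (y > 0).astype(int), np.abs(y)

def identify(probe, database, P, n_reliable=16, list_size=5):
    bits_p, rel_p = soft_fingerprint(probe, P)
    idx = np.argsort(rel_p)[-n_reliable:]          # most reliable bits
    # Stage 1: shortlist candidates that agree on the reliable bits.
    scores = [np.sum(bits_p[idx] != soft_fingerprint(d, P)[0][idx])
              for d in database]
    shortlist = np.argsort(scores)[:list_size]
    # Stage 2: minimum Hamming distance over the full fingerprint
    # (a simple proxy for maximum likelihood decoding).
    return min(shortlist,
               key=lambda i: np.sum(bits_p != soft_fingerprint(database[i], P)[0]))

# Example: 100 enrolled objects, 64-dim features, 32 projections.
database = rng.normal(size=(100, 64))
P = rng.normal(size=(32, 64))
probe = database[42] + 0.1 * rng.normal(size=64)   # noisy observation
print(identify(probe, database, P))                # -> 42 (ideally)
```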

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PVs) are growing more severe with shrinking geometries and increasing die size. Escalating variations in delay and power with reductions in feature size place higher demands on the accuracy of variation models, whose availability can be used to improve yield and, correspondingly, the profitability and product quality of fabricated integrated circuits (ICs). Sources of within-die variation include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of PV models and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by 'neighborhood' interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (ROs). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlation). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion and with improved accuracy. Design-for-manufacturability (DFM) analysis is performed on 90 nm ASIC chips and 28 nm Zynq 7000-series FPGA boards. I present ASIC results on within-die path delay variations in a five-stage-pipeline floating-point unit (FPU) fabricated in IBM's 90 nm technology, used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data are also analyzed for path delay variations in short versus long paths. FPGA results on within-die and die-to-die variations in an Advanced Encryption Standard (AES) implementation with a single pipeline stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short- versus long-path variations, and mid-length-path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of a signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated to the launch-capture (LC) interval used to time them; therefore, the calibration proposed in this work must be carried out to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary up to 35% on average, while long paths vary up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes are reduced to 20% and 5%, respectively. The magnitude of delay variation in both analyses increases as temperature and voltage are changed to increase performance. The high level of within-die delay variation is undesirable from a design perspective, but it represents a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bitstrings for these types of applications, including hardware security and trust. The Zynq FPGA study of die-to-die and within-die variations shows that within-die variation averages 5% and that die-to-die variation can range up to 3 ns; the die-to-die variations can be explored in further detail to study their spatial dependence.
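To make the launch-capture timing idea concrete, here is a simplified software model. It is purely illustrative: REBEL itself is an on-chip hardware structure, and the sweep procedure, function names, and calibration step below are assumptions about how LC-interval-based timing can be modeled, not the dissertation's design. The LC interval is swept downward until the path fails to capture, the smallest passing interval serves as the digitized delay, and a reference-path offset is subtracted to mirror the calibration requirement noted above.

```python
def measure_path_delay(path_passes, lc_intervals):
    """Sweep launch-capture (LC) intervals from longest to shortest and
    return the smallest interval at which the path still captures the
    correct value -- a simple model of digitizing a path delay.

    path_passes(interval) -> True if the transition is captured in time.
    """
    smallest_pass = None
    for interval in sorted(lc_intervals, reverse=True):
        if path_passes(interval):
            smallest_pass = interval
        else:
            break
    return smallest_pass

def calibrate(raw_delay, reference_delay, nominal_reference):
    """Remove a systematic timing bias using a reference path measured
    on the same die (illustrative calibration step)."""
    return raw_delay - (reference_delay - nominal_reference)

# Example: a path with a true delay of 1.37 ns, timed on a 50 ps grid.
true_delay = 1.37
grid = [i * 0.05 for i in range(10, 40)]          # 0.5 ns .. 1.95 ns
raw = measure_path_delay(lambda t: t >= true_delay, grid)
print(round(calibrate(raw, reference_delay=1.02, nominal_reference=1.00), 3))
```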
Additionally, I carried out research in data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets execution-time requirements with minimal resource usage in terms of area. The motivation for this work is that ever-larger data sets have created abundant opportunities for algorithmic and architectural developments and data-mining innovations, and thus great demand for faster execution of these algorithms through improved execution time and resource utilization. Decision trees (DTs) have traditionally been implemented in software. Although software implementations of DTC are highly accurate, their execution times and resource utilization still require improvement to meet the computational demands of an ever-growing industry. Hardware implementations of DTs, on the other hand, have not been thoroughly investigated or reported in detail. I therefore propose a pipelined hardware accelerator in which parallel engines work independently on different partitions of the data; each engine also processes its data in a pipelined fashion to utilize resources more efficiently and reduce the time needed to process all of the records/tuples. Experimental results show that the proposed hardware acceleration of classification increases throughput by reducing the number of clock cycles required to process the data and generate results, and it requires minimal resources, making it area-efficient. The architecture also enables the algorithm to scale to increasingly large and complex data sets. We developed the DTC algorithm in detail and successfully explored techniques for adapting it to a hardware implementation. The system is 3.5 times faster than the existing hardware implementation of classification.
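A software sketch of the axis-parallel classification the accelerator implements, under stated assumptions (the actual design is a pipelined hardware architecture; the class names and partitioning scheme here are illustrative): each node compares a single feature against a threshold, and independent engines are emulated by classifying disjoint partitions of the records, analogous to the parallel engines described above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    # Axis-parallel test: go left if record[feature] <= threshold.
    feature: int = 0
    threshold: float = 0.0
    label: Optional[int] = None        # set only on leaf nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def classify(tree, record):
    """Walk the tree with one single-feature comparison per level --
    the operation a pipelined hardware engine performs per stage."""
    node = tree
    while node.label is None:
        node = node.left if record[node.feature] <= node.threshold else node.right
    return node.label

def classify_partitioned(tree, records, n_engines=4):
    """Emulate independent engines by splitting the records into
    disjoint partitions; in hardware these run concurrently."""
    partitions = [records[i::n_engines] for i in range(n_engines)]
    return [[classify(tree, r) for r in part] for part in partitions]

# Example: a depth-2 tree over 2-feature records, two emulated engines.
tree = Node(feature=0, threshold=0.5,
            left=Node(label=0),
            right=Node(feature=1, threshold=0.3,
                       left=Node(label=1), right=Node(label=2)))
records = [(0.2, 0.9), (0.8, 0.1), (0.7, 0.6), (0.4, 0.4)]
print(classify_partitioned(tree, records, n_engines=2))  # -> [[0, 2], [1, 0]]
```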