
    FuncTeller: How Well Does eFPGA Hide Functionality?

    Hardware intellectual property (IP) piracy is an emerging threat to the global supply chain. Correspondingly, various countermeasures, such as logic locking, camouflaging, and split manufacturing, aim to protect hardware IPs. However, these countermeasures cannot always guarantee IP security: a malicious attacker can access the layout/netlist of the hardware IP protected by these countermeasures and further retrieve the design. To eliminate/bypass these vulnerabilities, a recent approach redacts the design's IP to an embedded field-programmable gate array (eFPGA), disabling the attacker's access to the layout/netlist. eFPGAs can be programmed with arbitrary functionality; without the bitstream, the attacker cannot recover the functionality of the protected IP. Consequently, state-of-the-art attacks cannot pirate the redacted hardware IP. In this paper, we challenge the assumed security of eFPGA-based redaction. We present an attack to retrieve the hardware IP with only black-box access to a programmed eFPGA. We observe the effect of modern electronic design automation (EDA) tools on practical hardware circuits and leverage this observation to guide our attack: our proposed method, FuncTeller, selects which minterms to query, recovering the circuit's function within a reasonable time. We demonstrate the effectiveness and efficiency of FuncTeller on multiple circuits, including academic benchmark circuits, the Stanford MIPS processor, the IBEX processor, the Common Evaluation Platform GPS, and Cybersecurity Awareness Worldwide competition circuits. Our results show that FuncTeller achieves an average accuracy greater than 85% in retrieving the designs' functionality across the tested circuits.
    Comment: To be published in the proceedings of the 32nd USENIX Security Symposium, 2023.
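    As a minimal illustration of the black-box query model described above (not FuncTeller's actual minterm-selection heuristic, which is guided by observations about EDA tools), the Python sketch below treats the programmed eFPGA as an opaque oracle and accumulates a partial truth table from chosen minterm queries. The oracle here is a hypothetical stand-in.

        from itertools import product

        def query_minterms(oracle, num_inputs, minterms):
            """Query a black-box combinational circuit on selected minterms.

            oracle      -- callable standing in for the programmed eFPGA: takes a
                           tuple of input bits, returns the output bit (hypothetical)
            num_inputs  -- number of primary inputs of the redacted module
            minterms    -- iterable of input assignments (tuples of 0/1) to query
            """
            partial_truth_table = {}
            for assignment in minterms:
                assert len(assignment) == num_inputs
                partial_truth_table[assignment] = oracle(assignment)
            return partial_truth_table

        # Toy stand-in oracle: a 3-input majority function (illustration only).
        majority = lambda bits: int(sum(bits) >= 2)

        # Querying all 2^3 minterms recovers the function exactly; an attack like
        # FuncTeller instead selects a subset of minterms to keep run time practical.
        table = query_minterms(majority, 3, product((0, 1), repeat=3))
        print(table)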

    Design for pre-bond testability in 3D integrated circuits

    In this dissertation we propose several DFT techniques specific to 3D stacked IC systems. The goal has explicitly been to create techniques that integrate easily with existing IC test systems. Specifically, this means utilizing scan- and wrapper-based techniques, two foundations of the digital IC test industry. First, we describe a general test architecture for 3D ICs. In this architecture, each tier of a 3D design is wrapped in test control logic that both manages tier testing pre-bond and integrates the tier into the larger test architecture post-bond. We describe a new kind of boundary scan to provide the necessary test control and observation of the partial circuits, and we propose a new design methodology for test hardcore that ensures both pre-bond functionality and post-bond optimality. We present the application of these techniques to the 3D-MAPS test vehicle, which has proven their effectiveness. Second, we extend these DFT techniques to circuit-partitioned designs. We find that boundary scan design is generally sufficient, but that some 3D designs require special DFT treatment. Most importantly, we demonstrate that the functional partitioning inherent in 3D design can potentially decrease the total test cost of verifying a circuit. Third, we present a new CAD algorithm for designing 3D test wrappers. This algorithm co-designs the pre-bond and post-bond wrappers to simultaneously minimize test time and routing cost. On average, our algorithm utilizes over 90% of the wires in both the pre-bond and post-bond wrappers. Finally, we look at the 3D vias themselves to develop a low-cost, high-volume pre-bond test methodology appropriate for production-level test. We describe the shorting-probes methodology, wherein large test probes are used to contact multiple small 3D vias. This technique is an all-digital test method that integrates seamlessly into existing test flows. Our experimental results demonstrate two key facts: neither the large capacitance of the probe tips nor the process variation in the 3D vias and the probe tips significantly hinders the testability of the circuits. Taken together, this body of work defines a complete methodology for testing 3D ICs pre-bond, eliminating one of the key hurdles to the commercialization of 3D technology.
    Ph.D. dissertation. Committee Chair: Lee, Hsien-Hsin; Committee Members: Bakir, Muhannad; Lim, Sung Kyu; Vuduc, Richard; Yalamanchili, Sudhakar.
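    As a rough illustration of the wrapper-design problem the third contribution addresses, the Python sketch below balances a core's internal scan chains across a fixed number of wrapper chains with a greedy longest-first heuristic and estimates the resulting scan test time. This is a textbook-style simplification, not the dissertation's pre-bond/post-bond co-design algorithm, and the chain lengths are invented for the example.

        def design_wrapper_chains(scan_chain_lengths, num_wrapper_chains):
            """Greedily assign core scan chains (longest first) to wrapper chains,
            balancing total lengths to minimise the scan shift bottleneck."""
            wrapper = [[] for _ in range(num_wrapper_chains)]
            loads = [0] * num_wrapper_chains
            for length in sorted(scan_chain_lengths, reverse=True):
                i = loads.index(min(loads))   # currently shortest wrapper chain
                wrapper[i].append(length)
                loads[i] += length
            return wrapper, max(loads)

        def scan_test_time(max_chain_len, num_patterns):
            # Standard scan-test time estimate: shift each pattern in, capture,
            # shift out (shift-out overlapped with the next shift-in).
            return (1 + max_chain_len) * num_patterns + max_chain_len

        chains, longest = design_wrapper_chains([32, 16, 16, 8, 8, 4],
                                                num_wrapper_chains=2)
        print(chains, scan_test_time(longest, num_patterns=100))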

    An innovative two-stage data compression scheme using adaptive block merging technique

    Test data volume has increased enormously owing to the rising on-chip complexity of integrated circuits, inflating test data transportation time and tester memory requirements. Non-correlated test bits further aggravate test power. This paper presents a two-stage, block-merging-based test data minimization scheme which reduces test bits, test time, and test power. The test data is partitioned into blocks of fixed size which are compressed using a two-stage encoding technique. In stage one, successive compatible blocks are merged to retain a single representative block. In stage two, the retained pattern block is further encoded based on ten different subcases that can hold between the two sub-blocks formed by splitting it into halves. Non-compatible blocks are also split into two sub-blocks and encoded using fewer bits where possible. A decompression architecture to retrieve the original test data is presented. Simulation results on ISCAS'89 benchmark circuits demonstrate its effectiveness in achieving better compression.
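    A minimal Python sketch of the stage-one merging idea, assuming the usual don't-care compatibility rule for test cubes ('X' matches anything); the ten stage-two sub-block subcases are defined in the paper and are not reproduced here.

        def merge(a, b):
            """Merge two test-data blocks over {'0', '1', 'X'} (X = don't-care).
            Returns the merged representative block, or None if incompatible."""
            out = []
            for x, y in zip(a, b):
                if x == 'X':
                    out.append(y)
                elif y == 'X' or x == y:
                    out.append(x)
                else:
                    return None          # conflicting specified bits
            return ''.join(out)

        def stage_one(blocks):
            """Greedily merge runs of successive compatible blocks, keeping one
            representative per run together with a run-length count."""
            merged = []
            rep, count = blocks[0], 1
            for blk in blocks[1:]:
                m = merge(rep, blk)
                if m is not None:
                    rep, count = m, count + 1
                else:
                    merged.append((rep, count))
                    rep, count = blk, 1
            merged.append((rep, count))
            return merged

        print(stage_one(['1X0X', '1100', 'X1X0', '0011']))
        # -> [('1100', 3), ('0011', 1)]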

    Reliable Design of Three-Dimensional Integrated Circuits


    Evaluation of reaction kinetics and material properties of cementitious ceramic materials using ultrasonic velocity and attenuation measurements

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Ultrasonic velocity and attenuation measurements have been used to characterise a range of phosphate bonded, alumina filled, magnesia ceramics and other ceramic materials... Measurements were made over a frequency range of 50 kHz to 10 MHz, using a variety of commercial probes, equipment, and techniques. An ultrasonic double-probe method was used to monitor the setting process of the cementitious ceramics, using commercial 2.25 MHz and 2 MHz transducers for compressional and shear wave modes, respectively, in samples with alumina content in the range 0-60 wt%. The elastic properties of the material were determined from ultrasonic velocity measurements and were found to depend on the filler volume fraction. The measured elastic moduli were found to increase as porosity decreased, an effect that might be used to estimate porosity. The composition dependence of the elastic moduli is compared with the Hashin and Shtrikman theoretical bounds for the elastic moduli of two-phase materials. All data lie between these bounds, suggesting that the alumina particles were well dispersed and well bonded to the matrix. However, the fact that the data lie slightly above the lower bound suggests that the particles are not spherical; this, together with other evidence obtained from an analysis of reaction rates, indicates the predominance of plate-like grain structures.
    Science and Engineering Research Council, Central Research Laboratories, and BA Chemicals Ltd
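    For context, the elastic constants referred to above follow from the measured compressional velocity v_p, shear velocity v_s, and density rho of an isotropic sample via the standard relations below; the last line gives the Hashin-Shtrikman lower bound on the bulk modulus of a two-phase material in the usual notation (phase 1 the more compliant phase, f_i the volume fractions; swapping the indices gives the upper bound). These are textbook relations, not formulas specific to this thesis.

        \begin{align}
          G &= \rho v_s^{2}, &
          K &= \rho\left(v_p^{2} - \tfrac{4}{3} v_s^{2}\right), &
          E &= \frac{9KG}{3K + G}, \\
          K^{\mathrm{HS}-} &= K_1 + \frac{f_2}{\left(K_2 - K_1\right)^{-1}
                + f_1\left(K_1 + \tfrac{4}{3} G_1\right)^{-1}}.
        \end{align}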

    Automated Debugging Methodology for FPGA-based Systems

    Electronic devices make up a vital part of our lives, from mobile phones and laptops to computers and home automation systems. Modern designs comprise billions of transistors. With this evolution, however, ensuring that devices fulfil the designer's expectations under variable conditions has become a great challenge, requiring substantial design time and effort; whenever an error is encountered, the process is restarted. It is therefore desirable to minimize the number of spins required to achieve an error-free product, as each spin costs time and effort. Software-based simulation is the main technique for verifying a design before fabrication. However, some design errors (bugs) are likely to escape the simulation process and subsequently appear during the post-silicon phase. Finding such bugs is time-consuming due to the inherent invisibility of the hardware. Rather than simulating the design in software during the pre-silicon phase, post-silicon techniques permit designers to verify functionality on physical implementations of the design. The main benefit of this methodology is that the implemented design runs many orders of magnitude faster post-silicon than its pre-silicon counterpart, allowing designers to validate their design more exhaustively.

    This thesis presents five main contributions toward a fast and automated debugging solution for reconfigurable hardware. Throughout, an obstacle avoidance system for robotic vehicles is used as a case study to illustrate how to apply the proposed debugging solution in practical environments. The first contribution is a debugging system capable of providing a lossless trace of debugging data that permits cycle-accurate replay, ensuring that both permanent and intermittent errors in the implemented design are captured. This contribution also describes a solution to enhance hardware observability: utilizing processor-configurable concentration networks, employing debug data compression to transmit the data more efficiently, and partially reconfiguring the debugging system at run-time to avoid design re-compilation and preserve timing closure. The second contribution presents a solution for communication-centric designs, together with solutions for designs with multiple clock domains. The third contribution presents a priority-based signal selection methodology to identify the signals most helpful during the debugging process, along with a connectivity generation tool that maps the identified signals to the debugging system. The fourth contribution presents an automated error detection solution which captures permanent as well as intermittent errors without continuous monitoring of debugging data; the proposed solution works even in the absence of a golden reference. The fifth contribution applies artificial intelligence to post-silicon debugging: we present a novel idea of using a recurrent neural network for debugging when a golden reference is available to train the network, and we extend the idea to designs where no golden reference is present.
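    A minimal Python sketch of the golden-reference comparison underlying automated error detection, assuming a hypothetical trace format (one signal-to-value mapping per captured clock cycle); the thesis's detection without a golden reference and its RNN-based approach are not reproduced here.

        def detect_errors(trace, golden, signals):
            """Compare a cycle-accurate debug trace against a golden reference.
            trace/golden: lists of dicts, one per clock cycle, signal -> value.
            Returns (cycle, signal) pairs where the captured data diverges."""
            mismatches = []
            for cycle, (got, exp) in enumerate(zip(trace, golden)):
                for sig in signals:
                    if got[sig] != exp[sig]:
                        mismatches.append((cycle, sig))
            return mismatches

        # Invented example: an intermittent bit-flip on 'steer' in cycle 1.
        golden = [{'dist': 10, 'steer': 0}, {'dist': 8, 'steer': 1}]
        trace  = [{'dist': 10, 'steer': 0}, {'dist': 8, 'steer': 0}]
        print(detect_errors(trace, golden, ['dist', 'steer']))  # [(1, 'steer')]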

    Workshop on Advanced Technologies for Planetary Instruments, part 1

    This volume contains papers presented at the Workshop on Advanced Technologies for Planetary Instruments, held 28-30 April 1993. The meeting was conceived in response to new challenges facing NASA's robotic solar system exploration program. Over the past several years, SDIO has sponsored a significant technology development program aimed, in part, at the production of instruments with the required characteristics. This workshop provided an opportunity for specialists from the planetary science and DoD communities to establish contacts, to explore common technical ground in an open forum, and, more specifically, to discuss the applicability of SDIO's technology base to planetary science instruments.

    The contour tree image encoding technique and file format

    The process of contourization, which converts a raster image into a discrete set of plateaux or contours, is presented. These contours can be grouped into a hierarchical structure, called a contour tree, that defines total spatial inclusion. A contour coder has been developed which fully describes these contours in a compact and efficient manner and forms the basis of an image compression method. The contour tree can be simplified by merging contour tree nodes, lowering the tree's entropy; the contour coder exploits this to increase the image compression ratio. By applying general and simple rules derived from physiological experiments on the human vision system, lossy image compression can be achieved which minimises noticeable artifacts in the simplified image. The contour merging technique offers a lossy compression system complementary to the QDCT (Quantised Discrete Cosine Transform): the artifacts introduced by the two methods are very different. QDCT produces a general blurring and adds extra highlights in the form of overshoots, whereas contour merging sharpens edges, reduces highlights, and introduces a degree of false contouring. A format based on the contourization technique, called the contour tree image format, is defined which caters for most image types. Image operations performed directly on this compressed format have been studied; for certain manipulations these offer significant speed increases over a standard raster image format. A couple of operations specific to the contour tree format are presented, illustrating features of the new format.
    Science and Engineering Research Council
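    A minimal Python sketch of the plateau-extraction step of contourization, labelling maximal 4-connected regions of equal pixel value; grouping the plateaux into the hierarchical contour tree and the contour coder itself are beyond this sketch.

        from collections import deque

        def contourize(img):
            """Label the plateaux (maximal 4-connected regions of equal pixel
            value) of a raster image given as a list of lists of pixel values.
            Returns a same-shaped array of region labels."""
            h, w = len(img), len(img[0])
            labels = [[None] * w for _ in range(h)]
            next_label = 0
            for y in range(h):
                for x in range(w):
                    if labels[y][x] is not None:
                        continue
                    # BFS flood fill over 4-connected pixels of identical value
                    queue, value = deque([(y, x)]), img[y][x]
                    labels[y][x] = next_label
                    while queue:
                        cy, cx = queue.popleft()
                        for ny, nx in ((cy-1, cx), (cy+1, cx),
                                       (cy, cx-1), (cy, cx+1)):
                            if 0 <= ny < h and 0 <= nx < w \
                               and labels[ny][nx] is None \
                               and img[ny][nx] == value:
                                labels[ny][nx] = next_label
                                queue.append((ny, nx))
                    next_label += 1
            return labels

        img = [[5, 5, 7],
               [5, 7, 7],
               [3, 3, 7]]
        print(contourize(img))  # [[0, 0, 1], [0, 1, 1], [2, 2, 1]]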