
    Random access memory testing: theory and practice: the gains of fault modelling

    Secure Split Test for Preventing IC Piracy by Un-Trusted Foundry and Assembly

    In the era of globalization, integrated circuit design and manufacturing are spread across different continents. This has posed several hardware-intrinsic security issues, related to overproduction of chips without the knowledge of the designer or OEM, insertion of hardware Trojans at the design and fabrication phases, faulty chips getting into markets from test centers, etc. In this thesis work, we have addressed the problem of counterfeit ICs getting into the market through test centers. The problem of counterfeit ICs has different dimensions, and each counterfeiting problem has different solutions. Overbuilding of chips at an overseas foundry can be addressed using passive or active metering. The solution to avoid faulty chips getting into open markets from overseas test centers is the secure split test (SST). A further improvement to SST proposed by other researchers is known as the Connecticut Secure Split Test (CSST). In this work, we focus on improvements to the CSST technique in terms of security, test time, and area. In this direction, we have designed all the sub-blocks required for the CSST architecture, namely the RSA, TRNG, and scrambler blocks, studied benchmark circuits such as S38417, and added scan chains to the benchmarks. Further, as a security measure, we add an XOR gate at the output of the scan chains to obfuscate the signal coming out of them. We have also improved the security of the design by using a PUF circuit instead of the TRNG, avoiding the use of memory circuits. The use of a PUF not only eliminates the memory circuits but also provides a way to perform functional testing. We have carried out a Hamming distance analysis for the introduced security measure, and the results show that the security of the design is reasonably good. As future work, we can focus on developing a circuit that is secure across the whole semiconductor supply chain, with a reasonable Hamming distance and low area overhead.
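
    As an illustration of the Hamming distance analysis mentioned above, the following minimal Python sketch (a behavioral model only, not the thesis RTL; the bit widths, random inputs, and function names are illustrative assumptions) XOR-obfuscates a scan-chain output stream with two different key streams, e.g. two PUF responses, and reports the Hamming distance between the obfuscated responses; a value near 50% of the response length indicates good obfuscation.

        # Minimal sketch (illustrative, not the thesis design): XOR obfuscation
        # of a scan-chain output and a Hamming-distance check between two
        # obfuscated responses.
        import random

        def xor_obfuscate(scan_bits, key_bits):
            """XOR each scan-out bit with a key bit (the key repeats if shorter)."""
            return [b ^ key_bits[i % len(key_bits)] for i, b in enumerate(scan_bits)]

        def hamming_distance(a, b):
            """Count the bit positions in which two equal-length responses differ."""
            return sum(x != y for x, y in zip(a, b))

        random.seed(0)
        scan_out = [random.randint(0, 1) for _ in range(128)]   # raw scan-chain dump
        key_a    = [random.randint(0, 1) for _ in range(32)]    # e.g. PUF response A
        key_b    = [random.randint(0, 1) for _ in range(32)]    # e.g. PUF response B

        resp_a = xor_obfuscate(scan_out, key_a)
        resp_b = xor_obfuscate(scan_out, key_b)

        hd = hamming_distance(resp_a, resp_b)
        print(f"Hamming distance: {hd}/{len(resp_a)} ({100 * hd / len(resp_a):.1f}%)")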

    Organic ferroelectric diodes

    Optical Nanofibers for Multiphoton Processes and Selective Mode Interactions with Rubidium

    Optical nanofibers (ONFs) interfaced with atoms have found numerous applications in the development of quantum technologies. They feature a strong evanescent field at their waist, thereby providing an intense and tightly focused beam over long distances. This can be used to achieve strong interactions between light and matter, enabling trapping, probing, and control of atoms along the waist. However, little experimental work has been done with the higher-order fiber guided modes (HOM). These feature inhomogeneous polarization distributions around the waist, and some carry more than ℏ of angular momentum (AM). Owing to the intense field gradient in their evanescent field, ONFs make excellent platforms to excite quadrupole-allowed transitions, which could be used to store high-density information encoded on the AM of guided light. We predicted a transition probability up to 6 times stronger than for free-space beams using the fundamental mode and up to 4 times stronger using linearly polarized HOMs. We also studied a single-color, two-photon transition at 993 nm between the 5S1/2 and 6S1/2 atomic levels in a hot rubidium vapor and showed its suitability as a frequency reference. We experimentally verified the particular selection rules for this transition and showed that they may be used to characterize the polarization at the waist of an ONF embedded in a cloud of atoms formed by a magneto-optical trap. Finally, we developed a method to generate HOM-like beams in free space, inject them into an ONF, and decompose the modal excitation at the output via a transfer matrix calculation. This approach, combined with absorption of guided light by cold atoms, may be used to infer the mode excitation at the waist and allow us to selectively excite HOMs. Okinawa Institute of Science and Technology Graduate University

    Improving Reliability and Performance of NAND Flash Based Storage System

    High seek and rotation overhead of the magnetic hard disk drive (HDD) motivates the development of storage devices that can offer good random performance. As an alternative technology, NAND flash memory demonstrates low power consumption, microsecond-order access latency, and good scalability. Thanks to these advantages, NAND flash based solid state disks (SSD) show many promising applications in enterprise servers. With the multi-level cell (MLC) technique, the per-bit fabrication cost is reduced, and the low production cost enables NAND flash memory to extend its application to consumer electronics. Despite these advantages, limited memory endurance, long data protection latency, and write amplification continue to be the major challenges in the design of NAND flash storage systems. The limited memory endurance and long data protection latency issues derive from memory bit errors. A high bit error rate (BER) severely impairs data integrity and reduces memory endurance. The limited endurance is a major obstacle to applying NAND flash memory in applications with high reliability requirements. To protect data integrity, hard-decision error correction codes (ECC) such as Bose-Chaudhuri-Hocquenghem (BCH) are employed. However, the hardware cost becomes prohibitive with increasing BER when BCH ECC is employed to extend system lifetime. To extend system lifespan without high hardware cost, we have proposed a data pattern aware (DPA) error prevention system design. DPA realizes BER reduction by minimizing the occurrence of data patterns vulnerable to high BER, using simple linear feedback shift register circuits. Experimental results show that DPA can increase the system lifetime by up to 4× with marginal hardware cost. With the technology node scaling down to 2X nm, BER increases up to 0.01. Hard-decision ECCs and DPA are no longer sufficient to guarantee data integrity, due to either prohibitively high hardware cost or high storage overhead. Soft-decision ECC, such as the low-density parity check (LDPC) code, has been introduced to provide more powerful error correction capability. However, LDPC code demands extra memory sensing operations, directly leading to long read latency. To reduce the LDPC-induced read latency without adverse impact on system reliability, we have proposed the FlexLevel NAND flash storage system design. The FlexLevel design reduces BER by broadening the noise margin via threshold voltage (Vth) level reduction. Under relatively low BER, no extra sensing level is required, and therefore read performance can be improved. To balance the capacity loss induced by Vth level reduction against the read speedup, the FlexLevel design identifies the data with high LDPC overhead and performs Vth reduction only on these data. Experimental results show that, compared with the best existing works, the proposed design achieves up to 11% read speedup with negligible capacity loss. Write amplification is a major cause of performance and endurance degradation of NAND flash based storage systems. In the object-based NAND flash device (ONFD), write amplification partially results from onode partial updates and cascading updates. An onode partial update over-writes only part of the data of a NAND flash page and incurs unnecessary data migration of the un-updated data. A cascading update is an update to object metadata in a cascading manner due to object data update or migration. Even though only several bytes in the object metadata are updated, one or more pages have to be re-written, significantly degrading write performance. To minimize the write operations incurred by onode partial updates and cascading updates, we have proposed a Data Migration Minimizing (DMM) device design. The DMM device incorporates 1) a multi-level garbage collection technique to minimize the unnecessary data migration of onode partial updates and 2) a virtual B+ tree and diff cache to reduce the write operations incurred by cascading updates. The experimental results demonstrate that the DMM device can offer up to 20% write reduction compared with the best state-of-the-art works.
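
    The data-pattern-aware idea above relies on a linear feedback shift register to randomize written data so that BER-prone patterns occur less often. The sketch below is a minimal software model of that scrambling step, assuming a generic 16-bit Fibonacci LFSR; the taps, seed, and page size are illustrative placeholders, not the published DPA design.

        # Minimal sketch: XOR page data with an LFSR-generated pseudo-random
        # sequence; applying the same stream twice restores the original data.
        def lfsr_stream(seed, taps=(15, 13, 12, 10), width=16):
            """Fibonacci LFSR; yields one pseudo-random bit per step."""
            state = seed & ((1 << width) - 1)
            while True:
                bit = 0
                for t in taps:
                    bit ^= (state >> t) & 1
                state = ((state << 1) | bit) & ((1 << width) - 1)
                yield bit

        def scramble(page_bits, seed=0xACE1):
            """XOR each data bit with the LFSR stream (same seed descrambles)."""
            gen = lfsr_stream(seed)
            return [b ^ next(gen) for b in page_bits]

        page = [1] * 64 + [0] * 64            # a highly regular, disturbance-prone pattern
        scrambled = scramble(page)
        assert scramble(scrambled) == page    # descrambling recovers the original data
        print(sum(scrambled), "ones out of", len(scrambled))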

    Direct-Write Ion Beam Lithography

    Patterning with a focused ion beam (FIB) is an extremely versatile fabrication process that can be used to create microscale and nanoscale designs on the surface of practically any solid sample material. Based on the type of ion-sample interaction utilized, FIB-based manufacturing can be both subtractive and additive, even in the same processing step. Indeed, the capability of easily creating three-dimensional patterns and shaping objects by milling and deposition is probably the most recognized feature of ion beam lithography (IBL) and micromachining. However, there exist several other techniques, such as ion implantation- and ion damage-based patterning and surface functionalization processes, that have emerged as valuable additions to the nanofabrication toolkit and that are less widely known. While fabrication throughput is, in general, arguably low due to the serial nature of the direct-writing process, speed is not necessarily a problem in those IBL applications that work with small ion doses. Here we provide a comprehensive review of ion beam lithography in general and a practical guide to the individual IBL techniques developed to date. Special attention is given to applications in nanofabrication.

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. Escalation in the variations in delay and power with reductions in feature size places higher demands on the accuracy of variation models. Their availability can be used to improve yield, and the corresponding profitability and product quality of the fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PVs are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of models of PVs and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by 'neighborhood' interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (RO). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlations). In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion and whose architecture measures path delays more accurately. Design-for-manufacturability (DFM) analysis is done on 90 nm ASIC chips and on 28 nm Zynq 7000 series FPGA boards. I present ASIC results on within-die path delay variations in a floating-point unit (FPU) with 5 pipeline stages, fabricated in IBM's 90 nm technology and used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data has also been analyzed for path delay variations in short versus long paths. FPGA results on within-die variation and die-to-die variation for an Advanced Encryption Standard (AES) design with a single pipelined stage are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and mid-length path within-die variation. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. From the experimental results, it has been established that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of the signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated to the launch-capture (LC) interval used to time them; therefore, calibration as proposed in this work must be carried out in order to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary up to 35% on average, while long paths vary up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes reduce to 20% and 5%, respectively. The magnitude of delay variations in both these analyses increases as temperature and voltage are changed to increase performance. The high level of within-die delay variation is undesirable from a design perspective, but it represents a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering, and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that, on average, there is 5% within-die variation, and the range of die-to-die variation can go up to 3 ns. The die-to-die variations can be explored in further detail to study their spatial dependence.

    Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets requirements on execution time and minimal resource usage in terms of area. The motivation for this work is that analyzing larger data sets has created abundant opportunities for algorithmic and architectural developments and data-mining innovations, thus creating a great demand for faster execution of these algorithms and leading towards improved execution time and resource utilization. Decision trees (DT) have long been implemented in software programs. Although the software implementation of DTC is highly accurate, the execution times and the resource utilization still require improvement to meet the computational demands of the ever-growing industry. On the other hand, hardware implementation of DT has not been thoroughly investigated or reported in detail. Therefore, I propose a hardware-accelerated pipelined architecture that incorporates a parallel approach to acquiring the data, with parallel engines working independently on different partitions of the data. Each engine also processes the data in a pipelined fashion to utilize the resources more efficiently and reduce the time for processing all the data records/tuples. Experimental results show that the proposed hardware acceleration of classification algorithms increases throughput by reducing the number of clock cycles required to process the data and generate the results, and it requires minimal resources, hence it is area efficient. This architecture also enables algorithms to scale with increasingly large and complex data sets. We developed the DTC algorithm in detail and explored techniques for adapting it successfully to a hardware implementation. This system is 3.5 times faster than the existing hardware implementation of classification.
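
    As a software illustration of the axis-parallel DTC approach described above (a behavioral model only, not the pipelined hardware; the tree, thresholds, records, and thread-based "engines" are illustrative assumptions), the sketch below walks two partitions of records through the same small decision tree in parallel.

        # Minimal sketch: axis-parallel binary decision tree classification with
        # multiple engines, each handling its own partition of the records.
        from concurrent.futures import ThreadPoolExecutor

        # Each node: (feature_index, threshold, left_child, right_child) or a leaf label.
        TREE = {
            0: (0, 5.0, 1, 2),      # root: split on feature 0 at threshold 5.0
            1: "class_A",           # leaf
            2: (1, 2.5, 3, 4),      # split on feature 1 at threshold 2.5
            3: "class_B",
            4: "class_C",
        }

        def classify(record, tree=TREE, node=0):
            """Axis-parallel traversal: compare one feature per node until a leaf."""
            while not isinstance(tree[node], str):
                feat, thresh, left, right = tree[node]
                node = left if record[feat] <= thresh else right
            return tree[node]

        def engine(partition):
            """One classification engine processing its partition of records."""
            return [classify(r) for r in partition]

        records = [(3.0, 1.0), (7.0, 2.0), (6.0, 4.0), (4.5, 9.0)]
        partitions = [records[:2], records[2:]]          # two engines, two partitions

        with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
            results = sum(pool.map(engine, partitions), [])
        print(results)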

    Towards Data Reliable, Low-Power, and Repairable Resistive Random Access Memories

    A series of breakthroughs in memristive devices has demonstrated the potential of memristor arrays to serve as next-generation resistive random access memories (ReRAM), which are fast, low-power, ultra-dense, and non-volatile. However, memristors' unique device characteristics also make them prone to several sources of error. Owing to the stochastic filamentary nature of memristive devices, various recoverable errors can affect the data reliability of a ReRAM. Permanent device failures further limit the lifetime of a ReRAM. This dissertation develops low-power solutions for more reliable and longer-enduring ReRAM systems. In this thesis, we first look into a data reliability issue known as write disturbance. Writing into a memristor in a crossbar can disturb the values stored in other memristors that are on the same memory line as the target cell. Such disturbance accumulates over time and may lead to complete data corruption. To address this problem, we propose the use of two regular memristors on each word to keep track of the disturbance accumulation and trigger a refresh to restore the weakened data once it becomes necessary. We also investigate the considerable variation in the write-time characteristics of individual memristors. With such variation, conventional fixed-pulse write schemes not only waste significant energy but also cannot guarantee reliable completion of the write operations. We address this variation by proposing an adaptive write scheme that adjusts the width of the write pulses for each memristor. Our scheme embeds an online monitor to detect the completion of a write operation and takes into account the parasitic effect of line-shared devices in access-transistor-free memristive arrays. We further investigate the use of this method to shorten the test time of memory march algorithms by eliminating the need for a verifying read right after a write, which is commonly employed in the test sequences of march algorithms. Finally, we propose a novel mechanism to extend the lifetime of a ReRAM by protecting it against hard errors through the exploitation of a unique feature of bipolar memristive devices. Our solution proposes an unorthodox use of complementary resistive switches (a particular implementation of memristive devices) to provide an "in-place spare" for each memory cell at negligible extra cost. The in-place spares are then utilized by a repair scheme to repair memristive devices that have failed in a stuck-at-ON state, at page-level granularity. Furthermore, we explore the use of in-place spares in lieu of other memory reliability and yield enhancement solutions, such as error correction codes (ECC) and spare rows. We demonstrate that with the in-place spares, we can achieve the same lifetime as a baseline ReRAM with either significantly fewer spare rows or a lighter-weight ECC, both of which save energy consumption and area.
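
    The write-disturbance tracking scheme described above can be modeled in a few lines. The sketch below is a minimal behavioral model, assuming a per-write drift increment and a refresh threshold that are illustrative placeholders rather than measured device parameters.

        # Minimal sketch: dedicated tracking cells accumulate the same disturbance
        # as the data cells of a word; a refresh is triggered when their drift
        # crosses a threshold, restoring the weakened data.
        DISTURB_PER_WRITE = 0.02     # assumed fractional drift per neighboring write
        REFRESH_THRESHOLD = 0.30     # assumed drift level at which data must be refreshed

        class TrackedWord:
            def __init__(self, data):
                self.data = data
                self.tracker_drift = 0.0   # models the two dedicated tracking memristors
                self.refreshes = 0

            def neighbor_write(self):
                """A write on the same memory line disturbs this word and its trackers."""
                self.tracker_drift += DISTURB_PER_WRITE
                if self.tracker_drift >= REFRESH_THRESHOLD:
                    self.refresh()

            def refresh(self):
                """Rewrite the stored value, restoring full noise margin."""
                self.refreshes += 1
                self.tracker_drift = 0.0

        word = TrackedWord(data=0b1011)
        for _ in range(100):         # 100 writes to other cells on the same line
            word.neighbor_write()
        print("refreshes triggered:", word.refreshes)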