
    Measuring CADeT Performance by Means of FITTest_BENCH06 Benchmark Circuits

    Benchmark circuits provide a basis against which both research institutions and industry can measure their methods and products. This paper focuses on the use of the recently published FITTest_BENCH06 benchmarks to measure the quality of our novel academic design-for-testability tool, CADeT. The paper presents the basic characteristics of the benchmarks and of the CADeT tool, and provides results and analysis of applying individual testing techniques, and their constraint-driven combination, to particular benchmarks.

    Scan Chain Based Hardware Security

    Hardware has become a popular target for attackers to hack into any computing and communication system. Starting from the legendary power analysis attacks discovered 20 years ago to the recent Intel Spectre and Meltdown attacks, security vulnerabilities in hardware design have been exploited for malicious purposes. With the emerging Internet of Things (IoT) applications, where IoT devices are extremely resource constrained, many proven secure but computationally expensive cryptographic protocols cannot be applied on such devices. Thus, there is an urgent need to understand hardware vulnerabilities and develop cost-effective mitigation methods. One established field in the semiconductor and integrated circuit (IC) industry, known as IC test, has the goal of ensuring that fabricated ICs are free of manufacturing defects and perform the required functionalities. Testing is essential to isolate faulty chips from good ones. The concept of design for test (DFT) has been integrated into the commercial IC design and fabrication process for several decades. The scan chain, which gives test engineers access to all the flip-flops in the chip through the scan-in (SI) and scan-out (SO) ports, is the backbone of industrial testing methods and can be found in almost all modern designs. In addition to IC testing, the scan chain has found applications in intellectual property (IP) protection and IC identification. However, attackers can also leverage the controllability and observability of the scan chain as a side channel to break systems such as cryptographic chips. This dissertation addresses these two important security problems by proposing (1) a practical scan chain based security primitive for IP protection and (2) a partial scan chain framework that can mitigate all the existing scan based attacks. First, we observe that each D flip-flop has two output ports, Q and Q', which are designed to simplify the logic and have been used to reduce power consumption during IC test. The availability of both Q and Q' ports provides an opportunity for IP protection. More specifically, we can generate a digital fingerprint by selecting different connection styles between adjacent scan cells during the design of the scan chain. This method has two major advantages: fingerprints are created as a post-silicon procedure, so there is little fabrication overhead; and altering the connection style requires modifying the test vectors for each fingerprinted IP, which enables a non-intrusive fingerprint verification method. This addresses the overhead and detectability problems, two of the most challenging problems in designing practical IP fingerprinting techniques over the past two decades. Combined with the recently developed reconfigurable scan networks (RSNs) that are popular for embedded and IoT devices, we design an IC identification (ID) scheme utilizing the different connection styles. We perform experiments on standard benchmarks to demonstrate that our approach has low design overhead. We also conduct security analysis to show that such fingerprints and IC IDs are robust against various attacks. In the second part of this dissertation, we consider the scan chain side channel attack, which has been reported as one of the most severe side channel attacks on modern secure systems. We argue that current countermeasures are restricted by the requirement of providing direct SI and SO access for testing and thus suffer from leaving this side channel open to attackers as well.
    Therefore, we propose a novel public-private partial scan chain based approach whose basic idea is to remove the flip-flops that store sensitive information from the scan chain. This eliminates the scan chain side channel, but it also limits IC test. The key contribution of our public-private partial scan chain design is that it keeps full test coverage while providing security for the scan chain. This is achieved by chaining the removed flip-flops into one or more private partial scan chains and adding protections to the SI and SO ports of such chains. Unlike traditional partial scan design, which not only fails to provide full fault coverage but also incurs huge overhead in test time and test vector generation time, we propose a set of techniques to ensure that the desired test vectors can be entered into the system efficiently. These techniques include test vector reordering, test vector reuse, and test vector generation based on a novel finite state machine (FSM) structure we have invented. On the other hand, to give test engineers the ability to observe the test output and diagnose the chip without leaking information to attackers, we propose two lightweight mechanisms, one based on a linear feedback shift register (LFSR) and the other based on a configurable physical unclonable function (PUF). Finally, we discuss a protocol for how in-field test can be realized using our public-private partial scan chain. We conduct experiments with industrial scan design tools to demonstrate that the required hardware in our approach has negligible area overhead, gives full test coverage with reduced test time, and does not require regenerating test vectors. In sum, this dissertation focuses on the role of the scan chain, a conventional design-for-test facility, in hardware security. We show that scan chain features can be leveraged to create practical IP protection techniques, including IP watermarking and fingerprinting as well as IC identification and authentication. We also propose a novel public-private partial scan design principle to close the scan chain side channel to attackers. Through this dissertation work, we demonstrate that it is possible to develop highly practical scan chain based techniques that benefit both the IC test and hardware security communities.
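    As a rough illustration of the Q/Q' connection-style fingerprint described above, the following sketch is a toy model, not the dissertation's implementation; the cell count, fingerprints, and vectors are invented. It treats each link feeding a scan cell as either non-inverting (Q) or inverting (Q') and shows how the scan-in vector must be XOR-adjusted per fingerprint, which is what makes the fingerprint verifiable from test vectors alone.

```python
# Toy model of the Q/Q' connection-style fingerprint idea (illustrative only;
# not the dissertation's implementation). The link feeding scan cell i is either
# non-inverting (from Q) or inverting (from Q'), and the chosen pattern of link
# styles is the fingerprint. Because a bit shifted toward cell i passes through
# every link up to cell i, the scan-in vector must be XOR-adjusted by the
# cumulative inversion parity, which lets the fingerprint be checked
# non-intrusively from the published test vectors.

from itertools import accumulate
from operator import xor
from typing import List


def adjust_scan_in(desired_state: List[int], inverting_links: List[bool]) -> List[int]:
    """Per-cell scan-in bits that load desired_state into cells 0..N-1.
    inverting_links[i] is True if the link feeding cell i comes from Q'.
    (The list is indexed by cell; the temporal shift order would be reversed.)"""
    # parity[i] = number of inversions (mod 2) seen by the bit that ends in cell i
    parity = list(accumulate((1 if inv else 0 for inv in inverting_links), xor))
    return [bit ^ p for bit, p in zip(desired_state, parity)]


def matches_fingerprint(desired_state: List[int], scan_in_bits: List[int],
                        claimed_links: List[bool]) -> bool:
    """Check whether a test vector is consistent with a claimed connection-style fingerprint."""
    return scan_in_bits == adjust_scan_in(desired_state, claimed_links)


if __name__ == "__main__":
    state = [1, 0, 1, 1, 0, 0, 1, 0]
    fingerprint_a = [False, True, False, False, True, True, False, False]
    fingerprint_b = [True, False, False, True, False, True, False, True]
    vector_a = adjust_scan_in(state, fingerprint_a)
    print(matches_fingerprint(state, vector_a, fingerprint_a))  # True
    print(matches_fingerprint(state, vector_a, fingerprint_b))  # False: different fingerprint
```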

    Power constrained test scheduling in system-on-chip design

    With the development of VLSI technologies, and especially with the arrival of deep sub-micron semiconductor process technologies, power dissipation has become a critical factor that cannot be ignored either in normal operation or in the test mode of digital systems. Test scheduling has to take into consideration both test concurrency and power dissipation constraints. To satisfy high fault coverage goals with minimum test application time under given power dissipation constraints, the testing of all components in the system should be performed in parallel as much as possible. The main objective of this thesis is to address the test scheduling problem faced by SOC designers at the system level. Through the analysis of several existing scheduling approaches, we enlarge the basis that current approaches rely on to minimize test application time and propose an efficient and integrated technique for the test scheduling of SOCs under power constraints. The proposed merging approach is based on a tree growing technique and can be used to overlay the block-test sessions in order to further reduce test application time. A number of experiments, based on academic benchmarks and industrial designs, have been carried out to demonstrate the usefulness and efficiency of the proposed approaches.
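    To make the session-based formulation concrete, the sketch below greedily packs block tests into concurrent sessions whose summed power stays under a budget; each session's length is set by its longest test, and the total test time is the sum of session lengths. This is a simplified illustration with invented numbers, not the thesis's tree growing technique, and resource-conflict checks between tests are omitted.

```python
# Greedy first-fit packing of block tests into power-feasible concurrent sessions.
# Illustrative only: test names, times, and power figures are made up, and
# resource-conflict (compatibility) checks are omitted for brevity.

from dataclasses import dataclass
from typing import List


@dataclass
class BlockTest:
    name: str
    time: int     # test application time (e.g., clock cycles)
    power: float  # estimated power dissipation while the test runs


def schedule_sessions(tests: List[BlockTest], power_budget: float) -> List[List[BlockTest]]:
    """Pack tests into sessions so each session's summed power stays within the budget."""
    sessions: List[List[BlockTest]] = []
    # Placing long tests first tends to balance session lengths.
    for t in sorted(tests, key=lambda t: t.time, reverse=True):
        for s in sessions:
            if sum(x.power for x in s) + t.power <= power_budget:
                s.append(t)
                break
        else:
            sessions.append([t])
    return sessions


if __name__ == "__main__":
    tests = [BlockTest("ram", 900, 4.0), BlockTest("alu", 600, 3.5),
             BlockTest("rf", 400, 2.0), BlockTest("rom", 300, 1.5)]
    sessions = schedule_sessions(tests, power_budget=6.0)
    for i, s in enumerate(sessions):
        print(f"session {i}: {[t.name for t in s]}")
    print("total test time:", sum(max(t.time for t in s) for s in sessions))
```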

    Block-level test scheduling under power dissipation constraints

    As device technologies such as VLSI and Multichip Module (MCM) become mature, and larger and denser memory ICs are implemented for high-performance digital systems, power dissipation becomes a critical factor that can no longer be ignored, either in normal operation of the system or under test conditions. One of the major considerations in test scheduling is the fact that the heat dissipated during test application is significantly higher than during normal operation (sometimes 100-200% higher); this has therefore become one of the major recent considerations in test scheduling. Test scheduling is strongly related to test concurrency, a design property which strongly impacts testability and power dissipation. To satisfy high fault coverage goals with reduced test application time under certain power dissipation constraints, the testing of all components on the system should be performed in parallel to the greatest extent possible. Some theoretical analysis of this problem has been carried out, but only at the IC level. The problem was basically described as compatible test clustering, where the compatibility among tests was given by test resource and power dissipation conflicts at the same time. From an implementation point of view, this problem was identified as NP-complete. In this thesis, an efficient scheme for overlaying the block-tests, called the extended tree growing technique, is proposed together with classical scheduling algorithms to search for power-constrained block-test scheduling (PTS) profiles in polynomial time. Classical algorithms such as list-based scheduling and distribution-graph based scheduling are employed to tackle the PTS problem at a high level. This approach exploits test parallelism under power constraints, which is achieved by overlaying the block-test intervals of compatible subcircuits to test as many of them as possible concurrently, so that the maximum accumulated power dissipation is balanced and does not exceed the given limit. The test scheduling discipline assumed here is partitioned testing with run to completion. A constant additive model is employed for power dissipation analysis and estimation throughout the algorithm.
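    The sketch below illustrates, in simplified form, a list-based schedule under a constant additive power model: block tests overlap in time, each runs to completion once started, and at any instant the summed power of the running tests stays within the limit. It is an assumed toy formulation rather than the thesis's extended tree growing algorithm, and all test names and figures are invented.

```python
# List scheduling with a constant additive power model (illustrative sketch).
# Each test is assumed to fit the power limit on its own; the priority rule is
# simply "longest test first". Events are processed at test-completion times.

import heapq
from typing import Dict, List, Tuple


def list_schedule(tests: List[Tuple[str, int, float]],
                  power_limit: float) -> Tuple[Dict[str, int], int]:
    """tests: (name, duration, power). Returns start times per test and the makespan."""
    ready = sorted(tests, key=lambda t: t[1], reverse=True)
    running: List[Tuple[int, str, float]] = []  # heap of (finish_time, name, power)
    start: Dict[str, int] = {}
    now, used = 0, 0.0
    while ready or running:
        # Start every ready test that still fits within the remaining power budget.
        waiting = []
        for name, dur, pwr in ready:
            if used + pwr <= power_limit:
                start[name] = now
                used += pwr
                heapq.heappush(running, (now + dur, name, pwr))
            else:
                waiting.append((name, dur, pwr))
        ready = waiting
        # Advance time to the next completion and release that test's power.
        finish, _, pwr = heapq.heappop(running)
        now, used = finish, used - pwr
    return start, now


if __name__ == "__main__":
    tests = [("core1", 500, 3.0), ("core2", 400, 2.5), ("ram", 700, 4.0), ("uart", 200, 1.0)]
    starts, makespan = list_schedule(tests, power_limit=6.0)
    print(starts, "makespan:", makespan)
```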

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and therefore time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels via automated learning algorithms, which in turn improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the future scope of AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Network-on-Chip

    Addresses the Challenges Associated with System-on-Chip Integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, and its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Direction of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.

    Characterization of osseointegrative phosphatidylserine and cholesterol orthopaedic implant coatings

    Total joint arthroplasties/replacements are among the most successful surgeries available today for improving patients' quality of life. By 2030 in the US, demand for primary total hip and knee arthroplasties is expected to grow by 174% and 673% respectively, to a combined total of over 4 million procedures performed annually, driven largely by an ageing population and an increased occurrence of obesity. Current patient options for load-bearing, bone-integrating implants have significant shortcomings. Nearly a third of patients require a revision surgery before the implant is 15 years old, and those who have revision surgeries are at an increased risk of requiring additional reoperations. A recent implant technology that has been shown to be effective at improving bone-to-implant integration is the use of phosphatidylserine (DOPS) coatings. These coatings are challenging to analyze and measure due to their highly dynamic, soft, rough, thick, and optically diffractive properties. Previous work had difficulty investigating pertinent parameters for these coatings' development, due in large part to a lack of available analytical techniques and a dearth of understanding of the micro- and nano-structural configuration of the coatings. This work addresses the lack of techniques available for use with DOPS coatings through the development of original methods of measurement, including the use of scanning white light interferometry and nanoindentation. These techniques were then applied to the characterization of DOPS coatings and the study of effects from several factors: (1) the influence of adding calcium and cholesterol to the coatings, (2) the effect of composition and roughness on aqueous contact angles, and (3) the impact of ageing and storage environment on the coatings. This project lays a foundation for the continued development and improvement of DOPS coatings, which promise to significantly improve current patient options for bone-integrating implants. Using these newly developed and highly repeatable quantitative analysis methods, this study sheds light on the microstructural configuration of the DOPS coatings and elucidates previously unexplained phenomena of the coatings. Cholesterol was found to supersaturate in the coatings at high concentrations and phase-separate into an anhydrous crystalline form, while lower concentrations were found to significantly harden the coatings. Morphological and microstructural changes that depended on the storage environment were detected in the coatings over the course of as little as two weeks. The results and understanding gained pave the path for focused future research effort. Additionally, the methods and techniques developed for the analysis of DOPS coatings have a broader application for the measurement and analysis of other problematic biological materials and surfaces.

    Sensors Fault Diagnosis Trends and Applications

    Fault diagnosis has always been a concern for industry. In general, diagnosis in complex systems requires the acquisition of information from sensors and the processing and extraction of the required features for the classification or identification of faults. Fault diagnosis of the sensors themselves is therefore clearly important, as faulty information from a sensor may lead to misleading conclusions about the whole system. As engineering systems grow in size and complexity, it becomes more and more important to diagnose faulty behavior before it can lead to total failure. In light of the above issues, this book is dedicated to trends and applications in modern sensor fault diagnosis.
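    As a generic illustration of that workflow (acquire readings, extract simple features, classify), the sketch below flags stuck-at and deviation faults with a windowed residual check against a reference signal. It is an assumed toy example; the thresholds, window size, and labels are invented and not taken from the book.

```python
# Windowed residual check for sensor fault diagnosis (illustrative sketch only).
# A window is labeled "stuck" if the sensor output shows no variation, "deviation"
# if it disagrees strongly with a reference (redundant or model-based) signal,
# and "ok" otherwise. All thresholds below are arbitrary illustrative values.

import statistics
from typing import List, Sequence


def diagnose_sensor(readings: Sequence[float], reference: Sequence[float],
                    window: int = 20, residual_limit: float = 3.0,
                    min_std: float = 1e-3) -> List[str]:
    """Label each non-overlapping window of readings as 'ok', 'stuck', or 'deviation'."""
    labels = []
    for i in range(0, len(readings) - window + 1, window):
        chunk = readings[i:i + window]
        ref = reference[i:i + window]
        residual = statistics.fmean(abs(a - b) for a, b in zip(chunk, ref))
        if statistics.pstdev(chunk) < min_std:
            labels.append("stuck")       # output frozen: no variation in the window
        elif residual > residual_limit:
            labels.append("deviation")   # large disagreement with the reference
        else:
            labels.append("ok")
    return labels


if __name__ == "__main__":
    ref = [20.0] * 60
    meas = ([20.0 + 0.05 * (i % 3) for i in range(20)]    # healthy
            + [25.0 + 0.05 * (i % 3) for i in range(20)]  # drifted away from reference
            + [22.0] * 20)                                # stuck at a constant value
    print(diagnose_sensor(meas, ref))  # ['ok', 'deviation', 'stuck']
```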

    A Formal Model of Ambiguity and its Applications in Machine Translation

    Systems that process natural language must cope with and resolve ambiguity. In this dissertation, a model of language processing is advocated in which multiple inputs and multiple analyses of inputs are considered concurrently and a single analysis is chosen only as a last resort. Compared to conventional models, this approach can be understood as replacing single-element inputs and outputs with weighted sets of inputs and outputs. Although processing components must deal with sets rather than individual elements, constraints are imposed on the elements of these sets, and the representations from existing models may be reused. However, to deal efficiently with large (or infinite) sets, compact representations that share structure between elements, such as weighted finite-state transducers and synchronous context-free grammars, are necessary. These representations, and algorithms for manipulating them, are discussed in depth. To establish the effectiveness and tractability of the proposed processing model, it is applied to several problems in machine translation. Starting with spoken language translation, it is shown that translating a set of transcription hypotheses yields better translations than a baseline in which a single (1-best) transcription hypothesis is selected and then translated, independent of the translation model formalism used. More subtle forms of ambiguity that arise even in text-only translation (such as decisions conventionally made during system development about how to preprocess text) are then discussed, and it is shown that the ambiguity-preserving paradigm can be employed in these cases as well, again leading to improved translation quality. Finally, a model for supervised learning is introduced that learns from training data in which sets (rather than single elements) of correct labels are provided for each training instance; it is used to learn a model of compound word segmentation, which serves as a preprocessing step in machine translation.
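    The toy sketch below illustrates the "translate a weighted set of inputs" idea in miniature, using plain dictionaries rather than the weighted finite-state transducers and synchronous grammars the dissertation actually employs; the transcription hypotheses, weights, and translation table are all invented.

```python
# Translating a weighted set of ASR hypotheses instead of only the 1-best one.
# Candidate translations are scored by marginalizing over all hypotheses:
# score(t) = sum_h weight(h) * P(t | h). Everything here is a made-up toy example.

from collections import defaultdict
from typing import Callable, Dict, List, Tuple

TM = Callable[[str], List[Tuple[str, float]]]


def translate_weighted_set(hypotheses: List[Tuple[str, float]], tm: TM) -> str:
    """Pick the translation that maximizes the weight-marginalized score."""
    totals: Dict[str, float] = defaultdict(float)
    for source, weight in hypotheses:
        for target, prob in tm(source):
            totals[target] += weight * prob
    return max(totals, key=totals.get)


def toy_tm(source: str) -> List[Tuple[str, float]]:
    """Stand-in translation model returning (translation, probability) pairs."""
    table = {
        "il fait beau": [("the weather is nice", 0.7), ("it is beautiful", 0.3)],
        "il fait le beau": [("he is showing off", 0.6), ("it is beautiful", 0.4)],
    }
    return table.get(source, [])


if __name__ == "__main__":
    # The recognizer is unsure between two transcriptions of the same utterance.
    hyps = [("il fait le beau", 0.55), ("il fait beau", 0.45)]
    one_best = max(hyps, key=lambda h: h[1])[0]
    print("1-best pipeline:", max(toy_tm(one_best), key=lambda t: t[1])[0])
    print("weighted set:   ", translate_weighted_set(hyps, toy_tm))
```
    In this invented example the 1-best pipeline commits to a single transcription and its top translation, while marginalizing over the weighted set selects the translation that both hypotheses partially support, mirroring the kind of improvement the abstract reports for spoken language translation.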