
    D2.1 - Report on Selected TRNG and PUF Principles

    Get PDF
    This report represents the final version of Deliverable 2.1 of the HECTOR work package WP2. It is the result of discussions and work on Task 2.1 by all HECTOR partners involved in WP2. The aim of Deliverable 2.1 is to select principles of random number generators (RNGs) and physical unclonable functions (PUFs) that fulfill strict technology, design and security criteria. For example, the selected RNGs must be suitable for implementation in logic devices according to the German AIS 20/31 standard; correspondingly, the selected PUFs must be suitable for a similar security approach. A standard PUF evaluation approach does not yet exist, but one should be proposed in the framework of the project. The selected RNGs and PUFs should then be thoroughly evaluated from the point of view of security, and the most suitable principles should be implemented in logic devices, such as Field Programmable Gate Arrays (FPGAs) and Application Specific Integrated Circuits (ASICs), during the next phases of the project.

    Physical Unclonable Function Reliability on Reconfigurable Hardware and Reliability Degradation with Temperature and Supply Voltage Variations

    Get PDF
    A hardware security solution using a Physical Unclonable Function (PUF) is a promising approach to ensuring the security of physical systems. A PUF exploits the inherent instance-specific parameters of physical objects and is evaluated using performance parameters such as uniqueness, reliability, randomness, and tamper evidence of the Challenge and Response Pairs (CRPs). These performance parameters are affected by operating conditions such as temperature and supply voltage variations. In addition, PUF implementation on Field Programmable Gate Array (FPGA) platforms has proven to be more complicated than in Application-Specific Integrated Circuit (ASIC) technologies, because the automatic placement and routing of logic cells in an FPGA can degrade PUF performance through path delay imbalance. In this work, the impact of power supply and temperature variations on the reliability of an arbiter PUF is studied, and simulations are conducted to determine the effects of these varying conditions on the CRPs. The simulation results show that a ±10% power supply variation can degrade the reliability of an arbiter PUF by about 51%, and that temperature fluctuation between -40 °C and +60 °C reduces the PUF reliability by 58%. In addition, a new methodology for implementing a reliable arbiter PUF on an FPGA platform is presented: instead of using an extra delay measurement module, the FPGA Chip Planner tool is used for manual placement to reduce the path delay misalignment to less than 8 ps.
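    Reliability figures of the kind quoted above are conventionally derived from the bitwise agreement between responses collected at a reference operating point and responses to the same challenges under varied temperature or supply voltage. A minimal sketch of that calculation is given below; the response arrays, the 5% flip rate and the condition labels are hypothetical placeholders rather than data from this work.

        import numpy as np

        def puf_reliability(ref_responses: np.ndarray, test_responses: np.ndarray) -> float:
            # Fraction of response bits that agree between the reference condition
            # and a varied temperature/voltage condition (1 - intra-chip error rate).
            assert ref_responses.shape == test_responses.shape
            bit_errors = np.count_nonzero(ref_responses != test_responses)
            return 1.0 - bit_errors / ref_responses.size

        # Hypothetical data: 1-bit arbiter responses to 1000 challenges per condition.
        rng = np.random.default_rng(0)
        ref = rng.integers(0, 2, size=1000)      # e.g. nominal supply, 25 °C
        flips = rng.random(1000) < 0.05          # assume 5% of bits flip under stress
        hot = np.where(flips, 1 - ref, ref)      # e.g. responses at +60 °C
        print(f"reliability under stress: {puf_reliability(ref, hot):.2%}")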

    Improved Reliability of FPGA-based PUF Identification Generator Design

    Get PDF

    Within-Die Delay Variation Measurement And Analysis For Emerging Technologies Using An Embedded Test Structure

    Get PDF
    Both random and systematic within-die process variations (PV) are growing more severe with shrinking geometries and increasing die size. The escalation of delay and power variations with reductions in feature size places higher demands on the accuracy of variation models, whose availability can be used to improve yield, and the corresponding profitability and product quality, of fabricated integrated circuits (ICs). Sources of within-die variations include optical source limitations and layout-based systematic effects (pitch, line-width variability, and microscopic etch loading). Unfortunately, accurate models of within-die PV are becoming more difficult to derive because of their increasing sensitivity to design context. Embedded test structures (ETS) continue to play an important role in the development of PV models and as a mechanism to improve correlations between hardware and models. Variations in path delays are increasing with scaling and are increasingly affected by neighborhood interactions. In order to fully characterize within-die variations, delays must be measured in the context of actual core-logic macros. Doing so requires the use of an embedded test structure, as opposed to traditional scribe-line test structures such as ring oscillators (ROs). Accurate measurements of within-die variations can be used, e.g., to better tune models to actual hardware (model-to-hardware correlations).

    In this research project, I propose an embedded test structure called REBEL (Regional dELay BEhavior) that is designed to measure path delays in a minimally invasive fashion and with improved accuracy. Design-for-manufacturability (DFM) analysis is carried out on 90 nm ASIC chips and 28 nm Zynq 7000 series FPGA boards. I present ASIC results on within-die path delay variations in a five-stage floating-point unit (FPU) fabricated in IBM's 90 nm technology, used as a test vehicle in chip experiments carried out at nine different temperature/voltage (TV) corners. Experimental data are also analyzed for path delay variations in short versus long paths. FPGA results on within-die and die-to-die variations in a single-pipeline-stage Advanced Encryption Standard (AES) design are also presented. Other analyses performed on the calibrated path delays include flip-flop propagation delays for both rising and falling edges (tpHL and tpLH), uncertainty analysis, path distribution analysis, short versus long path variations, and within-die variation of mid-length paths. I also analyze the impact on delay when the chips are subjected to industrial-level temperature and voltage variations. The experimental results establish that the proposed REBEL provides capabilities similar to an off-chip logic analyzer, i.e., it is able to capture the temporal behavior of a signal over time, including any static and dynamic hazards that may occur on the tested path. The ASIC results further show that path delays are correlated with the launch-capture (LC) interval used to time them; therefore, calibration as proposed in this work must be carried out in order to obtain an accurate analysis of within-die variations. Results on ASIC chips show that short paths can vary by up to 35% on average, while long paths vary by up to 20% at nominal temperature and voltage. A similar trend occurs for within-die variations of mid-length paths, where the magnitudes reduce to 20% and 5%, respectively. The magnitude of the delay variations in both analyses increases as temperature and voltage are changed to increase performance. This high level of within-die delay variation is undesirable from a design perspective, but it represents a rich source of entropy for applications that make use of 'secrets', such as authentication, hardware metering and encryption. Physical unclonable functions (PUFs) are a class of primitives that leverage within-die variations as a means of generating random bit strings for these types of applications, including hardware security and trust. The Zynq FPGA die-to-die and within-die variation study shows that, on average, there is 5% within-die variation, and the die-to-die variation can range up to 3 ns; the die-to-die variations can be explored in further detail to study their spatial dependence.

    Additionally, I carried out research in the area of data mining for big data, focusing on decision tree classification (DTC) to speed up the classification step in a hardware implementation. For this purpose, I devised a pipelined architecture for axis-parallel binary decision tree classification that meets requirements on execution time and minimal resource usage in terms of area. The motivation for this work is that analyzing ever larger data sets has created abundant opportunities for algorithmic, architectural and data-mining innovations, and thus a great demand for faster execution of these algorithms with improved execution time and resource utilization. Decision trees (DTs) have traditionally been implemented in software; although software implementations of DTC are highly accurate, their execution times and resource utilization still require improvement to meet the computational demands of an ever-growing industry. Hardware implementation of DTs, on the other hand, has not been thoroughly investigated or reported in detail. I therefore propose a hardware-accelerated pipelined architecture that acquires data in parallel, with parallel engines working independently on different partitions of the data; each engine processes its data in a pipelined fashion to utilize resources more efficiently and to reduce the time needed to process all data records/tuples. Experimental results show that the proposed hardware acceleration of classification increases throughput, by reducing the number of clock cycles required to process the data and generate the results, and requires minimal resources, hence it is area efficient. The architecture also enables the algorithm to scale to increasingly large and complex data sets. We developed the DTC algorithm in detail and successfully explored techniques for adapting it to a hardware implementation; the resulting system is 3.5 times faster than the existing hardware implementation of classification.
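    As a rough illustration of how within-die variation percentages of the kind quoted above can be derived, the sketch below computes the spread of calibrated path delays on one die relative to their mean, separately for short and long paths. The delay values and the short/long grouping are hypothetical placeholders for calibrated REBEL measurements, and peak-to-peak spread over the mean is only one of several possible variation metrics.

        import numpy as np

        def within_die_variation(delays_ns: np.ndarray) -> float:
            # Peak-to-peak spread of path delays on one die, as a percentage of the mean.
            return (delays_ns.max() - delays_ns.min()) / delays_ns.mean() * 100.0

        # Hypothetical calibrated path delays (ns) for one die at nominal T/V.
        rng = np.random.default_rng(1)
        short_paths = rng.normal(loc=0.8, scale=0.06, size=200)
        long_paths = rng.normal(loc=3.2, scale=0.12, size=200)

        print(f"short-path within-die variation: {within_die_variation(short_paths):.1f}%")
        print(f"long-path within-die variation:  {within_die_variation(long_paths):.1f}%")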

    Web3.0 Security: Privacy Enhancing and Anonym Auditing in Blockchain-based Structures

    Full text link
    The advent of Web 3.0, underpinned by blockchain technologies, promises to transform the internet's landscape by empowering individuals with decentralized control over their data. However, this evolution brings unique security challenges that need to be addressed. This paper explores these complexities, focusing on enhancing privacy and anonymous auditing within blockchain structures. We present the architecture of Web 3.0 based on the blockchain, providing a clear perspective on its workflow and security mechanisms. A security protocol for Web 3.0 systems, employing privacy-preserving techniques and anonymous auditing during runtime, is proposed. Key components of our solution include the integration of privacy-enhancing techniques and the utilization of Tor for anonymous auditing. We discuss related work and propose a framework that meets these new security requirements. Lastly, we offer an evaluation and comparison of our model to existing methods. This research contributes towards a foundational understanding of Web 3.0's secure structure and offers a pathway towards secure and privacy-preserving digital interactions in this novel internet landscape.

    Security and privacy issues of physical objects in the IoT: Challenges and opportunities

    Get PDF
    In the Internet of Things (IoT), the security and privacy of physical objects are crucial to the related applications. In order to clarify the complicated security and privacy issues, the life cycle of a physical object is divided into three stages: pre-working, in-working, and post-working. On this basis, a physical-object-based security architecture for the IoT is put forward. According to this security architecture, the security and privacy requirements, and the related protection technologies, for physical objects in the different working stages are analyzed in detail. Considering the development of IoT technologies, potential security and privacy challenges that IoT objects may face in the pervasive computing environment are summarized. At the same time, possible directions for dealing with these challenges are also pointed out.

    FROPUF: How to Extract More Entropy from Two Ring Oscillators in FPGA-Based PUFs

    Get PDF
    Ring oscillator (RO) based physically unclonable functions (PUFs) on FPGAs are popular for their desirable properties and ease of implementation. The compensated measurement based on the ratio of two ring oscillators' frequencies proves particularly effective for extracting entropy from process variations. However, only one bit of entropy is extracted from a pair of ring oscillators, so RO PUFs occupy considerable resources as the amount of private information grows. Motivated by this inefficient resource usage, we propose an elegant and efficient method to extract at least 31 bits of entropy from two ring oscillators on FPGAs by utilizing the fine control of programmable delay lines (PDLs). We call this construction Further ROPUF (FROPUF). In this paper, we present in detail how to take advantage of the underlying random process variation originating in the lookup tables (LUTs) of the two ring oscillators, and show that this deeper variation can be extracted by a similar second-order difference calculation. In addition, we show the consistency of the evaluation results from Xilinx FPGAs (e.g. Virtex-5, Virtex-6, Kintex-7) with those obtained by simulation of FROPUF. The responses of the new construction have a nominal bit error rate (BER) of 1.85% at 27 °C, and FROPUF greatly increases the amount of extracted entropy while maintaining reliability equivalent to that of the general ROPUF.
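    The abstract does not spell out the extraction rule, but the compensated, second-order-difference idea can be sketched roughly as follows: for each programmable-delay-line setting, the difference of the two oscillators' counter values cancels environmental effects shared by both ROs, and the change of that difference between adjacent settings yields one response bit. The counter values, the number of PDL settings and the sign-based quantization below are assumptions for illustration, not the published FROPUF algorithm.

        import numpy as np

        def fropuf_bits(counts_ro1: np.ndarray, counts_ro2: np.ndarray) -> list:
            # First difference: per-setting RO1 - RO2 count difference, compensating
            # environmental effects common to both oscillators.
            first_diff = counts_ro1.astype(float) - counts_ro2.astype(float)
            # Second difference: change of that quantity between adjacent PDL settings,
            # attributed to per-LUT delay variation; its sign gives one bit per pair.
            second_diff = np.diff(first_diff)
            return [int(d > 0) for d in second_diff]

        # Hypothetical counter values for 32 PDL settings of each oscillator,
        # giving 31 response bits from the adjacent-setting differences.
        rng = np.random.default_rng(2)
        ro1 = 100_000 + rng.integers(-50, 50, size=32)
        ro2 = 100_000 + rng.integers(-50, 50, size=32)
        bits = fropuf_bits(ro1, ro2)
        print(len(bits), "bits:", "".join(map(str, bits)))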

    Techniques for design and implementation of physically unclonable functions

    Get PDF
    Physically unclonable functions (PUFs) provide a basis for many security and digital rights management protocols. PUFs exploit the unclonable and unique manufacturing variability of silicon devices to establish a secret. However, as we demonstrate in this work, the classic delay-based PUF structures have a number of drawbacks, including susceptibility to prediction, reverse engineering, man-in-the-middle and emulation attacks, as well as sensitivity to operational and environmental variations. To address these limitations, we have developed a new set of techniques for the design and implementation of PUFs. We design a secure PUF architecture and show how to predict response errors as well as how to compress the challenges and responses in the database. We further demonstrate applications where PUFs on reconfigurable FPGA platforms can be exploited for privacy protection. The effectiveness of the proposed techniques is validated using extensive implementations, simulations, and statistical analysis.

    Analysis of Microcontroller Embedded SRAMs for Applications in Physical Unclonable Functions

    Get PDF
    The growth of the Internet of Things (IoT) market has motivated the widespread proliferation of microcontroller (MCU) based embedded systems, which are suitable due to their abundance, low cost, low power consumption and small footprint. Their memory architecture typically consists of volatile memory, such as block(s) of SRAM, and non-volatile memory (NVM) for code storage. Authentication and encryption safeguard these endpoints within an IoT framework, which requires storage of a secure key. Keys stored within integrated circuits (ICs) are susceptible to attack via reverse engineering of the NVM. Newer approaches use Physical Unclonable Functions (PUFs), which produce unique identifiers that take advantage of device-level randomness induced by manufacturing process variation in silicon. The unclonable property of PUFs is demonstrated with an analytical model. The unpredictable yet repeatable start-up values (SUVs) of SRAM bit-cells form the basis of an SRAM PUF. Performance measures, such as reliability, randomness, symmetry, and stability, dictate the quality of a PUF. Two commercial off-the-shelf (COTS) ARM Cortex based MCU products, the STM32F429ZIT6U and the ATSAMR21G18A, underwent automated and manual power cycling experiments that examined the start-up values of their embedded SRAM. The characterization framework acquires data via debug software and a custom C program, performs power cycling using a USB-controlled relay, and post-processes the data using Python. Applications of PUFs include cryptographic key generation, device identification and true random number generation in hardware. Statistical results and a comparative analysis are presented. Of the total bit-cell count of the embedded SRAM, 36.86% and 28.86% of cells in the STM and ATSAM MCUs, respectively, are classified as non- or partially-skewed across N = 10,000 samples. The Atmel MCU outperforms the STM MCU in reliability by 1.42%, randomness by 0.65% and stability by 8.00%, with a 4.74% SUV bias towards logic '1'. The maximum number of errors per 128-bit data item is 22 and 38 bits for MCU #1 and MCU #2, respectively. The STM MCU exhibits a column-wise correlation, illustrated in a heatmap, whereas the Atmel MCU shows a random signature. The embedded SRAM in the Atmel MCU outperforms that of the STM MCU and is thereby considered the more suitable PUF.
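    The kind of bit-cell classification and bias figures reported above can be reproduced, in outline, by aggregating start-up values over repeated power cycles and labeling each bit-cell by how consistently it powers up to the same value. The sketch below uses synthetic SUV data and an illustrative 90% skew threshold; neither the threshold nor the per-cell bias model is taken from the thesis.

        import numpy as np

        def classify_bitcells(suvs: np.ndarray, skew_threshold: float = 0.9):
            # suvs: (n_power_cycles, n_bits) array of start-up values (0/1).
            # A cell counts as skewed (stable) if it powers up to the same value in
            # at least skew_threshold of the cycles; the 90% default is illustrative.
            ones_fraction = suvs.mean(axis=0)
            skewed = (ones_fraction >= skew_threshold) | (ones_fraction <= 1.0 - skew_threshold)
            bias_to_one = ones_fraction[skewed].round().mean()  # share of skewed cells fixed at '1'
            return skewed, bias_to_one

        # Synthetic SUVs: 100 power cycles of a 32768-bit SRAM block with per-cell bias.
        rng = np.random.default_rng(3)
        p_one = np.clip(rng.normal(0.5, 0.3, size=32768), 0.02, 0.98)
        suvs = (rng.random((100, 32768)) < p_one).astype(np.uint8)
        skewed, bias = classify_bitcells(suvs)
        print(f"non-/partially-skewed cells: {100 * (1 - skewed.mean()):.2f}%")
        print(f"'1'-bias among skewed cells: {100 * bias:.2f}%")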