
    Anti-Tamper Method for Field Programmable Gate Arrays Through Dynamic Reconfiguration and Decoy Circuits

    As Field Programmable Gate Arrays (FPGAs) become more widely used, security concerns have been raised regarding FPGA use for cryptographic, sensitive, or proprietary data. Storing or implementing proprietary code and designs on FPGAs could result in the compromise of sensitive information if the FPGA device were physically relinquished or remotely accessible to adversaries seeking to obtain the information. Although multiple defensive measures have been implemented (and overcome), the possibility exists to create a secure design through the implementation of polymorphic Dynamically Reconfigurable FPGA (DRFPGA) circuits. Using polymorphic DRFPGAs removes the static attributes from their design, thus substantially increasing the difficulty of successful adversarial reverse-engineering attacks. A variety of dynamically reconfigurable methodologies exist for implementation that challenge designers in the reconfigurable technology field. A Hardware Description Language (HDL) DRFPGA model is presented for use in security applications. The Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL) was chosen to take advantage of its capabilities, which are well suited to the current research. Additionally, algorithms that explicitly support granular autonomous reconfiguration have been developed and implemented on the DRFPGA as a means of protecting its designs. Documented testing validates the reconfiguration results and compares power usage, timing, and area estimates from conventional and DRFPGA models.
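
    The abstract names the polymorphic idea but not its mechanism, so the following is a minimal Python sketch of the principle (the paper's actual model is in VHDL): a logic cell whose observable function never changes while its internal implementation is periodically swapped among functionally equivalent variants, with a decoy exercised alongside it, so no stable static structure is exposed to reverse engineering. Every name, the decoy, and the reconfiguration policy below are invented for illustration.

        import random

        def xor_direct(a: int, b: int) -> int:
            return a ^ b

        def xor_via_nand(a: int, b: int) -> int:
            # XOR built from four NAND gates -- same function, different structure
            def nand(x: int, y: int) -> int:
                return 1 - (x & y)
            t = nand(a, b)
            return nand(nand(a, t), nand(b, t))

        def xor_via_mux(a: int, b: int) -> int:
            # XOR as a 2:1 multiplexer: out = not b when a = 1, else b
            return (1 - b) if a else b

        EQUIVALENT_IMPLS = [xor_direct, xor_via_nand, xor_via_mux]

        def decoy_majority(a: int, b: int) -> int:
            # Decoy circuit: loaded and exercised, but never drives the output
            return (a & b) | (a ^ b)

        class PolymorphicCell:
            """Toy polymorphic DRFPGA cell: observable behaviour is constant,
            but the internal 'configuration' changes over time."""

            def __init__(self, reconfig_period: int = 100):
                self.period = reconfig_period
                self.calls = 0
                self.impl = random.choice(EQUIVALENT_IMPLS)

            def evaluate(self, a: int, b: int) -> int:
                self.calls += 1
                if self.calls % self.period == 0:
                    # autonomous reconfiguration: swap in an equivalent variant
                    self.impl = random.choice(EQUIVALENT_IMPLS)
                _ = decoy_majority(a, b)   # activity to mislead observers
                return self.impl(a, b)

        cell = PolymorphicCell(reconfig_period=3)
        assert all(cell.evaluate(a, b) == a ^ b
                   for _ in range(5) for a in (0, 1) for b in (0, 1))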

    A Multi-layer Fpga Framework Supporting Autonomous Runtime Partial Reconfiguration

    Partial reconfiguration is a capability recently provided by several Field Programmable Gate Array (FPGA) vendors that involves altering part of the programmed design within an SRAM-based FPGA at run-time. In this dissertation, a Multilayer Runtime Reconfiguration Architecture (MRRA) is developed, evaluated, and refined for autonomous runtime partial reconfiguration of FPGA devices. Under the proposed MRRA paradigm, FPGA configurations can be manipulated at runtime using on-chip resources. Operations are partitioned into Logic, Translation, and Reconfiguration layers, along with a standardized set of Application Programming Interfaces (APIs). At each level, resource details are encapsulated and managed for efficiency and portability during operation. An MRRA mapping theory is developed to link general logic function and area allocation information to device-level physical configuration data by using mathematical data structures and physical constraints. In certain scenarios, configuration bitstream data can be read and modified directly for fast operations, relying on the use of similar logic functions and common interconnection resources for communication. A corresponding logic control flow is also developed to make the entire process autonomous. Several prototype MRRA systems are developed on a Xilinx Virtex II Pro platform. The Virtex II Pro on-chip PowerPC core and block RAM are employed to manage control operations, while multiple physical interfaces establish and supplement autonomous reconfiguration capabilities. Area, speed, and power optimization techniques are developed based on the Xilinx prototype. Evaluations and analyses of these prototypes and techniques are performed on a number of benchmark and hashing algorithm case studies. The results indicate that, across a variety of test benches, up to a 70% reduction in resource utilization, up to a 50% improvement in power consumption, and up to a 10x increase in run-time performance are achieved using the developed architecture and approaches, compared with the Xilinx baseline reconfiguration flow. Finally, a Genetic Algorithm (GA) for an FPGA fault tolerance case study is evaluated as a culminating high-level application running on this architecture. It demonstrates a hardware and software infrastructure that enables an FPGA to dynamically reconfigure itself efficiently under the control of a soft microprocessor core instantiated within the FPGA fabric. Such a system delivers the observed benefits of intelligent control, fast reconfiguration, and low overhead.
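
    As a concrete illustration of the layering, here is a small Python sketch in which each layer hides the one below it behind a narrow API. The layer names (Logic, Translation, Reconfiguration) come from the abstract; every class, method, frame address, and LUT encoding below is invented for illustration and ignores real bitstream formats.

        class ReconfigurationLayer:
            """Bottom layer: reads/writes raw configuration frames.  A dict
            keyed by frame address stands in for the real bitstream here."""

            def __init__(self) -> None:
                self.frames: dict[int, int] = {}

            def read_frame(self, addr: int) -> int:
                return self.frames.get(addr, 0)

            def write_frame(self, addr: int, data: int) -> None:
                self.frames[addr] = data

        class TranslationLayer:
            """Middle layer: maps an area slot to a frame address and pattern."""

            def __init__(self, reconfig: ReconfigurationLayer) -> None:
                self.reconfig = reconfig

            def place(self, slot: int, lut_mask: int) -> None:
                # One frame per slot in this toy; real devices pack many LUTs
                # per frame and need read-modify-write on partial frames.
                self.reconfig.write_frame(0x1000 + slot, lut_mask)

        class LogicLayer:
            """Top layer: the application asks for a logic function by name."""

            TRUTH_TABLES = {"AND2": 0b1000, "OR2": 0b1110, "XOR2": 0b0110}

            def __init__(self, translation: TranslationLayer) -> None:
                self.translation = translation

            def configure(self, slot: int, function: str) -> None:
                self.translation.place(slot, self.TRUTH_TABLES[function])

        # Swap the function in slot 3 at "runtime" without touching other slots.
        logic = LogicLayer(TranslationLayer(ReconfigurationLayer()))
        logic.configure(3, "AND2")
        logic.configure(3, "XOR2")   # runtime partial reconfiguration of one slot

    The point of the layering is the same as in the abstract: the application-level call never has to know how slots map to frames, so the lower layers can be retargeted to a different device family without touching the logic layer.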

    High level behavioural modelling of boundary scan architecture.

    This project involves the development of a software tool that automatically integrates the IEEE 1149.1/JTAG Boundary Scan Test Architecture into an ASIC (Application Specific Integrated Circuit) design. The tool requires the original design (the ASIC) to be described in the VHDL-IEEE 1076 Hardware Description Language. The tool consists of two major elements: i) a parsing and insertion algorithm developed and implemented in 'C'; ii) a high-level model of the Boundary Scan Test Architecture implemented in 'VHDL'. The parsing and insertion algorithm identifies the design's Input/Output (I/O) terminals, their types, and the order in which they appear in the ASIC design. It then attaches a suitable Boundary Scan Cell to each I/O, except power and ground, and inserts the high-level model of the full Boundary Scan Architecture into the ASIC without altering the design core structure.
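
    The tool's parser was implemented in C; the following Python sketch illustrates only the first step it describes: extracting the port list from a VHDL entity and assigning a boundary-scan cell type per pin while skipping power and ground. The regex handles only simple single-bit std_logic ports, and the cell type names are illustrative rather than taken from the tool.

        import re

        VHDL_ENTITY = """
        entity my_asic is
          port (
            clk     : in  std_logic;
            data_in : in  std_logic;
            dout    : out std_logic;
            vdd     : in  std_logic;
            gnd     : in  std_logic
          );
        end entity;
        """

        PORT_RE = re.compile(r"(\w+)\s*:\s*(in|out|inout)\s+std_logic",
                             re.IGNORECASE)
        POWER_PINS = {"vdd", "gnd", "vss", "vcc"}   # skipped, as in the abstract

        def assign_bscan_cells(vhdl: str) -> list[tuple[str, str]]:
            cells = []
            for name, direction in PORT_RE.findall(vhdl):
                if name.lower() in POWER_PINS:
                    continue                         # no cell on power/ground
                cell = {"in": "BC_INPUT", "out": "BC_OUTPUT",
                        "inout": "BC_BIDIR"}[direction.lower()]
                cells.append((name, cell))
            return cells

        for pin, cell in assign_bscan_cells(VHDL_ENTITY):
            print(f"{pin:8s} -> {cell}")

    A production parser must additionally handle buses, generics, comments, and multi-line declarations, which is why the original used a full parsing algorithm rather than pattern matching.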

    Access Time Minimization in IEEE 1687 Networks

    IEEE 1687 enables flexible access to the embedded (on-chip) instruments that are needed for post-silicon validation, debugging, wafer sort, package test, burn-in, printed circuit board bring-up, printed circuit board assembly manufacturing test, power-on self-test, and in-field test. The instruments are accessed differently in each of these scenarios, and within a given scenario they are accessed differently over time. This means the IEEE 1687 network needs to be frequently reconfigured from accessing one set of instruments to accessing a different set. Due to this need for frequent reconfiguration, it is important to (1) minimize the run-time of the algorithm that finds the new configuration, and (2) generate scan vectors with minimized access time. In this paper we model the reconfiguration problem as a Boolean satisfiability (SAT) problem. Compared with previous work, we show a significant reduction in run-time and we ensure minimal access time for the generated scan vectors.
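
    The paper's exact SAT encoding is not given in the abstract, so the following Python sketch only shows the flavor of such a formulation, assuming the python-sat package (pip install python-sat). The toy network of Segment Insertion Bits (SIBs), the "child SIB open implies parent SIB open" clause, and the chain-length cost are simplified stand-ins for the paper's model, and minimization here is done by naive model enumeration rather than by the paper's method.

        from pysat.solvers import Glucose3

        # SIB id -> (parent SIB or None, scan length of the segment it gates)
        NETWORK = {1: (None, 4), 2: (1, 8), 3: (1, 16), 4: (None, 32)}
        TARGET = {2}   # instruments that must be reachable (behind SIB 2)

        solver = Glucose3()
        for sib, (parent, _) in NETWORK.items():
            if parent is not None:
                solver.add_clause([-sib, parent])   # sib open -> parent open
        for sib in TARGET:
            solver.add_clause([sib])                # target segment must be open

        def chain_length(model: list[int]) -> int:
            # Each SIB is one bit on the chain; open segments add their length.
            return len(NETWORK) + sum(length
                                      for sib, (_, length) in NETWORK.items()
                                      if model[sib - 1] > 0)

        best = min(solver.enum_models(), key=chain_length)
        print("open SIBs:", [s for s in NETWORK if best[s - 1] > 0],
              "-> chain length", chain_length(best))
        solver.delete()

    A realistic formulation would also encode the multi-cycle reconfiguration sequence and drive the solver incrementally instead of enumerating every model.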

    From FPGA to ASIC: A RISC-V processor experience

    This work documents a design flow using these tools on the Lagarto RISC-V processor, together with the RTL design considerations that must be taken into account to move from an FPGA design to an ASIC design.

    IEEE Standard 1500 Compliance Verification for Embedded Cores

    Core-based design and reuse are the two key elements for efficient system-on-chip (SoC) development. Unfortunately, they also introduce new challenges in SoC testing, such as core test reuse and the need for a common test infrastructure working with cores originating from different vendors. The IEEE 1500 Standard for Embedded Core Testing addresses these issues by proposing a flexible hardware test wrapper architecture for embedded cores, together with a Core Test Language (CTL) used to describe the implemented wrapper functionalities. Several intellectual property providers have already announced IEEE Standard 1500 compliance in both existing and future design blocks. In this paper, we address the problem of guaranteeing the compliance of a wrapper architecture and its CTL description to the IEEE Standard 1500. This step is mandatory to fully trust the wrapper functionalities in applying the test sequences to the core. We present a systematic methodology to build a verification framework for IEEE Standard 1500 compliant cores, allowing core providers and/or integrators to verify the compliance of their products (sold or purchased) to the standard.
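
    To hint at what rule-based compliance checking can look like, here is a small Python sketch that tests a wrapper description (a plain dict standing in for parsed CTL) against a few of the standard's mandatory features, such as the WS_BYPASS and WS_EXTEST instructions. The rule set and data layout are an invented, illustrative subset, not the paper's framework or the full IEEE 1500 compliance suite.

        MANDATORY_INSTRUCTIONS = {"WS_BYPASS", "WS_EXTEST"}

        def check_compliance(wrapper: dict) -> list[str]:
            violations = []
            missing = MANDATORY_INSTRUCTIONS - set(wrapper.get("instructions", []))
            if missing:
                violations.append(f"missing mandatory instructions: {sorted(missing)}")
            if "WIR" not in wrapper.get("registers", []):
                violations.append("no Wrapper Instruction Register (WIR)")
            if "WBY" not in wrapper.get("registers", []):
                violations.append("no Wrapper Bypass Register (WBY)")
            if wrapper.get("wby_length", 0) < 1:
                violations.append("WBY must be at least one bit long")
            return violations

        # Hypothetical wrapper description for one embedded core
        core_wrapper = {
            "registers": ["WIR", "WBY", "WBR"],
            "instructions": ["WS_BYPASS", "WS_EXTEST", "WS_INTEST"],
            "wby_length": 1,
        }
        print(check_compliance(core_wrapper) or "no violations found")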

    Infrastructures and Algorithms for Testable and Dependable Systems-on-a-Chip

    Every new node of semiconductor technology provides further miniaturization and higher performance, increasing the number of advanced functions that electronic products can offer. Silicon area is now so cheap that industry can integrate into a single chip, usually referred to as a System-on-Chip (SoC), all the components and functions that historically were placed on a hardware board. Although adding such advanced functionality can benefit users, the manufacturing process is becoming finer and denser, making chips more susceptible to defects. Today's very deep-submicron semiconductor technologies (0.13 micron and below) have reached susceptibility levels that put conventional semiconductor manufacturing at an impasse. Being able to rapidly develop, manufacture, test, diagnose, and verify such complex new chips and products is crucial for the continued success of our economy at large. This trend is expected to continue at least for the next ten years, making possible the design and production of 100-million-transistor chips. To speed up the research, the National Technology Roadmap for Semiconductors identified in 1997 a number of major hurdles to be overcome. Some of these hurdles are related to test and dependability. Test is one of the most critical tasks in the semiconductor production process, where Integrated Circuits (ICs) are tested several times, from wafer probing to the end of production test. Test is not only necessary to assure fault-free devices; it also plays a key role in analyzing defects in the manufacturing process. This last point has high relevance, since increasing time-to-market pressure on semiconductor fabrication often forces foundries to start volume production on a given semiconductor technology node before reaching the defect densities, and hence yield levels, traditionally obtained at that stage. The feedback derived from test is the only way to analyze and isolate many of the defects in today's processes and to increase process yield. With the increasing need for high-quality electronic products, at each new physical assembly level, such as board and system assembly, test is used for debugging, diagnosing, and repairing the sub-assemblies in their new environment. Similarly, increasing reliability, availability, and serviceability requirements lead the users of high-end products to perform periodic tests in the field throughout the full life cycle. To allow advancement in each of the above scaling trends, fundamental changes are expected to emerge in the different IC realization disciplines, such as IC design, packaging, and silicon process. These changes have a direct impact on test methods, tools, and equipment. Conventional test equipment and methodologies will be inadequate to assure high quality levels. Specialized on-chip blocks dedicated to test, usually referred to as Infrastructure IP (Intellectual Property), need to be developed and included in new complex designs to assure that new chips can be adequately tested, diagnosed, measured, debugged, and even, in some cases, repaired. In this thesis, some of the scaling trends in designing new complex SoCs are analyzed one at a time, observing their implications for test and identifying the key hurdles/challenges to be addressed. The goal of the remainder of the thesis is to present possible solutions. It is not sufficient to address just one of the challenges; all must be met at the same time to fulfill the market requirements.

    An architecture and technology for Ambient Intelligence Node

    The era of separate networks is over. The existing technology leaders are preparing a big change in the recreation of the environment around us. This change goes by several names: terms like Ambient Intelligence, Ambient Network, IP Multimedia Subsystem, and others have been coined all over the globe. Regardless of which name is used, the new network will combine three main functional principles: context awareness, ubiquitous access, and intelligent interfaces, unified in a single network. Within this thesis two major aspects are covered. First, the definition of the Ambient Intelligence Environment concept is presented. Secondly, the architecture vectors for the technology are named. A short overview of the existing technology is followed by details of the chosen technology, FPGA. The overall specifications are incorporated in the design and demonstration of a basic Ambient Intelligence Node created with System-on-Chip (SoC) FPGA technology.

    PDNPulse: Sensing PCB Anomaly with the Intrinsic Power Delivery Network

    The ubiquitous presence of printed circuit boards (PCBs) in modern electronic systems and embedded devices makes their integrity a top security concern. To take advantage of economies of scale, today's PCB design and manufacturing are often performed by suppliers around the globe, exposing them to many security vulnerabilities along the segmented PCB supply chain. Moreover, the increasing complexity of PCB designs also leaves ample room for numerous sneaky board-level attacks to be implemented throughout each stage of a PCB's lifetime, threatening many electronic devices. In this paper, we propose PDNPulse, a power delivery network (PDN) based PCB anomaly detection framework that can identify a wide spectrum of board-level malicious modifications. PDNPulse leverages the fact that the PDN's characteristics are inevitably affected by modifications to the PCB, no matter how minuscule. By detecting changes to the PDN impedance profile and using Fréchet distance-based anomaly detection algorithms, PDNPulse can robustly and successfully discern malicious modifications across the system. Using PDNPulse, we conduct extensive experiments on seven commercial off-the-shelf PCBs, covering different design scales, different threat models, and seven different anomaly types. The results confirm that PDNPulse creates an effective security asymmetry between attack and defense.
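
    To make the detection idea concrete, here is a minimal Python sketch that compares a measured PDN impedance magnitude profile against a golden one using the discrete Fréchet distance (the Eiter-Mannila dynamic program) and flags boards above a threshold. The profiles, units, and threshold below are fabricated for illustration; a real deployment would presumably derive the threshold from the variation across known-good boards, which the abstract does not detail.

        from functools import lru_cache

        def frechet(p: list[float], q: list[float]) -> float:
            """Discrete Frechet distance between two 1-D profiles."""
            @lru_cache(maxsize=None)
            def c(i: int, j: int) -> float:
                d = abs(p[i] - q[j])          # pointwise distance on |Z(f)|
                if i == 0 and j == 0:
                    return d
                if i == 0:
                    return max(c(0, j - 1), d)
                if j == 0:
                    return max(c(i - 1, 0), d)
                return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
            return c(len(p) - 1, len(q) - 1)

        # |Z| vs. frequency, in ohms -- fabricated example data
        golden  = [0.10, 0.08, 0.05, 0.07, 0.20, 0.55]
        suspect = [0.10, 0.09, 0.05, 0.13, 0.31, 0.58]   # extra part shifts Z
        THRESHOLD = 0.05   # would come from golden-board spread in practice

        dist = frechet(golden, suspect)
        print(f"Frechet distance = {dist:.3f} ->",
              "ANOMALY" if dist > THRESHOLD else "pass")

    The Fréchet distance is a natural fit here because it compares whole curves while tolerating small frequency-axis misalignment, rather than requiring point-by-point equality.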