
    Division-based versus general decomposition-based multiple-level logic synthesis

    During the last decade, many different approaches have been proposed to solve the multiple-level synthesis problem with different minimum functionally complete systems of primitive logic blocks. The most popular of them is the division-based approach. However, modern microelectronic technology provides a large variety of building blocks which differ considerably from those typically considered. The traditional methods are therefore not suitable for synthesis with many modern building blocks. Furthermore, they often fail to find global optima for complex designs and leave some important design aspects unconsidered. Some of their weaknesses can be eliminated without leaving the paradigm they are based on; others are more fundamental. A paradigm which enables efficient exploitation of the opportunities created by modern microelectronic technology is the general decomposition paradigm. The aim of this paper is to analyze and compare the general decomposition approach and the division-based approach. The most important advantages of the general decomposition approach are its generality (any network of any building blocks can be considered) and totality (all important design aspects can be considered), as well as its natural handling of incompletely specified functions. In many cases, the general decomposition approach gives much better results than the traditional approaches.
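For readers unfamiliar with the division-based paradigm the abstract contrasts against, here is a standard textbook example of algebraic division (an illustration, not taken from the paper):

```latex
% Algebraic division extracts a common subexpression g from f so that
% f = g \cdot (f/g) + r. For example:
\[
  f = ab + ac + d, \qquad g = b + c
\]
\[
  f/g = a, \qquad r = d \quad\Longrightarrow\quad f = a\,(b + c) + d
\]
```

General decomposition, by contrast, is not restricted to factorizations of this form and can target arbitrary networks of arbitrary building blocks.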

    Transient error mitigation by means of approximate logic circuits

    The technological advances in the manufacturing of electronic circuits have greatly improved their performance, but they have also increased the sensitivity of electronic devices to radiation-induced errors. Among these, the most common effects are SEEs, i.e., electrical perturbations provoked by the strike of high-energy particles, which may modify the internal state of a memory element (SEU) or generate erroneous transient pulses (SET), among other effects. These events pose a threat to the reliability of electronic circuits, so fault-tolerance techniques must be applied to deal with them. The most common fault-tolerance techniques are based on full replication (DWC or TMR). These techniques cover a wide range of failure mechanisms present in electronic circuits, but they suffer from high overheads in terms of area and power consumption. For this reason, lighter alternatives are often sought, at the expense of slightly reduced reliability for the least critical circuit sections. In this context a new electronic design paradigm is emerging, known as approximate computing, which is based on improving circuit performance in exchange for slight modifications of the intended functionality. This is an interesting approach for the design of lightweight fault-tolerant solutions, and one that has not yet been studied in depth. The main goal of this thesis is to develop new lightweight fault-tolerant techniques with partial replication, by means of approximate logic circuits. These circuits can be designed with great flexibility, so that both the level of protection and the overheads can be adjusted at will depending on the needs of each application. However, finding optimal approximate circuits for a given application is still a challenge. In this thesis a method for approximate circuit generation is proposed, denoted fault approximation, which consists in assigning constant logic values to specific circuit lines. In addition, several criteria are developed to generate the most suitable approximate circuits for each application using this fault approximation mechanism. These criteria are based on the idea of approximating the least testable sections of a circuit, which reduces overheads while minimising the loss of reliability. Therefore, in this thesis the selection of approximations is linked to testability measures. The first fault selection criterion developed in this thesis uses static testability measures. The approximations are generated from the results of a fault simulation of the target circuit and from a user-specified testability threshold. The number of approximated faults depends on the chosen threshold, which makes it possible to generate approximate circuits with different trade-offs. Although this approach was initially intended for combinational circuits, it has also been extended to sequential circuits, by considering the flip-flops as both inputs and outputs of the combinational part of the circuit. The experimental results show that this technique achieves wide scalability and an acceptable trade-off between reliability and overheads, with very low computational complexity. However, the selection criterion based on static testability measures has some drawbacks.
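Before turning to those drawbacks, a minimal sketch of the static criterion as described above; all names and data here are illustrative, not the thesis's actual code:

```python
# Hypothetical illustration of threshold-based fault approximation.
# testability maps each stuck-at fault (line, stuck_value) to an estimated
# testability obtained from fault simulation (0.0 = hardest to test).

def select_faults(testability, threshold):
    """Pick the least testable faults: those below the user-chosen threshold."""
    return [fault for fault, t in testability.items() if t < threshold]

def approximate(faults):
    """Approximating a fault means tying its line to the stuck value, i.e.
    replacing the line with a constant 0 or 1 and letting synthesis simplify.
    Here we just return the resulting line -> constant assignments."""
    return {line: value for line, value in faults}

testability = {("n7", 0): 0.002, ("n3", 1): 0.04, ("n9", 1): 0.30}
print(approximate(select_faults(testability, threshold=0.05)))
# {'n7': 0, 'n3': 1} -- only the two hard-to-test faults are approximated
```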
Adjusting the performance of the generated approximate circuits through the approximation threshold is not intuitive, and the static testability measures do not account for the changes that occur as faults are approximated. Therefore, an alternative criterion is proposed, based on dynamic testability measures. With this criterion, the testability of each fault is computed by means of an implication-based probability analysis. The probabilities are updated with each newly approximated fault, so that on each iteration the most beneficial approximation is chosen, that is, the fault with the lowest probability. In addition, the computed probabilities make it possible to estimate the level of protection against faults that the generated approximate circuits provide, so circuits can be generated that adhere to a target error rate. By modifying this target, circuits with different trade-offs can be obtained. The experimental results show that this new approach adheres to the target error rate with reasonably good precision, and the approximate circuits it generates perform better than those produced by the approach based on static testability measures. The fault implications have also been reused to implement a new type of logic transformation, which consists in substituting functionally similar nodes. Once the fault selection criteria had been developed, they were applied to different scenarios. First, the proposed techniques were extended to FPGAs, taking into account the particularities of this kind of circuit. This approach has been validated by means of radiation experiments, which show that partial replication with approximate circuits can be even more robust than full replication, because a smaller area reduces the probability of SEE occurrence. The proposed techniques have also been applied to a real application circuit, the ARM Cortex-M0 microprocessor, using a set of software benchmarks to generate the required testability measures. Finally, a comparative study of the proposed approaches against approximate circuit generation by means of evolutionary techniques has been performed. Evolutionary approaches use a large amount of computation to generate multiple circuits by trial and error, reducing the chance of falling into local minima. The experimental results show that the circuits generated with evolutionary approaches perform slightly better than the circuits generated with the techniques proposed here, although at a much higher computational cost. In summary, several original fault mitigation techniques with approximate logic circuits are proposed. These approaches are demonstrated in various scenarios, showing that scalability and adaptability to the requirements of each application are their main virtues.
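The dynamic criterion described above amounts to a greedy loop with an error budget. A minimal sketch, in which the error-accumulation rule and the update callback are simplifying assumptions standing in for the thesis's implication-based probability analysis:

```python
# Hedged sketch of greedy fault approximation under a target error rate.
# The accumulation rule and update callback are simplifying assumptions,
# not the thesis's actual implication-based analysis.

def greedy_approximate(fault_probs, target_error_rate, update_probs):
    """fault_probs: dict fault -> current detection-probability estimate."""
    approximated, error_estimate = [], 0.0
    while fault_probs:
        fault = min(fault_probs, key=fault_probs.get)   # least testable fault
        if error_estimate + fault_probs[fault] > target_error_rate:
            break                                       # error budget exhausted
        error_estimate += fault_probs.pop(fault)
        approximated.append(fault)
        update_probs(fault_probs, fault)                # re-estimate remaining faults
    return approximated, error_estimate

# Toy usage with a no-op update:
faults, err = greedy_approximate(
    {"n1/SA0": 0.01, "n2/SA1": 0.20, "n3/SA0": 0.05},
    target_error_rate=0.1,
    update_probs=lambda probs, fault: None,
)
print(faults, err)   # ['n1/SA0', 'n3/SA0'] 0.06
```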

    Genetic Algorithms: An Overview with Applications in Evolvable Hardware

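No abstract accompanies this entry. As a pointer to its subject, here is a minimal generational genetic algorithm of the kind used in evolvable hardware, purely illustrative (it evolves a 2-input truth table toward XOR; nothing here is taken from the entry itself):

```python
import random

# Illustrative generational GA: each genome is a candidate truth table for a
# 2-input gate, scored against a target function (here XOR).

TARGET = [0, 1, 1, 0]                      # XOR truth table

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 4)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())  # typically converges to [0, 1, 1, 0]
```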

    Increasing the efficacy of automated instruction set extension

    The use of Instruction Set Extension (ISE) in customising embedded processors for a specific application has been studied extensively in recent years. The addition of a set of complex arithmetic instructions to a baseline core has proven to be a cost-effective means of meeting design performance requirements. This thesis proposes and evaluates a reconfigurable ISE implementation called “Configurable Flow Accelerators” (CFAs), a number of refinements to an existing Automated ISE (AISE) algorithm called “ISEGEN”, and the effects of source form on AISE. The CFA is repeatedly demonstrated to be a cost-effective design for ISE implementation. A temporal partitioning algorithm called “staggering” is proposed and shown to reduce the area of a CFA implementation by 37% on average for only an 8% reduction in acceleration. The thesis then turns to concerns within the ISEGEN AISE algorithm. A methodology for finding a good static heuristic weighting vector for ISEGEN is proposed and demonstrated; up to 100% of merit is shown to be lost or gained through the choice of vector. ISEGEN early termination is introduced and shown to improve the runtime of the algorithm by up to 7.26x (5.82x on average). An extension to the ISEGEN heuristic to account for pipelining is proposed and evaluated, increasing acceleration by up to an additional 1.5x. An energy-aware heuristic is added to ISEGEN, which reduces the energy used by a CFA implementation of a set of ISEs by 1.6x on average, and by up to 3.6x. This result directly contradicts the frequently espoused notion that “bigger is better” in ISE. The last stretch of work in this thesis is concerned with source-level transformation: the effect of changing the representation of the application on the quality of the combined hardware-software solution. A methodology for combined exploration of source transformation and ISE is presented and demonstrated to improve the acceleration of the result by an average of 35% versus ISE alone. Floating point is demonstrated to perform worse than fixed point for all design concerns and applications studied here, regardless of the ISEs employed.
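The weighting-vector and early-termination ideas can be pictured with a generic greedy selector. This is a hedged sketch of the general pattern only, not ISEGEN itself; the feature names and the stopping rule are assumptions:

```python
# Generic greedy candidate selection with a weighted merit function and
# early termination -- the general shape of heuristics like ISEGEN,
# not the actual algorithm.

def merit(features, weights):
    """Weighted sum of candidate features (e.g. estimated speedup, I/O count,
    area). The weighting vector is what a static tuning methodology searches for."""
    return sum(w * f for w, f in zip(weights, features))

def select(candidates, weights, patience=3):
    best, best_score, stall = None, float("-inf"), 0
    for cand in candidates:                # candidates: iterable of feature tuples
        score = merit(cand, weights)
        if score > best_score:
            best, best_score, stall = cand, score, 0
        else:
            stall += 1
            if stall >= patience:          # early termination: stop after a
                break                      # run of non-improving candidates
    return best

cands = [(3.0, 2, 5.0), (4.5, 6, 7.0), (4.0, 3, 6.0)]
print(select(cands, weights=(1.0, -0.5, -0.2)))   # (4.0, 3, 6.0)
```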

    Techniques for the realization of ultra-reliable spaceborne computer: Final report

    Bibliography and new techniques for the use of error correction and redundancy to improve the reliability of spaceborne computers.
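As a concrete instance of the error-correction techniques such a computer relies on (an illustration, not drawn from the report itself), single-error-correcting Hamming(7,4) encoding and decoding:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single bit flip
# is located by the syndrome and corrected. Illustrative only.

def encode(d):                     # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]        # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]        # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]        # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # positions 1..7

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3          # = position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1                 # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]          # recover the data bits

word = encode([1, 0, 1, 1])
word[5] ^= 1                                 # inject a single-bit upset
assert decode(word) == [1, 0, 1, 1]
```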

    Network-on-Chip

    Addresses the challenges associated with system-on-chip integration. Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores Network-on-Chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC). This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design. This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC, and its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Directions of future research and development in the field of NoC
    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
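Among the routing mechanisms a book like this covers, the simplest is dimension-ordered (XY) routing on a 2-D mesh. A minimal sketch, not taken from the book:

```python
# Dimension-ordered (XY) routing on a 2-D mesh NoC: route fully along X,
# then along Y. Deterministic and deadlock-free on a mesh. Illustrative only.

def xy_route(src, dst):
    """src, dst: (x, y) router coordinates. Returns the list of hops."""
    x, y = src
    hops = []
    while x != dst[0]:                 # X dimension first
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:                 # then Y dimension
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

print(xy_route((0, 0), (2, 1)))        # [(1, 0), (2, 0), (2, 1)]
```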

    A Multi-layer FPGA Framework Supporting Autonomous Runtime Partial Reconfiguration

    Partial reconfiguration is a unique capability recently provided by several Field Programmable Gate Array (FPGA) vendors, which involves altering part of the programmed design within an SRAM-based FPGA at run-time. In this dissertation, a Multilayer Runtime Reconfiguration Architecture (MRRA) is developed, evaluated, and refined for autonomous runtime partial reconfiguration of FPGA devices. Under the proposed MRRA paradigm, FPGA configurations can be manipulated at runtime using on-chip resources. Operations are partitioned into Logic, Translation, and Reconfiguration layers, along with a standardized set of Application Programming Interfaces (APIs). At each level, resource details are encapsulated and managed for efficiency and portability during operation. An MRRA mapping theory is developed to link general logic function and area allocation information to device-level physical configuration data, using mathematical data structures and physical constraints. In certain scenarios, configuration bitstream data can be read and modified directly for fast operations, relying on the use of similar logic functions and common interconnection resources for communication. A corresponding logic control flow is also developed to make the entire process autonomous. Several prototype MRRA systems are developed on a Xilinx Virtex II Pro platform. The Virtex II Pro on-chip PowerPC core and block RAM are employed to manage control operations, while multiple physical interfaces establish and supplement autonomous reconfiguration capabilities. Area, speed, and power optimization techniques are developed based on the Xilinx prototype. These prototypes and techniques are evaluated and analysed on a number of benchmark and hashing-algorithm case studies. The results indicate that, across a variety of test benches, up to a 70% reduction in resource utilization, up to a 50% improvement in power consumption, and up to a 10x increase in run-time performance are achieved using the developed architecture and approaches, compared with the Xilinx baseline reconfiguration flow. Finally, an FPGA fault-tolerance case study based on a Genetic Algorithm (GA) is evaluated as a high-level application running on this architecture. It demonstrates a hardware and software infrastructure that enables an FPGA to dynamically reconfigure itself efficiently under the control of a soft microprocessor core instantiated within the FPGA fabric. Such a system contributes the observed benefits of intelligent control, fast reconfiguration, and low overhead.
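The Logic/Translation/Reconfiguration layering can be pictured as a thin API stack in which each layer hides the one below it. The following skeleton is purely illustrative; the class and method names are assumptions, not MRRA's actual APIs:

```python
# Illustrative three-layer stack in the spirit of MRRA. All names assumed.

class ReconfigurationLayer:
    """Talks to the configuration port (e.g. ICAP) in frames."""
    def write_frames(self, address, frames):
        print(f"writing {len(frames)} frame(s) at {address:#x}")

class TranslationLayer:
    """Maps logical resources to device frame addresses and bitstream data."""
    def __init__(self, reconf):
        self.reconf = reconf
    def place(self, logic_function, region):
        frames = [hash((logic_function, region)) & 0xFFFF]  # stand-in bitstream
        self.reconf.write_frames(0x1000 + region, frames)

class LogicLayer:
    """Application-facing API: request a function in a region, no device details."""
    def __init__(self, translation):
        self.translation = translation
    def load(self, logic_function, region=0):
        self.translation.place(logic_function, region)

LogicLayer(TranslationLayer(ReconfigurationLayer())).load("adder4", region=2)
```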

    Design Automation and Application for Emerging Reconfigurable Nanotechnologies

    In the last few decades, two major phenomena have revolutionized the electronics industry: the ever-increasing dependence on electronic circuits, and Complementary Metal Oxide Semiconductor (CMOS) downscaling. The two have complemented each other: while electronics in general have demanded more computation per functional unit, CMOS downscaling has aptly supported such needs. However, while computational demand is still rising exponentially, CMOS downscaling is reaching its physical limits, so the need to explore viable emerging nanotechnologies is more imperative than ever. This thesis focuses on streamlining the existing design automation techniques for a class of emerging reconfigurable nanotechnologies. Transistors based on this technology exhibit duality in conduction, i.e. they can be configured dynamically as either a p-type or an n-type device by applying an external bias. Owing to this dynamic reconfiguration, these transistors are referred to as Reconfigurable Field-Effect Transistors (RFETs). Exploring and developing a new technology, just as with CMOS, requires tackling two main challenges: first, the design automation flow has to be modified to enable tailor-made circuit designs; second, application opportunities should be explored where such technologies can outperform existing CMOS technologies. This thesis targets both objectives for emerging reconfigurable nanotechnologies, by proposing approaches to enable an Electronic Design Automation (EDA) flow for circuits based on RFETs, and by exploring hardware security as an application that exploits the transistor-level dynamic reconfiguration offered by this technology. The thesis explains the bottom-up approach adopted to propose a logic synthesis flow, identifying new logic gates and circuit design paradigms that specifically exploit the dynamic reconfiguration offered by these novel nanotechnologies. This led to the subsequent need to find a natural Boolean logic abstraction for emerging reconfigurable nanotechnologies, as it is shown that the existing abstraction of negative unate logic used for CMOS technologies is sub-optimal for RFET-based circuits. In this direction, it is shown that duality in Boolean logic is a natural abstraction for this technology and truly represents the duality in conduction offered by individual transistors. Finding this abstraction paved the way for defining suitable primitives and proposing various algorithms for logic synthesis and technology mapping. The next step is a compatible physical synthesis flow for emerging reconfigurable nanotechnologies. Using silicon nanowire-based RFETs, .lef and .lib files are provided, enabling an end-to-end flow that generates a .GDSII file for circuits based exclusively on RFETs. Additionally, new approaches are explored to improve placement and routing for circuits based on reconfigurable nanotechnologies, and it is demonstrated how these approaches lead to superior results compared with the native flow meant for CMOS. Lastly, the unique transistor-level reconfiguration offered by RFETs is utilized to implement efficient Intellectual Property (IP) protection schemes against adversarial attacks. The ability to control the conduction of individual transistors is one of the most impactful features of this technology and suitably fits the paradigm of security measures.
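The self-duality abstraction mentioned above is easy to state: a Boolean function f is self-dual if and only if complementing every input complements the output. A small checker illustrates this (the majority function is the classic self-dual example, which is consistent with the XOR-majority synthesis the thesis pursues):

```python
from itertools import product

# f is self-dual iff f(~x) == ~f(x) for every input vector x.

def is_self_dual(f, n):
    return all(f(*(1 - b for b in bits)) == 1 - f(*bits)
               for bits in product((0, 1), repeat=n))

maj = lambda a, b, c: (a & b) | (b & c) | (a & c)   # majority: self-dual
and3 = lambda a, b, c: a & b & c                    # AND: not self-dual

print(is_self_dual(maj, 3))    # True
print(is_self_dual(and3, 3))   # False
```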
Prior security schemes based on CMOS technology often come with large overheads in terms of area, power, and delay. In contrast, the RFET-based hardware security measures proposed in this thesis, such as logic locking and split manufacturing, demonstrate affordable security solutions with low overheads. Overall, this thesis lays a strong foundation for the two main objectives, design automation and hardware security as an application, to push emerging reconfigurable nanotechnologies towards commercial integration. Additionally, the contributions of this thesis are made available under open-source licenses so as to foster new research directions and collaborations.

Contents:
Abstract
List of Figures
List of Tables
1 Introduction
  1.1 What are emerging reconfigurable nanotechnologies?
  1.2 Why does this technology look so promising?
  1.3 Electronics Design Automation
  1.4 The game of see-saw: key challenges vs benefits for emerging reconfigurable nanotechnologies
    1.4.1 Abstracting ambipolarity in logic gate designs
    1.4.2 Enabling electronic design automation for RFETs
    1.4.3 Enhanced functionality: a suitable fit for hardware security applications
  1.5 Research questions
  1.6 Entire RFET-centric EDA Flow
  1.7 Key Contributions and Thesis Organization
2 Preliminaries
  2.1 Reconfigurable Nanotechnology
    2.1.1 1D devices
    2.1.2 2D devices
    2.1.3 Factors favoring circuit-flexibility
  2.2 Feasibility aspects of RFET technology
  2.3 Logic Synthesis Preliminaries
    2.3.1 Circuit Model
    2.3.2 Boolean Algebra
    2.3.3 Monotone Function and the property of Unateness
    2.3.4 Logic Representations
3 Exploring Circuit Design Topologies for RFETs
  3.1 Contributions
  3.2 Organization
  3.3 Related Works
  3.4 Exploring design topologies for combinational circuits: functionality-enhanced logic gates
    3.4.1 List of Combinational Functionality-Enhanced Logic Gates based on RFETs
    3.4.2 Estimation of gate delay using the logical effort theory
  3.5 Invariable design of Inverters
  3.6 Sequential Circuits
    3.6.1 Dual edge-triggered TSPC-based D-flip flop
    3.6.2 Exploiting RFET's ambipolarity for metastability
  3.7 Evaluations
    3.7.1 Evaluation of combinational logic gates
    3.7.2 Novel design of 1-bit ALU
    3.7.3 Comparison of the sequential circuit with an equivalent CMOS-based design
  3.8 Concluding remarks
4 Standard Cells and Technology Mapping
  4.1 Contributions
  4.2 Organization
  4.3 Related Work
  4.4 Standard cells based on RFETs
    4.4.1 Interchangeable Pull-Up and Pull-Down Networks
    4.4.2 Reconfigurable Truth-Table
  4.5 Distilling standard cells
  4.6 HOF-based Technology Mapping Flow for RFETs-based circuits
    4.6.1 Area adjustments through inverter sharings
    4.6.2 Technology Mapping Flow
    4.6.3 Realizing Parameters For The Generic Library
    4.6.4 Defining RFETs-based Genlib for HOF-based mapping
  4.7 Experiments
    4.7.1 Experiment 1: Distilling standard-cells from a benchmark suite
    4.7.2 Experiment 2A: HOF-based mapping
    4.7.3 Experiment 2B: Using the distilled standard-cells during mapping
  4.8 Concluding Remarks
5 Logic Synthesis with XOR-Majority Graphs
  5.1 Contributions
  5.2 Organization
  5.3 Motivation
  5.4 Background and Preliminaries
    5.4.1 Terminologies
    5.4.2 Self-duality in NPN classes
    5.4.3 Majority logic synthesis
    5.4.4 Earlier work on XMG
    5.4.5 Classification of Boolean functions
  5.5 Preserving Self-Duality
    5.5.1 During logic synthesis
    5.5.2 During versatile technology mapping
  5.6 Advanced Logic synthesis techniques
    5.6.1 XMG resubstitution
    5.6.2 Exact XMG rewriting
  5.7 Logic representation-agnostic Mapping
    5.7.1 Versatile Mapper
    5.7.2 Support of supergates
  5.8 Creating Self-dual Benchmarks
  5.9 Experiments
    5.9.1 XMG-based Flow
    5.9.2 Experimental Setup
    5.9.3 Synthetic self-dual benchmarks
    5.9.4 Cryptographic benchmark suite
  5.10 Concluding remarks and future research directions
6 Physical synthesis flow and liberty generation
  6.1 Contributions
  6.2 Organization
  6.3 Background and Related Work
    6.3.1 Related Works
    6.3.2 Motivation
  6.4 Silicon Nanowire Reconfigurable Transistors
  6.5 Layouts for Logic Gates
    6.5.1 Layouts for Static Functional Logic Gates
    6.5.2 Layout for Reconfigurable Logic Gate
  6.6 Table Model for Silicon Nanowire RFETs
  6.7 Exploring Approaches for Physical Synthesis
    6.7.1 Using the Standard Place & Route Flow
    6.7.2 Open-source Flow
    6.7.3 Concept of Driver Cells
    6.7.4 Native Approach
    6.7.5 Island-based Approach
    6.7.6 Utilization Factor
    6.7.7 Placement of the Island on the Chip
  6.8 Experiments
    6.8.1 Preliminary comparison with CMOS technology
    6.8.2 Evaluating different physical synthesis approaches
  6.9 Results and discussions
    6.9.1 Parameters Which Affect The Area
    6.9.2 Use of Germanium Nanowires Channels
  6.10 Concluding Remarks
7 Polymorphic Primitives for Hardware Security
  7.1 Contributions
  7.2 Organization
  7.3 The Shift To Explore Emerging Technologies For Security
  7.4 Background
    7.4.1 IP protection schemes
    7.4.2 Preliminaries
  7.5 Security Promises
    7.5.1 RFETs for logic locking (transistor-level locking)
    7.5.2 RFETs for split manufacturing
  7.6 Security Vulnerabilities
    7.6.1 Realization of short-circuit and open-circuit scenarios in an RFET-based inverter
    7.6.2 Circuit evaluation on sub-circuits
    7.6.3 Reliability concerns: A consequence of short-circuit scenario
    7.6.4 Implication of the proposed security vulnerability
  7.7 Analytical Evaluation
    7.7.1 Investigating the security promises
    7.7.2 Investigating the security vulnerabilities
  7.8 Concluding remarks and future research directions
8 Conclusion
  8.1 Concluding Remarks
  8.2 Directions for Future Work
Appendices
  A Distilling standard-cells
  B RFETs-based Genlib
  C Layout Extraction File (.lef) for Silicon Nanowire-based RFET
  D Liberty (.lib) file for Silicon Nanowire-based RFET
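To make the logic-locking idea from the security chapters concrete: an RFET-style polymorphic gate computes different functions depending on a program (key) input, so a wrong key silently corrupts the logic. A hedged behavioural sketch; the gate choice and naming are illustrative, not the thesis's actual cells:

```python
# Behavioural model of a polymorphic two-input gate: the key bit selects the
# function, mimicking how an RFET program gate selects p- or n-type conduction.

def poly_gate(a, b, key):
    return 1 - (a & b) if key else 1 - (a | b)   # NAND when key=1, NOR when key=0

def locked(a, b, c, key):
    """Tiny locked circuit: correct behaviour only under the right key."""
    return poly_gate(poly_gate(a, b, key), c, key)

mismatches = sum(
    locked(a, b, c, 1) != locked(a, b, c, 0)
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
print(mismatches)   # nonzero: the wrong key changes the implemented function
```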