
    HAL — The Missing Piece of the Puzzle for Hardware Reverse Engineering, Trojan Detection and Insertion

    Hardware manipulations pose a serious threat to numerous systems, ranging from a myriad of smart-X devices to military systems. In many attack scenarios an adversary merely has access to the low-level, potentially obfuscated gate-level netlist. In general, the attacker possesses minimal information and faces the costly and time-consuming task of reverse engineering the design to identify security-critical circuitry, followed by the insertion of a meaningful hardware Trojan. These challenges have been considered only in passing by the research community. The contribution of this work is threefold: First, we present HAL, a comprehensive reverse engineering and manipulation framework for gate-level netlists. HAL allows automating defensive design analysis (e.g., including arbitrary Trojan detection algorithms with minimal effort) as well as offensive reverse engineering and targeted logic insertion. Second, we present a novel static analysis Trojan detection technique, ANGEL, which considerably reduces the false-positive rate of the detection technique FANCI. Furthermore, we demonstrate that ANGEL is capable of automatically detecting Trojans obfuscated with DeTrust. Third, we demonstrate how a malicious party can semi-automatically inject hardware Trojans into third-party designs. We present reverse engineering algorithms to disarm and trick cryptographic self-tests, and subtly leak cryptographic keys without any a priori knowledge of the design's internal workings.
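    Since the abstract contrasts ANGEL with FANCI, a minimal sketch of FANCI's underlying "control value" metric may help: a wire whose inputs almost never influence its output is a candidate stealth trigger. The Python below is an illustrative reconstruction of that published idea, not code from HAL or this paper; the function name and the 8-input trigger example are our own assumptions.

```python
# Minimal sketch of a FANCI-style "control value" metric (the baseline
# that ANGEL refines), assuming a gate is given as a boolean function
# over a small set of input wires. Illustrative only.
from itertools import product

def control_value(func, n_inputs, target):
    """Fraction of input patterns where toggling `target` flips the output.
    Wires with near-zero control values are flagged as stealthy candidates."""
    influenced = 0
    patterns = list(product([0, 1], repeat=n_inputs))
    for bits in patterns:
        flipped = list(bits)
        flipped[target] ^= 1
        if func(tuple(bits)) != func(tuple(flipped)):
            influenced += 1
    return influenced / len(patterns)

# Example: a trigger-like gate that only fires on one rare pattern.
trigger = lambda b: int(all(b))          # 8-input AND
print(control_value(trigger, 8, 0))      # tiny value -> suspicious wire
```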

    Low-noise CMOS amplifier for medical imaging

    Master's in Electronic and Telecommunications Engineering. This dissertation presents an integrated analogue front-end for triggering and amplification of signals produced by a silicon photomultiplier (SiPM). The proposed solution targets timing measurements with resolutions on the order of picoseconds, for implementation in compact equipment dedicated to time-of-flight Positron Emission Tomography (TOF-PET) medical imaging. The complete front-end channel was implemented in 130 nm CMOS technology and comprises pre-amplification, charge integration and shaping, baseline-holder, and reference-current biasing circuitry, for a total silicon area of 500x90 μm. Results are discussed on the basis of post-layout simulations, and directions for future research and design optimization are proposed.
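    As a rough illustration of why noise and slew rate set the achievable timing resolution in such a front-end, the RMS jitter of a leading-edge trigger is approximately the input-referred noise divided by the signal slope at the threshold crossing. The numbers below are assumed for illustration and are not values from the dissertation.

```python
# Back-of-envelope timing-jitter estimate for a leading-edge trigger:
# sigma_t ~ sigma_noise / slew_rate at the threshold crossing.
# Numbers are illustrative assumptions, not values from the thesis.
sigma_noise = 1e-3        # 1 mV RMS noise at the comparator input
slew_rate   = 5e7         # 50 mV/ns signal slope at the threshold, in V/s
sigma_t = sigma_noise / slew_rate
print(f"timing jitter ~ {sigma_t * 1e12:.0f} ps RMS")   # ~ 20 ps
```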

    Multi-objective Digital VLSI Design Optimisation

    The complexity and density of modern VLSI designs have increased exponentially over the past 50 years, recently reaching a stage where heterogeneous, many-core systems and numerous functions can be integrated into a tiny silicon die. These advancements have revealed intrinsic physical limits of process technologies in advanced silicon technology nodes. Designers and EDA vendors must handle these challenges, which may otherwise result in inferior design quality, outright failures, and lower design yields, all under time-to-market pressure. Multiple or many design objectives and constraints emerge during the design process and often need to be dealt with simultaneously. Multi-objective evolutionary algorithms are flexible at balancing many interacting variables and factors in uncertain environments, and the VLSI design process exposes a large number of tunable parameters, both in designs and in EDA tools. This provides many potential optimisation avenues where evolutionary algorithms can excel. This PhD work investigates the application of evolutionary techniques to digital VLSI design optimisation. Automated multi-objective optimisation frameworks, compatible with industrial design flows and foundry technologies, are proposed to improve solution performance, expand the feasible design space, and handle complex physical floorplan constraints by tuning designs at gate level. Methodologies for enriching standard cell libraries with additional drive strengths are also introduced to cooperate with the multi-objective optimisation frameworks, e.g., through subsequent hill-climbing, providing a richer pool of solutions optimised for different trade-offs. The experiments of this thesis demonstrate that multi-objective evolutionary algorithms, derived from biological inspiration, can assist the digital VLSI design process, in an industrial design context, in searching more efficiently for well-balanced trade-off solutions and better design space coverage. The expanded drive granularity of standard cells can push the performance of silicon technologies by offering improved solutions for critical objectives. The achieved optimisation results deliver better trade-offs among power, performance and area than standard EDA tools alone, not only for a single circuit solution but across the entire design space the standard tools produce.
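    The core of such multi-objective frameworks is Pareto ranking: a candidate configuration survives if no other candidate beats it on every metric at once. Below is a minimal, generic Python sketch of non-domination filtering over assumed (power, delay, area) tuples; the thesis's actual algorithm (e.g., an NSGA-II variant) and objectives may differ.

```python
# Minimal sketch of the Pareto-ranking step at the heart of multi-objective
# evolutionary algorithms. All objectives are minimised; each solution is an
# assumed (power, delay, area) tuple produced by an EDA tool run.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of the candidate pool."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

candidates = [(1.2, 0.9, 100), (1.0, 1.1, 120), (1.3, 1.0, 110), (0.9, 1.2, 95)]
print(pareto_front(candidates))   # trade-off solutions kept for the next generation
```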

    Hardware Attacks and Mitigation Techniques

    Today, electronic devices are widely deployed in our daily lives, in basic infrastructure such as financial and communication systems, and in military systems. Over the past decade, a growing number of threats against them has posed great danger to these systems. Hardware-based countermeasures offer low performance overhead for building secure systems. In this work, we investigate what hardware-based attacks are possible against modern computers and electronic devices. We then explore several design and verification techniques to enhance hardware security, with primary focus on two areas: hardware Trojans and side-channel attacks. Hardware Trojans are malicious modifications to original integrated circuits (ICs). With the trend of outsourcing designs to foundries overseas, the threat of hardware Trojans is increasing. Researchers have proposed numerous detection methods, which either take place at test time or monitor the IC for unexpected behavior at run time. Most of these methods require the possession of a Trojan-free IC, which is hard to obtain. In this work, we propose an innovative way to detect Trojans using reverse engineering. Our method eliminates the need for a Trojan-free IC, avoids the costly and error-prone steps of the reverse-engineering process, and achieves high detection accuracy. We also notice that in the current literature, very little effort has been devoted to design-time strategies that make test-time or run-time detection of Trojans easier. To address this issue, we develop techniques that improve the sensitivity of designs to test-time detection approaches. Experiments show that using our method, we can detect many more Trojans with very small power/area overhead and no timing violations. A side-channel attack (SCA) is another form of hardware attack in which the adversary measures side-channel information such as power, temperature, or timing, and deduces critical information about the underlying system. We first investigate countermeasures for timing SCAs on caches, which have been demonstrated to successfully break many widely used modern ciphers. Existing hardware countermeasures usually carry heavy performance overhead. We apply 3D integration techniques to this problem in a novel way: we investigate the implications of 3D integration for timing SCAs on caches and propose several countermeasures that utilize 3D integration techniques. Experimental results show that our countermeasures increase system security significantly while still achieving some performance gain over a 2D baseline system. We also investigate the security of Oblivious RAM (ORAM), a recently proposed hardware primitive that hides memory access patterns. We demonstrate, both through simulations and on an FPGA board, that timing SCAs can break many ORAM protocols, and we provide some general guidelines for secure ORAM implementations. We hope that our findings will motivate a new line of research in making ORAMs more secure.
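    For intuition on the timing side channels discussed above, the toy Python sketch below shows the classic leak pattern: an early-exit comparison runs longer the more of the secret a guess matches. It is purely didactic, with made-up names; real cache-timing and ORAM-timing attacks are far more involved.

```python
# Toy illustration of a timing side channel: an early-exit comparison
# leaks how many leading bytes of a guess are correct.
import time

SECRET = b"k3y!"

def insecure_check(guess):
    for g, s in zip(guess, SECRET):     # returns at the first mismatch,
        if g != s:                      # so runtime grows with the length
            return False                # of the correct prefix
    return len(guess) == len(SECRET)

def measure(guess, reps=200_000):
    start = time.perf_counter()
    for _ in range(reps):
        insecure_check(guess)
    return time.perf_counter() - start

for guess in (b"a000", b"k000", b"k300"):
    print(guess, f"{measure(guess):.3f}s")   # longer prefix -> longer time
```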

    Proceedings of the 5th International Workshop on Reconfigurable Communication-centric Systems on Chip 2010 - ReCoSoC'10 - May 17-19, 2010 Karlsruhe, Germany. (KIT Scientific Reports ; 7551)

    ReCoSoC is intended to be an annual meeting for exposing and discussing gathered expertise as well as state-of-the-art research on SoC-related topics through plenary invited papers and posters. The workshop aims to provide a prospective view of tomorrow's challenges in the multibillion-transistor era, taking into account emerging techniques and architectures that explore the synergy between flexible on-chip communication and system reconfigurability.

    Crosstalk computing: circuit techniques, implementation and potential applications

    Title from PDF of title page, viewed January 2022. Dissertation advisor: Mostafizur Rahman. Vita. Includes bibliographical references (pages 117-136). Thesis (Ph.D.)--School of Computing and Engineering, University of Missouri--Kansas City, 2020. This work presents a radically new computing concept for digital Integrated Circuits (ICs), called Crosstalk Computing. The conventional CMOS scaling trend is facing device scaling limitations and an interconnect bottleneck. The other primary concern of IC miniaturization is the signal-integrity issue due to Crosstalk, the unwanted interference of signals between neighboring metal lines, which is becoming unavoidable with advancing technology nodes. Traditional circuit design always tries to reduce this Crosstalk through various circuit and layout techniques. In contrast, this research develops novel circuit techniques that leverage this detrimental effect and astutely convert it into a useful feature: Crosstalk is engineered into a logic computation principle by exploiting deterministic signal interference for innovative circuit implementation. This research presents a comprehensive circuit framework for Crosstalk Computing and derives all the key circuit elements that enable this computing model. Along with regular digital logic circuits, it also presents a novel Polymorphic circuit approach unique to Crosstalk Computing. In Polymorphic circuits, the functionality of a circuit can be altered using a control variable. Owing to their multi-functional embodiment, polymorphic circuits find many useful applications such as reconfigurable system design, resource sharing, hardware security, and fault-tolerant circuit design. This dissertation shows a comprehensive list of polymorphic logic gate implementations not previously reported in any other work. It also compares Crosstalk polymorphic circuits with existing polymorphic approaches, which are either inefficient due to custom non-linear circuit styles or rely on exotic devices. The ability to design a wide range of polymorphic logic circuits (basic and complex logic), compact in design and minimal in transistor count, is unique to Crosstalk Computing and leads to benefits in circuit density, power, and performance. Circuit simulation and characterization results show a 6x improvement in transistor count, 2x improvement in switching energy, and 1.5x improvement in performance compared to counterpart implementations in CMOS circuit style. Crosstalk circuits do face issues when cascaded; this research analyzes these problems, develops auxiliary circuit techniques to fix them, and shows a module-level cascaded polymorphic circuit example that employs the auxiliary techniques developed. For the very first time, it implements a proof-of-concept prototype chip for Crosstalk Computing in TSMC 65nm technology and demonstrates experimental evidence of runtime reconfiguration of the polymorphic circuit. The dissertation also explores potential applications for Crosstalk Computing circuits. Finally, the future work section discusses Electronic Design Automation (EDA) challenges, proposes an appropriate design flow, and discusses ideas for the efficient implementation of Crosstalk Computing structures.
Thus, further research and development to realize efficient Crosstalk Computing structures can leverage the comprehensive circuit framework developed in this research and offer transformative benefits for the semiconductor industry. Contents: Introduction and Motivation -- More Moore and Relevant Beyond CMOS Research Directions -- Crosstalk Computing -- Crosstalk Circuits Based on Perception Model -- Crosstalk Circuit Types -- Cascading Circuit Issues and Solutions -- Existing Polymorphic Circuit Approaches -- Crosstalk Polymorphic Circuits -- Comparison and Benchmarking of Crosstalk Gates -- Practical Realization of Crosstalk Gates -- Potential Applications -- Conclusion and Future Work
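    For readers unfamiliar with the perception model mentioned in the contents, a behavioral sketch may help: coupling capacitances weight the aggressor inputs, the induced charge on the victim node is summed, and a threshold decides the output; shifting the threshold with a control variable morphs the same structure between OR and AND. The Python model below is an illustrative assumption, not the dissertation's circuit netlist, and its capacitance values are made up.

```python
# Behavioural sketch of the "perception model" of a Crosstalk gate: coupling
# capacitances weight the aggressor inputs, and the summed induced charge on
# the victim node is compared against a threshold. A control variable that
# shifts the effective threshold morphs the same structure between OR and AND.

def crosstalk_gate(inputs, couplings, threshold):
    induced = sum(c for x, c in zip(inputs, couplings) if x)   # charge summation
    return int(induced >= threshold)

C = [1.0, 1.0]                 # equal coupling caps on a 2-input gate (assumed)
for a in (0, 1):
    for b in (0, 1):
        as_or  = crosstalk_gate((a, b), C, threshold=0.9)   # one input suffices
        as_and = crosstalk_gate((a, b), C, threshold=1.9)   # both inputs needed
        print(a, b, "OR:", as_or, "AND:", as_and)
```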

    Modeling and Analysis of Large-Scale On-Chip Interconnects

    As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects become increasingly critical and difficult. VLSI systems impacted by increasingly high-dimensional process-voltage-temperature (PVT) variations demand far more modeling and analysis effort than ever before, while the analysis of large-scale on-chip interconnects, which requires solving tens of millions of unknowns, imposes great challenges for computer-aided design. This dissertation presents new methodologies for addressing these two challenges in large-scale on-chip interconnect modeling and analysis. First, standard statistical circuit modeling techniques usually employ principal component analysis (PCA) and its variants to reduce parameter dimensionality. Although widely adopted, these techniques can be very limited, since parameter dimension reduction is achieved by merely considering the statistical distributions of the controlling parameters while neglecting the important correspondence between these parameters and the circuit performances (responses) under modeling. This dissertation presents a variety of performance-oriented parameter dimension reduction methods that can lead to more than one order of magnitude parameter reduction for a variety of VLSI circuit modeling and analysis problems. Second, the sheer size of present-day power/ground distribution networks makes their analysis and verification extremely runtime- and memory-inefficient, and at the same time limits the extent to which these networks can be optimized. Given that today's commodity graphics processing units (GPUs) can deliver more than 500 GFLOPS (billions of floating-point operations per second) of computing power and 100 GB/s of memory bandwidth, more than 10X what modern general-purpose quad-core microprocessors offer, it is very desirable to convert this GPU computing power into usable design automation tools for VLSI verification. In this dissertation, for the first time, we show how to exploit massively parallel single-instruction multiple-thread (SIMT) GPU platforms to tackle power grid analysis with very promising performance: our GPU-based network analyzer is capable of solving tens of millions of power grid nodes in just a few seconds. Additionally, with this GPU-based simulation framework, the more challenging problem of three-dimensional full-chip thermal analysis can be solved far more efficiently than ever before.
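    The power-grid analysis described above amounts to solving a huge sparse linear system G·v = i for the node voltages. A node-parallel relaxation such as Jacobi maps naturally onto SIMT hardware, since every node updates independently in each sweep; the Python/NumPy sketch below stands in for a CUDA kernel and uses an assumed 3-node example, not the dissertation's solver.

```python
# Sketch of the node-parallel Jacobi iteration that SIMT GPU solvers map well
# to, applied to a DC power-grid system G·v = i (G: conductance matrix,
# i: current loads). NumPy stands in for CUDA; values are assumptions.
import numpy as np

def jacobi(G, i, iters=2000):
    D = np.diag(G)                       # diagonal conductances
    R = G - np.diagflat(D)               # off-diagonal couplings
    v = np.zeros_like(i)
    for _ in range(iters):               # every node updates independently:
        v = (i - R @ v) / D              # one GPU thread per node in practice
    return v

# Tiny 3-node example: 1-ohm resistors in a chain, loads drawing current.
G = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
i = np.array([1., 0., 1.])
print(jacobi(G, i))                      # node voltages, here [1. 1. 1.]
```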

    Learning Approaches to Analog and Mixed Signal Verification and Analysis

    The increased integration and interaction of analog and digital components within a system has amplified the need for a fast, automated, combined analog and digital verification methodology. Many automated characterization, test, and verification methods are used in practice for digital circuits, but analog and mixed-signal circuits suffer from long simulation times brought on by transistor-level analysis. Due to the substantial number of simulations required to properly characterize and verify an analog circuit, many undetected issues manifest themselves in the manufactured chips. Creating behavioral models, circuit abstractions of analog components, reduces simulation time and allows for faster exploration of the design space. Traditionally, creating behavioral models for non-linear circuits is a manual process that relies heavily on design knowledge for proper parameter extraction and circuit abstraction. Manual modeling requires a high level of circuit knowledge and often fails to capture critical effects stemming from block interactions and second-order device effects. For this reason, it is of interest to extract the models directly from the SPICE-level descriptions so that these effects and interactions can be properly captured. As devices are scaled, process variations have a more profound effect on circuit behaviors and performance; creating behavioral models from SPICE-level descriptions that include input parameters and a large process variation space is a non-trivial task. In this dissertation, we focus on various problems related to the design automation of analog and mixed-signal circuits. Analog circuits are typically highly specialized and fine-tuned to fit the desired specifications of a given system, reducing the reusability of circuits from design to design. This hinders progress in automating various aspects of analog design, test, and layout. At the core of many automation techniques, simulation or data collection is required; unfortunately, for some complex analog circuits a single simulation may take many days, prohibiting any type of behavior characterization or verification of the circuit. This leads to the first fundamental problem in the automation of analog devices: how can we reduce the simulation cost while maintaining the robustness of transistor-level simulations? As analog circuits vary vastly from one design to the next and are hardly ever composed of standard library-based building blocks, the second fundamental question is how to create automated processes general enough to apply to all or most circuit types. Finally, what circuit characteristics can we utilize to enhance the automation procedures? The objective of this dissertation is to explore these questions and provide suitable evidence that they can be answered. We begin by exploring machine learning techniques to model the design space with minimal simulation effort. Circuit partitioning is employed to reduce the complexity of the machine learning algorithms. Using the same partitioning algorithm, we further explore the behavior characterization of analog circuits undergoing process variation; the partitioning is general enough to be used by any CMOS-based analog circuit. The insights gained from behavioral modeling during behavior characterization are used to improve simulation through event propagation, input space search, and complexity and information measurements.
The reduction of the input space and the behavioral modeling of low-complexity, low-information primitive elements reduce the simulation time of large analog and mixed-signal circuits by 50-75%. The method is extended and applied to assist in analyzing analog circuit layout. All of the proposed methods are implemented on analog circuits ranging from small benchmark circuits to large, highly complex and specialized circuits. The proposed dependency-based partitioning of large analog circuits in the time domain allows for fast identification of highly sensitive transistors and provides a natural division of circuit components. Modeling analog circuits in the time domain with this partitioning technique and SVM learning algorithms allows for very fast transient behavior predictions, three orders of magnitude faster than traditional simulators, while maintaining 95% accuracy. Analog verification is explored through a reduction of simulation time that utilizes the partitions, the information and complexity measures, and input space reduction. Behavioral models are created using supervised learning techniques for the detected primitive elements; we show the effectiveness of the method on four analog circuits where the simulation time is decreased by 55-75%. Utilizing the reduced simulation method, critical nodes can be found quickly and efficiently. The nodes found using this method match those found by an experienced layout engineer, but are detected automatically given the design and input specifications. The technique is further extended to find the tolerance of transistors to both process variation and power supply fluctuation. This information allows for corrections of layout overdesign or guidance in placing noise-reducing components such as guard rings or decoupling capacitors. The proposed approaches significantly reduce the simulation time required to perform these tasks, maintain high accuracy, and can be automated.
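    As a concrete illustration of the supervised-learning step described above, the sketch below fits a support vector regressor as a behavioral model of a single partitioned sub-circuit, mapping sampled input voltages to an output value. scikit-learn and the made-up tanh "circuit" are stand-in assumptions, not the dissertation's actual setup.

```python
# Minimal sketch of SVM-based behavioral modeling of one sub-circuit:
# learn the input-to-output mapping from sampled simulation data, then
# evaluate the learned model instead of re-running SPICE.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
vin = rng.uniform(0.0, 1.8, size=(500, 1))          # sampled input voltages
# Assumed stand-in "circuit": a saturating amplifier with a little noise.
vout = np.tanh(4 * (vin[:, 0] - 0.9)) + 0.01 * rng.standard_normal(500)

model = SVR(kernel="rbf", C=10.0, epsilon=0.005).fit(vin, vout)

test = np.array([[0.3], [0.9], [1.5]])
print(model.predict(test))       # evaluates in microseconds vs. full SPICE runs
```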