41 research outputs found

    Using Efficient Path Profiling to Optimize Memory Consumption of On-Chip Debugging for High-Level Synthesis

    Get PDF
    High-Level Synthesis (HLS) for FPGAs is gaining popularity and is increasingly used to handle complex systems with multiple integrated components. To increase performance and efficiency, HLS flows now adopt several advanced optimization techniques. Aggressive optimizations and system-level integration can introduce bugs that are only observable on-chip, so debugging support for circuits generated with HLS is receiving considerable attention. Among the data that can be collected on chip for debugging, one of the most important is the state of the Finite State Machines (FSMs) controlling the components of the circuit; however, tracing this behavior during execution usually requires a large amount of memory. This work proposes an approach that exploits HLS information and the structure of the FSM to compress control-flow traces and to integrate optimized components for on-chip debugging. The generated checkers analyze the FSM execution on the fly, automatically notifying when a bug is detected, localizing it, and providing data about its cause. The traces are compressed using a software profiling technique, called Efficient Path Profiling (EPP), adapted for the debugging of hardware accelerators generated with HLS. With this technique, the size of the memory used to store control-flow traces can be reduced by up to two orders of magnitude compared to the state of the art.
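
    The trace compression builds on the Efficient Path Profiling idea of Ball and Larus: the edges of an acyclic control-flow graph are annotated with increments so that the sum accumulated along any entry-to-exit path is a unique, compact path identifier. The sketch below shows that generic numbering scheme in Python; the example graph, function names, and the software-only setting are illustrative assumptions, not the paper's hardware adaptation.

```python
# Minimal sketch of Ball-Larus / Efficient Path Profiling edge numbering on a DAG.
# Names and the example CFG are illustrative, not taken from the paper.
from collections import defaultdict

def epp_edge_values(nodes, edges):
    """Assign an integer to each edge of an acyclic CFG so that the sum of the
    values along any entry-to-exit path is a unique path identifier.
    `nodes` must be given in reverse topological order (exit first)."""
    num_paths = {}          # node -> number of distinct paths from node to exit
    edge_val = {}           # (src, dst) -> increment added when the edge is taken
    succs = defaultdict(list)
    for src, dst in edges:
        succs[src].append(dst)
    for n in nodes:                     # reverse topological order
        if not succs[n]:                # exit node
            num_paths[n] = 1
            continue
        total = 0
        for s in succs[n]:
            edge_val[(n, s)] = total    # offset reserved for paths going through s
            total += num_paths[s]
        num_paths[n] = total
    return edge_val, num_paths

# Example: diamond CFG  A -> {B, C} -> D
nodes = ["D", "B", "C", "A"]            # reverse topological order
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
vals, counts = epp_edge_values(nodes, edges)
# counts["A"] == 2 distinct paths; summing edge values gives 0 for A-B-D and
# 1 for A-C-D, so a single small counter identifies the control-flow path taken.
```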

    Trace-based automated logical debugging for high-level synthesis generated circuits

    Get PDF
    In this paper we present an approach for debugging hardware designs generated by High-Level Synthesis (HLS), relieving users from the burden of identifying the signals to trace and from the error-prone task of manually checking the traces. The necessary steps are performed after HLS, independently of it and without affecting the synthesized design; for this reason our methodology should be easily adaptable to any HLS tool. The proposed approach makes full use of HLS compile-time information. The execution of the simulated design and of the original C program can be compared, checking whether there are discrepancies between the values of C variables and the signals in the design. The detection is completely automated: it needs no input other than the program itself, and the user does not have to know anything about the overall compilation process. The design can be validated on a given set of test cases and the discrepancies are detected by the tool. Relationships between the original high-level source code and the generated HDL are kept by the compiler and shown to the user. The granularity of this discrepancy analysis is per-operation, and it includes the temporary variables inserted by the compiler. As a consequence, the design can be debugged as is, with no restrictions on the optimizations available during HLS. We show how this methodology can be used to identify different kinds of bugs: 1) bugs introduced by the HLS tool used for the synthesis; 2) bugs introduced by buggy libraries of hardware components for HLS; 3) undefined-behavior bugs in the original high-level source code.
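
    At its core, such a check walks the two executions in lockstep and flags any operation where the value of a C variable differs from the value of the HDL signal it maps to. The fragment below is a minimal sketch of that per-operation comparison; the trace formats and the variable-to-signal map are assumptions made for illustration, not the tool's actual data structures.

```python
# Hedged sketch of a per-operation discrepancy check between a software trace
# and an HDL simulation trace. Trace formats and the mapping are assumptions.

def find_discrepancies(sw_trace, hw_trace, var_to_signal):
    """sw_trace: list of (op_id, variable, value) from the C execution.
       hw_trace: dict mapping (op_id, signal) -> value from HDL simulation.
       var_to_signal: compiler-kept map from C variable to HDL signal name."""
    mismatches = []
    for op_id, var, sw_value in sw_trace:
        signal = var_to_signal.get(var)
        if signal is None:
            continue                      # variable untracked or optimized away
        hw_value = hw_trace.get((op_id, signal))
        if hw_value is not None and hw_value != sw_value:
            mismatches.append((op_id, var, signal, sw_value, hw_value))
    return mismatches

# Example: operation 3 writes C variable "tmp1", mapped to signal "reg_tmp1".
sw = [(3, "tmp1", 42)]
hw = {(3, "reg_tmp1"): 41}
print(find_discrepancies(sw, hw, {"tmp1": "reg_tmp1"}))
# -> [(3, 'tmp1', 'reg_tmp1', 42, 41)]
```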

    Automation scripts for generic synthesis and co-simulation for Xronos generated HDLs

    Get PDF
    Multimedia applications keep growing in demand and popularity across a wide range of application domains, and the devices that support them are scaling up accordingly; however, developing such applications involves increasingly complex, time-consuming, and error-prone tasks. ORCC (the Open RVC-CAL Compiler) has helped reduce the complexity of application development by providing a set of modern tools based on dataflow programming. Although the HDL generated by Xronos has proven useful, it is not optimized for resource utilization, power, or timing, and it does not yet have wide coverage across synthesis tools such as Altera Quartus and Synopsys Design Compiler, i.e., the generated code cannot readily be synthesized with other HDL tools. The core contribution of this paper is the generation of generic HDL for Altera Quartus or Synopsys Design Compiler from the original HDL generated by ORCC, together with a co-simulation and automation environment for the RVC-CAL framework and Altera Quartus testbenches: simulation data available from the RVC-CAL framework is extracted so that results from both ends (RVC-CAL's and Quartus's) can be verified against each other. In addition, to fully test the generated code, a comparison is made with Xilinx Vivado HLS to benchmark functional tests across different tools. This paper presents automation scripts that help produce better resource, power, and timing results from Xronos-generated HDLs and provide automatic testbenches that can be used with Xilinx Vivado and other HDL tools.
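
    The heart of such a co-simulation flow is a check that the token streams produced by the reference RVC-CAL simulation and by the synthesized design agree. The script below is a minimal sketch of that comparison under assumed file names and a simple one-token-per-line dump format; the actual scripts would also manage simulator invocation and FIFO ordering.

```python
# Minimal sketch of a co-simulation check: compare a reference token dump from
# the RVC-CAL simulation with the dump produced by the synthesized design.
# File names and the one-token-per-line format are assumptions.
from itertools import zip_longest

def compare_token_streams(ref_path, dut_path):
    with open(ref_path) as ref, open(dut_path) as dut:
        for i, (r, d) in enumerate(zip_longest(ref, dut)):
            r = r.strip() if r is not None else "<missing>"
            d = d.strip() if d is not None else "<missing>"
            if r != d:
                return f"Mismatch at token {i}: reference={r}, design={d}"
    return "Token streams match"

if __name__ == "__main__":
    print(compare_token_streams("rvc_cal_out.txt", "quartus_sim_out.txt"))
```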

    Verified lifting of stencil computations

    Get PDF
    This paper demonstrates a novel combination of program synthesis and verification to lift stencil computations from low-level Fortran code to a high-level summary expressed using a predicate language. The technique is sound and mostly automated, and leverages counter-example guided inductive synthesis (CEGIS) to find provably correct translations. Lifting existing code to a high-performance description language has a number of benefits, including maintainability and performance portability. For example, our experiments show that the lifted summaries can enable domain-specific compilers to do a better job of parallelization as compared to an off-the-shelf compiler working on the original code, and can even support fully automatic migration to hardware accelerators such as GPUs. We have implemented verified lifting in a system called STNG and have evaluated it using microbenchmarks, mini-apps, and real-world applications. We demonstrate the benefits of verified lifting by first automatically summarizing Fortran source code into a high-level predicate language, and subsequently translating the lifted summaries into Halide, with the translated code achieving median performance speedups of 4.1X and up to 24X for non-trivial stencils as compared to the original implementation.
    Funding: United States Department of Energy, Office of Science (Awards DE-SC0008923 and DE-SC0005288).
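
    Verified lifting rests on the standard CEGIS loop: propose a candidate summary from a finite set of examples, attempt to verify it against the original code, and, if verification fails, feed the counterexample back into synthesis. The skeleton below sketches that loop generically; the synthesize and verify callbacks and the summary representation are placeholders, not STNG's actual components, which operate over Fortran loop nests and a predicate language.

```python
# Generic counter-example guided inductive synthesis (CEGIS) skeleton.
# `synthesize` and `verify` are placeholder callbacks supplied by the caller.

def cegis(synthesize, verify, max_iters=100):
    """synthesize(examples) -> candidate summary, or None if none exists.
       verify(candidate)    -> None if the candidate is correct on all inputs,
                               otherwise a counterexample input."""
    examples = []
    for _ in range(max_iters):
        candidate = synthesize(examples)
        if candidate is None:
            return None                    # no summary in the search space
        counterexample = verify(candidate)
        if counterexample is None:
            return candidate               # provably equivalent summary found
        examples.append(counterexample)    # refine the search with the failing input
    return None                            # give up after the iteration budget
```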

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    Get PDF
    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing

    Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen: MBMV 2015 - Tagungsband, Chemnitz, 03. - 04. März 2015

    Get PDF
    The workshop Methoden und Beschreibungssprachen zur Modellierung und Verifikation von Schaltungen und Systemen (MBMV 2015) takes place for the 18th time. This year it is organized by the Chair of Circuit and System Design at Technische Universität Chemnitz and the Steinbeis-Forschungszentrum Systementwurf und Test. The workshop aims to discuss the latest trends, results, and open problems in modeling and verification methods as well as description languages for digital, analog, and mixed-signal circuits, and thus serves as a forum for the exchange of ideas. It also offers a platform for exchange between research and industry and for maintaining existing and establishing new contacts. It allows young researchers to present their ideas and approaches to a broad audience from academia and industry and to discuss them in depth during the event. Its long history has made it a fixture in many event calendars. Traditionally, the meetings of the ITG specialist groups are affiliated with the workshop. This year, two projects funded by the Bundesministerium für Bildung und Forschung within the InnoProfile-Transfer initiative use the workshop to present their research results to a broad audience in two dedicated tracks. Representatives of the projects Generische Plattform für Systemzuverlässigkeit und Verifikation (GPZV) and GINKO - Generische Infrastruktur zur nahtlosen energetischen Kopplung von Elektrofahrzeugen present parts of their current work. This enriches the workshop with additional focus topics and provides a valuable complement to the authors' contributions. [... from the preface]

    Improving Programming Support for Hardware Accelerators Through Automata Processing Abstractions

    Full text link
    The adoption of hardware accelerators, such as Field-Programmable Gate Arrays, into general-purpose computation pipelines continues to rise, driven by recent trends in data collection and analysis as well as pressure from challenging physical design constraints in hardware. The architectural designs of many of these accelerators stand in stark contrast to the traditional von Neumann model of CPUs. Consequently, existing programming languages, maintenance tools, and techniques are not directly applicable to these devices, meaning that additional architectural knowledge is required for effective programming and configuration. Current programming models and techniques are akin to assembly-level programming on a CPU, thus placing significant burden on developers tasked with using these architectures. Because programming is currently performed at such low levels of abstraction, the software development process is tedious and challenging and hinders the adoption of hardware accelerators. This dissertation explores the thesis that theoretical finite automata provide a suitable abstraction for bridging the gap between high-level programming models and maintenance tools familiar to developers and the low-level hardware representations that enable high-performance execution on hardware accelerators. We adopt a principled hardware/software co-design methodology to develop a programming model providing the key properties that we observe are necessary for success, namely performance and scalability, ease of use, expressive power, and legacy support. First, we develop a framework that allows developers to port existing, legacy code to run on hardware accelerators by leveraging automata learning algorithms in a novel composition with software verification, string solvers, and high-performance automata architectures. Next, we design a domain-specific programming language to aid programmers writing pattern-searching algorithms and develop compilation algorithms to produce finite automata, which supports efficient execution on a wide variety of processing architectures. Then, we develop an interactive debugger for our new language, which allows developers to accurately identify the locations of bugs in software while maintaining support for high-throughput data processing. Finally, we develop two new automata-derived accelerator architectures to support additional applications, including the detection of security attacks and the parsing of recursive and tree-structured data. Using empirical studies, logical reasoning, and statistical analyses, we demonstrate that our prototype artifacts scale to real-world applications, maintain manageable overheads, and support developers' use of hardware accelerators. Collectively, the research efforts detailed in this dissertation help ease the adoption and use of hardware accelerators for data analysis applications, while supporting high-performance computation.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155224/1/angstadt_1.pd
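
    The central abstraction argued for above is the finite automaton: patterns are compiled to NFAs whose states map onto automata-processing hardware, consuming one input symbol per clock cycle. The sketch below shows that execution model in software; the transition encoding and the example pattern are illustrative assumptions, not the dissertation's compiler output.

```python
# Hedged sketch of the automata execution model that automata-processing
# accelerators implement in hardware: all active states consume one input
# symbol per step. The transition table and the pattern are illustrative.

def run_nfa(transitions, start_states, accept_states, stream):
    """transitions: dict mapping (state, symbol) -> iterable of next states.
    Start states are re-activated on every symbol ("all-input" matching),
    so matches are reported wherever the pattern ends in the stream."""
    active = set()
    for i, symbol in enumerate(stream):
        current = active | set(start_states)
        active = {n for s in current for n in transitions.get((s, symbol), ())}
        if active & set(accept_states):
            yield i                     # a match ends at position i

# Example: report every occurrence of the pattern "ab" in the input stream.
transitions = {("q0", "a"): {"q1"}, ("q1", "b"): {"q2"}}
print(list(run_nfa(transitions, {"q0"}, {"q2"}, "xabcaab")))  # -> [2, 6]
```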

    Proceedings of the 22nd Conference on Formal Methods in Computer-Aided Design – FMCAD 2022

    Get PDF
    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum to researchers in academia and industry for presenting and discussing groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design including verification, specification, synthesis, and testing

    Hardware security design from circuits to systems

    Get PDF
    The security of hardware implementations is of considerable importance, as even the most secure and carefully analyzed algorithms and protocols can be vulnerable in their hardware realization. For instance, numerous successful attacks have been presented against the Advanced Encryption Standard, which is approved for top secret information by the National Security Agency. There are numerous challenges for hardware security, ranging from critical power and resource constraints in sensor networks to scalability and automation for large Internet of Things (IoT) applications. The physically unclonable function (PUF) is a promising building block for hardware security, as it exposes a device-unique challenge-response behavior which depends on process variations in fabrication. It can be used in a variety of applications including random number generation, authentication, fingerprinting, and encryption. The primary concerns for PUFs are reliability in the presence of environmental variations, area and power overhead, and process-dependent randomness of the challenge-response behavior.
    Carbon nanotube field-effect transistors (CNFETs) have been shown to have excellent electrical and unique physical characteristics, and they are a promising candidate to replace silicon transistors in future very large scale integration (VLSI) designs. We present the Carbon Nanotube PUF (CNPUF), the first PUF design that takes advantage of unique CNFET characteristics. CNPUF achieves higher reliability against environmental variations and increases the resistance against modeling attacks. Furthermore, CNPUF reduces power and energy consumption by 89.6% and 98%, respectively, in comparison to previous ultra-low-power PUF designs. Moreover, CNPUF allows a power-security tradeoff in an extended design, which can greatly increase the resilience against modeling attacks.
    Despite increasing focus on defenses against physical attacks, consistent security-oriented design of embedded systems remains a challenge, as most formalizations and security models are concerned with isolated physical components or a high-level concept. Therefore, we build on existing work on hardware security and provide four contributions to system-oriented physical defense: (i) a system-level security model to overcome the chasm between secure components and the requirements of high-level protocols, enabling synergy between component-oriented security formalizations and theoretically proven protocols; (ii) an analysis of current practices in PUF protocols using the proposed system-level security model, identifying significant issues and exposing assumptions that require costly security techniques; (iii) a System-of-PUF (SoP) that utilizes the large PUF design space to achieve security requirements with minimal resource utilization, requiring 64% fewer gate-equivalent units than recently published schemes; and (iv) a multilevel authentication protocol based on SoP, validated using our system-level security model, which overcomes current vulnerabilities and furthermore offers breach recognition and recovery.
    Unpredictability and reliability are core requirements of PUFs: unpredictability implies that an adversary cannot sufficiently predict future responses from previous observations, while reliability increases the reproducibility of PUF responses and hence allows validation of expected responses. However, advanced machine-learning algorithms have been shown to be a significant threat to the practical validity of PUFs, as they can accurately model PUF behavior. The most effective technique was shown to be the XOR-based combination of multiple PUFs, but as this approach drastically reduces reliability, it does not scale well against software-based machine-learning attacks. We analyze threats to PUF security and propose PolyPUF, a scalable and secure architecture that introduces polymorphic PUF behavior. This architecture significantly increases resistance to model building while maintaining reliability. An extensive experimental evaluation and comparison demonstrate that the PolyPUF architecture can secure various PUF configurations and is the only evaluated approach to withstand highly complex neural-network machine-learning attacks. Furthermore, we show that PolyPUF consumes less energy and has less implementation overhead in comparison to lightweight reference architectures.
    Emerging technologies such as the Internet of Things (IoT) heavily rely on hardware security for data and privacy protection. The outsourcing of integrated circuit (IC) fabrication introduces diverse threat vectors with different characteristics, such that the security of each device has unique focal points. Hardware Trojan horses (HTH) are a significant threat for IoT devices, as these devices process security-critical information with limited resources, and HTHs aimed at information leakage are particularly difficult to detect because they have a minimal footprint. Moreover, constantly increasing integration complexity requires automatic synthesis to maintain the pace of innovation. We introduce the first high-level synthesis (HLS) flow that produces a threat-targeted and security-enhanced hardware design to prevent HTH injection by a malicious foundry. Through analysis of entropy loss and criticality decay, the presented algorithms implement highly resource-efficient targeted information dispersion. An obfuscation flow is introduced to camouflage the effects of dispersion and reduce the effectiveness of reverse engineering. A new metric for the combined security of the device is proposed, and dispersion and obfuscation are co-optimized to target user-supplied threat parameters under resource constraints. The flow is evaluated on existing HLS benchmarks and a new IoT-specific benchmark, and shows significant resource savings as well as adaptability.
    The IoT and cloud computing rely on strong confidence in the security of confidential or highly privacy-sensitive data. As (differential) power attacks can take advantage of side-channel leakage to expose device-internal secrets, side-channel leakage is a major concern and an ongoing research focus. However, countermeasures typically require expert-level security knowledge for efficient application, which limits adoption in the highly competitive and time-constrained IoT field. We address this need by presenting the first HLS flow with a primary focus on side-channel leakage reduction. Minimal security annotation of the high-level C code is sufficient to perform automatic analysis of security-critical operations with corresponding insertion of countermeasures; additionally, imbalanced branches are detected and corrected. For practicality, the flow can meet both resource and information-leakage constraints. The presented flow is extensively evaluated on established HLS benchmarks and a general IoT benchmark. Under identical resource constraints, leakage is reduced by between 32% and 72% compared to the baseline; under a leakage target, the constraints are achieved with 31% to 81% less resource overhead.
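
    The modeling attacks discussed above typically exploit the additive-delay structure of arbiter-style PUFs: the response is the sign of a weighted sum over parity features of the challenge, which is exactly the kind of function machine-learning classifiers fit well. The snippet below sketches that standard linear model; the random weights stand in for fabrication-dependent stage delays, and none of it reflects the specific CNPUF, SoP, or PolyPUF designs.

```python
# Standard additive-delay model of an arbiter PUF, the structure that makes
# machine-learning modeling attacks effective. Weights are random stand-ins
# for device-specific, fabrication-dependent stage delays.
import numpy as np

def parity_features(challenge):
    """Transform an n-bit challenge into the n+1 parity features Phi used by
    the linear arbiter-PUF delay model: phi_i = prod_{j>=i} (1 - 2*c_j)."""
    c = 1 - 2 * np.asarray(challenge)            # map bits {0,1} -> {+1,-1}
    phi = np.cumprod(c[::-1])[::-1]              # suffix products
    return np.append(phi, 1.0)                   # constant bias feature

def arbiter_puf_response(weights, challenge):
    """Response is the sign of the delay difference accumulated over the stages."""
    return int(np.dot(weights, parity_features(challenge)) > 0)

rng = np.random.default_rng(0)
n_stages = 64
weights = rng.normal(size=n_stages + 1)          # device-specific delay model
challenge = rng.integers(0, 2, size=n_stages)
print(arbiter_puf_response(weights, challenge))  # prints 0 or 1
```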