
    Towards Multidimensional Verification: Where Functional Meets Non-Functional

    Trends in advanced electronic systems' design have a notable impact on design verification technologies. The recent paradigms of the Internet of Things (IoT) and Cyber-Physical Systems (CPS) assume devices immersed in physical environments, significantly constrained in resources, and expected to provide security, privacy, reliability, performance, and low-power features. In recent years, numerous extra-functional aspects of electronic systems have been brought to the front, implying verification of hardware design models in a multidimensional space along with the functional concerns of the target system. However, unlike in the software domain, such a holistic approach remains underdeveloped. The contributions of this paper are a taxonomy of multidimensional hardware verification aspects, a state-of-the-art survey of related research works, and trends towards the multidimensional verification concept. The concept is motivated by an example covering the functional and power verification dimensions.

    Comment: 2018 IEEE Nordic Circuits and Systems Conference (NORCAS): NORCHIP and International Symposium of System-on-Chip (SoC)

    Understanding multidimensional verification: Where functional meets non-functional

    Advancements in electronic systems' design have a notable impact on design verification technologies. The recent paradigms of the Internet of Things (IoT) and Cyber-Physical Systems (CPS) assume devices immersed in physical environments, significantly constrained in resources, and expected to provide security, privacy, reliability, performance, and low-power features. In recent years, numerous extra-functional aspects of electronic systems have been brought to the front, implying verification of hardware design models in a multidimensional space along with the functional concerns of the target system. However, unlike in the software domain, such a holistic approach remains underdeveloped. The contributions of this paper are a taxonomy of multidimensional hardware verification aspects, a state-of-the-art survey of related research works, and trends enabling the multidimensional verification concept. Further, an initial approach to performing multidimensional verification based on machine learning techniques is evaluated. The importance and challenge of performing multidimensional verification are illustrated by an example case study.
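
    The abstract leaves the machine-learning formulation open. As a hedged illustration only, the sketch below pairs each functional test run with a learned power model and flags runs whose measured power strays from the prediction; the feature names, the linear model, and the tolerance are assumptions, not the paper's method.

    ```python
    # Hedged sketch: pairing the functional dimension of a test with a
    # learned power dimension. Feature names, the linear model, and the
    # tolerance are illustrative assumptions, not the paper's approach.
    import random
    import statistics

    def run_test(seed):
        """Stand-in for a simulator run: returns (toggle_rate, measured_power)."""
        rng = random.Random(seed)
        toggle = rng.uniform(0.1, 0.9)                   # switching activity
        power = 0.5 * toggle + 0.1 + rng.gauss(0, 0.01)  # synthetic watts
        return toggle, power

    # "Train": fit power = a*toggle + b on known-good regression runs.
    xs, ys = zip(*(run_test(s) for s in range(200)))
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx

    def power_dimension_ok(toggle, power, tol=0.05):
        """A functionally passing run still fails verification when its
        power strays too far from the model's prediction."""
        return abs(power - (a * toggle + b)) <= tol

    toggle, power = run_test(9999)
    print("power dimension ok:", power_dimension_ok(toggle, power))
    ```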

    Tools and Algorithms for SoC Communication Traces

    In this paper, we study seven well-known trace analysis techniques from both the hardware and software domains and discuss their performance on communication-centric system-on-chip (SoC) traces. SoC traces are usually huge in size and concurrent in nature; therefore, mining SoC traces poses additional challenges. We provide a hands-on discussion of the selected tools/algorithms in terms of the input, output, and analysis methods they employ. Hardware traces also vary in nature when observed at different levels, so this work can help developers and academics pick the right techniques for their work. We use a synthetic trace generator to assess the interestingness of the mined outcomes for each tool, and we work with a realistic gem5 setup to measure the performance of these tools on more realistic SoC traces. A comprehensive analysis of the tools' performance and a benchmark trace dataset are also presented.
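
    As a hedged sketch of the simplest kind of technique in this space, the snippet below counts frequent message bigrams in a flat communication trace; the message names are invented, and real SoC traces would carry source, destination, and timestamp fields.

    ```python
    # Hedged sketch: frequent-pattern counting over a flat, interleaved
    # SoC communication trace. Message names are invented; real traces
    # (e.g., from gem5) carry source/destination/timestamp fields.
    from collections import Counter
    from itertools import islice

    trace = ["cpu:rd_req", "mem:rd_resp", "cpu:wr_req", "mem:wr_ack",
             "cpu:rd_req", "mem:rd_resp", "dma:rd_req", "mem:rd_resp"]

    def ngrams(seq, n):
        """All contiguous n-grams of a sequence."""
        return zip(*(islice(seq, i, None) for i in range(n)))

    # Frequent bigrams suggest request/response pairs worth promoting
    # to candidate protocol patterns.
    for pair, freq in Counter(ngrams(trace, 2)).most_common(3):
        print(freq, "x:", " -> ".join(pair))
    ```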

    High-level synthesis of fine-grained weakly consistent C concurrency

    High-level synthesis (HLS) is the process of automatically compiling high-level programs into a netlist (a collection of gates). Given an input program, HLS tools exploit its inherent parallelism and pipelining opportunities to generate efficient customised hardware. C-based programs are the most popular input for HLS tools, but these tools historically only synthesise sequential C programs. As the appeal of software concurrency rises, HLS tools are beginning to synthesise concurrent C programs, such as C/C++ pthreads and OpenCL. Although supporting software concurrency leads to better hardware parallelism, shared memory synchronisation is typically serialised via locks to ensure correct memory behaviour. Locks are safety resources that ensure exclusive access to shared memory, eliminating data races and providing synchronisation guarantees for programmers.

    As an alternative to lock-based synchronisation, the C memory model also defines the possibility of lock-free synchronisation via fine-grained atomic operations (`atomics'). However, most HLS tools either do not support atomics at all or implement atomics using locks. Instead, we treat the synthesis of atomics as a scheduling problem. We show that we can augment the intra-thread memory constraints during memory scheduling of concurrent programs to support atomics. On average, hardware generated by our method is 7.5x faster than the state-of-the-art, for our set of experiments.

    Our method of synthesising atomics enables several unique possibilities. Chiefly, we are capable of supporting weakly consistent (`weak') atomics, which necessitate fewer ordering constraints compared to sequentially consistent (SC) atomics. However, implementing weak atomics is complex and error-prone, and hence we formally verify our methods via automated model checking to ensure our generated hardware is correct. Furthermore, since the C memory model defines memory behaviour globally, we can analyse the entire program globally to generate its memory constraints. Additionally, we can support loop pipelining by extending our methods to generate inter-iteration memory constraints. On average, weak atomics, global analysis and loop pipelining improve performance by 1.6x, 3.4x and 1.4x respectively, for our set of experiments.

    Finally, we present a case study of a real-world example via an HLS-based Google PageRank algorithm, whose performance improves by 4.4x via lock-free streaming and work-stealing.
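
    To make the scheduling view of atomics concrete, the hedged sketch below derives intra-thread ordering edges from C11-style memory orders, showing why weak atomics need fewer constraints than sequentially consistent ones. The rules paraphrase the general C11 orderings and are not the thesis's exact constraint generator.

    ```python
    # Hedged sketch: intra-thread ordering edges for memory scheduling,
    # treating atomics as scheduling constraints. The rules paraphrase
    # C11 memory orders; they are not the thesis's exact algorithm.
    ops = [  # (index, kind, location, memory_order), in program order
        (0, "store", "x",    "relaxed"),
        (1, "store", "flag", "release"),
        (2, "load",  "flag", "acquire"),
        (3, "load",  "x",    "relaxed"),
    ]

    def needs_edge(a, b):
        """Must op a stay before op b (a precedes b in program order)?"""
        _, kind_a, loc_a, mo_a = a
        _, kind_b, loc_b, mo_b = b
        if loc_a == loc_b:
            return True  # same-location program order is always kept
        if "seq_cst" in (mo_a, mo_b):
            return True  # SC atomics order against everything
        if kind_a == "load" and mo_a == "acquire":
            return True  # later ops may not hoist above an acquire load
        if kind_b == "store" and mo_b == "release":
            return True  # earlier ops may not sink below a release store
        return False     # relaxed accesses to different locations float

    edges = [(a[0], b[0]) for i, a in enumerate(ops)
             for b in ops[i + 1:] if needs_edge(a, b)]
    print(edges)  # 4 edges here; sequential consistency would need all 6
    ```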

    MINING SECURE BEHAVIOR OF HARDWARE DESIGNS

    Hardware presents an enticing target for attackers attempting to gain access to a secured computer system. Software-only exploits of hardware vulnerabilities may bypass software-level security features. Hardware must be made secure. However, to understand whether a hardware design is secure, security specifications must be generated to define security on that design. Micro-architectural design elements, undocumented or under-documented features, debug interfaces, and information-flow side channels all may introduce new vulnerabilities. The secure behavior of each must be specified in order to ensure the design meets its security requirements and contains no vulnerabilities. However, manual efforts can be overwhelmed by design complexity, and many hardware vulnerabilities, such as Memory Sinkhole, SYSRET privilege escalation, and most recently Spectre/Meltdown, persisted in product lines for decades despite extensive testing. An automated solution is needed to specify secure designs.

    Specification mining offers a solution by automating security specification for hardware. Specification miners use a form of machine learning to specify behaviors of a system by studying the system in execution. However, specification mining was first developed for use with software, and complex hardware designs offer unique challenges for this technique. Further, specification miners traditionally capture functional specifications without a notion of security, and may not use the specification logics necessary to describe some security requirements.

    This work demonstrates specification mining for hardware security. On CISC architectures such as x86, I demonstrate that a miner partitioning the design state space along control signals discovers a specification that includes manually defined properties and, if followed, would secure CPU designs against Memory Sinkhole and SYSRET privilege escalation. For temporal properties, I demonstrate that a miner using security-specific linear temporal logic (LTL) templates for specification detection may find properties that, if followed, would secure designs against historical documented security vulnerabilities and against potential future attacks targeting system initialization. For information-flow hyperproperties, I demonstrate that a miner may use Information Flow Tracking (IFT) to develop output properties containing designer-specified information-flow security properties as well as properties that demonstrate a design does not contain certain Common Weakness Enumerations (CWEs).
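
    As a hedged sketch of template-based mining in the spirit of this work, the snippet below instantiates the LTL template G(a -> X b) over all signal pairs and keeps the instances that every observed trace satisfies; the signal names and traces are invented, and a real miner would further filter spurious candidates.

    ```python
    # Hedged sketch: instantiate the LTL template G(a -> X b) over all
    # signal pairs and keep instances every trace satisfies. Signals and
    # traces are invented; real miners filter spurious instances later.
    from itertools import permutations

    # Each trace is a list of cycles; each cycle maps signal -> value.
    traces = [
        [{"req": 1, "gnt": 0, "busy": 0},
         {"req": 0, "gnt": 1, "busy": 0},
         {"req": 0, "gnt": 0, "busy": 1}],
        [{"req": 1, "gnt": 0, "busy": 1},
         {"req": 0, "gnt": 1, "busy": 1}],
    ]

    def holds_g_a_implies_next_b(trace, a, b):
        """Check G(a -> X b) on one finite trace (vacuous at the end)."""
        return all(not cyc[a] or trace[i + 1][b]
                   for i, cyc in enumerate(trace[:-1]))

    signals = ["req", "gnt", "busy"]
    mined = [(a, b) for a, b in permutations(signals, 2)
             if all(holds_g_a_implies_next_b(t, a, b) for t in traces)]
    print("candidate properties:", [f"G({a} -> X {b})" for a, b in mined])
    ```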

    A Reactive and Cycle-True IP Emulator for MPSoC Exploration

    The design of MultiProcessor Systems-on-Chip (MPSoC) emphasizes intellectual-property (IP)-based, communication-centric approaches. Therefore, for the optimization of the MPSoC interconnect, the designer must develop traffic models that realistically capture the application behavior as it executes on the IP core. In this paper, we introduce a Reactive IP Emulator (RIPE) that enables an effective emulation of the IP-core behavior in multiple environments, including bit- and cycle-true simulation. The RIPE is built as a multithreaded abstract instruction-set processor, and it can generate reactive traffic patterns. We compare the RIPE models with cycle-true functional simulation of complex application behavior (task synchronization, multitasking, and input/output operations). Our results demonstrate high accuracy and significant speedups. Furthermore, via a case study, we show the potential use of the RIPE in a design-space-exploration context.
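
    The defining trait of a reactive emulator is that it does not replay a fixed trace: the next request waits on the environment's response, so timing adapts to the interconnect. A minimal sketch of that behavior follows, with invented queue-based plumbing standing in for the simulated interconnect; it is an illustration of the concept, not the RIPE implementation.

    ```python
    # Hedged sketch of "reactive" traffic generation: unlike fixed trace
    # replay, each new request is issued only after the interconnect's
    # response arrives. All names and the queue plumbing are invented.
    import queue
    import threading

    req_q, resp_q = queue.Queue(), queue.Queue()

    def interconnect_model():
        """Stand-in for the simulated NoC/bus: answer each request."""
        for _ in range(3):
            addr = req_q.get()
            resp_q.put(("rdata", addr))

    def reactive_ip(trace):
        """Abstract instruction stream: READ blocks on the response."""
        for op, addr in trace:
            if op == "READ":
                req_q.put(addr)
                print("issued READ", hex(addr), "->", resp_q.get())

    t = threading.Thread(target=interconnect_model)
    t.start()
    reactive_ip([("READ", 0x1000), ("READ", 0x1004), ("READ", 0x1008)])
    t.join()
    ```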

    Reining in the Functional Verification of Complex Processor Designs with Automation, Prioritization, and Approximation

    Our quest for faster and more efficient computing devices has led us to processor designs of enormous complexity. As a result, functional verification, which is the process of ascertaining the correctness of a processor design, takes up a lion's share of the time and cost spent on making processors. Unfortunately, functional verification is only a best-effort process that cannot completely guarantee the correctness of a design, often resulting in defective products that may have devastating consequences.

    Functional verification, as practiced today, is unable to cope with the complexity of current and future processor designs. In this dissertation, we identify extensive automation as the essential step towards scalable functional verification of complex processor designs. Moreover, recognizing that a complete guarantee of design correctness is impossible, we argue for systematic prioritization and prudent approximation to realize fast and far-reaching functional verification solutions. We partition the functional verification effort into three major activities: planning and test generation, test execution and bug detection, and bug diagnosis. Employing a perspective we refer to as the automation, prioritization, and approximation (APA) approach, we develop solutions that tackle challenges across these three major activities.

    In pursuit of efficient planning and test generation for modern systems-on-chip, we develop an automated process for identifying high-priority design aspects for verification. In addition, we enable the creation of compact test programs, which, in our experiments, were up to 11 times smaller than what would otherwise be available at the beginning of the verification effort. To tackle challenges in test execution and bug detection, we develop a group of solutions that enable the deployment of automatic and robust mechanisms for catching design flaws during high-speed functional verification. By trading accuracy for speed, these solutions allow us to unleash functional verification platforms that are over three orders of magnitude faster than traditional platforms, unearthing design flaws that are otherwise impossible to reach. Finally, we address challenges in bug diagnosis through a solution that fully automates the process of pinpointing flawed design components after detecting an error. Our solution, which identifies flawed design units with over 70% accuracy, eliminates weeks of diagnosis effort for every detected error.

    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/137057/1/birukw_1.pd
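
    The abstract does not say how the compact test programs are derived. One standard illustration of suite compaction, sketched below purely as an assumption, is greedy set cover over coverage points; the test names and coverage points are invented.

    ```python
    # Hedged sketch: compacting a test suite by greedy set cover over
    # coverage points. This illustrates the general idea of compaction
    # only; it is not the dissertation's actual algorithm.
    tests = {                      # test name -> coverage points it hits
        "t0": {"decode", "alu", "lsu"},
        "t1": {"decode", "branch"},
        "t2": {"alu"},
        "t3": {"branch", "lsu", "csr"},
    }

    def compact(tests):
        uncovered = set().union(*tests.values())
        picked = []
        while uncovered:
            # Pick the test covering the most still-uncovered points.
            best = max(tests, key=lambda t: len(tests[t] & uncovered))
            picked.append(best)
            uncovered -= tests[best]
        return picked

    print("compacted suite:", compact(tests))  # -> ['t0', 't3']
    ```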