
    Doctor of Philosophy

    Aggressive random testing tools, or fuzzers, are impressively effective at finding bugs in compilers and programming language runtimes. For example, a single test-case generator has resulted in more than 460 bugs reported for a number of production-quality C compilers. However, fuzzers can be hard to use. The first problem is that failures triggered by random test cases can be difficult to debug because these tests are often large. To report a compiler bug, one must often construct a small test case that triggers the bug. The existing automated test-case reduction technique, delta debugging, is not sufficient to produce small, reportable test cases. A second problem is that fuzzers are indiscriminate: they repeatedly find bugs that may not be severe enough to fix right away. Third, fuzzers tend to generate a large number of test cases that only trigger a few bugs. Some bugs are triggered much more frequently than others, creating needle-in-the-haystack problems. Currently, users rule out undesirable test cases using ad hoc methods such as disallowing problematic features in tests and filtering test results. This dissertation investigates approaches to improving the utility of compiler fuzzers. Two components, an aggressive test-case reducer and a tamer, are added to the fuzzing workflow to make the fuzzer more user-friendly. We introduce C-Reduce, an aggressive test-case reducer for C/C++ programs, which exploits rich domain-specific knowledge to output test cases nearly as good as those produced by skilled humans. This reducer produces outputs that are, on average, more than 30 times smaller than those produced by the existing reducer most commonly used by compiler engineers. Second, this dissertation formulates and addresses the fuzzer taming problem: given a potentially large number of random test cases that trigger failures, order them such that diverse, interesting test cases are highly ranked. Bug triage can be effectively automated, relying on techniques from machine learning to suppress duplicate bug-triggering test cases and test cases triggering known bugs. An evaluation shows the ability of this tool to solve the fuzzer taming problem for 3,799 test cases triggering 46 bugs in a C compiler.
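
    The taming component can be illustrated compactly. Below is a minimal sketch of a "furthest point first" ordering that ranks failure-inducing test cases so that diverse ones appear early, assuming a simple token-level Jaccard distance between test cases; the distance functions and features used in the dissertation itself may differ.

    def jaccard_distance(a: set, b: set) -> float:
        """Distance between two token sets: 0.0 for identical sets."""
        union = a | b
        if not union:
            return 0.0
        return 1.0 - len(a & b) / len(union)

    def tame(test_cases: list[str]) -> list[str]:
        """Order failing test cases so that diverse ones are ranked first."""
        if not test_cases:
            return []
        tokens = [set(t.split()) for t in test_cases]
        remaining = list(range(len(test_cases)))
        order = [remaining.pop(0)]  # seed with an arbitrary test case
        # Distance from each remaining case to the already-chosen set.
        dist = [jaccard_distance(tokens[i], tokens[order[0]]) for i in remaining]
        while remaining:
            k = max(range(len(remaining)), key=lambda j: dist[j])
            chosen = remaining.pop(k)
            dist.pop(k)
            order.append(chosen)
            for j, i in enumerate(remaining):
                d = jaccard_distance(tokens[i], tokens[chosen])
                if d < dist[j]:
                    dist[j] = d
        return [test_cases[i] for i in order]

    Inspecting the ranked list from the top then tends to surface one test case per distinct bug before duplicates start to repeat.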

    Survey on Machine Learning Algorithms Enhancing the Functional Verification Process

    The continuing increase in the functional requirements of modern hardware designs means the traditional functional verification process has become inefficient at meeting time-to-market goals with a sufficient level of confidence in the design. Therefore, the need to enhance the process is evident. Machine learning (ML) models have proved valuable for automating major parts of the process that have typically occupied the bandwidth of engineers, diverting them from adding new coverage metrics to make designs more robust. Current research on deploying ML models shows promise in areas such as stimulus constraining, test generation, coverage collection, and bug detection and localization. One example of deploying an artificial neural network (ANN) in test generation shows a 24.5× speedup in functionally verifying a dual-core RISC processor specification. Another study demonstrates how k-means clustering can reduce the redundancy of the simulation trace dump of an AHB-to-Wishbone bridge by 21%, reducing debugging effort because unnecessary waveforms need not be inspected. The surveyed work provides a comprehensive overview of current ML models enhancing the functional verification process, from which insight into promising future research areas is inferred.
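
    To make the clustering idea concrete, here is a minimal sketch, assuming each simulation trace is encoded as a fixed-length numeric feature vector (the encoding and the choice of k are illustrative, not taken from the surveyed studies): cluster the vectors with k-means and keep only the trace nearest each centroid for inspection.

    import numpy as np
    from sklearn.cluster import KMeans

    def select_representative_traces(features: np.ndarray, k: int) -> list[int]:
        """Return the index of the trace closest to each cluster centroid."""
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
        reps = []
        for c in range(k):
            members = np.where(km.labels_ == c)[0]
            dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
            reps.append(int(members[np.argmin(dists)]))
        return reps

    Debugging effort then scales with the number of clusters rather than with the number of dumped waveforms.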

    Harnessing Simulation Acceleration to Solve the Digital Design Verification Challenge.

    Today, design verification is by far the most resource- and time-consuming activity of any new digital integrated circuit development. Within this area, the vast majority of the verification effort in industry relies on simulation platforms, which are implemented either in hardware or software. A "simulator" includes a model of each component of a design and has the capability of simulating its behavior under any input scenario provided by an engineer. Thus, simulators are deployed to evaluate the behavior of a design under as many input scenarios as possible and to identify and debug all incorrect functionality. Two features are critical for the validation effort to be effective: performance and checking/debugging capabilities. A wide range of simulator platforms is available today: at one end of the spectrum are software-based simulators, providing a very rich software infrastructure for checking and debugging the design's functionality, but executing at only 1-10 simulation cycles per second (while actual chips operate at GHz speeds). At the other end of the spectrum are hardware-based platforms, such as accelerators, emulators, and even prototype silicon chips, providing performance higher by 4 to 9 orders of magnitude, at the cost of very limited or non-existent checking/debugging capabilities. As a result, simulation-based validation is crippled today: one can have either satisfactory performance on hardware-accelerated platforms or critical checking/debugging infrastructure on software simulators, but not both. This dissertation brings together these two ends of the spectrum by presenting solutions that offer high-performance simulation with effective checking and debugging capabilities. Specifically, it addresses the performance challenge of software simulators by leveraging inexpensive off-the-shelf graphics processors as massively parallel execution substrates, and then exposing the parallelism inherent in the design model to that architecture. For hardware-based platforms, the dissertation provides solutions that offer enhanced checking and debugging capabilities by abstracting the relevant data to be logged during simulation so as to minimize the cost of collection, transfer, and processing. Altogether, the contributions of this dissertation have the potential to solve the challenge of digital design verification by enabling effective high-performance simulation-based validation.
    PhD, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/99781/1/dchatt_1.pd
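
    The parallelism being exploited can be sketched in miniature. The snippet below evaluates a levelized netlist over a large batch of stimulus vectors at once; the tiny hard-coded netlist is a hypothetical example, and NumPy array operations stand in for the massively parallel GPU kernels the dissertation actually targets.

    import numpy as np

    # Netlist in topological (levelized) order: (output, op, inputs).
    NETLIST = [
        ("n1", "AND", ("a", "b")),
        ("n2", "NOT", ("c",)),
        ("out", "OR", ("n1", "n2")),
    ]

    OPS = {
        "AND": lambda x, y: x & y,
        "OR": lambda x, y: x | y,
        "NOT": lambda x: ~x,
    }

    def simulate(inputs: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
        """Evaluate every gate over all stimulus vectors simultaneously."""
        nets = dict(inputs)  # net name -> bool array, one entry per scenario
        for out, op, ins in NETLIST:
            nets[out] = OPS[op](*(nets[i] for i in ins))
        return nets

    rng = np.random.default_rng(0)
    # One million random input scenarios, evaluated in a handful of array ops.
    stimulus = {name: rng.random(1_000_000) < 0.5 for name in ("a", "b", "c")}
    outputs = simulate(stimulus)

    The same structure maps naturally onto a GPU: each gate evaluation becomes a data-parallel operation across thousands of concurrent scenarios.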

    Development of an FPGA-based Data Reduction System for the Belle II DEPFET Pixel Detector

    The innermost two layers of the Belle II detector at the SuperKEKB collider in Tsukuba, Japan will be covered by highly granular DEPFET pixel sensors. The large number of pixels leads to a maximum data rate of 256 Gbps, which has to be reduced significantly by the data acquisition system. For data reduction, hit information from the silicon-strip vertex detector (SVD) surrounding the pixel detector is used to define so-called Regions of Interest (ROIs) in the pixel detector; only hit information from pixels located inside these ROIs is saved. The ROIs are computed by reconstructing track segments from the strip data and extrapolating them to the pixel detector. The goal is to achieve a reduction factor of up to 10 with this ROI selection. All the necessary processing stages (receiving, decoding, and multiplexing the SVD data arriving on 48 optical fibers, reconstructing the tracks, and defining the ROIs) are performed by the DATCON system, developed in the scope of this thesis. The planned hardware design is based on a distributed set of Advanced Mezzanine Cards (AMCs), each equipped with a Field Programmable Gate Array (FPGA) and four optical transceivers. An algorithm based on a Hough transformation, a pattern-recognition method commonly used in image processing, is developed to identify track segments in the strip detector and to calculate the track parameters. The performance of the developed algorithms is evaluated using simulations. For use in the DATCON system, the Hough track reconstruction is implemented on FPGAs. Several tests of the modules required to create the ROIs are performed in a simulation environment and on the AMC hardware. After a series of successful tests, the DATCON prototype was used in two test-beam campaigns to verify the concept and to exercise the integration with the other detector systems. The developed track reconstruction algorithm shows high reconstruction efficiency down to low track momenta. A higher data reduction than originally intended was achieved within the limits of the available processing time, and the FPGA track reconstruction is even three times faster than demanded by the trigger rate of the experiment. The concepts and algorithms developed are not specific to the Belle II vertex detector and can be used in other experiments: the approach was also successfully tested on the Belle II low-level trigger, using drift chamber information, and showed comparably good track reconstruction performance.
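
    The voting step of a Hough-based track finder is simple enough to sketch. Below, each two-dimensional hit votes for all (theta, rho) line parameters consistent with it, and peaks in the accumulator mark track-segment candidates; the binning and coordinate convention are illustrative assumptions, and the FPGA implementation described above realizes the same voting scheme in hardware.

    import numpy as np

    def hough_accumulator(hits: np.ndarray, n_theta: int = 180,
                          n_rho: int = 100, rho_max: float = 50.0) -> np.ndarray:
        """hits: (N, 2) array of (x, y) positions; returns the vote array."""
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        acc = np.zeros((n_theta, n_rho), dtype=np.int32)
        for x, y in hits:
            # Line family through this hit: rho = x*cos(theta) + y*sin(theta)
            rho = x * np.cos(thetas) + y * np.sin(thetas)
            bins = np.floor((rho + rho_max) / (2 * rho_max) * n_rho).astype(int)
            ok = (bins >= 0) & (bins < n_rho)
            acc[np.arange(n_theta)[ok], bins[ok]] += 1
        return acc

    Accumulator cells whose vote count exceeds a threshold yield (theta, rho) track-segment candidates, from which the track parameters are derived.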

    The 10th Jubilee Conference of PhD Students in Computer Science
