
    Simulating operational memory models using off-the-shelf program analysis tools

    Memory models allow reasoning about the correctness of multithreaded programs. Constructing and using such models is facilitated by simulators that reveal which behaviours of a given program are allowed. While extensive work has been done on simulating axiomatic memory models, there has been less work on simulating operational models. Operational models are often considered more intuitive than axiomatic models, but are challenging to simulate due to the vast number of paths through the model’s transition system. Observing that a similar path-explosion problem is tackled by program analysis tools, we investigate the idea of reducing the decision problem of “whether a given memory model allows a given behaviour” to the decision problem of “whether a given C program is safe”, which can be handled by a variety of off-the-shelf tools. We report on our experience using multiple program analysis tools for C for this purpose—a model checker (CBMC), a symbolic execution tool (KLEE), and three coverage-guided fuzzers (libFuzzer, Centipede and AFL++)—presenting two case studies. First, we evaluate the performance and scalability of these tools in the context of the x86 memory model, showing that fuzzers offer performance competitive with that of RMEM, a state-of-the-art bespoke memory-model simulator. Second, we study a more complex, recently developed memory model for hybrid CPU/FPGA devices for which no bespoke simulator is available. We highlight how different encoding strategies can aid the various tools, and show how our approach allows us to simulate the CPU/FPGA model twice as deeply as in prior work, leading us to find and fix several infidelities in the model. We also experimented with three analysis tools that won the “falsification” category of the 2023 edition of the annual Competition on Software Verification (SV-COMP). We found that these tools do not scale to our use cases, motivating us to submit example C programs arising from our work for inclusion in the set of SV-COMP benchmarks, so that they can serve as challenge examples.
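
    The reduction described in this abstract can be illustrated with a small, hypothetical example (this is not the paper's actual encoding). The sketch below phrases the classic store-buffering (SB) litmus test under a toy x86-TSO-style operational model as a C program: thread steps and store-buffer flushes are chosen nondeterministically, and the question "does the model allow the outcome r0 == 0 and r1 == 0?" becomes "can the assertion fail?", which is exactly the kind of safety question CBMC, KLEE, or a fuzzer-driven harness can answer. The choice() stub stands in for nondeterminism: CBMC would use a nondeterministic value, KLEE a symbolic one, and a fuzzer would take the choices from its input bytes.

        /* Minimal illustrative sketch (not the paper's encoding): the SB litmus
         * test under a toy x86-TSO-style operational model.  Each thread has a
         * one-entry store buffer; thread steps and buffer flushes interleave
         * nondeterministically.  An assertion failure corresponds to the model
         * allowing the relaxed outcome r0 == 0 && r1 == 0. */
        #include <assert.h>
        #include <stdlib.h>

        static int x = 0, y = 0;                 /* shared memory            */
        static int bufx_full = 0, bufy_full = 0; /* one-slot store buffers   */
        static int r0 = -1, r1 = -1;             /* per-thread registers     */
        static int pc0 = 0, pc1 = 0;             /* thread program counters  */

        /* Nondeterminism stub: CBMC would use a nondeterministic int here,
         * KLEE a symbolic one, and a fuzzer would read choices from its input. */
        static int choice(void) { return rand() % 4; }

        static void flush0(void) { if (bufx_full) { x = 1; bufx_full = 0; } }
        static void flush1(void) { if (bufy_full) { y = 1; bufy_full = 0; } }

        static void step0(void) {                /* thread 0: x = 1; r0 = y  */
            if (pc0 == 0)      { bufx_full = 1; pc0 = 1; }  /* buffer the store */
            else if (pc0 == 1) { r0 = y; pc0 = 2; }         /* load y from memory */
        }
        static void step1(void) {                /* thread 1: y = 1; r1 = x  */
            if (pc1 == 0)      { bufy_full = 1; pc1 = 1; }
            else if (pc1 == 1) { r1 = x; pc1 = 2; }
        }

        int main(void) {
            for (int i = 0; i < 16; i++) {       /* bounded exploration of the
                                                    model's transition system */
                switch (choice()) {
                case 0: step0();  break;
                case 1: step1();  break;
                case 2: flush0(); break;
                case 3: flush1(); break;
                }
            }
            flush0(); flush1();                  /* drain both store buffers  */
            if (pc0 == 2 && pc1 == 2)            /* both threads finished     */
                assert(!(r0 == 0 && r1 == 0));   /* fails iff SB outcome reached */
            return 0;
        }

    Under this style of encoding, a counterexample trace reported by the checker doubles as a witness execution of the operational model.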

    Analytical Modeling of High Performance Reconfigurable Computers: Prediction and Analysis of System Performance.

    The use of a network of shared, heterogeneous workstations, each harboring a Reconfigurable Computing (RC) system, offers high-performance computing users an inexpensive platform for a wide range of computationally demanding problems. However, effectively using the full potential of these systems can be challenging without knowledge of the system's performance characteristics. While some performance models exist for shared, heterogeneous workstations, none thus far account for the addition of Reconfigurable Computing systems. This dissertation develops and validates an analytic performance modeling methodology for a class of fork-join algorithms executing on a High Performance Reconfigurable Computing (HPRC) platform. The model includes the effects of the reconfigurable device, application load imbalance, background user load, basic message-passing communication, and processor heterogeneity. Three fork-join applications (a Boolean Satisfiability solver, a Matrix-Vector Multiplication algorithm, and an Advanced Encryption Standard algorithm) are used to validate the model on homogeneous and simulated heterogeneous workstations. A synthetic load is used to validate the model under various loading conditions, including simulated heterogeneity in which background loading makes some workstations appear slower than others. The modeling methodology proves to be accurate in characterizing the effects of reconfigurable devices, application load imbalance, background user load, and heterogeneity for applications running on shared, homogeneous and heterogeneous HPRC resources. The model error was found to be less than five percent for application runtimes greater than thirty seconds and less than fifteen percent for shorter runtimes. The methodology thus enables us to characterize applications running on shared HPRC resources. Cost functions are used to impose system usage policies, and the results of the modeling methodology are used to find the optimal (or near-optimal) set of workstations for a given application. The usage policies investigated include determining the computational costs of the workstations and balancing the priority of the background user load against the parallel application. The applications studied fall within the Master-Worker paradigm and are well suited to a grid computing approach. A method for using NetSolve, a grid middleware, with the model and cost functions is introduced, whereby users can produce optimal workstation sets and schedules for Master-Worker applications running on shared HPRC resources.
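
    The dissertation's actual equations are not reproduced in this abstract, but the flavor of such a fork-join performance model can be sketched as follows. In this hypothetical example (all names and parameters are illustrative, not those of the dissertation), each worker's effective speed is degraded by its background load, the reconfigurable device contributes a fixed offload time, and the join completes when the slowest worker finishes.

        /* Minimal sketch of a fork-join runtime estimate in the spirit of the
         * modeling methodology described above (illustrative only; not the
         * dissertation's model).  Each worker i performs work[i] operations at
         * speed[i] ops/s, loses a fraction bg[i] of its cycles to background
         * users, and pays a fixed reconfigurable-device offload time.  The
         * fork-join phase ends when the slowest worker finishes. */
        #include <stdio.h>

        static double predict_runtime(int n,
                                      const double work[],   /* ops per worker        */
                                      const double speed[],  /* ops per second        */
                                      const double bg[],     /* background load, 0..1 */
                                      double fpga_time,      /* per-worker RC offload */
                                      double comm_time)      /* fork+join messaging   */
        {
            double slowest = 0.0;
            for (int i = 0; i < n; i++) {
                double effective = speed[i] * (1.0 - bg[i]);  /* usable speed   */
                double t = work[i] / effective + fpga_time;   /* worker runtime */
                if (t > slowest) slowest = t;
            }
            return comm_time + slowest;      /* join waits for the slowest worker */
        }

        int main(void) {
            double work[]  = { 1e9, 1e9, 2e9 };       /* imbalanced decomposition */
            double speed[] = { 5e8, 5e8, 5e8 };
            double bg[]    = { 0.0, 0.4, 0.1 };       /* one heavily loaded node  */
            printf("predicted runtime: %.2f s\n",
                   predict_runtime(3, work, speed, bg, 0.5, 0.2));
            return 0;
        }

    A cost function for selecting a workstation set could then, for example, rank candidate subsets by this predicted runtime weighted by a per-workstation usage charge.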

    Solving graph coloring and SAT problems using field programmable gate arrays.

    Chu-Keung Chung. Thesis (M.Phil.)--Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 88-92). Abstracts in English and Chinese.
    Contents: Abstract; Acknowledgments
    Chapter 1 Introduction: 1.1 Motivation and Aims; 1.2 Contributions; 1.3 Structure of the Thesis
    Chapter 2 Literature Review: 2.1 Introduction; 2.2 Complete Algorithms (2.2.1 Parallel Checking; 2.2.2 Mom's; 2.2.3 Davis-Putnam; 2.2.4 Nonchronological Backtracking; 2.2.5 Iterative Logic Array (ILA)); 2.3 Incomplete Algorithms (2.3.1 GENET; 2.3.2 GSAT); 2.4 Summary
    Chapter 3 Algorithms: 3.1 Introduction; 3.2 Tree Search Techniques (3.2.1 Depth First Search; 3.2.2 Forward Checking; 3.2.3 Davis-Putnam; 3.2.4 GRASP); 3.3 Incomplete Algorithms (3.3.1 GENET; 3.3.2 GSAT Algorithm); 3.4 Summary
    Chapter 4 Field Programmable Gate Arrays: 4.1 Introduction; 4.2 FPGA (4.2.1 Xilinx 4000 series FPGAs; 4.2.2 Bitstream); 4.3 Giga Operations Reconfigurable Computing Platform; 4.4 Annapolis Wildforce PCI board; 4.5 Summary
    Chapter 5 Implementation: 5.1 Parallel Graph Coloring Machine (5.1.1 System Architecture; 5.1.2 Evaluator; 5.1.3 Finite State Machine (FSM); 5.1.4 Memory; 5.1.5 Hardware Resources); 5.2 Serial Graph Coloring Machine (5.2.1 System Architecture; 5.2.2 Input Memory; 5.2.3 Solution Store; 5.2.4 Constraint Memory; 5.2.5 Evaluator; 5.2.6 Input Mapper; 5.2.7 Output Memory; 5.2.8 Backtrack Checker; 5.2.9 Word Generator; 5.2.10 State Machine; 5.2.11 Hardware Resources); 5.3 Serial Boolean Satisfiability Solver (5.3.1 System Architecture; 5.3.2 Solutions; 5.3.3 Solution Generator; 5.3.4 Evaluator; 5.3.5 AND/OR; 5.3.6 State Machine; 5.3.7 Hardware Resources); 5.4 GSAT Solver (5.4.1 System Architecture; 5.4.2 Variable Memory; 5.4.3 Flip-Bit Vector; 5.4.4 Clause Evaluator; 5.4.5 Adder; 5.4.6 Random Bit Generator; 5.4.7 Comparator; 5.4.8 Sum Register); 5.5 Summary
    Chapter 6 Results: 6.1 Introduction; 6.2 Parallel Graph Coloring Machine; 6.3 Serial Graph Coloring Machine; 6.4 Serial SAT Solver; 6.5 GSAT Solver; 6.6 Summary
    Chapter 7 Conclusion: 7.1 Future Work
    Appendix A Software Implementation of Graph Coloring in CHIP
    Appendix B Density Improvements Using Xilinx RAM
    Appendix C Bit stream Configuration
    Bibliography; Publications

    Study of Fine-Grained, Irregular Parallel Applications on a Many-Core Processor

    This dissertation demonstrates the possibility of obtaining strong speedups for a variety of parallel applications versus the best serial and parallel implementations on commodity platforms. These results were obtained using the PRAM-inspired Explicit Multi-Threading (XMT) many-core computing platform, which is designed to efficiently support execution of both serial and parallel code and switching between the two.
    Biconnectivity: For finding the biconnected components of a graph, we demonstrate speedups of 9x to 33x on XMT relative to the best serial algorithm, using a relatively modest silicon budget. Further evidence suggests that speedups of 21x to 48x are possible. For graph connectivity, we demonstrate that XMT outperforms two contemporary NVIDIA GPUs of similar or greater silicon area. Prior studies of parallel biconnectivity algorithms achieved at most a 4x speedup, and we could not find GPU biconnectivity code to compare against directly.
    Triconnectivity: We present a parallel solution to the problem of determining the triconnected components of an undirected graph. We obtain significant speedups on XMT over the only published optimal (linear-time) serial implementation of a triconnected-components algorithm running on a modern CPU. To our knowledge, no other parallel implementation of a triconnected-components algorithm has been published for any platform.
    Burrows-Wheeler compression: We present novel work-optimal parallel algorithms for Burrows-Wheeler compression and decompression of strings over a constant alphabet, together with their empirical evaluation. To validate these theoretical algorithms, we implement them on XMT and show speedups of up to 25x for compression and 13x for decompression versus bzip2, the de facto standard implementation of Burrows-Wheeler compression.
    Fast Fourier transform (FFT): Using FFT as an example, we examine the impact that the adoption of some enabling technologies, including silicon photonics, would have on the performance of a many-core architecture. The results show that a single-chip many-core processor could potentially outperform a large high-performance computing cluster.
    Boosted decision trees: This chapter focuses on the hybrid memory architecture of the XMT computer platform, a key part of which is a flexible all-to-all interconnection network that connects processors to shared memory modules. First, to understand some recent advances in GPU memory architecture and how they relate to this hybrid memory architecture, we use microbenchmarks, including list ranking. Then, we contrast the scalability of applications with that of routines: regardless of the scalability needs of a full application, some of its routines may involve smaller problem sizes and lower levels of parallelism, and may even be serial. To see how a hybrid memory architecture can benefit such applications, we simulate a computer with such an architecture and demonstrate the potential for a 3.3x speedup over NVIDIA's most powerful GPU to date for XGBoost, an implementation of boosted decision trees, a timely machine learning approach.
    Boolean satisfiability (SAT): SAT is an important, performance-hungry problem with applications in many domains. However, most work on parallelizing SAT solvers has focused on coarse-grained, mostly embarrassingly parallel approaches. Here, we study fine-grained parallelism that can speed up existing sequential SAT solvers, showing the potential for speedups of up to 382x across a variety of problem instances.
    We hope that these results will stimulate future research.
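
    As a generic illustration of what fine-grained parallelism inside a sequential SAT solver can look like (this is not the dissertation's XMT code, and the CNF layout and names below are hypothetical), clause evaluation for a candidate assignment exposes one small, irregular unit of work per clause, which maps naturally onto a parallel loop with a reduction; OpenMP is used here purely as a stand-in for XMT-style parallelism.

        /* Illustrative sketch of fine-grained parallelism in a SAT solver:
         * counting unsatisfied clauses for a candidate assignment, one small,
         * irregular unit of work per clause.  Literal v > 0 means "variable v
         * is true", v < 0 means "variable v is false".  Build with -fopenmp
         * (compiles as a serial program if the pragma is ignored). */
        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            int num_clauses;
            const int *lits;     /* all literals, clause after clause        */
            const int *start;    /* start[c] .. start[c+1]-1 indexes clause c */
        } Cnf;

        static int count_unsat(const Cnf *f, const unsigned char *assign /* 1 = true */)
        {
            int unsat = 0;
            #pragma omp parallel for reduction(+:unsat) schedule(dynamic, 64)
            for (int c = 0; c < f->num_clauses; c++) {
                int satisfied = 0;
                for (int i = f->start[c]; i < f->start[c + 1] && !satisfied; i++) {
                    int lit = f->lits[i];
                    int var = abs(lit);
                    satisfied = (lit > 0) ? assign[var] : !assign[var];
                }
                unsat += !satisfied;          /* reduction combines per-clause results */
            }
            return unsat;
        }

        int main(void) {
            /* (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3) */
            int lits[]  = { 1, 2,   -1, 3,   -2, -3 };
            int start[] = { 0, 2, 4, 6 };
            Cnf f = { 3, lits, start };
            unsigned char assign[] = { 0, 1, 0, 1 };   /* x1=1, x2=0, x3=1 */
            printf("unsatisfied clauses: %d\n", count_unsat(&f, assign));
            return 0;
        }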

    Proceedings of the 21st Conference on Formal Methods in Computer-Aided Design – FMCAD 2021

    The Conference on Formal Methods in Computer-Aided Design (FMCAD) is an annual conference on the theory and applications of formal methods in hardware and system verification. FMCAD provides a leading forum for researchers in academia and industry to present and discuss groundbreaking methods, technologies, theoretical results, and tools for reasoning formally about computing systems. FMCAD covers formal aspects of computer-aided system design, including verification, specification, synthesis, and testing.