
    Multicore Acceleration for Priority Based Schedulers for Concurrency Bug Detection

    Testing multithreaded programs is difficult because threads can interleave nondeterministically. Untested interleavings can cause failures, but testing all interleavings is infeasible. Many interleaving exploration strategies for bug detection have been proposed, but their relative effectiveness and performance remain unclear, as they often lack publicly available implementations and have not been evaluated on common benchmarks. We describe NeedlePoint, an open-source framework that allows selection and comparison of a wide range of interleaving exploration policies for bug detection proposed by prior work. Our experience with NeedlePoint indicates that priority-based probabilistic concurrency testing (the PCT algorithm) finds bugs quickly, but it runs only one thread at a time, which destroys parallelism by serializing executions. To address this problem we propose a parallel version of the PCT algorithm (PPCT). We show that the new algorithm outperforms the original by a factor of 5x when testing parallel programs on an eight-core machine. We formally prove that parallel PCT provides the same probabilistic coverage guarantees as PCT. Moreover, PPCT is the first algorithm that runs multiple threads while providing coverage guarantees.
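    The priority-based scheduling that PCT performs can be illustrated with a toy sketch (this is not the NeedlePoint implementation; thread counts, step counts, and seeding are hypothetical): assign distinct random priorities to threads, insert d-1 random priority-change points, and always run the highest-priority runnable thread.

    ```python
    import random

    def pct_schedule(num_threads, steps_per_thread, depth, seed=0):
        """Toy model of the PCT scheduler: distinct random initial
        priorities plus depth-1 random priority-change points; at each
        step the runnable thread with the highest priority executes
        (execution is fully serialized, one thread at a time)."""
        rng = random.Random(seed)
        # Distinct initial priorities >= depth, so change points
        # (which assign values depth-1 .. 1) always lower a priority.
        priorities = dict(zip(range(num_threads),
                              rng.sample(range(depth, depth + num_threads),
                                         num_threads)))
        total_steps = num_threads * steps_per_thread
        # depth-1 change points at random steps, with values depth-1 .. 1.
        change_points = dict(zip(rng.sample(range(total_steps), depth - 1),
                                 range(depth - 1, 0, -1)))
        remaining = {t: steps_per_thread for t in range(num_threads)}
        schedule = []
        for step in range(total_steps):
            runnable = [t for t in remaining if remaining[t] > 0]
            t = max(runnable, key=lambda x: priorities[x])  # highest priority runs
            schedule.append(t)
            remaining[t] -= 1
            if step in change_points:
                # The running thread's priority drops at a change point.
                priorities[t] = change_points[step]
        return schedule
    ```

    PPCT's contribution, per the abstract, is to run multiple threads concurrently while preserving the same probabilistic guarantees; this sketch shows only the serialized baseline.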

    Towards Generating Functionally Correct Code Edits from Natural Language Issue Descriptions

    Large language models (LLMs), such as OpenAI's Codex, have demonstrated their potential to generate code from natural language descriptions across a wide range of programming tasks. Several benchmarks have recently emerged to evaluate the ability of LLMs to generate functionally correct code from natural language intent with respect to a set of hidden test cases. This has enabled the research community to identify significant and reproducible advancements in LLM capabilities. However, there is currently a lack of benchmark datasets for assessing the ability of LLMs to generate functionally correct code edits based on natural language descriptions of intended changes. This paper aims to address this gap by motivating the problem, NL2Fix, of translating natural language descriptions of code changes (namely, bug fixes described in issue reports in repositories) into correct code fixes. To this end, we introduce Defects4J-NL2Fix, a dataset of 283 Java programs from the popular Defects4J dataset augmented with high-level descriptions of bug fixes, and empirically evaluate the performance of several state-of-the-art LLMs for this task. Results show that these LLMs together are capable of generating plausible fixes for 64.6% of the bugs, and the best LLM-based technique can achieve up to 21.20% top-1 and 35.68% top-5 accuracy on this benchmark.
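    The top-k accuracy metric used to report these numbers can be sketched as follows (hypothetical bug IDs and pass/fail data; this is not the Defects4J-NL2Fix evaluation harness):

    ```python
    def top_k_accuracy(results, k):
        """results maps bug_id -> list of booleans, one per ranked
        candidate fix, recording whether that candidate passes the
        hidden tests. A bug counts as fixed at k if any of its first
        k candidates passes."""
        fixed = sum(1 for candidates in results.values() if any(candidates[:k]))
        return fixed / len(results)
    ```

    For example, with hypothetical results `{"Lang-1": [False, True], "Math-2": [True], "Time-3": [False, False]}`, top-1 accuracy is 1/3 (only Math-2's first candidate passes) while top-5 accuracy is 2/3.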

    A Framework for Fine-Grained Synchronization of Dependent GPU Kernels

    Machine Learning (ML) models contain highly parallel computations, such as matrix multiplication, convolution, and dropout. These computations are commonly executed on Graphics Processing Units (GPUs) by dividing the computation into independent processing blocks, known as tiles. Since the number of tiles is usually larger than the number of execution units of a GPU, tiles are executed on all execution units in waves. However, the tiles executed in the last wave can under-utilize the execution units because the number of tiles is not always a multiple of the number of execution units. This under-utilization can be reduced by executing multiple independent kernels concurrently on a GPU, but this is not currently possible for dependent kernels. In this paper, we present cuSync, a framework to write custom fine-grained synchronization policies for dependent kernels to improve GPU utilization. cuSync synchronizes tiles instead of kernels, which allows tiles of multiple dependent kernels to execute concurrently. Using cuSync we expressed several synchronization policies in a few lines of code and reduced the inference times of GPT-3 and ResNet-38 by up to 1.19x and 1.16x respectively.
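    The core idea of synchronizing at tile rather than kernel granularity can be sketched in a few lines. This is a toy CPU-thread model, not cuSync's CUDA implementation: each "consumer tile" waits on a per-tile flag set by the one "producer tile" it depends on, instead of waiting for the whole producer kernel to finish.

    ```python
    import threading

    def run_dependent_kernels(num_tiles, log):
        """Toy model of tile-granularity synchronization between a
        producer kernel and a dependent consumer kernel: one flag per
        tile, so a consumer tile may start as soon as its producer
        tile has finished, overlapping the two kernels."""
        done = [threading.Event() for _ in range(num_tiles)]

        def producer(i):
            log.append(("P", i))   # compute producer tile i
            done[i].set()          # publish: tile i's output is ready

        def consumer(i):
            done[i].wait()         # wait only for the tile we depend on
            log.append(("C", i))   # compute consumer tile i

        threads = [threading.Thread(target=f, args=(i,))
                   for i in range(num_tiles) for f in (producer, consumer)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    ```

    With kernel-granularity synchronization, every consumer tile would have to wait for all producer tiles; here each consumer tile is ordered only after its own producer tile.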

    Jumping the ORDER BY Barrier in Large-Scale Pattern Matching

    Event-series pattern matching is a major component of large-scale data analytics pipelines, enabling a wide range of system diagnostics tasks. A precursor to pattern matching is an expensive ``shuffle the world'' stage wherein data are ordered by time and shuffled across the network. Because many existing systems treat the pattern matching engine as a black box, they are unable to optimize the entire data analytics pipeline, and in particular, this costly shuffle. This paper demonstrates how to optimize such queries. We first translate an expressive class of regular-expression-like patterns to relational queries such that they can benefit from decades of progress in relational optimizers, and then we introduce the technique of abstract pattern matching, a linear-time preprocessing step which, adapting ideas from symbolic execution and abstract interpretation, discards events from the input guaranteed not to appear in successful matches. Abstract pattern matching first computes a conservative representation of the output-relevant domain of every transition in a pattern based on the (unary) predicates of that transition. It then further refines these domains based on the structure of the pattern (i.e., paths through the pattern) as well as any of the pattern's join predicates across transitions. The outcome is an abstract filter that, when applied to the original stream, excludes events that are guaranteed not to participate in a match. We implemented and applied abstract pattern matching in COSMOS/Scope to an industrial benchmark, where we obtained up to 3 orders of magnitude reduction in shuffled data and 1.23x average speedup in total processing time.
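    The first step of such an abstract filter, the unary-predicate check over transitions, can be sketched as follows (hypothetical event shapes and predicates; the real system further refines these domains along pattern paths and join predicates before filtering):

    ```python
    def abstract_filter(events, transition_predicates):
        """Conservative prefilter: keep only events that satisfy at
        least one transition's unary predicate. Any event that matches
        no transition provably cannot appear in a successful match,
        so it can be discarded before the expensive shuffle."""
        return [e for e in events
                if any(pred(e) for pred in transition_predicates)]
    ```

    Because the filter is conservative, it may retain events that ultimately do not participate in a match, but it never drops one that could.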

    TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches

    Machine learning models are increasingly being trained across multiple GPUs and multiple machines. In this setting, data is transferred between GPUs using communication collectives such as AlltoAll and AllReduce, which can become a significant bottleneck in large models. It is important to use efficient algorithms for collective communication. We introduce TACCL, a tool that allows algorithm designers to guide a synthesizer into automatically generating algorithms for a given hardware configuration and communication collective. TACCL uses the novel communication sketch abstraction to obtain crucial information from the designer that is used to significantly reduce the state space and guide the synthesizer towards better algorithms. TACCL also uses a novel encoding of the problem that allows it to scale beyond single-node topologies. We use TACCL to synthesize algorithms for three collectives and two hardware topologies: DGX-2 and NDv2. We demonstrate that the algorithms synthesized by TACCL outperform the NVIDIA Collective Communication Library (NCCL) by up to 6.7×. We also show that TACCL can speed up end-to-end training of Transformer-XL and BERT models by 11% to 2.3× for different batch sizes.
    Comment: Accepted at NSDI'23.
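    For context on what a collective communication algorithm looks like, a classic hand-designed baseline of the kind TACCL synthesizes alternatives to is the ring AllReduce. The reference model below (not TACCL output) simulates it on plain Python lists: n-1 reduce-scatter steps followed by n-1 all-gather steps, each step sending one chunk to the right neighbor.

    ```python
    def ring_allreduce(data):
        """data[r] is rank r's list of n chunks (one chunk per rank).
        Simulates ring AllReduce: n-1 reduce-scatter steps, then n-1
        all-gather steps; every step is one neighbor-to-neighbor send."""
        n = len(data)
        # Reduce-scatter: at step s, rank r sends chunk (r - s) % n to
        # rank (r + 1) % n, which adds it into its own copy.
        for s in range(n - 1):
            sent = [(r, (r - s) % n, data[r][(r - s) % n]) for r in range(n)]
            for r, c, v in sent:
                data[(r + 1) % n][c] += v
        # Now rank r holds the fully reduced chunk (r + 1) % n.
        # All-gather: pass each finished chunk around the ring.
        for s in range(n - 1):
            sent = [(r, (r + 1 - s) % n, data[r][(r + 1 - s) % n])
                    for r in range(n)]
            for r, c, v in sent:
                data[(r + 1) % n][c] = v
        return data
    ```

    After the 2*(n-1) steps, every rank holds the elementwise sum of all ranks' chunks; TACCL's synthesized algorithms target the same semantics but pick communication schedules tailored to the hardware topology.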