
    Effective Detection of Sleep-in-Atomic-Context Bugs in the Linux Kernel

    Atomic context is an execution state of the Linux kernel, in which kernel code monopolizes a CPU core. In this state, the Linux kernel may only perform operations that cannot sleep, as otherwise a system hang or crash may occur. We refer to this kind of concurrency bug as a sleep-in-atomic-context (SAC) bug. In practice, SAC bugs are hard to find, as they do not cause problems in all executions. In this paper, we propose a practical static approach named DSAC to effectively detect SAC bugs in the Linux kernel. DSAC uses three key techniques: (1) a summary-based analysis to identify the code that may be executed in atomic context, (2) a connection-based alias analysis to identify the set of functions referenced by a function pointer, and (3) a path-check method to filter out repeated reports and false bugs. We evaluate DSAC on Linux 4.17 and find 1159 SAC bugs. We manually check all the bugs and find that 1068 are real. We have randomly selected 300 of the real bugs and sent them to kernel developers. 220 of these bugs have been confirmed, and 51 of our patches fixing 115 bugs have been applied.
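    For context, a minimal sketch of the bug pattern described above, with an invented lock and function name (not code from the paper): a kernel function that may sleep, such as msleep(), is called while a spinlock is held, i.e., in atomic context.

    #include <linux/spinlock.h>
    #include <linux/delay.h>

    static DEFINE_SPINLOCK(dev_lock);

    static void reset_device(void)
    {
            spin_lock(&dev_lock);   /* enters atomic context: sleeping is forbidden */
            msleep(10);             /* SAC bug: msleep() may sleep while the lock is held */
            spin_unlock(&dev_lock);
    }

    A non-sleeping wait such as mdelay(10) would avoid the bug here, at the cost of busy-waiting.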

    DSAC: Effective Static Analysis of Sleep-in-Atomic-Context Bugs in Kernel Modules

    In a modern OS, kernel modules often use spinlocks and interrupt handlers to monopolize a CPU core to execute concurrent code in atomic context. In this situation, if the kernel module performs an operation that can sleep at runtime, a system hang may occur. We refer to this kind of concurrency bug as a sleep-in-atomic-context (SAC) bug. In practice, SAC bugs have received insufficient attention and are hard to find, as they do not always cause problems in real executions. In this paper, we propose a practical static approach named DSAC to effectively detect SAC bugs and automatically recommend patches to help fix them. DSAC uses four key techniques: (1) a hybrid of flow-sensitive and flow-insensitive analysis to perform accurate and efficient code analysis; (2) a heuristics-based method to accurately extract kernel interfaces that can sleep at runtime; (3) a path-check method to effectively filter out repeated reports and false bugs; and (4) a pattern-based method to automatically generate recommended patches to help fix the bugs. We evaluate DSAC on kernel modules (drivers, file systems, and network modules) of the Linux kernel, and on the FreeBSD and NetBSD kernels, and in total find 401 new real bugs. 272 of these bugs have been confirmed by the relevant kernel maintainers, and 43 patches generated by DSAC have been applied by kernel maintainers.
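    To make the patch-recommendation idea concrete, here is a hedged sketch (invented identifiers; the actual patch patterns are those described in the paper) of a common fix for a sleeping allocation inside a spinlock-protected region: switch the allocation to a non-sleeping variant.

    #include <linux/spinlock.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    static DEFINE_SPINLOCK(buf_lock);
    static void *cached_buf;

    static int refill_buffer(size_t len)
    {
            spin_lock(&buf_lock);
            /* Before: kmalloc(len, GFP_KERNEL) may sleep in atomic context.
             * After:  GFP_ATOMIC makes the allocation non-sleeping
             *         (it may fail instead of blocking). */
            cached_buf = kmalloc(len, GFP_ATOMIC);
            spin_unlock(&buf_lock);

            return cached_buf ? 0 : -ENOMEM;
    }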

    DCNS: Automated Detection of Conservative Non-Sleep Defects in the Linux Kernel

    For waiting, the Linux kernel offers both sleepable and non-sleep operations. However, only non-sleep operations can be used in atomic context. Detecting the possibility of execution in atomic context requires a complete inter-procedural flow analysis, often involving function pointers. Developers may thus conservatively use non-sleep operations even outside of atomic context, which may damage system performance, as such operations unproductively monopolize the CPU. Until now, no systematic approach has been proposed to detect such conservative non-sleep (CNS) defects. In this paper, we propose a practical static approach named DCNS to automatically detect conservative non-sleep defects in the Linux kernel. DCNS uses a summary-based analysis to effectively identify the code in atomic context and a novel file-connection-based alias analysis to correctly identify the set of functions referenced by a function pointer. We evaluate DCNS on Linux 4.16, and in total find 1629 defects. We manually check 943 defects whose call paths are not so difficult to follow, and find that 890 are real. We have randomly selected 300 of the real defects and sent them to kernel developers, and 251 have been confirmed.
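    A hedged sketch of what such a conservative non-sleep defect looks like (invented function name, not taken from the paper): a busy-waiting delay is used in a context where sleeping would be allowed, so the CPU is monopolized unnecessarily.

    #include <linux/delay.h>

    /* Called from process context with no spinlock held and interrupts
     * enabled, so sleeping is allowed here. */
    static void wait_for_hw_ready(void)
    {
            /* Conservative non-sleep wait: mdelay(10) would busy-wait and
             * monopolize the CPU for the full 10 ms. */

            /* Preferred outside atomic context: msleep() lets the CPU
             * run other work while waiting. */
            msleep(10);
    }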

    Understanding Concurrency Vulnerabilities in Linux Kernel

    While there is a large body of work on analyzing concurrency-related software bugs and developing techniques for detecting and patching them, little attention has been given to concurrency-related security vulnerabilities. The two are different in that not all bugs are vulnerabilities: for a bug to be exploitable, there needs to be a way for attackers to trigger its execution and cause damage, e.g., by revealing sensitive data or running malicious code. To fill the gap, we conduct the first empirical study of concurrency vulnerabilities reported in the Linux operating system in the past ten years. We focus on analyzing the confirmed vulnerabilities archived in the Common Vulnerabilities and Exposures (CVE) database, which are then categorized into different groups based on bug types, exploit patterns, and patch strategies adopted by developers. We use code snippets to illustrate individual vulnerability types and patch strategies. We also use statistics to illustrate the entire landscape, including the percentage of each vulnerability type. We hope to shed some light on the problem: concurrency vulnerabilities continue to pose a serious threat to system security, and it is difficult even for kernel developers to analyze and patch them. Therefore, more efforts are needed to develop tools and techniques for analyzing and patching these vulnerabilities.
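    As a hedged illustration of one vulnerability class that such studies commonly cover (invented names; not a snippet from the paper): a race between a teardown path and a use path can become an exploitable use-after-free.

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct session {
            char *secret;
    };

    static struct session *shared;       /* shared without synchronization */

    static void *close_thread(void *arg)
    {
            free(shared->secret);         /* races with use_thread() */
            free(shared);
            shared = NULL;
            return NULL;
    }

    static void *use_thread(void *arg)
    {
            /* If close_thread() wins the race, this becomes a use-after-free,
             * which an attacker may turn into information disclosure or
             * memory corruption. */
            if (shared)
                    memset(shared->secret, 0, 16);
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;

            shared = malloc(sizeof(*shared));
            shared->secret = malloc(16);

            pthread_create(&t1, NULL, close_thread, NULL);
            pthread_create(&t2, NULL, use_thread, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
    }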

    Detecting Data Races Caused by Inconsistent Lock Protection in Device Drivers

    Data races are often hard to detect in device drivers, due to the non-determinism of concurrent execution. According to our study of Linux driver patches that fix data races, more than 38% of patches involve a pattern that we call inconsistent lock protection. Specifically, if a variable is accessed within two concurrently executed functions, the sets of locks held around each access are disjoint, at least one of the locksets is non-empty, and at least one of the involved accesses is a write, then a data race may occur. In this paper, we present a runtime analysis approach, named DILP, to detect data races caused by inconsistent lock protection in device drivers. By monitoring driver execution, DILP collects information about runtime variable accesses and executed functions. Then, after driver execution, DILP analyzes the collected information to detect and report data races caused by inconsistent lock protection. We evaluate DILP on 12 device drivers in Linux 4.16.9, and find 25 real data races.
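    A minimal sketch of the inconsistent-lock-protection pattern defined above (invented names, not code from the paper): the two functions below may run concurrently, their locksets for the shared variable are disjoint, one lockset is non-empty, and one access is a write.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long pkt_count;      /* shared between the two paths below */

    /* Path 1: write performed under stats_lock. */
    static void rx_handler(void)
    {
            spin_lock(&stats_lock);
            pkt_count++;
            spin_unlock(&stats_lock);
    }

    /* Path 2: write performed with an empty lockset, so the locksets are
     * disjoint and a data race on pkt_count is possible. */
    static void reset_stats(void)
    {
            pkt_count = 0;
    }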

    Automatic Detection, Validation and Repair of Race Conditions in Interrupt-Driven Embedded Software

    Interrupt-driven programs are widely deployed in safety-critical embedded systems to perform hardware- and resource-dependent data operation tasks. The frequent use of interrupts in these systems can cause race conditions to occur due to interactions between application tasks and interrupt handlers (or between two interrupt handlers). Numerous program analysis and testing techniques have been proposed to detect races in multithreaded programs. Little work, however, has addressed race condition problems related to hardware interrupts. In this paper, we present SDRacer, an automated framework that can detect, validate and repair race conditions in interrupt-driven embedded software. It uses a combination of static analysis and symbolic execution to generate input data for exercising the potential races. It then employs virtual platforms to dynamically validate these races by forcing the interrupts to occur at the potential racing points. Finally, it provides repair candidates to eliminate the detected races. We evaluate SDRacer on nine real-world embedded programs written in C. The results show that SDRacer can precisely detect and successfully fix race conditions.
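    A hedged sketch of the task/interrupt-handler race this abstract targets (read_adc_hw() and process() are hypothetical stubs added only for illustration): the handler can preempt the task between its check and its use of the shared data.

    #include <stdint.h>

    static uint16_t read_adc_hw(void) { return 42; }   /* stub hardware read */
    static void process(uint16_t v) { (void)v; }       /* stub consumer */

    static volatile uint8_t data_ready;  /* shared with the ISR */
    static volatile uint16_t sample;

    /* Interrupt handler: may preempt main_task() at any instruction. */
    void adc_isr(void)
    {
            sample = read_adc_hw();
            data_ready = 1;
    }

    void main_task(void)
    {
            if (data_ready) {
                    /* If adc_isr() fires between the check above and the use
                     * below, sample can change mid-processing: a race condition.
                     * A typical repair is to disable the interrupt around this
                     * read-then-clear sequence. */
                    process(sample);
                    data_ready = 0;
            }
    }

    int main(void)
    {
            adc_isr();      /* simulate one interrupt, then run the task once */
            main_task();
            return 0;
    }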

    Dynamic analysis for concurrent modern C/C++ applications

    Concurrent programs are executed by multiple threads that run simultaneously. While this allows programs to run more efficiently by utilising multiple processors, it brings with it numerous complications. For example, a program may behave unpredictably or erroneously when multiple threads modify the same memory location in an uncoordinated manner. Issues such as this are difficult to avoid, and when introduced, can break the program in unpredictable ways. Programmers will therefore often turn towards automated tools to aid in the detection of concurrency bugs. The work presented in this thesis aims to provide methods to aid in the creation of tools for finding and explaining concurrency bugs. In particular, the following studies have been conducted. Dynamic race detection for C/C++11: with the introduction of a weak memory model in C++, many tools that provide dynamic race detection have become outdated and are unable to adequately identify data races. This work updates an existing data race detection algorithm so that it can identify data races according to this new definition. A method for allowing programs to explore many of the weak behaviours that this new memory model permits is also provided. Record and replay: much work has gone into record and replay; however, most of this work is focused on whole-system replay, whereby a tool aims to record as much of the program execution as possible. Contrasting this, the work presented here aims to record as little as possible. This sparse approach has many interesting implications: some programs that were previously out of reach for record and replay become tractable, and vice versa. To support this, controlled scheduling is introduced that is capable of applying different scheduling strategies, which, combined with record and replay, is beneficial for helping to root out bugs. Tool support: both of the above techniques have been implemented in a tool, tsan11rec, which builds on the tsan dynamic race detection tool. A large experimental evaluation is presented, investigating the effectiveness of the enhanced data race detection algorithm when applied to the Firefox and Chromium web browsers, and of the novel approach to record and replay when applied to a diverse set of concurrent applications.
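    A minimal user-space sketch of what a data race under the C/C++11 memory model looks like, and how an atomic access removes it (illustrative only, not code from the thesis; tools in the ThreadSanitizer family flag the plain access):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static int plain_counter;            /* unsynchronized: racy */
    static atomic_int atomic_counter;    /* C11/C++11 atomic: race-free */

    static void *worker(void *arg)
    {
            for (int i = 0; i < 100000; i++) {
                    plain_counter++;     /* data race: undefined behaviour */
                    atomic_fetch_add_explicit(&atomic_counter, 1,
                                              memory_order_relaxed);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t t1, t2;

            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);

            printf("plain=%d atomic=%d\n", plain_counter,
                   atomic_load(&atomic_counter));
            return 0;
    }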

    Time Partitioning in Goblint: Extending region analysis with happens-before information

    The concurrent nature of device drivers makes them notoriously difficult to manually debug. Goblint, a static analysis framework, tries to automatically verify the absence of data races. The key challenge in doing that is the precision of the analysis. This paper proposes an enhancement to the region analysis of Goblint to incorporate domain-specific happens-before guarantees. The proposed addition is implemented and evaluated on the Goblint benchmark suite. We show that the given enhancement increases the precision of Goblint when analysing character drivers.
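    A hedged sketch of the kind of driver code where a domain-specific happens-before guarantee matters (invented names; registration of the file operations is omitted for brevity): a purely lock-based analysis may report a race between the unlocked initialization and the unlocked read, while the guarantee that module initialization completes before any file operation can run shows that the two accesses are never concurrent.

    #include <linux/fs.h>
    #include <linux/module.h>
    #include <linux/slab.h>
    #include <linux/errno.h>

    static char *msg_buf;                /* written only during init, read later */

    static ssize_t demo_read(struct file *f, char __user *ubuf,
                             size_t len, loff_t *off)
    {
            /* Unlocked read of msg_buf: safe, because demo_init() happens
             * before the device can be opened and read. */
            return simple_read_from_buffer(ubuf, len, off, msg_buf, 16);
    }

    static int __init demo_init(void)
    {
            msg_buf = kzalloc(16, GFP_KERNEL);   /* runs before any read() */
            return msg_buf ? 0 : -ENOMEM;
    }
    module_init(demo_init);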

    EVALUATION OF CLASSICAL INTER-PROCESS COMMUNICATION PROBLEMS IN PARALLEL PROGRAMMING LANGUAGES

    It has generally been believed for the past several years that parallel programming is the future of computing technology, due to its incredible speed and vastly superior performance compared to classic sequential programming. However, how sure are we that this is the case? Despite its aforesaid average superiority, parallel-program implementations usually run on single-processor machines, making the parallelism almost virtual. In this case, does parallel programming still remain superior? The purpose of this document is to research and analyze the performance, in both storage and speed, of three parallel-programming libraries: OpenMP, OpenMPI and PThreads, along with a few other hybrids obtained by combining two of these three libraries. These analyses are applied to three classical multi-process synchronization problems: Dining Philosophers, Producers-Consumers and Sleeping Barbers.
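    As a brief illustration of one of the benchmark problems (a minimal Pthreads sketch, not taken from the study): the producer-consumer problem is typically solved with a mutex and condition variables guarding a bounded buffer.

    #include <pthread.h>
    #include <stdio.h>

    #define BUF_SIZE 8

    static int buf[BUF_SIZE];
    static int count;                              /* items currently in the buffer */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg)
    {
            for (int i = 0; i < 100; i++) {
                    pthread_mutex_lock(&lock);
                    while (count == BUF_SIZE)      /* wait until there is room */
                            pthread_cond_wait(&not_full, &lock);
                    buf[count++] = i;
                    pthread_cond_signal(&not_empty);
                    pthread_mutex_unlock(&lock);
            }
            return NULL;
    }

    static void *consumer(void *arg)
    {
            for (int i = 0; i < 100; i++) {
                    pthread_mutex_lock(&lock);
                    while (count == 0)             /* wait until an item exists */
                            pthread_cond_wait(&not_empty, &lock);
                    int item = buf[--count];
                    pthread_cond_signal(&not_full);
                    pthread_mutex_unlock(&lock);
                    printf("consumed %d\n", item);
            }
            return NULL;
    }

    int main(void)
    {
            pthread_t p, c;

            pthread_create(&p, NULL, producer, NULL);
            pthread_create(&c, NULL, consumer, NULL);
            pthread_join(p, NULL);
            pthread_join(c, NULL);
            return 0;
    }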