
    SmartTrack: Efficient Predictive Race Detection

    Widely used data race detectors, including the state-of-the-art FastTrack algorithm, incur performance costs that are acceptable for regular in-house testing, but miss races detectable from the analyzed execution. Predictive analyses detect more data races in an analyzed execution than FastTrack detects, but at significantly higher performance cost. This paper presents SmartTrack, an algorithm that optimizes predictive race detection analyses, including two analyses from prior work and a new analysis introduced in this paper. SmartTrack's algorithm incorporates two main optimizations: (1) epoch and ownership optimizations from prior work, applied to predictive analysis for the first time; and (2) novel conflicting critical section optimizations introduced by this paper. Our evaluation shows that SmartTrack achieves performance competitive with FastTrack, a qualitative improvement in the state of the art for data race detection. Comment: Extended arXiv version of the PLDI 2020 paper (adds Appendices A-E).
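
    The epoch optimization mentioned above can be sketched in a few lines. The sketch below is only an illustration of FastTrack-style epoch checking, not SmartTrack's actual code; the type names, the MAX_THREADS bound, and the write-only shadow state are assumptions made for brevity.

        #include <stdint.h>
        #include <stdio.h>

        #define MAX_THREADS 8          /* illustrative bound, not from the paper */

        typedef struct { uint32_t clock; uint8_t tid; } epoch_t;   /* "c@t" */
        typedef struct { uint32_t c[MAX_THREADS]; } vclock_t;      /* per-thread clock */

        /* Shadow state for one shared variable: the last write kept as a single
         * epoch instead of a full vector clock -- the core space/time saving.   */
        typedef struct { epoch_t last_write; } var_shadow_t;

        /* Has the recorded write epoch been ordered before the current thread? */
        static int ordered_before(const epoch_t *e, const vclock_t *vc)
        {
            return e->clock <= vc->c[e->tid];
        }

        /* Simplified FastTrack-style write check: O(1) fast path, race reported
         * when the previous write is concurrent with the current one.           */
        static void on_write(var_shadow_t *x, uint8_t tid, const vclock_t *vc)
        {
            epoch_t now = { .clock = vc->c[tid], .tid = tid };

            if (x->last_write.clock == now.clock && x->last_write.tid == tid)
                return;                          /* same epoch: nothing to do   */

            if (!ordered_before(&x->last_write, vc))
                printf("write-write race detected by thread %u\n", (unsigned)tid);

            x->last_write = now;                 /* record the new write epoch  */
        }

    Predictive analyses keep richer per-access metadata so they can also reason about event orders other than the observed one; the abstract's claim is that SmartTrack makes such checks nearly as cheap as the non-predictive epoch check above.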

    Understanding Concurrency Vulnerabilities in Linux Kernel

    While there is a large body of work on analyzing concurrency-related software bugs and developing techniques for detecting and patching them, little attention has been given to concurrency-related security vulnerabilities. The two are different in that not all bugs are vulnerabilities: for a bug to be exploitable, there needs to be a way for attackers to trigger its execution and cause damage, e.g., by revealing sensitive data or running malicious code. To fill the gap, we conduct the first empirical study of concurrency vulnerabilities reported in the Linux operating system in the past ten years. We focus on analyzing the confirmed vulnerabilities archived in the Common Vulnerabilities and Exposures (CVE) database, which are then categorized into different groups based on bug types, exploit patterns, and patch strategies adopted by developers. We use code snippets to illustrate individual vulnerability types and patch strategies. We also use statistics to illustrate the entire landscape, including the percentage of each vulnerability type. We hope to shed some light on the problem, e.g., that concurrency vulnerabilities continue to pose a serious threat to system security and that it is difficult even for kernel developers to analyze and patch them. Therefore, more efforts are needed to develop tools and techniques for analyzing and patching these vulnerabilities. Comment: It was finished in Oct 201
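
    To give a concrete flavor of the vulnerability classes such a study catalogs, the sketch below shows a check-then-use race that escalates into a use-after-free. It is a hypothetical user-space illustration written with pthreads, not code taken from the Linux kernel or from any CVE in the study; the session structure and function names are invented.

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        /* Shared session object whose buffer can be torn down concurrently. */
        struct session {
            char *buf;
            pthread_mutex_t lock;
        };

        static struct session s;

        /* Vulnerable: the NULL check and the use are not a single atomic step,
         * so a concurrent free() turns this race into a use-after-free that an
         * attacker who controls the timing may be able to exploit.            */
        static void *worker(void *arg)
        {
            (void)arg;
            if (s.buf != NULL)               /* check              */
                strcpy(s.buf, "payload");    /* use: may be freed  */
            return NULL;
        }

        static void *teardown(void *arg)
        {
            (void)arg;
            free(s.buf);                     /* races with worker() */
            s.buf = NULL;
            return NULL;
        }

        /* One common patch strategy is to widen the critical section so that
         * the check, the use, and the teardown are all serialized on s.lock. */
        int main(void)
        {
            pthread_t t1, t2;
            s.buf = malloc(64);
            pthread_mutex_init(&s.lock, NULL);
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, teardown, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }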

    Accurately Classifying Data Races with Portend

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Eliminating all data races from programs is impractical (e.g., system performance could suffer severely), yet spotting just the harmful ones is like finding a needle in a haystack: state-of-the-art data race detectors and classifiers suffer from high false positive rates of 37%–84%. We present Portend, a technique and system for automatically triaging suspect data races based on their potential consequences: Could they lead to crashes or hangs? Alter system state? Could their effects be externalized? Or are they harmless? Our proposed technique achieves very high accuracy by efficiently analyzing multiple paths and multiple thread schedules in combination, and by performing symbolic comparison between program states. We ran Portend on several dozen data races from real-world applications, and it correctly classified all of them, with no human effort. It also produced easy-to-understand evidence of the consequences of harmful races, thus proving their harmfulness and making debugging easier. We envision using Portend for testing and debugging, as well as for automatically triaging bug reports.
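
    To make consequence-based triage concrete, here is a minimal hedged example (hypothetical, not taken from the Portend work or its benchmarks) of two data races that a conventional detector reports identically but whose potential consequences differ sharply.

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct cfg { const char *path; };

        static long hits;            /* racy statistics counter            */
        static struct cfg *config;   /* racy, unsynchronized publication   */

        static void *reader(void *arg)
        {
            (void)arg;
            hits++;                                  /* lost update only skews a metric */
            if (config != NULL)
                printf("using %s\n", config->path);  /* may observe a half-initialized
                                                        object: potential crash         */
            return NULL;
        }

        static void *writer(void *arg)
        {
            (void)arg;
            hits++;
            struct cfg *c = malloc(sizeof *c);
            config = c;                              /* published before initialization */
            c->path = "/etc/app.conf";               /* the ordering that does damage   */
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, reader, NULL);
            pthread_create(&t2, NULL, writer, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }

    A consequence-based classifier in the spirit described above would replay such races under multiple schedules and compare the resulting states: the counter race at worst perturbs a statistic, while the pointer race can dereference an uninitialized field and crash the program.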

    Data Races vs. Data Race Bugs: Telling the Difference with Portend

    Even though most data races are harmless, the harmful ones are at the heart of some of the worst concurrency bugs. Alas, spotting just the harmful data races in programs is like finding a needle in a haystack: 76%-90% of the true data races reported by state-of-the-art race detectors turn out to be harmless [45]. We present Portend, a tool that not only detects races but also automatically classifies them based on their potential consequences: Could they lead to crashes or hangs? Could their effects be visible outside the program? Are they harmless? Our proposed technique achieves high accuracy by efficiently analyzing multiple paths and multiple thread schedules in combination, and by performing symbolic comparison between program outputs. We ran Portend on 7 real-world applications: it detected 93 true data races and correctly classified 92 of them, with no human effort. Six of them are harmful races. Portend's classification accuracy is up to 88% higher than that of existing tools, and it produces easy-to-understand evidence of the consequences of harmful races, thus both proving their harmfulness and making debugging easier. We envision Portend being used for testing and debugging, as well as for automatically triaging bug reports.

    Reimagining Criminal Prosecution: Toward a Color-Conscious Professional Ethic for Prosecutors

    Prosecutors, like most Americans, view the criminal-justice system as fundamentally race neutral. They are aware that blacks are stopped, searched, arrested, and locked up in numbers that are vastly out of proportion to their fraction of the overall population. Yet, they generally assume that this outcome is justified because it reflects the sad reality that blacks commit a disproportionate share of crime in America. They are unable to detect the ways in which their own discretionary choices, and those of other actors in the criminal-justice system such as legislators, police officers, and jurors, contribute to the staggering and disproportionate incarceration of black Americans. In this Article, I aim to undermine this color-blind assessment of criminal justice and explain why prosecutors should embrace a color-conscious vision of their professional duties. Color consciousness is complex and multi-dimensional. It involves understanding the ways in which America's long history of segregation generated the harsh socioeconomic conditions that lead so many young black males into a life of crime. It also demands awareness of the frequency of racial profiling and acknowledgment of the widely shared stereotypes that lead so many Americans to automatically perceive black men as potentially dangerous, violent and criminal. Finally, color consciousness recognizes the exclusion of blacks from political power and how this exclusion shapes the substantive content of the criminal law. Prosecutors should not only strive to acquire insight into how race operates in the criminal-justice system, but also to allow these insights to guide relevant aspects of their practice, including the ways in which they interact with police, charge crimes, negotiate plea agreements, and present their case to jurors. Taking these steps, particularly when they redound to the benefit of criminal suspects and defendants, would depart from the adversarial norm that largely defines the professional ethics of American lawyers. Normally, attorneys are expected to zealously represent the interests of their clients and to leave ultimate decisions about what is fair and true to the judge and jury. Prosecutors are different. They have a dual obligation to serve both as vigorous advocates within adversarial relationships and as officers of justice. Currently, no uniform guidelines exist as to the relative weight of the two components of prosecutors' dual roles, so they must make complex judgments about how to negotiate the intrinsic dissonance of their professional identity in a range of different situations. This Article advances a context-specific argument that prosecutors and the institutions that supervise them should be more concerned with pursuing justice than with being a vigorous adversary when dealing with the subtle racial dimensions of their work.

    Achieving the impossible

    Smallpox feature article

    Trustworthiness in Mobile Cyber Physical Systems

    Computing and communication capabilities are increasingly embedded in diverse objects and structures in the physical environment. They will link the ‘cyberworld’ of computing and communications with the physical world. These applications are called cyber physical systems (CPS). Obviously, the increased involvement of real-world entities leads to a greater demand for trustworthy systems. Hence, we use the term "system trustworthiness" here to mean the ability to guarantee continuous service in the presence of internal errors or external attacks. Mobile CPS (MCPS) is a prominent subcategory of CPS in which the physical component has no permanent location. Mobile Internet devices already provide ubiquitous platforms for building novel MCPS applications. The objective of this Special Issue is to contribute to research in modern and future trustworthy MCPS, including design, modeling, simulation, dependability, and so on. It is imperative to address the issues which are critical to their mobility, report significant advances in the underlying science, and discuss the challenges of development and implementation in various applications of MCPS.

    Fast and Precise On-The-Fly Data Race Detection

    While concurrent programming has been quickly gaining popularity, developing bug-free programs is still challenging. Although developers have a wide choice of race detection tools available, we have found that the majority of these techniques do not scale well and that developers are often forced to balance precision with speed. Additionally, various practical issues force even precise race detectors to produce spurious warnings, defeating their purpose and burdening their users. We design and implement a novel race detection technique that is both fast and precise, even in the face of missing program source information. Towards this goal, we have developed two separate tools, TREE and RDIT, that respectively improve performance and precision over existing techniques. TREE, implemented in the RoadRunner framework, acts as a filter and sends through only those events that might add value to race detection while eliminating those deemed redundant for this purpose. Removing these redundant events does not affect its race detection capability. We have evaluated TREE against a whole set of standard benchmarks, including two large real-world applications. We have found that a significant number of redundant events exists in all these applications and that, on average, TREE saves 15-25% of analysis time compared to state-of-the-art techniques. Meanwhile, our next tool, RDIT, is able to precisely detect races in programs with incomplete source information, generating no false positives. RDIT is also maximal in the sense that it detects a maximal set of true races from the observed incomplete trace. It is underpinned by a sound BarrierPair model that abstracts away the missing events by capturing the invocation data of their enclosing methods. By making the least conservative assumption that a missing method introduces synchronization only when its invocation data overlaps with other missing methods, and by formulating maximal thread causality as a set of logical constraints, RDIT guarantees to precisely detect races with maximal capability. We tested RDIT against seven real-world large concurrent systems and detected dozens of true races with zero false alarms. Comparatively, existing algorithms such as Happens-Before, Causal-Precede, and Maximal-Causality, which are all known to be precise, were observed to report hundreds of false alarms due to trace incompleteness.
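
    TREE itself is built on the Java-based RoadRunner framework; the C sketch below is only a language-agnostic illustration of the filtering idea, with an invented per-thread cache and a deliberately simplified redundancy criterion (drop a repeated access to the same variable when no synchronization event has occurred in between; real filters also distinguish reads from writes).

        #include <stdbool.h>
        #include <stdint.h>
        #include <string.h>

        #define FILTER_SLOTS 1024        /* illustrative cache size */

        /* Per-thread filter: variables already forwarded to the race detector
         * since this thread's last synchronization event. var_id is assumed
         * to be nonzero (e.g., a shadow-memory address).                      */
        struct event_filter {
            uint64_t seen_var[FILTER_SLOTS];
        };

        /* On every lock, unlock, fork, or join: new happens-before edges may
         * appear, so previously seen accesses are no longer redundant.        */
        static void filter_on_sync(struct event_filter *f)
        {
            memset(f->seen_var, 0, sizeof f->seen_var);
        }

        /* Returns true if the access should be forwarded to the detector,
         * false if it is redundant under the simplified criterion. Hash
         * collisions only cause extra forwarding, never a missed event.       */
        static bool filter_on_access(struct event_filter *f, uint64_t var_id)
        {
            size_t slot = (size_t)(var_id % FILTER_SLOTS);
            if (f->seen_var[slot] == var_id)
                return false;            /* same variable, no sync since: skip */
            f->seen_var[slot] = var_id;
            return true;                 /* first access in this sync interval */
        }

    RDIT takes the complementary route: rather than dropping events, it models the events it never saw, treating each missing method call as a BarrierPair that may introduce synchronization only where its invocation window overlaps another missing call, and then solving maximal-causality constraints so that only races guaranteed to be real are reported.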