    TANDEM: A Taxonomy and a Dataset of Real-World Performance Bugs

    The detection of performance bugs, such as those causing unexpected execution times, has gained much attention in recent years due to their potential impact on safety-critical and resource-constrained applications. Much effort has been put into understanding the nature of performance bugs in different domains as a starting point for the development of effective testing techniques. However, the lack of a widely accepted classification scheme for performance faults and, more importantly, the lack of well-documented and understandable datasets make it difficult to draw rigorous and verifiable conclusions that are widely accepted by the community. In this paper, we present TANDEM, a dual contribution related to real-world performance bugs. Firstly, we propose a taxonomy of performance bugs, based on a thorough systematic review of the related literature, divided into three main categories: effects, causes, and contexts of bugs. Secondly, we provide a complete collection of fully documented real-world performance bugs. Together, these contributions pave the way for stronger and more reproducible research results on performance testing.
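
    As a rough sketch of how a dataset entry might encode the taxonomy's three top-level categories (the category names follow the abstract; the enum values and field names below are hypothetical, not TANDEM's actual schema):

        #include <string>

        // Hypothetical record for one documented performance bug, organized
        // along the three top-level categories named in the abstract.
        enum class Effect  { UnexpectedExecutionTime, ExcessiveMemoryUse };
        enum class Cause   { InefficientAlgorithm, RedundantComputation };
        enum class Context { Concurrency, DatabaseAccess };

        struct PerformanceBug {
            std::string project;     // project where the bug was reported
            std::string report_url;  // link to the original bug report
            Effect  effect;          // observable symptom
            Cause   cause;           // root cause
            Context context;         // situation in which the bug manifests
        };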

    Memory error prevention through static analysis and type systems

    Modern software is everywhere and much depends on it, so it is important that it be secure and reliable. Being human, software developers make mistakes that can be very difficult to detect. In memory-unsafe languages, a large share of these mistakes relate to memory usage and management. To reduce the number of bugs in software, this thesis examines, through a literature review, the use of static analysis tools and other methods to automatically locate these mistakes or to prevent them altogether. Unfortunately, static analysis produces many false positives that can take developers a long time to sift through. For this reason, many static analysis tools augment their usefulness by inserting dynamic, runtime checks in places where they are uncertain whether there is an error. One final approach discussed in this thesis for securing memory usage is to employ type systems or memory-safe languages, such as Java, designed so that the programmer cannot access raw memory and make mistakes related to it. The large number of checks that such languages must always perform results in reduced performance. As such, all of these approaches have benefits and limitations regarding their use. The major finding was that much research has been done on static analysis tools and that these tools have detected real problems. Unfortunately, many of the developed tools are not available, and those that are have not been updated in a long time or require complicated setup, reducing their usefulness.
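
    As a concrete illustration of the kinds of mistakes these approaches target, here is a minimal C++ sketch containing two classic memory errors (generic textbook examples, not taken from the thesis):

        #include <cstring>

        int main() {
            char buf[8];
            const char* input = "longer than eight bytes";
            std::strcpy(buf, input);  // buffer overflow: writes past buf[7]

            int* p = new int(42);
            delete p;
            int v = *p;               // use-after-free: p is dangling here
            return v;
        }

    A static analyzer tries to flag both marked lines at compile time; a memory-safe language such as Java rules them out by construction, at the cost of runtime checks.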

    Towards Automated Performance Bug Identification in Python

    Context: Software performance is a critical non-functional requirement in many fields, such as mission-critical applications, financial systems, and real-time systems. In this work we focused on early detection of performance bugs; our software under study was a real-time system used in the advertisement/marketing domain. Goal: Find a simple, easy-to-implement solution for predicting performance bugs. Method: We built several models using four machine learning methods commonly used for defect prediction: C4.5 Decision Trees, Naïve Bayes, Bayesian Networks, and Logistic Regression. Results: Our empirical results show that a C4.5 model, using lines of code changed and a file's age and size as explanatory variables, can be used to predict performance bugs (recall = 0.73, accuracy = 0.85, precision = 0.96). We show that reducing the number of changes delivered in a commit can decrease the chance of performance bug injection. Conclusions: We believe that our approach can help practitioners eliminate performance bugs early in the development cycle. Our results are also of interest to theoreticians, establishing a link between functional bugs and (non-functional) performance bugs, and showing explicitly that attributes used to predict functional bugs can also predict performance bugs.
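
    As an illustration of the model's shape, the following hand-rolled C++ sketch mimics the kind of rule a C4.5 tree could learn from the three explanatory variables named above; the thresholds are invented for illustration and are not the paper's learned model:

        struct CommitFile {
            int lines_changed;  // lines of code changed in the commit
            int age_days;       // age of the file
            int size_loc;       // size of the file in lines of code
        };

        // Invented decision rule illustrating a learned tree's structure.
        bool predict_performance_bug(const CommitFile& f) {
            if (f.lines_changed > 200) return true;  // large changes are risky
            if (f.size_loc > 1000 && f.age_days < 30)
                return true;                         // big, recently created files
            return false;
        }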

    Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness

    This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
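
    A schematic of the notice-assess-guide cycle, as one might sketch it in C++ (the interface names are illustrative, not taken from the authors' implemented systems):

        #include <functional>

        struct Agent {
            std::function<void()> act;               // normal decision-making
            std::function<bool()> anomaly_detected;  // notice: monitor expectations
            std::function<int()>  assess;            // assess: reason about the anomaly
            std::function<void(int)> repair;         // guide: alter decision-making

            // The metacognitive loop wraps ordinary action with self-monitoring.
            void run() {
                while (true) {
                    act();
                    if (anomaly_detected())  // something is amiss
                        repair(assess());    // diagnose, then guide a solution in
                }
            }
        };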

    Automated Fixing of Programs with Contracts

    This paper describes AutoFix, an automatic debugging technique that can fix faults in general-purpose software. To provide high-quality fix suggestions and to enable automation of the whole debugging process, AutoFix relies on the presence of simple specification elements in the form of contracts (such as pre- and postconditions). Using contracts enhances the precision of dynamic analysis techniques for fault detection and localization, and for validating fixes. The only user input required by the AutoFix supporting tool is a faulty program annotated with contracts; the tool produces a collection of validated fixes for the fault, ranked according to an estimate of their suitability. In an extensive experimental evaluation, we applied AutoFix to over 200 faults in four code bases of different maturity and quality (of implementation and of contracts). AutoFix successfully fixed 42% of the faults, producing, in the majority of cases, corrections comparable in quality to those competent programmers would write; the computational resources used were modest, with an average time per fix below 20 minutes on commodity hardware. These figures compare favorably with the state of the art in automated program fixing and demonstrate that the AutoFix approach is applicable to reducing the debugging burden in real-world scenarios.
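
    AutoFix itself targets Eiffel contracts; as a rough analogue, the following C++ sketch expresses a routine's pre- and postconditions with assertions. A contract violation at runtime gives the dynamic analysis an oracle for detecting and localizing the fault, and for checking that a candidate fix no longer violates the contract:

        #include <cassert>

        // Hypothetical routine with assertion-based contracts.
        int withdraw(int balance, int amount) {
            assert(amount > 0 && amount <= balance);  // precondition
            int result = balance - amount;
            assert(result >= 0 && result < balance);  // postcondition
            return result;
        }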

    Survey of Protections from Buffer-Overflow Attacks

    Buffer-overflow attacks began two decades ago and persist today. Over that time, researchers have proposed many solutions that aim either to prevent buffer-overflow attacks or to protect against them. As defenses improved, attacks adapted and became more sophisticated. Given the maturity of the field, and the fact that some solutions now exist that can prevent most buffer-overflow attacks, we believe it is time to survey these schemes and examine their critical issues. As part of this survey, we have grouped the approaches into three broad categories to provide a basis for understanding buffer-overflow protection schemes.
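
    The root pattern behind most of these attacks, and the simplest protection against it, can be sketched in a few lines of C++ (a generic textbook example, not one of the survey's schemes):

        #include <cstring>

        void vulnerable(const char* input) {
            char buf[16];
            std::strcpy(buf, input);  // unbounded copy: attacker-controlled
        }                             // input can overflow buf onto the stack

        void bounded(const char* input) {
            char buf[16];
            std::strncpy(buf, input, sizeof(buf) - 1);  // copy is length-limited
            buf[sizeof(buf) - 1] = '\0';                // ensure termination
        }

    Compiler- and OS-level defenses covered by such surveys (stack canaries, non-executable stacks, address-space layout randomization) protect even code written in the vulnerable style.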

    Cheetah: Detecting False Sharing Efficiently and Effectively

    False sharing is a notorious performance problem that may occur in multithreaded programs running on now-ubiquitous multicore hardware. It can degrade performance by up to an order of magnitude, significantly hurting scalability. Identifying false sharing in complex programs is challenging: existing tools either incur significant performance overhead or do not provide adequate information to guide code optimization. To address these problems, we developed Cheetah, a profiler that detects false sharing both efficiently and effectively. Cheetah leverages the lightweight hardware performance monitoring units (PMUs) available in most modern CPU architectures to sample memory accesses. Based on the latency information collected by the PMUs, Cheetah introduces the first approach to quantify the optimization potential of a false sharing instance without requiring an actual fix. Cheetah precisely reports false sharing and provides insightful optimization guidance for programmers, while adding less than 7% runtime overhead on average. Cheetah is ready for real deployment.
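
    A minimal C++ reproduction of false sharing, and the padding fix that a profiler like Cheetah would guide a programmer toward (the 64-byte cache-line size is a typical value, and the example is generic rather than taken from the paper):

        #include <thread>

        struct Shared {               // a and b likely share one cache line,
            long a = 0;               // so writes from different cores
            long b = 0;               // ping-pong the line between caches
        };

        struct Padded {               // one cache line per counter
            alignas(64) long a = 0;
            alignas(64) long b = 0;
        };

        int main() {
            Padded c;                 // swap in Shared to observe the slowdown
            std::thread t1([&c] { for (int i = 0; i < 100'000'000; ++i) ++c.a; });
            std::thread t2([&c] { for (int i = 0; i < 100'000'000; ++i) ++c.b; });
            t1.join();
            t2.join();
        }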