    Static Analysis in Practice

    Static analysis tools search software for defects that may cause an application to deviate from its intended behavior: defects that compute incorrect values, cause runtime exceptions or crashes, expose applications to security vulnerabilities, or degrade performance. In an ideal world, the analysis would precisely identify all possible defects. In reality, it is not always possible to infer the intent of a software component or code fragment, so static analysis tools sometimes emit spurious warnings or miss important bugs. As a result, tool makers and researchers focus on developing heuristics and techniques to improve speed and accuracy. In practice, however, speed and accuracy are not sufficient to maximize the value that software makers receive from static analysis; software engineering teams also need to make static analysis an effective part of their regular process.

    In this dissertation, I examine the ways static analysis is used in practice by commercial and open-source users. I observe that effectiveness is hampered not only by false warnings, but also by true defects that do not affect software behavior in practice. Indeed, mature production systems are often littered with true defects that do not prevent them from functioning mostly correctly. To understand why this occurs, observe that developers inadvertently create both important and unimportant defects when they write software, but most quality assurance activities are directed at finding the important ones. By the time the system is mature, there may still be a few consequential defects that static analysis can find, but they are drowned out by the many true but low-impact defects that were never fixed. An exception to this rule is certain classes of subtle security, performance, or concurrency defects that are hard to detect without static analysis.

    Software teams can use static analysis to find defects very early in the process, when they are cheapest to fix, and in so doing increase the effectiveness of later quality assurance activities. But this effort comes with costs that must be managed to ensure static analysis is worthwhile. The cost effectiveness of static analysis also depends on the nature of the defect being sought, the nature of the application, the infrastructure supporting the tools, and the policies governing their use.

    Through this research, I interact with real users through surveys, interviews, lab studies, and community-wide reviews to discover their perspectives and experiences, and to understand the costs and challenges incurred when adopting static analysis tools. I also analyze the defects found in real systems and make observations about which ones are fixed, why some seemingly serious defects persist, and what considerations static analysis tools and software teams should make to increase effectiveness. Ultimately, my interaction with real users confirms that static analysis is well received and useful in practice, but the right environment is needed to maximize its return on investment.
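    To make the notion of a "true but low-impact" defect concrete, the following is a minimal, hypothetical Java sketch (the class, field, and method names are illustrative only): the analyzer is correct that a null dereference is possible, yet every caller in practice takes the safe path, so the defect never manifests in production.

        import java.util.HashMap;
        import java.util.Map;

        public class LowImpactDefect {
            private final Map<String, String> cache = new HashMap<>();

            // A static analyzer correctly warns of a possible
            // NullPointerException: Map.get returns null for absent keys.
            public int cachedLength(String key) {
                String value = cache.get(key);
                return value.length();   // true defect: NPE if key is missing
            }

            public static void main(String[] args) {
                LowImpactDefect d = new LowImpactDefect();
                d.cache.put("greeting", "hello");
                // Every production caller happens to use a key that is
                // present, so the defect is real but never observed.
                System.out.println(d.cachedLength("greeting"));
            }
        }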

    Infrared: A Meta Bug Detector

    The recent breakthroughs in deep learning methods have sparked a wave of interest in learning-based bug detectors. Compared to traditional static analysis tools, these bug detectors are learned directly from data and are thus easier to create. On the other hand, they are difficult to train, requiring a large amount of data that is not readily available. In this paper, we propose a new approach, called meta bug detection, which offers three crucial advantages over existing learning-based bug detectors: it is bug-type generic (i.e., capable of catching types of bugs that were entirely unobserved during training), self-explainable (i.e., capable of explaining its own predictions without any external interpretability methods), and sample efficient (i.e., requiring substantially less training data than standard bug detectors). Our extensive evaluation shows that our meta bug detector (MBD) is effective at catching a variety of bugs, including null pointer dereferences, array index out-of-bounds errors, file handle leaks, and even data races in concurrent programs; in the process, MBD also significantly outperforms several noteworthy baselines, including Facebook Infer, a prominent static analysis tool, and FICS, the latest anomaly detection method.
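    For readers unfamiliar with the bug classes named above, here is a small Java illustration of one of them, a file handle leak, alongside the try-with-resources form that fixes it; the class and method names are hypothetical and not taken from the paper.

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class HandleLeakExample {

            // Leaky version: if readLine() throws, close() is never
            // reached and the underlying file handle leaks.
            static String firstLineLeaky(String path) throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader(path));
                String line = reader.readLine();
                reader.close();          // skipped on exception -> leak
                return line;
            }

            // Fixed version: try-with-resources closes the reader on
            // every path, normal or exceptional.
            static String firstLineSafe(String path) throws IOException {
                try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                    return reader.readLine();
                }
            }
        }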

    Evaluating and Enhancing FindBugs in Mature Software Systems: A Case Study at Valuatum

    Static code analysis (SCA) is a popular bug detection technique. However, several problems slow down its adoption. First, when SCA is first applied to a mature software system, the tools tend to report a large number of alerts that developers do not act on. Second, it is unclear how effective SCA is at finding real defects. We therefore conducted a case study at Valuatum to evaluate and enhance the effectiveness of FindBugs, a popular SCA tool for Java. The main goal of this thesis is to learn how to make FindBugs an effective tool that provides immediate, useful feedback for developers at Valuatum. We used several approaches to study FindBugs. First, we analyzed how many, and what types of, fixed defects could have been prevented with FindBugs. Second, we developed custom detectors for the most important defects missed by FindBugs. Third, we studied the precision of FindBugs in detecting open defects. Last, we presented several approaches, such as defect differencing and IDE integration, for dealing with the large number of alerts. The results indicate that FindBugs is not very effective at detecting fixed defects. We estimated that 9-16% of the fixed bugs should be feasible to detect with SCA, yet only 0-2% of the reported fixed bugs and 1-6% of the unreported fixed bugs could have been prevented with FindBugs. Moreover, only 18.5% of the high-priority open alerts were considered actionable. Nevertheless, we consider FindBugs a cost-effective tool, because it detected several important open issues and can be enhanced with custom detectors.
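    As an illustration of the custom detectors mentioned above, here is a minimal sketch of a project-specific FindBugs bytecode detector, assuming the standard FindBugs plugin API; the bug type MY_FORBIDDEN_CALL, the flagged method, and the class name are hypothetical, and a real plugin would also need findbugs.xml and messages.xml descriptor files.

        import edu.umd.cs.findbugs.BugInstance;
        import edu.umd.cs.findbugs.BugReporter;
        import edu.umd.cs.findbugs.bcel.OpcodeStackDetector;

        // Hypothetical detector flagging calls to Runtime.exec(), as an
        // example of a house rule that FindBugs does not ship with.
        public class ForbiddenExecDetector extends OpcodeStackDetector {

            private final BugReporter bugReporter;

            public ForbiddenExecDetector(BugReporter bugReporter) {
                this.bugReporter = bugReporter;
            }

            @Override
            public void sawOpcode(int seen) {
                // Inspect every virtual method invocation in the bytecode.
                if (seen == INVOKEVIRTUAL
                        && "java/lang/Runtime".equals(getClassConstantOperand())
                        && "exec".equals(getNameConstantOperand())) {
                    bugReporter.reportBug(
                        new BugInstance(this, "MY_FORBIDDEN_CALL", NORMAL_PRIORITY)
                            .addClassAndMethod(this)
                            .addSourceLine(this));
                }
            }
        }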

    Evaluating static analysis defect warnings on production software
