
    Using Automatic Static Analysis to Identify Technical Debt

    The technical debt (TD) metaphor describes a trade-off between short-term and long-term goals in software development. In such situations, developers accept compromises in one dimension (e.g. maintainability) to meet an urgent demand in another (e.g. delivering a release on time). Since TD accrues interest in the form of time spent correcting the code and meeting quality goals, its accumulation in software systems is dangerous because it can lead to more difficult and expensive maintenance. The research presented in this paper focuses on the use of automatic static analysis to identify technical debt at the code level with respect to different quality dimensions. The methodological approach is that of empirical software engineering, and both past and current results are presented, focusing on functionality, efficiency, and maintainability.
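    Below is a minimal sketch, in Python, of the kind of code-level check such ASA tools automate: it uses the standard ast module to flag bare except handlers, a common maintainability smell. The rule is illustrative only and is not one of the paper's actual checkers.

        import ast
        import sys

        def find_bare_excepts(source: str, filename: str = "<string>"):
            """Return (line, message) pairs for bare `except:` handlers,
            a classic maintainability smell flagged by many ASA tools."""
            issues = []
            for node in ast.walk(ast.parse(source, filename=filename)):
                if isinstance(node, ast.ExceptHandler) and node.type is None:
                    issues.append((node.lineno, "bare except: swallows all errors"))
            return issues

        if __name__ == "__main__":
            path = sys.argv[1]
            with open(path) as fh:
                for line, msg in find_bare_excepts(fh.read(), path):
                    print(f"{path}:{line}: {msg}")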

    Organizing the Technical Debt Landscape

    To date, several methods and tools for detecting source code and design anomalies have been developed. While each method focuses on identifying certain classes of source code anomalies that potentially relate to technical debt (TD), the overlaps and gaps among these classes and TD have not been rigorously demonstrated. We propose to construct a seminal technical debt landscape as a way to visualize and organize research on the subject.

    Quantitative Assessment of the Impact of Automatic Static Analysis Issues on Time Efficiency

    Background: Automatic Static Analysis (ASA) tools analyze source code and look for code patterns (aka smells) that might cause defective behavior or degrade other dimensions of software quality, e.g. efficiency. There are many potentially negative code patterns, and ASA tools typically report a huge list of them, even in small programs. Moreover, little evidence is available so far about the negative performance impact of the code patterns identified by such tools. As a consequence, programmers cannot appreciate the benefits of ASA tools and tend not to include them in their workflow. Aims: Quantitatively assess the impact of issues signaled by ASA tools on time efficiency. Method: We select 20 issues and, for each of them, set up two source code fragments: one containing the issue and a refactored version that is functionally identical but free of the issue. We set up three different platforms, isolated from the network and other user programs, then execute the code fragments and measure the execution time of both versions. Results: We find that eleven issues have an actual negative impact on performance. We also compute, for each issue, an estimate of the delay caused by a single execution. Conclusions: We produce a set of issues with a verified negative impact on performance. They can be checked easily with an analysis tool, and the code can be refactored to obtain provably more efficient code. We also provide the estimated delay cost of each issue in the environments where we conducted the tests. These results can be improved with the help of other researchers: repeating the tests on several platforms would make it possible to build up a wider benchmark.
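    As a minimal sketch of the measurement method, the Python microbenchmark below times one hypothetical issue pair: string concatenation in a loop versus a join-based refactoring, a pattern many ASA tools flag. The issues and platforms actually measured are those reported in the paper, not this one.

        import timeit

        N = 10_000

        def with_issue():
            # Issue variant: repeated concatenation may copy the whole
            # accumulated string on each iteration (quadratic in the worst
            # case; CPython sometimes optimizes this in place).
            s = ""
            for i in range(N):
                s += str(i)
            return s

        def refactored():
            # Functionally identical version without the issue.
            return "".join(str(i) for i in range(N))

        assert with_issue() == refactored()  # same observable behavior

        for fn in (with_issue, refactored):
            # Take the best of several repetitions to reduce noise from
            # the OS and other processes, as in typical microbenchmarks.
            t = min(timeit.repeat(fn, number=50, repeat=5))
            print(f"{fn.__name__:>11}: {t:.3f} s for 50 runs")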

    Monitoring Design Flaws in Object-Oriented Software

    Design flaws are introduced into software during maintenance as a result of failures to comply with design rules. Although design flaws adversely affect the maintainability of software and should be eliminated, in some cases introducing them is the best available design solution. The paper describes graphical representations intended to facilitate the monitoring of design flaws. A catalog of categories is defined that groups design flaws by their shared history, simplifying the interpretation of the graphical representations. It is shown that monitoring via visualization makes it possible to quickly identify progressing design flaws.
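    A minimal sketch of this kind of monitoring view, in Python with matplotlib, assuming hypothetical per-release flaw counts and category names (the paper's actual graphical representations and catalog are richer):

        import matplotlib.pyplot as plt

        # Hypothetical counts of design-flaw instances per release,
        # grouped into categories; a rising line signals a progressing flaw.
        releases = ["1.0", "1.1", "1.2", "2.0", "2.1"]
        flaw_counts = {
            "God Class":    [3, 4, 6, 9, 13],  # progressing: needs attention
            "Feature Envy": [5, 5, 4, 4, 3],   # slowly shrinking
            "Data Class":   [2, 2, 2, 3, 2],   # stable
        }

        fig, ax = plt.subplots()
        for category, counts in flaw_counts.items():
            ax.plot(releases, counts, marker="o", label=category)

        ax.set_xlabel("Release")
        ax.set_ylabel("Flaw instances")
        ax.set_title("Design-flaw monitoring across releases")
        ax.legend()
        plt.show()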

    Heuristics for Discovering Architectural Violations

    Software architecture conformance is a key software quality control activity that aims to reveal the progressive gap normally observed between the concrete and the planned software architecture. In this paper, we present ArchLint, a lightweight approach to architecture conformance checking based on a combination of static and historical source code analysis. ArchLint relies on four heuristics for detecting both absences and divergences in source-code-based architectures. We applied ArchLint to an industrial-strength system and detected 119 architectural violations, with an overall precision of 46.7% and, for divergences, a recall of 96.2%. We also evaluated ArchLint on four open-source systems used in an independent study on reflexion models; in this second study, ArchLint achieved precision results ranging from 57.1% to 89.4%.
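    The Python sketch below illustrates a purely static divergence check in the spirit of such heuristics, assuming a hypothetical three-layer architecture and sources under src/ (ArchLint itself targets Java systems and additionally mines version history):

        import ast
        from pathlib import Path

        # Hypothetical planned architecture: which layer may depend on which.
        ALLOWED = {
            "ui":      {"ui", "service"},
            "service": {"service", "model"},
            "model":   {"model"},
        }

        def layer_of(module: str):
            # Convention: the top-level package name identifies the layer.
            top = module.split(".")[0]
            return top if top in ALLOWED else None

        def divergences(root: Path):
            """Yield (file, line, module) for imports that violate the
            planned dependencies (divergences, in reflexion-model terms)."""
            for path in root.rglob("*.py"):
                src_layer = layer_of(path.relative_to(root).parts[0])
                if src_layer is None:
                    continue
                tree = ast.parse(path.read_text(), filename=str(path))
                for node in ast.walk(tree):
                    if isinstance(node, ast.Import):
                        names = [alias.name for alias in node.names]
                    elif isinstance(node, ast.ImportFrom) and node.module:
                        names = [node.module]
                    else:
                        continue
                    for name in names:
                        target = layer_of(name)
                        if target and target not in ALLOWED[src_layer]:
                            yield path, node.lineno, name

        if __name__ == "__main__":
            for path, line, name in divergences(Path("src")):
                print(f"{path}:{line}: illegal dependency on {name}")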

    Tracking Your Changes: A Language-Independent Approach

    Selective Regression Testing based on Big Data: Comparing Feature Extraction Techniques

    Regression testing is a necessary activity in continuous integration (CI), since it provides confidence that modified parts of the system are correct at each integration cycle. CI provides large volumes of data which can be used to support regression testing activities. By using machine learning, patterns about faulty changes in the modified program can be induced, allowing test orchestrators to make inferences about the test cases that need to be executed at each CI cycle. One challenge in using learning models, however, lies in finding a suitable way of characterizing source code changes while preserving important information. In this paper, we empirically evaluate the effect of three feature extraction (FE) algorithms on the performance of an existing ML-based selective regression testing technique. We designed and performed an experiment to investigate the effect of Bag of Words (BoW), Word Embeddings (WE), and content-based feature extraction (CBF). We used stratified cross-validation on the space of features generated by the three FE techniques and evaluated the performance of three machine learning models using the precision and recall metrics. The results show a significant difference between the models' precision and recall scores, suggesting that the BoW-fed model outperforms the other two with respect to precision, whereas the CBF-fed model outperforms the rest with respect to recall.
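    A minimal sketch of one arm of such an experiment, the BoW variant, assuming a hypothetical corpus of labeled change descriptions and scikit-learn (the study itself mines CI data and compares three FE techniques and three models):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_validate
        from sklearn.pipeline import make_pipeline

        # Hypothetical corpus: textual representations of code changes,
        # labeled 1 if the change later caused a test failure, else 0.
        changes = [
            "fix null check in payment service",
            "refactor logging configuration",
            "add retry logic to network client",
            "update copyright headers",
        ]
        labels = [1, 0, 1, 0]

        # Bag of Words features feeding a simple classifier.
        model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))

        cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
        scores = cross_validate(model, changes, labels, cv=cv,
                                scoring=("precision", "recall"))
        print("precision:", scores["test_precision"].mean())
        print("recall:   ", scores["test_recall"].mean())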