
    Automatically assessing and improving code readability and understandability


    A Decade of Code Comment Quality Assessment: A Systematic Literature Review

    Code comments are important artifacts in software systems and play a paramount role in many software engineering (SE) tasks related to maintenance and program comprehension. However, while it is widely accepted that high quality matters in code comments just as it matters in source code, assessing comment quality in practice is still an open problem. First and foremost, there is no unique definition of quality when it comes to evaluating code comments. The few existing studies on this topic rather focus on specific attributes of quality that can be easily quantified and measured. Existing techniques and corresponding tools may also focus on comments bound to a specific programming language, and may only deal with comments with specific scopes and clear goals (e.g., Javadoc comments at the method level, or in-body comments describing TODOs to be addressed). In this paper, we present a Systematic Literature Review (SLR) of the last decade of research in SE to answer the following research questions: (i) What types of comments do researchers focus on when assessing comment quality? (ii) What quality attributes (QAs) do they consider? (iii) Which tools and techniques do they use to assess comment quality? and (iv) How do they evaluate their studies on comment quality assessment in general? Our evaluation, based on the analysis of 2353 papers and the actual review of 47 relevant ones, shows that (i) most studies and techniques focus on comments in Java code and thus may not be generalizable to other languages, and (ii) the analyzed studies focus on four main QAs out of a total of 21 QAs identified in the literature, with a clear predominance of checking consistency between comments and the code. We observe that researchers rely on manual assessment and specific heuristics rather than on automated assessment of the comment quality attributes.
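Heuristic comment-code consistency checks of the kind the review identifies can be quite lightweight. The sketch below flags a comment whose vocabulary barely overlaps the terms in the method name; the function names, the term-overlap measure, and the 0.25 threshold are illustrative assumptions and are not taken from any of the surveyed techniques.

```python
import re

def identifier_terms(name: str) -> set[str]:
    """Split a camelCase or snake_case identifier into lowercase terms."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", name)
    return {p.lower() for p in parts if p}

def comment_terms(comment: str) -> set[str]:
    """Extract lowercase word tokens from a comment string."""
    return {w.lower() for w in re.findall(r"[A-Za-z]+", comment)}

def likely_inconsistent(method_name: str, comment: str, min_overlap: float = 0.25) -> bool:
    """Flag a comment whose vocabulary shares too few terms with the method name."""
    terms = identifier_terms(method_name)
    if not terms:
        return False
    overlap = len(terms & comment_terms(comment)) / len(terms)
    return overlap < min_overlap

# Example: the comment talks about sorting, but the method removes duplicates.
print(likely_inconsistent("removeDuplicateEntries", "Sorts the list in ascending order."))  # True
```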

    An empirical evaluation of the “cognitive complexity” measure as a predictor of code understandability

    Background: Code that is difficult to understand is also difficult to inspect and maintain, and ultimately causes increased costs. Therefore, it would be greatly beneficial to have source code measures that are related to code understandability. Many "traditional" source code measures, including for instance Lines of Code and McCabe's Cyclomatic Complexity, have been used to identify hard-to-understand code. In addition, the "Cognitive Complexity" measure was introduced in 2018 with the specific goal of improving the ability to evaluate code understandability. Aims: The goals of this paper are to assess whether (1) "Cognitive Complexity" is better correlated with code understandability than traditional measures, and (2) the availability of the "Cognitive Complexity" measure improves the performance (i.e., the accuracy) of code understandability prediction models. Method: We carried out an empirical study in which we reused code understandability measures used in several previous studies. We first built Support Vector Regression models of understandability vs. code measures, and we then compared the performance of models that use "Cognitive Complexity" against the performance of models that do not. Results: "Cognitive Complexity" appears to be correlated with code understandability approximately as much as traditional measures are, and the performance of models that use "Cognitive Complexity" is extremely close to the performance of models that use only traditional measures. Conclusions: The "Cognitive Complexity" measure does not appear to fulfill the promise of being a significant improvement over previously proposed measures, as far as code understandability prediction is concerned.
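The comparison described under Method can be sketched as follows. The data are synthetic placeholders, and the feature set, kernel, and evaluation protocol are assumptions for illustration rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Hypothetical dataset: one row per code snippet, columns are code measures
# plus a measured understandability score (e.g., a mean comprehension rating).
rng = np.random.default_rng(0)
n = 200
loc = rng.integers(5, 120, n)                    # Lines of Code
cyclomatic = rng.integers(1, 20, n)              # McCabe's Cyclomatic Complexity
cognitive = cyclomatic + rng.integers(0, 8, n)   # "Cognitive Complexity" (correlated stand-in)
understandability = 10 - 0.04 * loc - 0.2 * cyclomatic + rng.normal(0, 1, n)

X_traditional = np.column_stack([loc, cyclomatic])
X_with_cognitive = np.column_stack([loc, cyclomatic, cognitive])
cv = KFold(n_splits=10, shuffle=True, random_state=0)

def score(X):
    """Cross-validated negative mean absolute error of an SVR model (higher is better)."""
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
    return cross_val_score(model, X, understandability, cv=cv,
                           scoring="neg_mean_absolute_error").mean()

print("traditional measures only      :", score(X_traditional))
print("with Cognitive Complexity added:", score(X_with_cognitive))
```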

    A Right to Access Implies A Right to Know: An Open Online Platform for Research on the Readability of Law

    The widespread availability of legal materials online has opened the law to a new and greatly expanded readership. These new readers need the law to be readable by them when they encounter it. However, the available empirical research supports the conclusion that legislation is difficult to read, if not incomprehensible, to most citizens. We review approaches that have been used to measure the readability of text, including readability metrics, cloze testing, and the application of machine learning. We report the creation and testing of an open online platform for readability research. This platform is made available to researchers interested in undertaking research on the readability of legal materials. To demonstrate the capabilities of the platform, we report its initial application to a corpus of legislation. Linguistic characteristics are extracted using the platform and then used as input features for machine learning using the Weka package. Wide divergences are found between sentences in a corpus of legislation and those in a corpus of graded reading material or in the Brown corpus (a balanced corpus of English written genres). Readability metrics are found to be of little value in classifying sentences by grade reading level (noting that such metrics were not designed to be used with isolated sentences).
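The pipeline sketched in the abstract (extract per-sentence linguistic features, then learn a grade-level classifier) might look roughly like this. The platform itself relies on the Java-based Weka package; this scikit-learn version, its toy graded corpus, and its three shallow features are purely illustrative assumptions.

```python
import re
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def sentence_features(sentence: str) -> list[float]:
    """A few shallow linguistic features of a single sentence."""
    words = re.findall(r"[A-Za-z']+", sentence)
    if not words:
        return [0.0, 0.0, 0.0]
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    long_words = sum(1 for w in words if len(w) >= 7)
    return [float(len(words)),          # sentence length in words
            syllables / len(words),     # mean syllables per word
            long_words / len(words)]    # proportion of long words

# Hypothetical graded corpus: (sentence, grade band) pairs, repeated to give
# the classifier enough samples for cross-validation.
corpus = [
    ("The cat sat on the mat.", "lower"),
    ("We went to the park and played.", "lower"),
    ("The committee deferred consideration of the amendment indefinitely.", "upper"),
    ("Subject to subsection (2), the regulations prescribe the applicable threshold.", "upper"),
] * 25

X = [sentence_features(s) for s, _ in corpus]
y = [label for _, label in corpus]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```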

    Assessment of Source Code Obfuscation Techniques

    Obfuscation techniques are a general category of software protections widely adopted to prevent malicious tampering of the code by making applications more difficult to understand and thus harder to modify. Obfuscation techniques are divided into code and data obfuscation, depending on the protected asset. While preliminary empirical studies have been conducted to determine the impact of code obfuscation, our work aims at assessing the effectiveness and efficiency in preventing attacks of a specific data obfuscation technique, VarMerge. We conducted an experiment with student participants performing two attack tasks on clear and obfuscated versions of two applications written in C. The experiment showed a significant effect of data obfuscation on both the time required to complete an attack and the successful attack efficiency: an application protected with VarMerge reduces the number of successful attacks per unit of time by a factor of six. This outcome provides a practical clue that can be used when applying software protections based on data obfuscation.
    Comment: Post-print, SCAM 201
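VarMerge-style data obfuscation merges independent variables into a single storage location so that individual reads and writes become harder to follow. The study targets C applications; the sketch below only renders the general idea in Python, and the counter example and helper names are invented for illustration.

```python
# Clear version would keep two independent 8-bit counters:
#   hits = 0; misses = 0
# The obfuscated version stores both in one 16-bit variable:
# low byte holds hits, high byte holds misses.

merged = 0

def add_hit():
    global merged
    merged = (merged & 0xFF00) | ((merged + 1) & 0x00FF)

def add_miss():
    global merged
    merged = ((merged + 0x0100) & 0xFF00) | (merged & 0x00FF)

def hits():
    return merged & 0x00FF

def misses():
    return (merged >> 8) & 0xFF

add_hit(); add_hit(); add_miss()
print(hits(), misses())  # 2 1
```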

    A Systematic Mapping Study of Code Quality in Education -- with Complete Bibliography

    While functionality and correctness of code have traditionally been the main focus of computing educators, quality aspects of code are receiving increasingly more attention. High-quality code contributes to the maintainability of software systems and should therefore be a central aspect of computing education. We have conducted a systematic mapping study to give a broad overview of the research conducted in the field of code quality in an educational context. The study investigates paper characteristics, topics, research methods, and the targeted programming languages. We found 195 publications (1976-2022) on the topic in multiple databases, which we systematically coded to answer the research questions. This paper reports on the results and identifies developments, trends, and new opportunities for research in the field of code quality in computing education.

    The effects of change decomposition on code review -- a controlled experiment

    Background: Code review is a cognitively demanding and time-consuming process. Previous qualitative studies hinted at how decomposing change sets into multiple yet internally coherent ones would improve the reviewing process. So far, the literature has provided no quantitative analysis of this hypothesis. Aims: (1) Quantitatively measure the effects of change decomposition on the outcome of code review (in terms of number of found defects, wrongly reported issues, suggested improvements, time, and understanding); (2) Qualitatively analyze how subjects approach the review and navigate the code, building knowledge and addressing existing issues, in large vs. decomposed changes. Method: Controlled experiment using the pull-based development model involving 28 software developers among professionals and graduate students. Results: Change decomposition leads to fewer wrongly reported issues and influences how subjects approach and conduct the review activity (by increasing context-seeking), yet it impacts neither understanding the change rationale nor the number of found defects. Conclusions: Change decomposition not only reduces the noise for subsequent data analyses but also significantly supports the tasks of the developers in charge of reviewing the changes. As such, commits belonging to different concepts should be separated, adopting this as a best practice in software engineering.

    How do Developers Improve Code Readability? An Empirical Study of Pull Requests

    Readability models and tools have been proposed to measure the effort required to read code. However, these models are not completely able to capture the quality improvements in code as perceived by developers. To investigate possible features for new readability models and production-ready tools, we aim to better understand the types of readability improvements performed by developers when actually improving code readability, and to identify discrepancies between the suggestions of automatic static tools and the actual improvements performed by developers. We collected 370 code readability improvements from 284 Merged Pull Requests (PRs) under 109 GitHub repositories and produced a catalog with 26 different types of code readability improvements; in most of the scenarios, the developers improved the code readability to make it more intuitive, modular, and less verbose. Surprisingly, SonarQube only detected 26 out of the 370 code readability improvements. This suggests that part of the produced catalog has not yet been addressed by SonarQube rules, highlighting the potential for improvement in automatic static analysis tool (ASAT) code readability rules as they are perceived by developers.
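To make the catalog types concrete, here is a hypothetical before/after change of the "less verbose, more intuitive" kind; it is not drawn from the studied pull requests.

```python
# Before: verbose, index-based accumulation.
def sum_positive_before(values):
    total = 0
    for i in range(len(values)):
        if values[i] > 0:
            total = total + values[i]
    return total

# After: the same behavior, expressed more directly.
def sum_positive_after(values):
    return sum(v for v in values if v > 0)

assert sum_positive_before([3, -1, 4]) == sum_positive_after([3, -1, 4]) == 7
```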

    An Empirical Validation of Cognitive Complexity as a Measure of Source Code Understandability

    Background: Developers spend a lot of their time on understanding source code. Static code analysis tools can draw attention to code that is difficult for developers to understand. However, most of the findings are based on non-validated metrics, which can lead to confusion and to hard-to-understand code not being identified. Aims: In this work, we validate a metric called Cognitive Complexity, which was explicitly designed to measure code understandability and which is already widely used due to its integration in well-known static code analysis tools. Method: We conducted a systematic literature search to obtain data sets from studies which measured code understandability. This way we obtained about 24,000 understandability evaluations of 427 code snippets. We calculated the correlations of these measurements with the corresponding metric values and statistically summarized the correlation coefficients through a meta-analysis. Results: Cognitive Complexity positively correlates with comprehension time and subjective ratings of understandability. The metric showed mixed results for the correlation with the correctness of comprehension tasks and with physiological measures. Conclusions: It is the first validated and solely code-based metric which is able to reflect at least some aspects of code understandability. Moreover, due to its methodology, this work shows that code understanding is currently measured in many different ways, and we do not know how these ways are related to each other. This makes it difficult to compare the results of individual studies as well as to develop a metric that measures code understanding in all its facets.
    Comment: 12 pages. To be published at ESEM '20: ACM / IEEE International Symposium on Empirical Software Engineering and Measurement
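The statistical summarization described under Method (per-study correlations pooled through a meta-analysis) can be sketched as below. The per-study data are fabricated placeholders, and the pooling uses a simple fixed-effect Fisher-z average, which is an assumption rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study data: (Cognitive Complexity, comprehension time in seconds).
studies = [
    (np.array([1, 3, 5, 8, 12, 15]), np.array([20, 25, 31, 29, 55, 60])),
    (np.array([2, 2, 6, 9, 11]),     np.array([18, 22, 35, 38, 50])),
    (np.array([4, 7, 10, 14]),       np.array([30, 29, 44, 58])),
]

zs, weights = [], []
for complexity, time in studies:
    r, _ = stats.spearmanr(complexity, time)   # per-study rank correlation
    n = len(complexity)
    zs.append(np.arctanh(r))                   # Fisher z-transform
    weights.append(n - 3)                      # inverse-variance weight for z

pooled_z = np.average(zs, weights=weights)
pooled_r = np.tanh(pooled_z)                   # back-transform to a correlation
print(f"pooled Spearman correlation: {pooled_r:.2f}")
```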