A Decade of Code Comment Quality Assessment: A Systematic Literature Review
Code comments are important artifacts in software systems and play a
paramount role in many software engineering (SE) tasks related to maintenance
and program comprehension. However, while it is widely accepted that high
quality matters in code comments just as it matters in source code, assessing
comment quality in practice is still an open problem. First and foremost, there
is no unique definition of quality when it comes to evaluating code comments.
The few existing studies on this topic rather focus on specific attributes of
quality that can be easily quantified and measured. Existing techniques and
corresponding tools may also focus on comments bound to a specific programming
language, and may only deal with comments with specific scopes and clear goals
(e.g., Javadoc comments at the method level, or in-body comments describing
TODOs to be addressed). In this paper, we present a Systematic Literature
Review (SLR) of the last decade of research in SE to answer the following
research questions: (i) What types of comments do researchers focus on when
assessing comment quality? (ii) What quality attributes (QAs) do they consider?
(iii) Which tools and techniques do they use to assess comment quality? and
(iv) How do they evaluate their studies on comment quality assessment in
general? Our evaluation, based on the analysis of 2353 papers and the actual
review of 47 relevant ones, shows that (i) most studies and techniques focus on
comments in Java code, thus may not be generalizable to other languages, and
(ii) the analyzed studies focus on four main QAs of a total of 21 QAs
identified in the literature, with a clear predominance of checking consistency
between comments and the code. We observe that researchers rely on manual
assessment and specific heuristics rather than on the automated assessment
of comment quality attributes, with evaluations often involving surveys of
students and the authors of the original studies but rarely professional
developers.
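Since comment-code consistency is the most frequently checked QA in the reviewed studies, a small heuristic gives a feel for what such checks do. The following sketch is illustrative only (it is not a tool from the SLR) and assumes Python sources with Sphinx-style docstrings: it flags :param: entries whose names no longer appear in the function signature.

```python
# Illustrative sketch (not from the reviewed studies): a heuristic
# comment-code consistency check. It flags docstring ":param:" entries
# that refer to parameters missing from the function signature.
import ast
import re

def stale_param_mentions(source: str) -> list[str]:
    """Return warnings for :param: names absent from each signature."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.FunctionDef):
            continue
        doc = ast.get_docstring(node) or ""
        declared = {a.arg for a in node.args.args}
        for name in re.findall(r":param\s+(\w+):", doc):
            if name not in declared:
                warnings.append(f"{node.name}: docstring mentions "
                                f"unknown parameter '{name}'")
    return warnings

example = '''
def scale(value, factor):
    """Scale a value.

    :param value: the input number
    :param amount: OUTDATED -- this parameter was renamed to 'factor'
    """
    return value * factor
'''

print(stale_param_mentions(example))
# ["scale: docstring mentions unknown parameter 'amount'"]
```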
On the Effect of Semantically Enriched Context Models on Software Modularization
Many of the existing approaches for program comprehension rely on the
linguistic information found in source code, such as identifier names and
comments. Semantic clustering is one such technique for modularization of the
system that relies on the informal semantics of the program, encoded in the
vocabulary used in the source code. Treating the source code as a collection of
tokens loses the semantic information embedded within the identifiers. We try
to overcome this problem by introducing context models for source code
identifiers to obtain a semantic kernel, which can be used for both deriving
the topics that run through the system as well as their clustering. In the
first model, we abstract an identifier to its type representation and build on
this notion of context to construct contextual vector representation of the
source code. The second notion of context is defined based on the flow of data
between identifiers to represent a module as a dependency graph where the nodes
correspond to identifiers and the edges represent the data dependencies between
pairs of identifiers. We have applied our approach to 10 medium-sized open
source Java projects, and show that by introducing contexts for identifiers,
the quality of the modularization of the software systems is improved. Both of
the context models give results that are superior to the plain vector
representation of documents. In some cases, the authoritativeness of
decompositions is improved by 67%. Furthermore, a more detailed evaluation of
our approach on JEdit, an open source editor, demonstrates that inferred topics
through performing topic analysis on the contextual representations are more
meaningful compared to the plain representation of the documents. The proposed
approach in introducing a context model for source code identifiers paves the
way for building tools that support developers in program comprehension tasks
such as application and domain concept location, software modularization and
topic analysis.
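For context, the "plain vector representation of documents" that both context models outperform can be approximated with a standard bag-of-identifiers pipeline. The sketch below is a toy baseline under that assumption (hypothetical file contents, scikit-learn for TF-IDF and k-means), not the authors' context-model implementation:

```python
# Toy baseline: "plain vector representation" semantic clustering.
# Not the paper's context models -- just the bag-of-tokens baseline
# they compare against, using scikit-learn.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def identifier_tokens(code: str) -> str:
    """Split camelCase/snake_case identifiers into lowercase words."""
    words = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", code)
    return " ".join(w.lower() for w in words)

# Hypothetical file contents standing in for a real Java project.
files = {
    "AccountService.java": "openAccount closeAccount accountBalance deposit",
    "AccountRepository.java": "saveAccount findAccountById accountTable",
    "ReportRenderer.java": "renderReport exportPdf pageLayout drawHeader",
    "ReportBuilder.java": "buildReport addSection reportTemplate",
}

docs = [identifier_tokens(src) for src in files.values()]
vectors = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for name, label in zip(files, labels):
    print(label, name)  # files sharing vocabulary land in the same cluster
```

The paper's contribution replaces these plain vectors with context-aware representations built from identifier types and data-flow dependencies, which is where the reported improvements in authoritativeness come from.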
Tools, processes and factors influencing code review
Code review is the most effective quality assurance strategy in software development, in which reviewers aim to identify defects and improve the quality of source code in both commercial and open-source software. Ultimately, the main purpose of code review activities is to produce better software products. Review comments are the building blocks of code review. There are many approaches to conducting reviews and analyzing source code, such as pair programming, informal inspections, and formal inspections. Reviewers are responsible for providing comments and suggestions to improve the quality of the proposed source code modifications. This work aims to succinctly describe the code review process, providing a framework of the tools and factors influencing code review to aid reviewers and authors through the code review stages and to help them choose a suitable code review tool.
A study of the impact of generative AI-based data augmentation on software metadata classification
This paper presents the system submitted by the team from IIT(ISM) Dhanbad
to FIRE IRSE 2023 shared task 1 on the automatic usefulness prediction of
code-comment pairs, and it studies the impact of adding Large Language
Model (LLM)-generated data to the original base data for an associated
source code. We have developed a framework in which we train a machine
learning model on neural contextual representations of comments and their
corresponding code to predict the usefulness of code-comment pairs, and we
analyze its performance when the base data is augmented with LLM-generated
data. In the official assessment, our system achieves a 4% increase in
F1-score over the baseline; we also evaluate the quality of the generated
data.
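As a rough sketch of this kind of pipeline, one can embed comment-code pairs with a pretrained sentence encoder and train a lightweight classifier on base plus augmented data. Everything below (encoder choice, data, separator token) is an assumption for illustration, not the submitted system:

```python
# Illustrative sketch of a usefulness classifier for code-comment
# pairs; the encoder, labels, and toy data are assumptions, not the
# team's actual system.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

# Hypothetical labeled pairs: (comment, code, useful?)
base = [
    ("sorts items in place", "items.sort()", 1),
    ("TODO", "return x", 0),
]
# LLM-generated pairs would be appended here to augment the base data.
augmented = [
    ("returns the square of n", "def sq(n): return n * n", 1),
]

def featurize(pairs):
    texts = [f"{comment} [SEP] {code}" for comment, code, _ in pairs]
    return encoder.encode(texts), [label for *_, label in pairs]

X, y = featurize(base + augmented)
clf = LogisticRegression(max_iter=1000).fit(X, y)
query = encoder.encode(["adds two numbers [SEP] def add(a, b): return a + b"])
print(clf.predict(query))
```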
An Empirical Study of Self-Admitted Technical Debt in Machine Learning Software
The emergence of open-source ML libraries such as TensorFlow and Google Auto
ML has enabled developers to harness state-of-the-art ML algorithms with
minimal overhead. However, during this accelerated ML development process, said
developers may often make sub-optimal design and implementation decisions,
leading to the introduction of technical debt that, if not addressed promptly,
can have a significant impact on the quality of the ML-based software.
Developers frequently acknowledge these sub-optimal design and development
choices through code comments during software development. These comments,
which often highlight areas requiring additional work or refinement in the
future, are known as self-admitted technical debt (SATD). This paper aims to
investigate SATD in ML code by analyzing 318 open-source ML projects across
five domains, along with 318 non-ML projects. We detected SATD in source code
comments throughout the different project snapshots, conducted a manual
analysis of the identified SATD sample to comprehend the nature of technical
debt in the ML code, and performed a survival analysis of the SATD to
understand the evolution of such debts. We observed: i) Machine learning
projects have a median percentage of SATD that is twice the median percentage
of SATD in non-machine learning projects. ii) ML pipeline components for data
preprocessing and model generation logic are more susceptible to debt than
model validation and deployment components. iii) SATDs appear in ML projects
earlier in the development process compared to non-ML projects. iv)
Long-lasting SATDs are typically introduced during extensive code changes that
span multiple files exhibiting low complexity.
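SATD detection in comments is commonly keyword-based in this line of work (matching task annotations such as TODO or FIXME). The minimal detector below illustrates that style; the pattern list and snippet are illustrative assumptions, not the study's exact tooling:

```python
# Minimal keyword-based SATD detector over source comments, in the
# spirit of pattern-matching approaches from the SATD literature.
# The patterns and example snippet are illustrative assumptions.
import re

SATD_PATTERNS = re.compile(r"\b(TODO|FIXME|HACK|XXX|workaround|temporary)\b",
                           re.IGNORECASE)

def satd_comments(source: str) -> list[str]:
    """Return single-line comments that look like self-admitted debt."""
    comments = re.findall(r"(?://|#)\s*(.+)", source)
    return [c for c in comments if SATD_PATTERNS.search(c)]

snippet = """
# TODO: replace this hand-rolled scaler with a standard library call
x = (x - x.mean()) / x.std()
# normalize labels
y = y / y.max()
"""

print(satd_comments(snippet))
# ['TODO: replace this hand-rolled scaler with a standard library call']
```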
SATDBailiff: Mining and Tracking Self-Admitted Technical Debt
Self-Admitted Technical Debt (SATD) is a metaphorical concept describing the self-documented addition of technical debt to a software project in the form of source code comments. SATD can linger in projects and degrade source-code quality, but it can also be more visible than unintentionally added or undocumented technical debt. Understanding the implications of adding SATD to a software project is important because developers can benefit from a better understanding of the quality trade-offs they are making. However, empirical studies analyzing the survivability and removal of SATD comments are challenged by code changes or SATD comment updates that may interfere with properly tracking their appearance, existence, and removal. In this paper, we propose SATDBailiff, a tool that uses an existing state-of-the-art SATD detection tool to identify SATD in method comments and then properly track its lifespan. SATDBailiff takes links to open source projects as input and outputs a list of all identified SATD instances; for each detected instance, it reports all associated changes, including any updates to its text, up to and including its removal. The goal of SATDBailiff is to aid researchers and practitioners in better tracking SATD instances and to provide them with a reliable tool that can be easily extended. SATDBailiff was validated using a dataset of previously detected and manually validated SATD instances. It is publicly available as open source, along with the manual analysis of the SATD instances used in its validation, on the project website.
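Tracking a comment's lifespan means walking the project history and matching added and removed comment lines across commits. A toy version of that idea, sketched here with the PyDriller library (an assumption for illustration; SATDBailiff's own pipeline differs), could look like this:

```python
# Toy SATD lifespan tracker: records commits that add or remove lines
# matching a debt keyword. PyDriller is used only for illustration;
# SATDBailiff's internals differ.
import re
from pydriller import Repository

PATTERN = re.compile(r"\b(TODO|FIXME|HACK)\b", re.IGNORECASE)

def track_satd(repo_path: str) -> list[tuple[str, str, str]]:
    events = []
    for commit in Repository(repo_path).traverse_commits():
        for mod in commit.modified_files:
            for _, line in mod.diff_parsed["added"]:
                if PATTERN.search(line):
                    events.append((commit.hash[:8], "added", line.strip()))
            for _, line in mod.diff_parsed["deleted"]:
                if PATTERN.search(line):
                    events.append((commit.hash[:8], "removed", line.strip()))
    return events

for event in track_satd("path/to/local/clone"):  # hypothetical repo path
    print(*event)
```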
Investigation on Self-Admitted Technical Debt in Open-Source Blockchain Projects
Technical debt refers to decisions made during the design and development of software that postpone the resolution of technical problems or the enhancement of the software's features to a later date. If not properly managed, technical debt can put long-term software quality and maintainability at risk. Self-admitted technical debt is defined as the addition of specific comments to source code as a result of conscious and deliberate decisions to accumulate technical debt. In this paper, we examine the presence of self-admitted technical debt in open-source blockchain projects, which are characterized by the use of a relatively novel technology and the need to generate trust. The self-admitted technical debt was analyzed using NLP techniques for the classification of comments extracted from the source code of ten projects chosen based on capitalization and popularity. The analysis of self-admitted technical debt in blockchain projects was compared with the results of previous analyses of non-blockchain open-source projects. The findings show that self-admitted design technical debt outnumbers requirement technical debt in blockchain projects. The analysis also discovered that some projects had a low percentage of self-admitted technical debt in the comments but a high percentage of source code files with debt. In addition, self-admitted technical debt is on average more prevalent in blockchain projects and more evenly distributed than in reference Java projects. If not managed, the relatively high presence of detected technical debt in blockchain projects could represent a threat to the trust needed between the blockchain system and its users. Blockchain project development teams could benefit from self-admitted technical debt detection for targeted technical debt management.
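Classifying comments into debt categories (design vs. requirement) is typically framed as supervised text classification. The recipe below shows that general shape with scikit-learn; the comments and labels are made-up placeholders, not the study's dataset or pipeline:

```python
# Generic comment classifier for SATD categories; the training data
# and labels are made-up placeholders, not the study's dataset.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

comments = [
    "hack: this coupling should be refactored",        # design debt
    "workaround until the consensus module is ready",  # requirement debt
    "temporary fix, clean up this god class later",    # design debt
    "feature not implemented for light clients yet",   # requirement debt
]
labels = ["design", "requirement", "design", "requirement"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(comments, labels)

print(model.predict(["ugly hack around the wallet API, needs redesign"]))
# ['design'] -- on real data one would evaluate with cross-validation
```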