158 research outputs found
Employing Deep Learning and Structured Information Retrieval to Answer Clarification Questions on Bug Reports
Software bug reports reported on bug-tracking systems often lack crucial
information for the developers to promptly resolve them, costing companies
billions of dollars. There has been significant research on effectively
eliciting information from bug reporters in bug tracking systems using
different templates that bug reporters need to use. However, the need for
asking follow-up questions persists. Recent studies propose techniques to
suggest these follow-up questions to help developers obtain the missing
details, but there has been little research on answering these follow-up
questions, which often go unanswered. In this paper, we propose a novel
approach that combines CodeT5 with Lucene, an information retrieval library,
leveraging the relevance of different bug reports, their components, and
follow-up questions to recommend answers. These top-performing answers, along
with their bug reports, serve as additional context beyond the deficient bug
report for the deep learning model that generates an answer.
We evaluate our recommended answers against manually annotated answers using
similarity metrics such as Normalized Smooth BLEU Score, METEOR, Word Mover's
Distance, and Semantic Similarity. We achieve a BLEU score of up to 34 and a
Semantic Similarity of up to 64, which indicates that the generated answers
are understandable and good according to Google's standard and can outperform
multiple baselines.
A Comparative Study of Text Embedding Models for Semantic Text Similarity in Bug Reports
Bug reports are an essential aspect of software development, and it is
crucial to identify and resolve them quickly to ensure the consistent
functioning of software systems. Retrieving similar bug reports from an
existing database can help reduce the time and effort required to resolve bugs.
In this paper, we compared the effectiveness of semantic textual similarity
methods for retrieving similar bug reports based on a similarity score. We
explored several embedding models such as TF-IDF (Baseline), FastText, Gensim,
BERT, and ADA. We used the Software Defects Data containing bug reports for
various software projects to evaluate the performance of these models. Our
experimental results showed that BERT generally outperformed the other
models in terms of recall, followed by ADA, Gensim, FastText, and TF-IDF. Our
study provides insights into the effectiveness of different embedding methods
for retrieving similar bug reports and highlights the impact of selecting the
appropriate one for this task. Our code is available on GitHub.
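Since the models are compared on recall, a minimal sketch of the evaluation metric itself may help: recall@k over a ranked retrieval list. The report IDs and rankings below are hypothetical.

```python
def recall_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the truly similar reports that appear in the top-k results."""
    relevant = set(relevant_ids)
    if not relevant:
        return 0.0
    return len(relevant & set(ranked_ids[:k])) / len(relevant)

# Hypothetical rankings for one query, produced by two embedding models.
bert_ranking = ["bug42", "bug7", "bug13", "bug99"]
tfidf_ranking = ["bug99", "bug13", "bug42", "bug7"]
relevant = ["bug42", "bug7"]
```

Averaging this quantity over all queries in the Software Defects Data would give the per-model recall figures the study compares.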
Supporting Source Code Search with Context-Aware and Semantics-Driven Query Reformulation
Software bugs and failures cost trillions of dollars every year and can even lead to deadly accidents (e.g., the Therac-25 accident). During maintenance, software developers fix numerous bugs and implement hundreds of new features by making necessary changes to the existing software code. Once an issue report (e.g., a bug report or change request) is assigned to a developer, she chooses a few important keywords from the report as a search query and then attempts to find the exact locations in the software code that need to be either repaired or enhanced. As a part of this maintenance, developers also often formulate ad hoc queries on the fly and attempt to locate reusable code from the Internet that could assist them either in bug fixing or in feature implementation. Unfortunately, even experienced developers often fail to construct the right search queries. Even when developers come up with a few ad hoc queries, most of them require frequent modifications, which costs significant development time and effort. Thus, constructing an appropriate query for localizing software bugs, programming concepts, or even reusable code is a major challenge. In this thesis, we address this query construction challenge with six studies and develop a novel, effective code search solution (BugDoctor) that assists developers in localizing the software code of interest (e.g., bugs, concepts, and reusable code) during software maintenance. In particular, we reformulate a given search query (1) by designing novel keyword selection algorithms (e.g., CodeRank) that outperform traditional alternatives (e.g., TF-IDF), (2) by leveraging the bug report quality paradigm and source document structures, which were previously overlooked, and (3) by exploiting crowd knowledge and word semantics derived from the Stack Overflow Q&A site, which were previously untapped.
Our experiment using 5000+ search queries (bug reports, change requests, and ad hoc queries) suggests that our proposed approach can improve the given queries significantly through automated query reformulations. Comparison with 10+ existing studies on bug localization, concept location, and Internet-scale code search suggests that our approach can outperform the state-of-the-art approaches by a significant margin.
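The abstract does not spell out CodeRank's formulation, but graph-based keyword selection can be conveyed with a simplified TextRank-style scoring over a term co-occurrence graph; everything below, including the sample report, is illustrative rather than the thesis's actual algorithm.

```python
import re
from collections import defaultdict

def rank_keywords(text, window=2, damping=0.85, iters=30):
    """Score terms by a PageRank-style random walk over a co-occurrence
    graph (a simplified stand-in for graph-based keyword selection)."""
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    graph = defaultdict(set)
    for i, t in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            if tokens[j] != t:
                graph[t].add(tokens[j])
                graph[tokens[j]].add(t)
    scores = {t: 1.0 for t in graph}
    for _ in range(iters):
        scores = {
            t: (1 - damping) + damping * sum(
                scores[n] / len(graph[n]) for n in graph[t]
            )
            for t in graph
        }
    return sorted(scores, key=scores.get, reverse=True)

# A made-up change request; well-connected terms should rank highest.
report = ("null pointer exception when saving file "
          "saving large file triggers exception")
ranked = rank_keywords(report)
```

The top-ranked terms would then form the reformulated search query in place of the developer's ad hoc keyword choice.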
Changeset-based Retrieval of Source Code Artifacts for Bug Localization
Modern software development is extremely collaborative and agile, with unprecedented speed and scale of activity. Popular trends like continuous delivery and continuous deployment aim at building, fixing, and releasing software with greater speed and frequency. Bug localization, which aims to automatically localize bug reports to relevant software artifacts, has the potential to improve software developer efficiency by reducing the time spent on debugging and examining code. To date, this problem has been primarily addressed by applying information retrieval techniques based on static code elements, which are intrinsically unable to reflect how software evolves over time. Furthermore, as prior approaches frequently rely on exact term matching to measure relatedness between a bug report and a software artifact, they are prone to be affected by the lexical gap that exists between natural and programming language.
This thesis explores using software changes (i.e., changesets), instead of static code elements, as the primary data unit to construct an information retrieval model for bug localization. Changesets, which represent the differences between two consecutive versions of the source code, provide a natural representation of a software change and make it possible to capture both the semantics of the source code and the semantics of the code modification. To bridge the lexical gap between source code and natural language, this thesis investigates using topic modeling and deep learning architectures that enable creating semantically rich data representations, with the goal of identifying latent connections between bug reports and source code. To show the feasibility of the proposed approaches, this thesis also investigates practical aspects related to using a bug localization tool, such as retrieval delay and training data availability.
The results indicate that the proposed techniques effectively leverage historical data about bugs and their related source code components to improve retrieval accuracy, especially for bug reports that are expressed in natural language with little to no explicit code references. Further improvement in accuracy is observed when the size of the training dataset is increased through the data augmentation and data balancing strategies proposed in this thesis, although the magnitude of the improvement varies depending on the model architecture. In terms of retrieval delay, the results indicate that the proposed deep learning architecture significantly outperforms prior work and scales up with respect to search space size.
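As a toy illustration of the changeset-as-document idea (not the thesis's topic-model or deep learning architectures), one can rank changesets by lexical overlap with a bug report; the identifiers and texts below are made up.

```python
import re

def rank_changesets(bug_report, changesets):
    """Rank changesets (commit message + diff text) by Jaccard overlap
    with the bug report's vocabulary -- a crude lexical baseline."""
    query = set(re.findall(r"[a-z]+", bug_report.lower()))
    scored = []
    for cs_id, text in changesets.items():
        terms = set(re.findall(r"[a-z]+", text.lower()))
        union = query | terms
        score = len(query & terms) / len(union) if union else 0.0
        scored.append((score, cs_id))
    return [cs_id for _, cs_id in sorted(scored, reverse=True)]

# Hypothetical changesets indexed by ID.
changesets = {
    "c1": "fix null pointer in parser when input is empty",
    "c2": "update readme and contribution docs",
}
ranked = rank_changesets("parser crashes with null pointer on empty input",
                         changesets)
```

The thesis's contribution is precisely to replace this kind of exact term matching, which suffers from the lexical gap, with semantically richer representations.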
A Systematic Review of Automated Query Reformulations in Source Code Search
Fixing software bugs and adding new features are two of the major maintenance
tasks. Software bugs and features are reported as change requests. Developers
consult these requests and often choose a few keywords from them as an ad hoc
query. Then they execute the query with a search engine to find the exact
locations within software code that need to be changed. Unfortunately, even
experienced developers often fail to choose appropriate queries, which leads to
costly trial and error during code search. Over the years, many studies have
attempted to reformulate the ad hoc queries from developers to support them. In
this systematic literature review, we carefully select 70 primary studies on
query reformulations from 2,970 candidate studies, perform an in-depth
qualitative analysis (e.g., Grounded Theory), and then answer seven research
questions with major findings. First, to date, eight major methodologies (e.g.,
term weighting, term co-occurrence analysis, thesaurus lookup) have been
adopted to reformulate queries. Second, the existing studies suffer from
several major limitations (e.g., lack of generalizability, vocabulary mismatch
problem, subjective bias) that might prevent their wide adoption. Finally, we
discuss the best practices and future opportunities to advance the state of
research in search query reformulations.
Cupid: Leveraging ChatGPT for More Accurate Duplicate Bug Report Detection
Duplicate bug report detection (DBRD) is a long-standing challenge in both
academia and industry. Over the past decades, researchers have proposed various
approaches to detect duplicate bug reports more accurately. With the recent
advancement of deep learning, researchers have also proposed several approaches
that leverage deep learning models to detect duplicate bug reports. A recent
benchmarking study on DBRD also reveals that the performance of deep
learning-based approaches is not always better than the traditional approaches.
However, traditional approaches have limitations, e.g., they are usually based
on the bag-of-words model, which cannot capture the semantics of bug reports.
To address these challenges, we seek to leverage a state-of-the-art large
language model to improve the performance of the traditional DBRD approach.
In this paper, we propose an approach called Cupid, which combines the
best-performing traditional DBRD approach REP with the state-of-the-art large
language model ChatGPT. Specifically, we first leverage ChatGPT under the
zero-shot setting to get essential information on bug reports. We then use the
essential information as the input of REP to detect duplicate bug reports. We
conducted an evaluation comparing Cupid with three existing approaches on
three datasets. The experimental results show that Cupid achieves new
state-of-the-art results, reaching Recall Rate@10 scores ranging from 0.59 to
0.67 across all the datasets analyzed. Our work highlights the potential of
combining large language models with traditional approaches to improve the
performance of software engineering tasks.
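A rough sketch of the two-stage idea follows, with the ChatGPT step replaced by a trivial keyword filter and REP replaced by Jaccard overlap; both substitutions are loud simplifications (the real system prompts an LLM zero-shot and uses REP's learned similarity), and all data is invented.

```python
def extract_essential(report):
    """Placeholder for Cupid's ChatGPT step: keep only sentences that look
    informative (here, naively, ones mentioning an error or a step)."""
    keep = ("error", "crash", "steps", "expected", "exception")
    return " ".join(s for s in report.split(". ")
                    if any(k in s.lower() for k in keep))

def duplicate_score(a, b):
    """Stand-in for REP: Jaccard overlap of two report texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / (len(ta | tb) or 1)

# Two hypothetical reports describing the same bug, padded with pleasantries.
r1 = ("Hi team hope you are well. App crash when uploading photo. "
      "Thanks a lot for your help")
r2 = ("Hello everyone greetings. Crash happens when uploading photo. "
      "Best regards and many thanks")
raw = duplicate_score(r1, r2)
distilled = duplicate_score(extract_essential(r1), extract_essential(r2))
```

The point of the sketch is that distilling reports to their essential information before matching raises the similarity of true duplicates, which is the intuition behind feeding LLM-extracted content into the traditional detector.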
Intelligent Software Tooling For Improving Software Development
Software has eaten the world: many of the necessities and quality-of-life services people use require software. Therefore, tools that improve the software development experience, such as generating code and test cases, detecting bugs, or question answering, can have a significant impact on the world. The success of Deep Learning (DL) over the past decade has shown huge advancements in automation across many domains, including software development processes. One of the main reasons behind this success is the availability of large datasets to train on, such as open-source code available through GitHub or image datasets of mobile Graphical User Interfaces (GUIs) like RICO and ReDRAW. Therefore, the central research question my dissertation explores is: In what ways can the software development process be improved through leveraging DL techniques on the vast amounts of unstructured software engineering artifacts? We coin the approaches that leverage DL to automate or augment various software development tasks as Intelligent Software Tools. To guide our research of these intelligent software tools, we performed a systematic literature review to understand the current landscape of research on applying DL techniques to software tasks and any gaps that exist. From this literature review, we found code generation to be one of the most studied tasks, with other tasks and artifacts, such as impact analysis or tasks involving images and videos, being understudied. Therefore, we set out to explore the application of DL to these understudied tasks and artifacts, as well as the limitations of DL models under the well-studied task of code completion, a subfield of code generation. Specifically, we developed a tool for automatically detecting duplicate mobile bug reports from user-submitted videos. We used the popular Convolutional Neural Network (CNN) to learn important features from a large collection of mobile screenshots.
Using this model, we could then compute the similarity between a newly submitted bug report and existing ones to produce a ranked list of duplicate candidates that can be reviewed by a developer. Next, we explored impact analysis, a critical software maintenance task that identifies potential adverse effects of a given code change on the larger software system. To this end, we created Athena, a novel approach to impact analysis that integrates knowledge of a software system through its call graph along with high-level representations of the code inside the system to improve impact analysis performance. Lastly, we explored the task of code completion, which has seen heavy interest from industry and academia. Specifically, we explored various methods that modify the positional encoding scheme of the Transformer architecture to allow these models to incorporate longer sequences of tokens when predicting completions than were seen during their training, as this can significantly improve training times.
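The abstract does not name the specific positional encoding variants explored; for orientation, here is the standard sinusoidal encoding such variants modify. Its key property, which the length-extrapolation experiments build on, is that it is a fixed function of the position index and so is defined for positions longer than any sequence seen during training.

```python
import math

def sinusoidal_encoding(position, d_model):
    """Classic Transformer sinusoidal positional encoding for one position:
    interleaved sin/cos at geometrically spaced frequencies."""
    enc = []
    for i in range(d_model // 2):
        angle = position / (10000 ** (2 * i / d_model))
        enc.extend([math.sin(angle), math.cos(angle)])
    return enc
```

Because the function accepts any integer position, a model trained on sequences of, say, 512 tokens can still be handed an encoding for position 5000 at inference time; learned absolute embeddings lack this property, which motivates the modified schemes.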
Do Pre-trained Language Models Indeed Understand Software Engineering Tasks?
Artificial intelligence (AI) for software engineering (SE) tasks has recently
achieved promising performance. In this paper, we investigate to what extent
pre-trained language models truly understand those SE tasks, such as code
search, code summarization, etc. We conduct a comprehensive empirical study on
a broad set of AI for SE (AI4SE) tasks by feeding them with variant inputs: 1)
with various masking rates and 2) with the sufficient input subset method. Then,
the trained models are evaluated on different SE tasks, including code search,
code summarization, and duplicate bug report detection. Our experimental
results show that pre-trained language models are insensitive to the given
input, thus they achieve similar performance in these three SE tasks. We refer
to this phenomenon as overinterpretation, where a model confidently makes a
decision without salient features, or where a model finds some irrelevant
relationships between the final decision and the dataset. Our study
investigates two approaches to mitigate the overinterpretation phenomenon:
whole word mask strategy and ensembling. To the best of our knowledge, we are
the first to reveal this overinterpretation phenomenon to the AI4SE community,
which is an important reminder for researchers to design the input for the
models and calls for necessary future work in understanding and implementing
AI4SE tasks.
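The varied-masking-rate probe can be sketched as follows; this is a hypothetical reconstruction, and the paper's exact masking procedure (token inventory, sampling scheme) may differ.

```python
import random

def mask_tokens(tokens, rate, mask="[MASK]", seed=0):
    """Replace a fixed fraction of input tokens with a mask symbol,
    so a model's output under increasing rates can be compared."""
    rng = random.Random(seed)  # fixed seed for reproducible probes
    n = round(len(tokens) * rate)
    idx = set(rng.sample(range(len(tokens)), n))
    return [mask if i in idx else t for i, t in enumerate(tokens)]

# A made-up code snippet tokenized by whitespace.
toks = "public static int add int a int b".split()
masked = mask_tokens(toks, 0.5)
```

If a model's predictions on a task barely change as the rate grows, the input is not actually driving the decision, which is the overinterpretation signal the study reports.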