DeepSoft: A vision for a deep model of software
Although software analytics has experienced rapid growth as a research area,
it has not yet reached its full potential for wide industrial adoption. Most
existing work in software analytics still relies heavily on costly manual
feature engineering, and it mainly addresses traditional classification
problems rather than predicting future events. We present a
vision for \emph{DeepSoft}, an \emph{end-to-end} generic framework for modeling
software and its development process to predict future risks and recommend
interventions. DeepSoft, partly inspired by human memory, is built upon the
deep learning-based Long Short-Term Memory (LSTM) architecture, which is
capable of learning the long-term temporal dependencies that occur in software
evolution. Such deep learned patterns of software can be used to address a
range of challenging problems such as code and task recommendation and
prediction. DeepSoft provides a new approach for research into modeling of
source code, risk prediction and mitigation, developer modeling, and
automatically generating code patches from bug reports.
Comment: FSE 201
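To make the LSTM idea concrete, the following is a minimal sketch, in
PyTorch, of a recurrent model that consumes a sequence of discretized
software events (e.g., commits and issue updates) and emits a single risk
score. This illustrates the technique only, not DeepSoft's actual
architecture; the vocabulary size, dimensions, and prediction head are all
assumptions.

```python
import torch
import torch.nn as nn

class SoftwareEventLSTM(nn.Module):
    """Illustrative LSTM over software-event sequences (assumed design)."""
    def __init__(self, n_event_types=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(n_event_types, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # e.g., risk of a future delay

    def forward(self, event_ids):
        # event_ids: (batch, seq_len) integer codes for software events
        h, _ = self.lstm(self.embed(event_ids))
        return torch.sigmoid(self.head(h[:, -1]))  # score from final state

# Usage: score a batch of two event sequences of length 20
scores = SoftwareEventLSTM()(torch.randint(0, 1000, (2, 20)))
```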
Large Language Models for Software Engineering: A Systematic Literature Review
Large Language Models (LLMs) have significantly impacted numerous domains,
notably including Software Engineering (SE). Nevertheless, a well-rounded
understanding of the application, effects, and possible limitations of LLMs
within SE is still in its early stages. To bridge this gap, our systematic
literature review takes a deep dive into the intersection of LLMs and SE, with
a particular focus on understanding how LLMs can be exploited in SE to optimize
processes and outcomes. Through a comprehensive review approach, we collect and
analyze a total of 229 research papers from 2017 to 2023 to answer four key
research questions (RQs). In RQ1, we categorize and provide a comparative
analysis of different LLMs that have been employed in SE tasks, laying out
their distinctive features and uses. For RQ2, we detail the methods involved in
data collection, preprocessing, and application in this realm, shedding light
on the critical role of robust, well-curated datasets for successful LLM
implementation. RQ3 allows us to examine the specific SE tasks where LLMs have
shown remarkable success, illuminating their practical contributions to the
field. Finally, RQ4 investigates the strategies employed to optimize and
evaluate the performance of LLMs in SE, as well as the common techniques
related to prompt optimization. Armed with insights drawn from addressing the
aforementioned RQs, we sketch a picture of the current state-of-the-art,
pinpointing trends, identifying gaps in existing research, and flagging
promising areas for future study.
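As a purely illustrative aside on the prompt-optimization techniques that
RQ4 surveys, the sketch below scores two candidate prompt templates for an
SE classification task on a small labeled set. The task, the templates, and
the `query_llm` helper are all hypothetical; any LLM client could be
plugged in.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")  # hypothetical

PROMPTS = [
    "Does this code contain a bug? Answer yes or no.\n{code}",
    "You are a careful code reviewer. Answer strictly 'yes' or 'no': "
    "does the following code contain a bug?\n{code}",
]

def accuracy(template: str, samples: list[tuple[str, str]]) -> float:
    # samples: (code snippet, gold label "yes"/"no") pairs
    hits = sum(
        query_llm(template.format(code=code)).strip().lower().startswith(label)
        for code, label in samples
    )
    return hits / len(samples)

# Keep whichever template scores best on a held-out validation set:
# best = max(PROMPTS, key=lambda t: accuracy(t, validation_samples))
```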
Cupid: Leveraging ChatGPT for More Accurate Duplicate Bug Report Detection
Duplicate bug report detection (DBRD) is a long-standing challenge in both
academia and industry. Over the past decades, researchers have proposed various
approaches to detect duplicate bug reports more accurately. With the recent
advancement of deep learning, researchers have also proposed several approaches
that leverage deep learning models to detect duplicate bug reports. A recent
benchmarking study on DBRD also reveals that deep learning-based approaches
do not always outperform traditional approaches.
However, traditional approaches have limitations, e.g., they are usually based
on the bag-of-words model, which cannot capture the semantics of bug reports.
To address these challenges, we seek to leverage a state-of-the-art large
language model to improve the performance of a traditional DBRD approach.
In this paper, we propose an approach called Cupid, which combines the
best-performing traditional DBRD approach REP with the state-of-the-art large
language model ChatGPT. Specifically, we first leverage ChatGPT in the
zero-shot setting to extract the essential information from bug reports. We
then use this essential information as the input of REP to detect duplicate
bug reports.
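To make the two-stage idea concrete, here is a minimal sketch of such a
pipeline, not the authors' code: an LLM is prompted zero-shot to distill a
bug report, and the distilled text is handed to a traditional
retrieval-based DBRD backend. The model name, prompt wording, and the
`rep_rank` helper standing in for REP are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def distill(report_text: str) -> str:
    # Zero-shot prompt asking the LLM for the essential information
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; any chat model would do
        messages=[{
            "role": "user",
            "content": "Extract the essential information (summary and key "
                       "symptoms) from this bug report:\n" + report_text,
        }],
    )
    return resp.choices[0].message.content

def rep_rank(query: str, candidates: list[str]) -> list[str]:
    """Hypothetical stand-in for the REP textual-similarity ranker."""
    raise NotImplementedError

# Pipeline: distill the incoming report, then rank existing reports.
# top10 = rep_rank(distill(new_report), existing_reports)[:10]
```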
We evaluated Cupid against three existing approaches on three datasets. The
experimental results show that Cupid achieves new
state-of-the-art results, reaching Recall Rate@10 scores ranging from 0.59 to
0.67 across all the datasets analyzed. Our work highlights the potential of
combining large language models with traditional approaches to improve the
performance of software engineering tasks.
Comment: Work in progress
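For reference, Recall Rate@10 is the fraction of duplicate queries whose
true master report appears among the top 10 ranked candidates; a minimal
sketch of the metric follows, with the data layout an assumption for
illustration.

```python
def recall_rate_at_k(ranked_lists, ground_truth, k=10):
    # ranked_lists[i]: candidate report ids ranked for query i
    # ground_truth[i]: the id of query i's true master report
    hits = sum(gt in ranked[:k]
               for ranked, gt in zip(ranked_lists, ground_truth))
    return hits / len(ranked_lists)

# Example: 2 of 3 queries rank their master within the top 10 -> 0.67
print(round(recall_rate_at_k([[1, 2], [3], [9]], [2, 7, 9]), 2))
```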