Boosting API Recommendation with Implicit Feedback
Developers often need to use appropriate APIs to program efficiently, but it
is usually difficult to identify the exact one they need from a vast number of
candidates. To ease this burden, a multitude of API recommendation approaches
have been proposed. However, most of the currently available API recommenders
do not support the effective integration of users' feedback into the
recommendation loop. In this paper, we propose a framework, BRAID (Boosting
RecommendAtion with Implicit FeeDback), which leverages learning-to-rank and
active learning techniques to boost recommendation performance. By exploiting
users' feedback information, we train a learning-to-rank model to re-rank the
recommendation results. In addition, we speed up the feedback learning process
with active learning. Existing query-based API recommendation approaches can be
plugged into BRAID. We select three state-of-the-art API recommendation
approaches as baselines to demonstrate the performance enhancement of BRAID
measured by Hit@k (Top-k), MAP, and MRR. Empirical experiments show that, with
acceptable overheads, the recommendation performance improves steadily and
substantially as the percentage of feedback data increases, compared with
the baselines.
Comment: 15 pages, 4 figures
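The Hit@k, MAP, and MRR metrics used to evaluate the recommenders above can be sketched as follows. This is a minimal illustration with toy data; the function names and the example API lists are illustrative, not taken from the paper:

```python
# Hedged sketch of the ranking metrics named in the abstract, computed over
# one query's ranked list of candidate APIs. The data below are toy examples.

def hit_at_k(ranked, relevant, k):
    """1 if any relevant item appears in the top-k results, else 0."""
    return int(any(item in relevant for item in ranked[:k]))

def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant item occurs."""
    hits, precisions = 0, []
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / len(relevant) if relevant else 0.0

def reciprocal_rank(ranked, relevant):
    """1 / rank of the first relevant item, or 0 if none is retrieved."""
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

# One query: a ranked candidate list and its ground-truth set (illustrative).
ranked = ["List.sort", "Collections.sort", "Arrays.sort"]
relevant = {"Collections.sort"}

print(hit_at_k(ranked, relevant, 1))        # 0: the top-1 result misses
print(hit_at_k(ranked, relevant, 3))        # 1: found within the top-3
print(reciprocal_rank(ranked, relevant))    # 0.5: first hit at rank 2
print(average_precision(ranked, relevant))  # 0.5
```

MAP and MRR as reported in such papers are these per-query values averaged over the whole query set.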
Supporting Source Code Search with Context-Aware and Semantics-Driven Query Reformulation
Software bugs and failures cost trillions of dollars every year, and can even lead to deadly accidents (e.g., the Therac-25 accident). During maintenance, software developers fix numerous bugs and implement hundreds of new features by making necessary changes to the existing software code. Once an issue report (e.g., a bug report or change request) is assigned to a developer, she chooses a few important keywords from the report as a search query, and then attempts to find the exact locations in the software code that need to be either repaired or enhanced. As part of this maintenance, developers also often construct ad hoc queries on the fly and attempt to locate reusable code on the Internet that could assist them either in bug fixing or in feature implementation. Unfortunately, even experienced developers often fail to construct the right search queries. Even if developers come up with a few ad hoc queries, most of these queries require frequent modifications, which cost significant development time and effort. Thus, constructing an appropriate query for localizing software bugs, programming concepts, or even reusable code is a major challenge. In this thesis, we address this query construction challenge with six studies, and develop a novel, effective code search solution (BugDoctor) that assists developers in localizing the software code of interest (e.g., bugs, concepts, and reusable code) during software maintenance. In particular, we reformulate a given search query (1) by designing novel keyword selection algorithms (e.g., CodeRank) that outperform traditional alternatives (e.g., TF-IDF), (2) by leveraging the bug report quality paradigm and source document structures, which were previously overlooked, and (3) by exploiting the crowd knowledge and word semantics derived from the Stack Overflow Q&A site, which were previously untapped.
Our experiments using 5000+ search queries (bug reports, change requests, and ad hoc queries) suggest that our proposed approach can improve the given queries significantly through automated query reformulation. Comparison with 10+ existing studies on bug localization, concept location, and Internet-scale code search suggests that our approach can outperform the state-of-the-art approaches by a significant margin.
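As a rough illustration of the TF-IDF baseline against which keyword selection algorithms like CodeRank are compared, the following sketch ranks the terms of a report by TF-IDF against a background corpus. The smoothed-IDF variant, the function name, and the toy documents are assumptions for illustration, not the thesis's implementation:

```python
# Hedged sketch: TF-IDF keyword selection from a bug report, the traditional
# baseline mentioned in the abstract. All data here are illustrative toys.
import math
from collections import Counter

def tfidf_keywords(report_terms, corpus, top_n=3):
    """Return the top_n terms of a report ranked by TF-IDF against a corpus."""
    n_docs = len(corpus)
    tf = Counter(report_terms)  # term frequency within the report
    def idf(term):
        df = sum(1 for doc in corpus if term in doc)  # document frequency
        return math.log((n_docs + 1) / (df + 1)) + 1  # smoothed IDF (assumed)
    scored = {t: tf[t] * idf(t) for t in tf}
    return [t for t, _ in sorted(scored.items(), key=lambda kv: -kv[1])[:top_n]]

# Toy bug report and corpus of project documents, as token lists.
report = ["null", "pointer", "in", "parser", "parser", "crash"]
corpus = [["parser", "grammar"], ["in", "the", "build"], ["ui", "crash", "in"]]
print(tfidf_keywords(report, corpus))  # ['parser', 'null', 'pointer']
```

Rare, report-specific terms ("parser", "null", "pointer") outrank common ones ("in"), which is the behavior a reformulated search query wants; graph-based selectors such as CodeRank aim to improve on exactly this baseline.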