NCBO Ontology Recommender 2.0: An Enhanced Approach for Biomedical Ontology Recommendation
Biomedical researchers use ontologies to annotate their data with ontology
terms, enabling better data integration and interoperability. However, the
number, variety and complexity of current biomedical ontologies make it
cumbersome for researchers to determine which ones to reuse for their specific
needs. To overcome this problem, in 2010 the National Center for Biomedical
Ontology (NCBO) released the Ontology Recommender, which is a service that
receives a biomedical text corpus or a list of keywords and suggests ontologies
appropriate for referencing the indicated terms. We developed a new version of
the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a new
recommendation approach that evaluates the relevance of an ontology to
biomedical text data according to four criteria: (1) the extent to which the
ontology covers the input data; (2) the acceptance of the ontology in the
biomedical community; (3) the level of detail of the ontology classes that
cover the input data; and (4) the specialization of the ontology to the domain
of the input data. Our evaluation shows that the enhanced recommender provides
higher-quality suggestions than the original approach: better coverage of the
input data, more detailed information about its concepts, increased
specialization for the domain of the input data, and greater acceptance and
use in the community. In addition, it provides users with more
explanatory information, along with suggestions of not only individual
ontologies but also groups of ontologies. It can also be customized to fit the
needs of different scenarios. Ontology Recommender 2.0 combines the strengths
of its predecessor with a range of adjustments and new features that improve
its reliability and usefulness. Ontology Recommender 2.0 recommends over 500
biomedical ontologies from the NCBO BioPortal platform, where it is openly
available.
Comment: 29 pages, 8 figures, 11 tables
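The abstract lists the four criteria but not how they are combined. As a
minimal sketch, assuming a simple weighted sum (the weights, score ranges, and
function name below are illustrative assumptions, not the Recommender's
published formula):

```python
# Hypothetical aggregation of the four evaluation criteria into a single
# ontology score. Weights are illustrative assumptions; each criterion
# score is assumed to be normalized to [0, 1].

def recommend_score(coverage, acceptance, detail, specialization,
                    weights=(0.55, 0.15, 0.15, 0.15)):
    """Weighted sum of the four criterion scores."""
    w_cov, w_acc, w_det, w_spec = weights
    return (w_cov * coverage + w_acc * acceptance
            + w_det * detail + w_spec * specialization)

# Example: an ontology with high coverage and community acceptance but
# moderate detail and domain specialization.
print(recommend_score(coverage=0.9, acceptance=0.8,
                      detail=0.6, specialization=0.5))  # ~0.78
```

Coverage is weighted heaviest here only as one plausible reading of the
abstract's emphasis on covering the input data; the customization feature the
abstract mentions suggests such weights would be user-adjustable in practice.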
Improving Code Example Recommendations on Informal Documentation Using BERT and Query-Aware LSH: A Comparative Study
Our research investigates the recommendation of code examples to aid software
developers, a practice that saves developers significant time by providing
ready-to-use code snippets. The focus of our study is Stack Overflow, a
commonly used resource for coding discussions and solutions, particularly in
the context of the Java programming language.
We applied BERT, a powerful Large Language Model (LLM) that enables us to
transform code examples into numerical vectors by extracting their semantic
information. Once these numerical representations are prepared, we identify
Approximate Nearest Neighbors (ANN) using Locality-Sensitive Hashing (LSH). Our
research employed two variants of LSH: Random Hyperplane-based LSH and
Query-Aware LSH. We rigorously compared these two approaches across four
parameters: HitRate, Mean Reciprocal Rank (MRR), Average Execution Time, and
Relevance.
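The abstract names the two LSH variants without implementation detail. Below
is a minimal sketch of the random hyperplane-based variant, assuming
pre-computed embedding vectors (e.g., from a BERT encoder); the dimensions,
bit count, and random stand-in corpus are illustrative assumptions, not the
study's settings:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(42)
DIM, N_BITS = 768, 8                 # embedding size, bits per hash key

# Each random hyperplane contributes one bit of the bucket key.
hyperplanes = rng.standard_normal((N_BITS, DIM))

def lsh_signature(vec):
    """Hash a vector by the sign of its projection onto each hyperplane."""
    bits = (hyperplanes @ vec) > 0
    return sum(1 << i for i, b in enumerate(bits) if b)

# Index a stand-in corpus of "code example" embeddings into buckets.
corpus = rng.standard_normal((1000, DIM))
buckets = defaultdict(list)
for idx, emb in enumerate(corpus):
    buckets[lsh_signature(emb)].append(idx)

def query(vec, k=5):
    """Approximate nearest neighbors: fetch the query's bucket, then
    re-rank its members exactly by cosine similarity."""
    candidates = buckets.get(lsh_signature(vec), [])
    sims = [(i, float(corpus[i] @ vec)
                / (np.linalg.norm(corpus[i]) * np.linalg.norm(vec)))
            for i in candidates]
    return sorted(sims, key=lambda t: -t[1])[:k]

print(query(corpus[0]))   # the example itself ranks first with sim ~1.0
```

The sketch covers only the RH variant; as I understand the query-aware LSH
literature, the Query-Aware variant instead anchors bucket boundaries on the
query's own projection at lookup time rather than fixing random partitions in
advance, which is what the abstract credits for the faster table construction.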
Our study revealed that the Query-Aware (QA) approach showed superior
performance over the Random Hyperplane-based (RH) method. Specifically, it
exhibited a notable improvement of 20% to 35% in HitRate for query pairs
compared to the RH approach. Furthermore, the QA approach proved significantly
more time-efficient: it creates hash tables and assigns data samples to
buckets at least four times faster. It can return code
examples within milliseconds, whereas the RH approach typically requires
several seconds to recommend code examples. Due to the superior performance of
the QA approach, we tested it against PostFinder and FaCoY, the
state-of-the-art baselines. Our QA method showed comparable efficiency,
proving its potential for effective code recommendation.
On Using Machine Learning to Identify Knowledge in API Reference Documentation
Using API reference documentation like JavaDoc is an integral part of
software development. Previous research introduced a grounded taxonomy that
organizes API documentation knowledge into 12 types, including knowledge about
the Functionality, Structure, and Quality of an API. We study how well modern
text classification approaches can automatically identify documentation
containing specific knowledge types. We compared conventional machine learning
(k-NN and SVM) and deep learning approaches trained on manually annotated Java
and .NET API documentation (n = 5,574). When classifying the knowledge types
individually (i.e., with multiple binary classifiers), the best AUPRC reached 87%.
The deep learning and SVM classifiers seem complementary. For four knowledge
types (Concept, Control, Pattern, and Non-Information), SVM clearly outperforms
deep learning which, on the other hand, is more accurate for identifying the
remaining types. When considering multiple knowledge types at once (i.e.,
multi-label classification), deep learning outperforms naïve baselines and
traditional machine learning, achieving a MacroAUC of up to 79%. We also compared
classifiers using embeddings pre-trained on generic text corpora and
StackOverflow but did not observe significant improvements. Finally, to assess
the generalizability of the classifiers, we re-tested them on a different,
unseen Python documentation dataset. Classifiers for Functionality, Concept,
Purpose, Pattern, and Directive seem to generalize from Java and .NET to Python
documentation. The accuracy related to the remaining types seems API-specific.
We discuss our results and how they inform the development of tools for
supporting developers sharing and accessing API knowledge. Published article:
https://doi.org/10.1145/3338906.333894
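The abstract reports AUPRC for per-type binary classifiers without showing the
setup. Below is a minimal sketch of one such binary classifier, assuming an
SVM over TF-IDF features in scikit-learn; the toy sentences, labels, and
feature choice are illustrative assumptions, not the study's pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in for the annotated corpus: documentation sentences with a
# binary label for one knowledge type (here, Functionality). The real
# study used 5,574 manually annotated Java and .NET documentation units.
docs = [
    "Returns the number of elements in this list.",
    "This class is thread-safe.",
    "See also the Collections utility class.",
    "Appends the specified element to the end of this list.",
] * 25
labels = [1, 0, 0, 1] * 25          # 1 = contains Functionality knowledge

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.25, random_state=0, stratify=labels)

# One binary classifier per knowledge type; shown for a single type.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(X_train, y_train)

# AUPRC (average precision) from the SVM's decision scores, the metric
# the paper reports for the binary setting.
scores = clf.decision_function(X_test)
print("AUPRC:", average_precision_score(y_test, scores))
```

The multi-label setting the abstract mentions would instead train one such
classifier per knowledge type (or a single multi-output model) and report
MacroAUC across types.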
A Decade of Code Comment Quality Assessment: A Systematic Literature Review
Code comments are important artifacts in software systems and play a
paramount role in many software engineering (SE) tasks related to maintenance
and program comprehension. However, while it is widely accepted that high
quality matters in code comments just as it matters in source code, assessing
comment quality in practice is still an open problem. First and foremost, there
is no unique definition of quality when it comes to evaluating code comments.
The few existing studies on this topic rather focus on specific attributes of
quality that can be easily quantified and measured. Existing techniques and
corresponding tools may also focus on comments bound to a specific programming
language, and may only deal with comments with specific scopes and clear goals
(e.g., Javadoc comments at the method level, or in-body comments describing
TODOs to be addressed). In this paper, we present a Systematic Literature
Review (SLR) of the last decade of research in SE to answer the following
research questions: (i) What types of comments do researchers focus on when
assessing comment quality? (ii) What quality attributes (QAs) do they consider?
(iii) Which tools and techniques do they use to assess comment quality? and
(iv) How do they evaluate their studies on comment quality assessment in
general? Our evaluation, based on the analysis of 2353 papers and the actual
review of 47 relevant ones, shows that (i) most studies and techniques focus on
comments in Java code, thus may not be generalizable to other languages, and
(ii) the analyzed studies focus on four main QAs out of a total of 21 QAs
identified in the literature, with a clear predominance of checking consistency
between comments and the code. We observe that researchers rely on manual
assessment and specific heuristics rather than the automated assessment of the
comment quality attributes.
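Consistency checking, the predominant QA above, is typically operationalized
with heuristics. As a minimal sketch, here is one hypothetical heuristic that
flags a comment sharing no vocabulary with the words in a method's name; the
tokenization and decision rule are illustrative assumptions, not a technique
from a surveyed paper:

```python
import re

def identifier_words(name):
    """Split a camelCase or snake_case identifier into lowercase words."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return {w.lower() for w in spaced.split() if w}

def likely_inconsistent(comment, method_name):
    """Flag the comment if it shares no words with the method name,
    a crude lexical proxy for comment-code inconsistency."""
    comment_words = set(re.findall(r"[a-z]+", comment.lower()))
    return not (identifier_words(method_name) & comment_words)

print(likely_inconsistent("Removes the given key.", "addEntry"))  # True
print(likely_inconsistent("Adds a new entry.", "addEntry"))       # False
```

Techniques surveyed in such reviews go well beyond lexical overlap; this
sketch only illustrates the general shape of a consistency heuristic.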