Open Information Extraction: A Review of Baseline Techniques, Approaches, and Applications
With the abundance of available online and offline text data, there arises a crucial need to extract the relations between phrases and summarize the main content of each document in a few words. For this purpose, there have been many recent studies on Open Information Extraction (OIE). OIE improves upon relation extraction techniques by analyzing relations across different domains and avoiding the need to hand-label pre-specified relations in sentences. This paper surveys recent approaches to OIE and its applications to Knowledge Graphs (KG), text summarization, and Question Answering (QA). Moreover, the paper describes the foundational OIE methods in relation extraction. It briefly discusses the main approaches and the pros and cons of each method. Finally, it gives an overview of the challenges, open issues, and future work opportunities for OIE, relation extraction, and OIE applications. Comment: 15 pages, 9 figures
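To make the survey's notion of open relation extraction concrete, here is a minimal, pattern-based sketch of the kind of (subject, relation, object) triples OIE systems emit; the regex and example sentences are invented for illustration and are not taken from any surveyed system.

```python
# Toy illustration of the kind of output OIE systems produce: (subject,
# relation, object) triples extracted without a pre-specified relation
# inventory. Real OIE systems rely on syntactic analysis; this pattern-based
# sketch only handles simple "<subject> <verb phrase> <object>" sentences.
import re

# Hypothetical verb-phrase pattern: an optional auxiliary, a past-tense verb,
# and an optional trailing preposition ("was founded in", "opened in", ...).
VERB_PHRASE = re.compile(
    r"\b(?P<rel>(?:was|is|were|are|has|have)?\s*\w+ed(?:\s+(?:in|by|for|at))?)\b"
)

def extract_triples(sentence: str):
    """Return a list of (subject, relation, object) triples for one sentence."""
    match = VERB_PHRASE.search(sentence)
    if not match:
        return []
    subject = sentence[: match.start()].strip(" ,.")
    obj = sentence[match.end():].strip(" ,.")
    relation = " ".join(match.group("rel").split())
    return [(subject, relation, obj)] if subject and obj else []

if __name__ == "__main__":
    for s in ["Apple was founded in Cupertino.", "The museum opened in 1998."]:
        print(extract_triples(s))
```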
Recommending Analogical APIs via Knowledge Graph Embedding
Library migration, which re-implements the same software behavior by using a
different library instead of using the current one, has been widely observed in
software evolution. One essential part of library migration is to find an
analogical API that could provide the same functionality as current ones.
However, given the large number of libraries/APIs, manually finding an
analogical API could be very time-consuming and error-prone. Researchers have
developed multiple automated analogical API recommendation techniques.
Documentation-based methods have particularly attracted significant interest.
Despite their potential, these methods have limitations, such as a lack of
comprehensive semantic understanding in documentation and scalability
challenges. In this work, we propose KGE4AR, a novel documentation-based
approach that leverages knowledge graph (KG) embedding to recommend analogical
APIs during library migration. Specifically, KGE4AR proposes a novel unified
API KG to comprehensively and structurally represent three types of knowledge
in documentation, which can better capture the high-level semantics. Moreover,
KGE4AR proposes to embed the unified API KG into vectors, enabling more
effective and scalable similarity calculation. We build KGE4AR's unified API
KG for 35,773 Java libraries and assess it in two API recommendation scenarios:
with and without target libraries. Our results show that KGE4AR substantially
outperforms state-of-the-art documentation-based techniques in both evaluation
scenarios in terms of all metrics (e.g., 47.1%-143.0% and 11.7%-80.6% MRR
improvements in each scenario). Additionally, we explore KGE4AR's scalability,
confirming that it scales effectively with a growing number of libraries. Comment: Accepted by FSE 202
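As a rough illustration of the pipeline this abstract sketches (documentation knowledge as triples, KG embedding, similarity-based recommendation), the following self-contained toy may help; the triples, the TransE-style update, and all hyperparameters are invented and do not reflect KGE4AR's actual KG schema or embedding model.

```python
# Rough, self-contained toy: documentation knowledge as (head, relation, tail)
# triples, entity/relation embeddings trained with a tiny TransE-style
# objective, and analogical API recommendation by vector similarity.
import numpy as np

triples = [  # (API or concept, relation, API or concept) -- made-up examples
    ("org.json.JSONObject.toString", "has_functionality", "serialize json"),
    ("com.google.gson.Gson.toJson", "has_functionality", "serialize json"),
    ("java.util.ArrayList.add", "has_functionality", "append element"),
]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
e_idx = {e: i for i, e in enumerate(entities)}
r_idx = {r: i for i, r in enumerate(relations)}

rng = np.random.default_rng(0)
dim = 16
E = rng.normal(size=(len(entities), dim))   # entity embeddings
R = rng.normal(size=(len(relations), dim))  # relation embeddings

# TransE-style fitting: pull h + r towards t (no negative sampling, for brevity).
lr = 0.05
for _ in range(200):
    for h, r, t in triples:
        hi, ri, ti = e_idx[h], r_idx[r], e_idx[t]
        diff = E[hi] + R[ri] - E[ti]   # gradient of 0.5 * ||h + r - t||^2
        E[hi] -= lr * diff
        R[ri] -= lr * diff
        E[ti] += lr * diff

def recommend_analogical(api: str, k: int = 2):
    """Rank other APIs (entities containing '.') by embedding cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    v = E[e_idx[api]]
    candidates = [e for e in entities if "." in e and e != api]
    return sorted(candidates, key=lambda c: -cos(E[e_idx[c]], v))[:k]

print(recommend_analogical("org.json.JSONObject.toString"))
```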
A Decade of Code Comment Quality Assessment: A Systematic Literature Review
Code comments are important artifacts in software systems and play a
paramount role in many software engineering (SE) tasks related to maintenance
and program comprehension. However, while it is widely accepted that high
quality matters in code comments just as it matters in source code, assessing
comment quality in practice is still an open problem. First and foremost, there
is no unique definition of quality when it comes to evaluating code comments.
The few existing studies on this topic rather focus on specific attributes of
quality that can be easily quantified and measured. Existing techniques and
corresponding tools may also focus on comments bound to a specific programming
language, and may only deal with comments with specific scopes and clear goals
(e.g., Javadoc comments at the method level, or in-body comments describing
TODOs to be addressed). In this paper, we present a Systematic Literature
Review (SLR) of the last decade of research in SE to answer the following
research questions: (i) What types of comments do researchers focus on when
assessing comment quality? (ii) What quality attributes (QAs) do they consider?
(iii) Which tools and techniques do they use to assess comment quality? and
(iv) How do they evaluate their studies on comment quality assessment in
general? Our evaluation, based on the analysis of 2353 papers and the actual
review of 47 relevant ones, shows that (i) most studies and techniques focus on
comments in Java code, and thus may not generalize to other languages, and
(ii) the analyzed studies focus on four main QAs out of the 21 QAs identified
in the literature, with a clear predominance of checking consistency between
comments and the code. We also observe that researchers rely on manual
assessment and specific heuristics rather than on automated assessment of
comment quality attributes.
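For readers unfamiliar with what such a heuristic looks like in practice, here is a generic toy sketch of a comment-code consistency check, not taken from any of the reviewed tools: it flags a comment whose vocabulary overlaps little with the identifiers of the code it documents.

```python
# Illustrative heuristic consistency check: compare the words of a comment
# against identifiers in the code it documents and flag low lexical overlap.
import re

def split_identifiers(code: str) -> set[str]:
    """Lower-cased word tokens, with camelCase and snake_case split apart."""
    tokens = re.findall(r"[A-Za-z]+", re.sub(r"([a-z])([A-Z])", r"\1 \2", code))
    return {t.lower() for t in tokens}

def comment_code_overlap(comment: str, code: str) -> float:
    comment_words = split_identifiers(comment)
    code_words = split_identifiers(code)
    if not comment_words:
        return 0.0
    return len(comment_words & code_words) / len(comment_words)

snippet = """
// Returns the user name in upper case.
String getUserName() { return firstName + " " + lastName; }
"""
comment, code = snippet.strip().split("\n", 1)
score = comment_code_overlap(comment, code)
print(f"overlap={score:.2f}", "-> possibly inconsistent" if score < 0.5 else "")
```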
Software Entity Recognition with Noise-Robust Learning
Recognizing software entities such as library names from free-form text is
essential to enable many software engineering (SE) technologies, such as
traceability link recovery, automated documentation, and API recommendation.
While many approaches have been proposed to address this problem, they suffer
from small entity vocabularies or noisy training data, hindering their ability
to recognize software entities mentioned in sophisticated narratives. To
address this challenge, we leverage the Wikipedia taxonomy to develop a
comprehensive entity lexicon with 79K unique software entities in 12
fine-grained types, as well as a large labeled dataset of over 1.7M sentences.
Then, we propose self-regularization, a noise-robust learning approach, for
training our software entity recognition (SER) model by accounting for many
dropouts. Results show that models trained with self-regularization outperform
both their vanilla counterparts and state-of-the-art approaches on our
Wikipedia benchmark and two Stack Overflow benchmarks. We release our models,
data, and code for future research. Comment: ASE 202
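A minimal sketch of the general idea behind dropout-based self-regularization, as we read the abstract: the same batch is passed through the model twice with dropout active, and disagreement between the two predictive distributions is penalized alongside the usual tagging loss. The model size, loss weights, and exact consistency term below are illustrative assumptions, not the paper's implementation.

```python
# Generic consistency-regularization sketch (R-Drop-like): two stochastic
# forward passes, cross-entropy on both, plus a symmetric KL penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyTagger(nn.Module):
    def __init__(self, vocab=1000, dim=64, num_tags=13):  # 12 entity types + O
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.drop = nn.Dropout(0.3)
        self.out = nn.Linear(dim, num_tags)

    def forward(self, token_ids):
        return self.out(self.drop(self.emb(token_ids)))  # (batch, seq, tags)

def self_regularized_loss(model, token_ids, tags, alpha=1.0):
    logits1 = model(token_ids)  # two passes: dropout masks differ
    logits2 = model(token_ids)
    ce = 0.5 * (F.cross_entropy(logits1.transpose(1, 2), tags)
                + F.cross_entropy(logits2.transpose(1, 2), tags))
    p1, p2 = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    consistency = 0.5 * (F.kl_div(p1, p2, log_target=True, reduction="batchmean")
                         + F.kl_div(p2, p1, log_target=True, reduction="batchmean"))
    return ce + alpha * consistency

model = TinyTagger()
tokens = torch.randint(0, 1000, (4, 16))   # toy batch of token ids
labels = torch.randint(0, 13, (4, 16))     # toy tag ids
loss = self_regularized_loss(model, tokens, labels)
loss.backward()
print(float(loss))
```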
Let's Discover More API Relations: A Large Language Model-based AI Chain for Unsupervised API Relation Inference
APIs have intricate relations that can be described in text and represented
as knowledge graphs to aid software engineering tasks. Existing relation
extraction methods have limitations, such as a limited API text corpus and
sensitivity to the characteristics of the input text. To address these limitations,
we propose utilizing large language models (LLMs) (e.g., GPT-3.5) as a neural
knowledge base for API relation inference. This approach leverages the entire
Web used to pre-train LLMs as a knowledge base and is insensitive to the
context and complexity of input texts. To ensure accurate inference, we design
our analytic flow as an AI Chain with three AI modules: API FQN Parser, API
Knowledge Extractor, and API Relation Decider. The accuracies of the API FQN
Parser and API Relation Decider modules are 0.81 and 0.83, respectively. Using
the generative capacity of the LLM and our approach's inference capability, we
achieve an average F1 value of 0.76 across the three datasets, significantly
higher than the state-of-the-art method's average F1 value of 0.40. Compared to
a CoT-based method, our AI Chain design improves inference reliability by
67%, and the AI-crowd-intelligence strategy enhances the robustness of our
approach by 26%.
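The three-module AI chain can be pictured as three chained prompts over a generic LLM hook. The sketch below uses a placeholder call_llm function and invented prompt wording; it does not reproduce the paper's actual prompts or its GPT-3.5 client code.

```python
# Sketch of a three-module AI chain; each module is a separate prompt.
def call_llm(prompt: str) -> str:
    """Placeholder: route the prompt to whatever LLM client you use."""
    raise NotImplementedError("plug in your LLM call here")

def parse_fqn(fqn: str) -> str:
    # Module 1: API FQN Parser - split a fully qualified name into its parts.
    return call_llm(
        f"Split the Java fully qualified name '{fqn}' into package, class, "
        f"and member, and state what kind of API element it is."
    )

def extract_knowledge(parsed: str) -> str:
    # Module 2: API Knowledge Extractor - recall what the LLM knows about the API.
    return call_llm(
        f"Given this API element description:\n{parsed}\n"
        f"Describe its functionality, parameters, and typical usage."
    )

def decide_relation(knowledge_a: str, knowledge_b: str) -> str:
    # Module 3: API Relation Decider - infer the relation between two APIs.
    return call_llm(
        "Based on the two API descriptions below, decide their relation "
        "(e.g., function similarity, behavioral difference, collaboration):\n"
        f"API A: {knowledge_a}\nAPI B: {knowledge_b}"
    )

def infer_api_relation(fqn_a: str, fqn_b: str) -> str:
    ka = extract_knowledge(parse_fqn(fqn_a))
    kb = extract_knowledge(parse_fqn(fqn_b))
    return decide_relation(ka, kb)
```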
Transferring Cross-domain Knowledge for Video Sign Language Recognition
Word-level sign language recognition (WSLR) is a fundamental task in sign
language interpretation. It requires models to recognize isolated sign words
from videos. However, annotating WSLR data needs expert knowledge, thus
limiting WSLR dataset acquisition. On the contrary, there are abundant
subtitled sign news videos on the internet. Since these videos have no
word-level annotation and exhibit a large domain gap from isolated signs, they
cannot be directly used for training WSLR models. We observe that despite the
existence of a large domain gap, isolated and news signs share the same visual
concepts, such as hand gestures and body movements. Motivated by this
observation, we propose a novel method that learns domain-invariant visual
concepts and fertilizes WSLR models by transferring knowledge of subtitled news
signs to them. To this end, we extract news signs using a base WSLR model, and
then design a classifier jointly trained on news and isolated signs to coarsely
align the features of these two domains. In order to learn domain-invariant features
within each class and suppress domain-specific features, our method further
resorts to an external memory to store the class centroids of the aligned news
signs. We then design a temporal attention mechanism based on the learnt descriptor to
improve recognition performance. Experimental results on standard WSLR datasets
show that our method outperforms previous state-of-the-art methods
significantly. We also demonstrate the effectiveness of our method on
automatically localizing signs from sign news, achieving 28.1 for AP@0.5. Comment: CVPR2020 (oral) preprint
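Two ingredients mentioned in this abstract, the external memory of class centroids and a descriptor-based temporal attention, can be sketched roughly as follows; the tensor shapes, the EMA update, and the cosine-softmax attention are illustrative guesses rather than the paper's exact design.

```python
# Rough sketch: a per-class centroid memory plus temporal attention that
# re-weights frame features by their similarity to a class descriptor.
import torch
import torch.nn.functional as F

class CentroidMemory:
    def __init__(self, num_classes: int, dim: int, momentum: float = 0.9):
        self.centroids = torch.zeros(num_classes, dim)
        self.momentum = momentum

    @torch.no_grad()
    def update(self, features: torch.Tensor, labels: torch.Tensor):
        """EMA-update each class centroid with the mean feature of that class."""
        for c in labels.unique():
            mean_feat = features[labels == c].mean(dim=0)
            self.centroids[c] = (self.momentum * self.centroids[c]
                                 + (1 - self.momentum) * mean_feat)

def temporal_attention(frame_feats: torch.Tensor, descriptor: torch.Tensor):
    """Weight frames by similarity to a class descriptor.

    frame_feats: (T, dim) per-frame features of one video
    descriptor:  (dim,) e.g. a centroid from the memory
    returns:     (dim,) attention-pooled video feature
    """
    scores = F.cosine_similarity(frame_feats, descriptor.unsqueeze(0), dim=-1)
    weights = F.softmax(scores, dim=0)              # (T,)
    return (weights.unsqueeze(-1) * frame_feats).sum(dim=0)

# Toy usage
memory = CentroidMemory(num_classes=5, dim=32)
memory.update(torch.randn(8, 32), torch.randint(0, 5, (8,)))
video = torch.randn(16, 32)                          # 16 frames
pooled = temporal_attention(video, memory.centroids[2])
print(pooled.shape)  # torch.Size([32])
```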
Computational Toxinology
Venoms are complex mixtures of biological macromolecules and other compounds that are used for predatory and defensive purposes by hundreds of thousands of known species worldwide. Throughout human history, venoms and venom components have been used to treat a vast array of illnesses, causing them to be of great clinical, economic, and academic interest to the drug discovery and toxinology communities. In spite of major computational advances that facilitate data-driven drug discovery, most therapeutic venom effects are still discovered via tedious trial-and-error, or simply by accident. In this dissertation, I describe a body of work that aims to establish a new subdiscipline of translational bioinformatics, which I name “computational toxinology”.
To accomplish this goal, I present three integrated components that span a wide range of informatics techniques: (1) VenomKB, (2) VenomSeq, and (3) VenomKB’s Semantic API. To provide a platform for structuring, representing, retrieving, and integrating venom data relevant to drug discovery, VenomKB provides a database-backed web application and knowledge base for computational toxinology. VenomKB is structured according to a fully-featured ontology of venoms, and provides data aggregated from many popular web resources. VenomSeq is a biotechnology workflow that is designed to generate new high-throughput sequencing data for incorporation into VenomKB. Specifically, we expose human cells to controlled doses of crude venoms, conduct RNA-Sequencing, and build profiles of differential gene expression, which we then compare to publicly-available differential expression data for known diseases and drugs with known effects, and use those comparisons to hypothesize ways that the venoms could act in a therapeutic manner, as well. These data are then integrated into VenomKB, where they can be effectively retrieved and evaluated using existing data and known therapeutic associations. VenomKB’s Semantic API further develops this functionality by providing an intelligent, powerful, and user-friendly interface for querying the complex underlying data in VenomKB in a way that reflects the intuitive, human-understandable meaning of those data. The Semantic API is designed to cater to the needs of advanced users as well as laypersons and bench scientists without previous expertise in computational biology and semantic data analysis.
In each chapter of the dissertation, I describe how we evaluated these three components through various approaches. We demonstrate the utility of VenomKB and the Semantic API by testing a number of practical use-cases for each, designed to highlight their ability to rediscover existing knowledge as well as to suggest potential areas for future exploration. We use statistics and data science techniques to evaluate VenomSeq on 25 diverse species of venomous animals, and propose biologically feasible explanations for significant findings. In evaluating the Semantic API, I show how observations on VenomSeq data can be interpreted and placed into the context of past research by members of the larger toxinology community.
Computational toxinology is a toolbox designed to be used by multiple stakeholders (toxinologists, computational biologists, and systems pharmacologists, among others) to improve the return rate of clinically-significant findings from manual experimentation. It aims to achieve this goal by enabling access to data, providing means for easy validation of results, and suggesting specific hypotheses that are preliminarily supported by rigorous inferential statistics. All components of the research I describe are open-access and publicly available, to improve reproducibility and encourage widespread adoption.
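To illustrate the comparison step of the VenomSeq workflow described above, the toy sketch below correlates a venom-induced differential expression signature with public drug signatures over shared genes; the gene names, the values, and the choice of a Spearman correlation are illustrative assumptions, not VenomSeq's actual statistics.

```python
# Toy signature comparison: correlate a venom differential-expression profile
# with public drug profiles over their shared genes.
import numpy as np

venom_signature = {   # gene -> log2 fold change after venom exposure (made up)
    "TP53": 1.8, "EGFR": -2.1, "MYC": 0.4, "BRCA1": -0.9, "VEGFA": 1.2,
}
drug_signatures = {   # public profiles for drugs with known effects (made up)
    "drug_A": {"TP53": -1.5, "EGFR": 2.0, "MYC": -0.2, "BRCA1": 1.1, "VEGFA": -0.8},
    "drug_B": {"TP53": 1.6, "EGFR": -1.9, "MYC": 0.5, "BRCA1": -1.0, "VEGFA": 0.9},
}

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    rx, ry = np.argsort(np.argsort(x)), np.argsort(np.argsort(y))
    return float(np.corrcoef(rx, ry)[0, 1])

for drug, sig in drug_signatures.items():
    genes = sorted(set(venom_signature) & set(sig))
    v = np.array([venom_signature[g] for g in genes])
    d = np.array([sig[g] for g in genes])
    print(f"{drug}: spearman={spearman(v, d):+.2f}")
# A correlation near +1 suggests the venom perturbs cells like that drug does;
# near -1 suggests an opposing (potentially counteracting) effect.
```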