A Survey on Semantic Parsing
A significant amount of information in today's world is stored in structured
and semi-structured knowledge bases. Efficient and simple methods to query them
are essential and must not be restricted to only those who have expertise in
formal query languages. The field of semantic parsing deals with converting
natural language utterances to logical forms that can be easily executed on a
knowledge base. In this survey, we examine the various components of a semantic
parsing system and discuss prominent work ranging from the initial rule based
methods to the current neural approaches to program synthesis. We also discuss
methods that operate using varying levels of supervision and highlight the key
challenges involved in learning such systems.
Comment: AKBC 201
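The rule-based methods the survey starts from can be illustrated with a toy sketch. This is not from the survey itself: the patterns, logical-form syntax, and knowledge base below are invented purely for illustration of mapping an utterance to an executable logical form.

```python
# Toy rule-based semantic parser: map a natural-language question to a
# logical form, then execute it against a tiny knowledge base.
import re

def parse(utterance: str) -> str:
    """Map an utterance to a logical form via hand-written patterns."""
    text = utterance.lower()
    m = re.match(r"who wrote (.+)\?", text)
    if m:
        return f"author_of('{m.group(1)}')"
    m = re.match(r"when was (.+) born\?", text)
    if m:
        return f"birth_year('{m.group(1)}')"
    raise ValueError("no rule matched")

# A tiny knowledge base on which the logical form is executed.
KB = {"author_of('hamlet')": "Shakespeare"}

lf = parse("Who wrote Hamlet?")
print(lf)      # author_of('hamlet')
print(KB[lf])  # Shakespeare
```

Real systems replace the hand-written patterns with learned grammars or neural models, but the pipeline shape (utterance -> logical form -> execution) is the same.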
Context Dependent Semantic Parsing: A Survey
Semantic parsing is the task of translating natural language utterances into
machine-readable meaning representations. Currently, most semantic parsing
methods are not able to utilize contextual information (e.g., dialogue and
comment history), which has great potential to boost semantic parsing
performance. To address this issue, context-dependent semantic parsing has
recently drawn a lot of attention. In this survey, we investigate progress on
methods for context-dependent semantic parsing, together with the
current datasets and tasks. We then point out open problems and challenges for
future research in this area. The collected resources for this topic are
available at: https://github.com/zhuang-li/Contextual-Semantic-Parsing-Paper-List
Comment: 10 pages, accepted by COLING'202
A Survey of Syntactic-Semantic Parsing Based on Constituent and Dependency Structures
Syntactic and semantic parsing have been investigated for decades and remain
a primary topic in the natural language processing community. This article
aims to give a brief survey of the topic. The parsing community includes many
tasks, which are difficult to cover fully. Here we focus on two of the
most popular formalizations of parsing: constituent parsing and dependency
parsing. Constituent parsing mainly targets syntactic analysis, while
dependency parsing can handle both syntactic and semantic analysis. This
article briefly reviews the representative models of constituent parsing and
dependency parsing, as well as dependency graph parsing with rich semantics.
We also review closely related topics such as cross-domain, cross-lingual,
and joint parsing models, parser applications, and corpus development for
parsing.
Comment: SCIENCE CHINA Technological Science
The Gap of Semantic Parsing: A Survey on Automatic Math Word Problem Solvers
Solving mathematical word problems (MWPs) automatically is challenging,
primarily due to the semantic gap between human-readable words and
machine-understandable logic. Despite a long history dating back to the 1960s,
MWPs have regained intensive attention in the past few years with the
advancement of Artificial Intelligence (AI). Solving MWPs successfully is
considered a milestone towards general AI. Many systems have claimed
promising results on self-crafted, small-scale datasets. However, when
applied to large and diverse datasets, none of the proposed methods in the
literature achieves high precision, revealing that current MWP solvers still
have much room for improvement. This motivated us to present a comprehensive
survey to deliver a clear and complete picture of automatic math problem
solvers. In this survey, we emphasize algebraic word problems, summarize
the features and techniques proposed to bridge the semantic gap, and
compare their performance on publicly accessible datasets. We also cover
automatic solvers for other types of math problems such as geometric problems
that require the understanding of diagrams. Finally, we identify several
emerging research directions for readers with an interest in MWPs.
Comment: 18 pages, 5 figures
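The semantic gap the abstract describes is easy to see in a toy keyword-cue solver, the style of early feature-based systems. A sketch for illustration only (the cue words and problems are invented, not taken from the survey):

```python
# Toy math-word-problem solver: extract the numbers, then pick an
# operation from shallow keyword cues. This is exactly the kind of
# brittle mapping that breaks on large, diverse datasets.
import re

def solve(problem: str) -> float:
    nums = [float(n) for n in re.findall(r"\d+(?:\.\d+)?", problem)]
    text = problem.lower()
    if any(k in text for k in ("more", "buys", "altogether", "in all")):
        return sum(nums)
    if any(k in text for k in ("gives away", "loses", "left")):
        return nums[0] - sum(nums[1:])
    raise ValueError("no cue matched")

print(solve("Tom has 3 apples and buys 2 more. How many apples now?"))  # 5.0
print(solve("Ann has 10 pens and gives away 4. How many are left?"))    # 6.0
```

A problem like "Tom had 5 apples and now has 2 more than Ann" defeats the cue words entirely, which is the semantic gap in miniature.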
A Survey on Semantic Parsing from the perspective of Compositionality
Unlike previous surveys on semantic parsing (Kamath and Das, 2018)
and knowledge base question answering (KBQA) (Chakraborty et al., 2019; Zhu et
al., 2019; Hoffner et al., 2017), we take a different perspective on the
study of semantic parsing. Specifically, we focus on (a) meaning
composition from syntactic structure (Partee, 1975), and (b) the ability of
semantic parsers to handle lexical variation given the context of a knowledge
base (KB). After an introduction to the field of
semantic parsing and its uses in KBQA, we describe meaning representation
using the CCG grammar formalism (Steedman, 1996). We discuss semantic
composition using formal languages in Section 2. In Section 3 we consider
systems that use formal languages, e.g., lambda calculus (Steedman, 1996) and
lambda-DCS (Liang, 2013). Sections 4 and 5 consider semantic parsers that use
structured languages for logical forms. Section 6 covers benchmark
datasets, ComplexQuestions (Bao et al., 2016) and GraphQuestions (Su et al.,
2016), that can be used to evaluate semantic parsers on their ability to answer
complex questions that are highly compositional in nature.
Searching for Efficient Multi-Scale Architectures for Dense Image Prediction
The design of neural network architectures is an important component for
achieving state-of-the-art performance with machine learning systems across a
broad array of tasks. Much work has endeavored to design and build
architectures automatically through clever construction of a search space
paired with simple learning algorithms. Recent progress has demonstrated that
such meta-learning methods may exceed scalable human-invented architectures on
image classification tasks. An open question is the degree to which such
methods may generalize to new domains. In this work we explore the construction
of meta-learning techniques for dense image prediction focused on the tasks of
scene parsing, person-part segmentation, and semantic image segmentation.
Constructing viable search spaces in this domain is challenging because of the
multi-scale representation of visual information and the necessity to operate
on high resolution imagery. Based on a survey of techniques in dense image
prediction, we construct a recursive search space and demonstrate that even
with efficient random search, we can identify architectures that outperform
human-invented architectures and achieve state-of-the-art performance on three
dense prediction tasks: 82.7% on Cityscapes (street scene parsing),
71.3% on PASCAL-Person-Part (person-part segmentation), and 87.9% on PASCAL
VOC 2012 (semantic image segmentation). Additionally, the resulting
architecture is more computationally efficient, requiring half the parameters
and half the computational cost of previous state-of-the-art systems.
Comment: Accepted by NIPS 201
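The search-space-plus-random-search recipe can be sketched in a few lines. This is a schematic stand-in only: the search-space entries and the scoring function are invented placeholders, not the paper's actual space or its validation mIoU objective.

```python
# Random architecture search in miniature: sample candidate configurations
# from a small discrete search space and keep the best under a stand-in
# score (a real system would train each candidate and evaluate it on the
# dense-prediction validation set).
import random

SEARCH_SPACE = {
    "rates": [(1, 3, 6), (1, 6, 12), (1, 12, 24)],  # atrous rates per branch
    "filters": [256, 512],
    "depth": [2, 3, 4],
}

def sample(rng):
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def score(cfg):
    # Placeholder objective standing in for measured accuracy.
    return cfg["filters"] / 512 + cfg["depth"] / 4 - sum(cfg["rates"]) / 64

rng = random.Random(0)
best = max((sample(rng) for _ in range(20)), key=score)
print(best)
```

The point the abstract makes is that with a well-constructed space, even this trivially simple search strategy finds strong architectures.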
Machine Learning with World Knowledge: The Position and Survey
Machine learning has become pervasive in multiple domains, impacting a wide
variety of applications, such as knowledge discovery and data mining, natural
language processing, information retrieval, computer vision, social and health
informatics, ubiquitous computing, etc. Two essential problems of machine
learning are how to generate features and how to acquire labels for machines to
learn. In particular, labeling large amounts of data for each domain-specific
problem can be very time-consuming and costly. This has become a key obstacle to
making learning protocols realistic in applications. In this paper, we will
discuss how to use the existing general-purpose world knowledge to enhance
machine learning processes, by enriching the features or reducing the labeling
work. We start from the comparison of world knowledge with domain-specific
knowledge, and then introduce three key problems in using world knowledge in
learning processes, i.e., explicit and implicit feature representation,
inference for knowledge linking and disambiguation, and learning with direct or
indirect supervision. Finally, we discuss future directions for this research
topic.
Joint learning of ontology and semantic parser from text
Semantic parsing methods are used to capture and represent the semantic
meaning of text. A meaning representation capturing all the concepts in a text
may not always be available or may not be sufficiently complete. Ontologies
provide a structured and reasoning-capable way to model the content of a
collection of texts. In this work, we present a novel approach to joint
learning of ontology and semantic parser from text. The method is based on
semi-automatic induction of a context-free grammar from semantically annotated
text. The grammar parses the text into semantic trees. Both the grammar and
the semantic trees are used to learn the ontology at several levels -- classes,
instances, taxonomic and non-taxonomic relations. The approach was evaluated on
the first sentences of Wikipedia pages describing people.
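The kind of input the evaluation above uses, copular first sentences of biography pages, lends itself to a toy sketch of ontology population. The pattern and sentences here are invented for illustration; the actual method induces a context-free grammar rather than matching a fixed regex.

```python
# Toy ontology population from "X was/is a Y" first sentences:
# harvest (instance, class) pairs and group instances under classes.
import re

PATTERN = re.compile(
    r"^(?P<inst>[A-Z][\w .]*?) (?:was|is) an? (?P<cls>[a-z ]+?)[.,]")

def extract(sentence):
    m = PATTERN.match(sentence)
    return (m.group("inst"), m.group("cls")) if m else None

ontology = {}
for s in ["Ada Lovelace was a mathematician and writer.",
          "Alan Turing was a computer scientist."]:
    inst, cls = extract(s)
    ontology.setdefault(cls, []).append(inst)

print(ontology)
```

Grammar induction generalizes this beyond one sentence shape and also yields taxonomic and non-taxonomic relations, not just instance-class pairs.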
Using Syntax-Based Machine Translation to Parse English into Abstract Meaning Representation
We present a parser for Abstract Meaning Representation (AMR). We treat
English-to-AMR conversion within the framework of string-to-tree, syntax-based
machine translation (SBMT). To make this work, we transform the AMR structure
into a form suitable for the mechanics of SBMT and useful for modeling. We
introduce an AMR-specific language model and add data and features drawn from
semantic resources. Our resulting AMR parser improves upon state-of-the-art
results by 7 Smatch points.
Comment: 10 pages, 8 figures
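Smatch, the metric the abstract reports, scores AMR parses by triple overlap. A simplified sketch: real Smatch searches over variable alignments between the two graphs, whereas the toy version below assumes the graphs already share variable names; the example triples are mine, loosely following AMR's instance/ARG notation.

```python
# Simplified Smatch-style score: F1 over the overlap of relation triples
# of two AMR-like graphs (alignment search omitted).
def triple_f1(gold, pred):
    gold, pred = set(gold), set(pred)
    overlap = len(gold & pred)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred), overlap / len(gold)
    return 2 * p * r / (p + r)

gold = {("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("w", "ARG0", "b"), ("w", "ARG1", "g")}
pred = {("w", "instance", "want-01"), ("b", "instance", "boy"),
        ("w", "ARG0", "b")}
print(round(triple_f1(gold, pred), 3))  # 0.857
```

A "Smatch point" is one percentage point of this F1, so a 7-point gain is a substantial fraction of the remaining headroom.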
AppTechMiner: Mining Applications and Techniques from Scientific Articles
This paper presents AppTechMiner, a rule-based information extraction
framework that automatically constructs a knowledge base of all application
areas and problem solving techniques. Techniques include tools, methods,
datasets or evaluation metrics. We also categorize individual research articles
based on their application areas and the techniques proposed/improved in the
article. Our system achieves high average precision (~82%) and recall (~84%) in
knowledge base creation. It also performs well in application and technique
assignment to individual articles (average accuracy ~66%). Finally, we
present two use cases: a simple information retrieval system
and an extensive temporal analysis of the usage of techniques and application
areas. At present, we demonstrate the framework for the domain of computational
linguistics, but it can easily be generalized to any other field of research.
Comment: JCDL 2017, 6th International Workshop on Mining Scientific
Publications. arXiv admin note: substantial text overlap with arXiv:1608.0638