Content Differences in Syntactic and Semantic Representations
Syntactic analysis plays an important role in semantic parsing, but the
nature of this role remains a topic of ongoing debate. The debate has been
constrained by the scarcity of empirical comparative studies between syntactic
and semantic schemes, which hinders the development of parsing methods informed
by the details of target schemes and constructions. We target this gap, and
take Universal Dependencies (UD) and UCCA as a test case. After abstracting
away from differences of convention or formalism, we find that most content
divergences can be ascribed to: (1) UCCA's distinction between a Scene and a
non-Scene; (2) UCCA's distinction between primary relations, secondary ones and
participants; (3) different treatment of multi-word expressions, and (4)
different treatment of inter-clause linkage. We further discuss the long tail
of cases where the two schemes take markedly different approaches. Finally, we
show that the proposed comparison methodology can be used for fine-grained
evaluation of UCCA parsing, highlighting both challenges and potential sources
for improvement. The substantial differences between the schemes suggest that
semantic parsers are likely to benefit downstream text understanding
applications beyond their syntactic counterparts.
Comment: NAACL-HLT 2019, camera-ready version
Reasoning & Querying – State of the Art
Various query languages for Web and Semantic Web data, both for practical use and as an area of research in the scientific community, have emerged in recent years. At the same time, the broad adoption of the internet, where keyword search is used in many applications such as search engines, has familiarized casual users with keyword queries as a way to retrieve information. Unlike this easy-to-use style of querying, traditional query languages require knowledge of the language itself as well as of the data to be queried. Keyword-based query languages for XML and RDF bridge the gap between the two, aiming to enable simple querying of semi-structured data, which is relevant, e.g., in the context of the emerging Semantic Web. This article presents an overview of the field of keyword querying for XML and RDF.
Semantic Role Labeling with Associated Memory Network
Semantic role labeling (SRL) is the task of recognizing all the
predicate-argument pairs in a sentence; performance on the task has
plateaued despite a series of recent works. This paper proposes a novel
syntax-agnostic SRL model enhanced by an associated memory network (AMN),
which uses inter-sentence attention over label-known associated sentences
as a kind of memory to further enhance dependency-based SRL. In detail, we
use sentences and their labels from the training set as an associated
memory cue to help label the target sentence. Furthermore, we compare
several strategies for selecting associated sentences and for merging
their labels in the AMN, so as to find and exploit the labels of
associated sentences while attending to them. By leveraging this attentive
memory over known training data, our full model reaches the state of the
art on the CoNLL-2009 benchmark datasets in the syntax-agnostic setting,
suggesting an effective new line of research for enhancing SRL beyond
exploiting external resources such as large pre-trained language models.
Comment: Published at NAACL 2019; camera-ready version. Code is
available at https://github.com/Frozenmad/AMN_SR
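The core idea of the associated memory can be sketched abstractly: attend from the target sentence over a set of label-known sentences, then merge their label distributions by the attention weights. The sketch below is a minimal illustration of that idea only; the function name, shapes, and dot-product scoring are illustrative assumptions, not the paper's actual AMN architecture.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def amn_label_merge(target_repr, memory_reprs, memory_labels):
    """Toy sketch of attention over label-known associated sentences.

    target_repr:   (d,)   vector for the target sentence
    memory_reprs:  (m, d) one row per associated sentence
    memory_labels: (m, k) label distribution per associated sentence
    returns:       (k,)   attention-weighted label cue
    """
    scores = memory_reprs @ target_repr   # inter-sentence attention scores
    weights = softmax(scores)             # normalize over memory slots
    return weights @ memory_labels        # attention-weighted label merge

# toy usage: 3 associated sentences in memory, 2 label classes
t = np.array([1.0, 0.0])
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
L = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
cue = amn_label_merge(t, M, L)  # a (2,)-vector summing to 1
```

Because each memory row carries a proper label distribution, the merged cue is itself a distribution, which is what lets it serve as a soft labeling hint for the target sentence.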