Extracting Biomolecular Interactions Using Semantic Parsing of Biomedical Text
We advance the state of the art in biomolecular interaction extraction with
three contributions: (i) We show that deep Abstract Meaning Representations
(AMR) significantly improve the accuracy of a biomolecular interaction
extraction system when compared to a baseline that relies solely on surface-
and syntax-based features; (ii) In contrast with previous approaches that infer
relations on a sentence-by-sentence basis, we expand our framework to enable
consistent predictions over sets of sentences (documents); (iii) We further
modify and expand a graph kernel learning framework to enable concurrent
exploitation of automatically induced AMR (semantic) and dependency structure
(syntactic) representations. Our experiments show that our approach yields
interaction extraction systems that are more robust in environments where there
is a significant mismatch between training and test conditions.
Comment: Appearing in Proceedings of the Thirtieth AAAI Conference on
Artificial Intelligence (AAAI-16).
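To make the combination of semantic and syntactic representations concrete, here is a minimal sketch (our own illustration, not the authors' system) of extracting path features from a toy AMR graph and a toy dependency graph for one candidate interaction; the graphs, labels, and set-of-features view of the kernel are all simplifying assumptions.

import networkx as nx

def path_feature(graph, src, dst):
    # Label sequence along the shortest undirected path between two entities.
    nodes = nx.shortest_path(graph.to_undirected(), src, dst)
    return tuple(graph.nodes[n].get("label", n) for n in nodes)

# Toy AMR graph for "RAD51 binds BRCA2": concepts as nodes, roles as edges.
amr = nx.DiGraph()
amr.add_node("b", label="bind-01")
amr.add_node("r", label="RAD51")
amr.add_node("c", label="BRCA2")
amr.add_edge("b", "r", role=":ARG0")
amr.add_edge("b", "c", role=":ARG1")

# Toy dependency graph for the same sentence: tokens as nodes.
dep = nx.DiGraph()
dep.add_node(1, label="binds")
dep.add_node(0, label="RAD51")
dep.add_node(2, label="BRCA2")
dep.add_edge(1, 0, dep="nsubj")
dep.add_edge(1, 2, dep="obj")

# Pool semantic and syntactic path features for one entity pair; a graph
# kernel would compare such feature sets across training examples.
features = {("amr",) + path_feature(amr, "r", "c"),
            ("dep",) + path_feature(dep, 0, 2)}
print(features)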
Guiding AMR Parsing with Reverse Graph Linearization
Abstract Meaning Representation (AMR) parsing aims to extract an abstract
semantic graph from a given sentence. The sequence-to-sequence approaches,
which linearize the semantic graph into a sequence of nodes and edges and
generate the linearized graph directly, have achieved good performance.
However, we observed that these approaches suffer from structure loss
accumulation during the decoding process, leading to a much lower F1-score for
nodes and edges decoded later compared to those decoded earlier. To address
this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced
framework. RGL defines both default and reverse linearization orders of an AMR
graph, where most structures at the back part of the default order appear at
the front part of the reversed order and vice versa. RGL incorporates the
reversed linearization into the original AMR parser through a two-pass
self-distillation mechanism, which guides the model when generating the default
linearizations. Our analysis shows that our proposed method significantly
mitigates the problem of structure loss accumulation, outperforming the
previously best AMR parsing model by 0.8 and 0.5 Smatch scores on the AMR 2.0
and AMR 3.0 datasets, respectively. The code is available at
https://github.com/pkunlp-icler/AMR_reverse_graph_linearization.
Comment: Findings of EMNLP 2023.
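As a rough illustration of the ordering idea (a toy sketch under our own simplifications, not RGL's actual graph encoding or training procedure), the following depth-first linearizer emits the same AMR graph in a default and a reversed child order, so material near the end of one sequence appears near the start of the other:

def linearize(graph, node, reverse=False):
    # Depth-first linearization: emit the node, then each role and subtree.
    tokens = [node]
    children = graph.get(node, [])
    if reverse:
        children = list(reversed(children))
    for role, child in children:
        tokens += [role] + linearize(graph, child, reverse)
    return tokens

# Toy AMR for "The boy wants to go": {head: [(role, child), ...]}.
amr = {"want-01": [(":ARG0", "boy"), (":ARG1", "go-02")],
       "go-02": [(":ARG0", "boy")]}

print(linearize(amr, "want-01"))                # default order
print(linearize(amr, "want-01", reverse=True))  # reversed order

In RGL itself, the pass over the reversed linearization acts as a teacher in the two-pass self-distillation scheme, regularizing the decoder that produces the default order.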
Incorporating Graph Information in Transformer-based AMR Parsing
Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that
aims at providing a semantic graph abstraction representing a given text.
Current approaches are based on autoregressive language models such as BART or
T5, fine-tuned through Teacher Forcing to obtain a linearized version of the
AMR graph from a sentence. In this paper, we present LeakDistill, a model and
method that explores a modification to the Transformer architecture, using
structural adapters to explicitly incorporate graph information into the
learned representations and improve AMR parsing performance. Our experiments
show how, by employing word-to-node alignment to embed graph structural
information into the encoder at training time, we can obtain state-of-the-art
AMR parsing through self-knowledge distillation, even without the use of
additional data. We release the code at
\url{http://www.github.com/sapienzanlp/LeakDistill}.
Comment: ACL 2023. Please cite authors correctly using both last names
("Martínez Lorenzo", "Huguet Cabot").