Generalizing Cross-Document Event Coreference Resolution Across Multiple Corpora
Cross-document event coreference resolution (CDCR) is an NLP task in which
mentions of events need to be identified and clustered throughout a collection
of documents. CDCR aims to benefit downstream multi-document applications, but
despite recent progress on corpora and system development, downstream
improvements from applying CDCR have not been shown yet. We make the
observation that every CDCR system to date was developed, trained, and tested
only on a single respective corpus. This raises strong concerns about their
generalizability -- a must-have for downstream applications, where the variety
of domains or event mentions is likely to exceed that found in a curated
corpus. To investigate this concern, we define a uniform evaluation setup
involving three CDCR corpora: ECB+, the Gun Violence Corpus and the Football
Coreference Corpus (which we reannotate on token level to make our analysis
possible). We compare a corpus-independent, feature-based system against a
recent neural system developed for ECB+. Whilst being inferior in absolute
numbers, the feature-based system shows more consistent performance across all
corpora whereas the neural system is hit-and-miss. Via model introspection, we
find that the importance of event actions, event time, etc. for resolving
coreference in practice varies greatly between the corpora. Additional analysis
shows that several systems overfit on the structure of the ECB+ corpus. We
conclude with recommendations on how to achieve generally applicable CDCR
systems in the future -- the most important being that evaluation on multiple
CDCR corpora is essential. To facilitate future research, we release our
dataset, annotation guidelines, and system implementation to the public.
Comment: Accepted at CL Journal
MECA: Mathematical Expression Based Post Publication Content Analysis
Mathematical expressions (MEs) are critical abstractions in technical publications. While the sheer volume of technical publications keeps growing, few ME-centric applications have been developed, owing to the steep gap between the typesetting data in post-publication digital documents and their high-level technical semantics. As technical publication accelerates every year, word-based information analysis technologies are inadequate for helping users discover, organize, and interrelate technical work efficiently and effectively.
This dissertation presents a modeling framework and the associated algorithms, called the mathematical-expression-based post-publication content analysis (MECA) system, to address several critical issues in building a layered solution architecture for the recovery of high-level technical information. Overall, MECA consists of four layers of modeling work, starting from the extraction of MEs from Portable Document Format (PDF) files. Specifically, a weakly supervised sequential typesetting Bayesian model is developed, using a concise font-value-based feature space for Bayesian inference of ME vs. word for the rendering units separated by spaces. A Markov Random Field (MRF) model is designed to merge and correct the MEs identified from the rendering units, which are otherwise prone to fragmentation of large MEs.
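The core idea of the first layer, classifying each rendering unit as math or word from font-derived features, can be sketched as a tiny Bayesian classifier. The feature names, font labels, and training examples below are invented stand-ins for illustration; the dissertation's actual model is sequential and weakly supervised, not this simple naive Bayes.

```python
from collections import defaultdict
import math

# Toy training data: (font-derived features, class). Font names and
# labels here are hypothetical examples, not the dissertation's data.
TRAIN = [
    ({"font": "CMMI", "italic": True},  "math"),
    ({"font": "CMSY", "italic": False}, "math"),
    ({"font": "CMR",  "italic": False}, "word"),
    ({"font": "CMR",  "italic": False}, "word"),
    ({"font": "CMTI", "italic": True},  "word"),
]

def train(examples):
    """Estimate class counts and per-class feature-value counts."""
    class_counts = defaultdict(int)
    feat_counts = defaultdict(int)  # (cls, feature, value) -> count
    for feats, cls in examples:
        class_counts[cls] += 1
        for f, v in feats.items():
            feat_counts[(cls, f, v)] += 1
    return class_counts, feat_counts

def classify(feats, class_counts, feat_counts):
    """Pick the class maximizing log P(cls) + sum log P(f=v | cls),
    with add-one smoothing over a binary value space."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for cls, c in class_counts.items():
        lp = math.log(c / total)
        for f, v in feats.items():
            lp += math.log((feat_counts[(cls, f, v)] + 1) / (c + 2))
        if lp > best_lp:
            best, best_lp = cls, lp
    return best

cc, fc = train(TRAIN)
print(classify({"font": "CMMI", "italic": True}, cc, fc))  # -> math
```

In the full system, an MRF layer would then smooth these per-unit decisions across neighbors, merging fragments of one large ME.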
At the next layer, MECA aims at the recovery of ME semantics. The first step is ME layout analysis, which disambiguates layout structures with a Content-Constrained Spatial (CCS) global inference model to overcome local errors. It achieves high accuracy at low computing cost by using a parametric lognormal model for the feature distribution of typographic systems. The ME layout is then parsed into ME semantics with a three-phase processing workflow that overcomes a variety of semantic ambiguities. In the first phase, the ME layout is linearized into a token sequence; in the second, an abstract syntax tree (AST) is constructed from it using a probabilistic context-free grammar; in the third, tree rewriting transforms the AST into ME objects.
Built upon the two layers of ME extraction and semantics modeling, we next explore one of the bonding relationships between words and MEs: ME declarations, in which words and MEs are respectively the qualitative and quantitative (QuQn) descriptors of technical concepts. Conventional low-level part-of-speech (PoS) tagging and parsing tools perform poorly on this type of mixed word-ME (MWM) sentence. We therefore develop an MWM processing toolkit. A semi-automated, weakly supervised framework is employed to mine declaration templates from a large amount of unlabeled data, so that the templates can be used to detect ME declarations.
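Once mined, such templates can be applied straightforwardly. The sketch below assumes MEs have already been replaced by placeholders like `<ME1>` (as the extraction layers would provide); the templates themselves are invented examples, not the ones mined in the dissertation.

```python
import re

# Hypothetical declaration templates: each pairs an ME placeholder with
# the word span that qualitatively describes it.
TEMPLATES = [
    re.compile(r"let (?P<me><ME\d+>) (?:be|denote) (?P<words>[\w\s-]+)", re.I),
    re.compile(r"(?P<words>[\w\s-]+?),? denoted by (?P<me><ME\d+>)", re.I),
]

def find_declarations(sentence):
    """Return (ME placeholder, qualitative descriptor) pairs."""
    pairs = []
    for pat in TEMPLATES:
        for m in pat.finditer(sentence):
            pairs.append((m.group("me"), m.group("words").strip()))
    return pairs

print(find_declarations("Let <ME1> be the learning rate of the model."))
# -> [('<ME1>', 'the learning rate of the model')]
```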
On the basis of these three low-level content extraction and prediction solutions, the MECA system can extract MEs, interpret their mathematical semantics, and identify their bonding declaration words. By analyzing the dependencies among these elements in a paper, we can construct a QuQn map, which essentially represents the paper's reasoning flow. Three case studies are conducted on QuQn map applications: differential content comparison of papers, publication-trend generation, and interactive mathematical learning. Outcomes from these studies suggest that MECA is a highly practical content analysis technology built on a theoretically sound framework, and that much more can be expanded and improved upon for the next generation of deep content analysis solutions.
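A QuQn map can be pictured as a directed graph over MEs and their declaring words, with an edge from one ME to another when the former appears in the latter's definition; a topological order of that graph then traces the reasoning flow. The toy paper content and ME identifiers below are invented for illustration.

```python
from collections import defaultdict

# Invented toy content: each ME with its qualitative descriptor, and
# which MEs each definition uses.
declarations = {           # ME -> declaring words
    "E1": "input features",
    "E2": "weight vector",
    "E3": "prediction score",
}
uses = {                   # ME -> MEs appearing in its definition
    "E3": ["E1", "E2"],
}

def quqn_reasoning_order(declarations, uses):
    """Topologically order MEs so each appears after its dependencies."""
    indeg = {m: 0 for m in declarations}
    out = defaultdict(list)
    for tgt, srcs in uses.items():
        for s in srcs:
            out[s].append(tgt)
            indeg[tgt] += 1
    order = []
    ready = sorted(m for m, d in indeg.items() if d == 0)
    while ready:
        m = ready.pop(0)
        order.append(m)
        for t in out[m]:
            indeg[t] -= 1
            if indeg[t] == 0:
                ready.append(t)
    return order

print(quqn_reasoning_order(declarations, uses))  # -> ['E1', 'E2', 'E3']
```

Comparing two papers' maps (shared vs. unique nodes and edges) is one way the differential content comparison case study could be framed.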