Jointly Multiple Events Extraction via Attention-based Graph Information Aggregation
Event extraction is of practical utility in natural language processing. In the real world, it is common for multiple events to exist in the same sentence, and extracting them is more difficult than extracting a single event. Previous work that models the associations between events with sequential methods suffers from low efficiency in capturing very long-range dependencies. In this paper, we propose a novel Jointly Multiple Events Extraction (JMEE) framework to jointly extract multiple event triggers and arguments by introducing syntactic shortcut arcs to enhance information flow and attention-based graph convolution networks to model graph information. The experimental results demonstrate that our proposed framework achieves competitive results compared with state-of-the-art methods.

Comment: accepted by EMNLP 201
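The core mechanism the abstract describes, graph convolution with attention over syntactic arcs, can be sketched as a single layer that projects token features, scores each arc with a learned attention vector, and aggregates neighbours with the resulting softmax weights. This is a minimal illustrative sketch, not the paper's exact architecture; the weight shapes, the single-vector attention scoring, and the ReLU aggregation are assumptions.

```python
import numpy as np

def attention_gcn_layer(H, adj, W, a):
    """One attention-based graph convolution layer (illustrative sketch).

    H   : (n, d)  token features, one row per token
    adj : (n, n)  0/1 adjacency from syntactic (shortcut) arcs
    W   : (d, d)  learned projection matrix (hypothetical shape)
    a   : (2*d,)  learned attention vector (hypothetical scoring)
    """
    Z = H @ W                                    # project token features
    n = Z.shape[0]
    A = np.minimum(adj + np.eye(n), 1.0)         # self-loops so every node attends to itself
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        for j in range(n):
            if A[i, j]:                          # score only arcs present in the graph
                scores[i, j] = np.tanh(a @ np.concatenate([Z[i], Z[j]]))
    # softmax over each node's neighbourhood; non-edges contribute exp(-inf) = 0
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return np.maximum(alpha @ Z, 0.0)            # ReLU over the attention-weighted sum
```

The syntactic shortcut arcs enter only through `adj`: a dependency edge lets information hop directly between distant tokens in one layer, which is what sidesteps the long-range-dependency problem of sequential models.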
Opinion Retrieval in Twitter
We consider the problem of finding opinionated tweets about a given topic. We automatically construct opinion lexica from sets of tweets matching specific patterns indicative of opinionated messages. When this automatically constructed opinion information is incorporated into a learning-to-rank approach, results show that it yields retrieval performance comparable with a manual method. Finally, topic-specific structured tweet sets can help improve query-dependent opinion retrieval.
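One simple way to picture how lexicon information enters a ranker is a linear interpolation of a base relevance score with an opinion-lexicon score, a stand-in for the paper's learning-to-rank model. The tiny lexicon and the interpolation weight below are illustrative assumptions, not the paper's learned values.

```python
# Hedged sketch: re-rank tweets by mixing topical relevance with an
# opinion-lexicon score. Lexicon and weight are hypothetical.
OPINION_LEXICON = {"love", "hate", "awesome", "terrible", "great", "worst"}

def opinion_score(tweet):
    """Fraction of tweet tokens found in the opinion lexicon."""
    tokens = tweet.lower().split()
    return sum(t in OPINION_LEXICON for t in tokens) / max(len(tokens), 1)

def rerank(tweets, relevance, weight=0.5):
    """Interpolate base relevance with the opinion score, then sort descending."""
    scored = [(weight * r + (1 - weight) * opinion_score(t), t)
              for t, r in zip(tweets, relevance)]
    return [t for _, t in sorted(scored, reverse=True)]
```

In the paper's setting the lexicon itself is induced automatically from pattern-matched tweets and the combination weights are learned, but the intuition is the same: opinionated vocabulary boosts a tweet above an equally relevant neutral one.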
Automatically Assessing Wikipedia Article Quality by Exploiting Article–Editor Networks
We consider the problem of automatically assessing Wikipedia article quality. We develop several models to rank articles by using the editing relations between articles and editors. First, we create a basic model by modeling the article–editor network. Then we design measures of an editor's contribution and build weighted models that improve the ranking performance. Finally, we use a combination of featured-article information and the weighted models to obtain the best performance. We find that using manual evaluation to assist automatic evaluation is a viable solution for the article quality assessment task on Wikipedia.
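Ranking on an article–editor network typically rests on mutual reinforcement: an article is good if authoritative editors work on it, and an editor is authoritative if they work on good articles. The HITS-style power iteration below is a minimal sketch of that idea under assumed edit-count weights, not the paper's exact weighted model.

```python
import numpy as np

def rank_articles(E, iters=50):
    """Mutual-reinforcement ranking on an article-editor bipartite graph.

    E[i, j] = weight of editor j's contribution to article i
              (e.g. an edit count; the weighting scheme is an assumption).
    Article quality and editor authority reinforce each other,
    HITS-style, until the scores stabilise.
    """
    n_articles, _ = E.shape
    quality = np.ones(n_articles)
    for _ in range(iters):
        authority = E.T @ quality             # editors inherit article quality
        authority /= np.linalg.norm(authority) + 1e-12
        quality = E @ authority               # articles inherit editor authority
        quality /= np.linalg.norm(quality) + 1e-12
    return quality
```

The "weighted models" in the abstract would refine the entries of `E` with per-editor contribution measures, and the featured-article signal would act as a manually evaluated anchor for the otherwise unsupervised scores.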