3 research outputs found

    SemRe-Rank: Improving Automatic Term Extraction by Incorporating Semantic Relatedness with Personalised PageRank

    Automatic Term Extraction (ATE) deals with the extraction of terminology from a domain-specific corpus, and has long been an established research area in data and knowledge acquisition. ATE remains a challenging task, as no existing ATE method consistently outperforms the others across domains. This work adopts a refreshed perspective on the problem: instead of searching for a 'one-size-fits-all' solution that may never exist, we propose to develop generic methods to 'enhance' existing ATE methods. We introduce SemRe-Rank, the first method based on this principle, which incorporates semantic relatedness (an often overlooked avenue) into an existing ATE method to further improve its performance. SemRe-Rank feeds word embeddings into a personalised PageRank process to compute 'semantic importance' scores for candidate terms from a graph of semantically related words (nodes); these scores are then used to revise the scores of candidate terms computed by a base ATE algorithm. Extensively evaluated with 13 state-of-the-art base ATE methods on four datasets of diverse nature, SemRe-Rank achieves widespread improvement over all base methods and across all datasets: up to 15 percentage points when measured by Precision in the top-ranked K candidate terms (averaged over a set of K's), or up to 28 percentage points in F1 measured at a K equal to the expected number of real terms in the candidates (F1 in short). Compared to an alternative approach built on the well-known TextRank algorithm, SemRe-Rank can outperform it by up to 8 points in Precision at top K, or up to 17 points in F1.
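    The mechanism the abstract describes (personalised PageRank over a graph of semantically related words, whose resulting scores revise a base ATE score) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the toy graph, the damping factor, the seed words, and the final score-combination formula are all assumptions.

    ```python
    def personalised_pagerank(graph, personalisation, damping=0.85, iters=50):
        """Power iteration on an undirected graph given as {node: [neighbours]}.
        `personalisation` maps seed nodes to restart probabilities summing to 1;
        unlisted nodes get zero restart mass."""
        nodes = list(graph)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iters):
            new = {}
            for n in nodes:
                # Mass flowing in from neighbours, plus the personalised restart.
                inflow = sum(rank[m] / len(graph[m]) for m in graph[n])
                new[n] = (1 - damping) * personalisation.get(n, 0.0) + damping * inflow
            rank = new
        return rank

    def revise_score(base_score, term_words, importance):
        """One plausible way to fold 'semantic importance' back into a base ATE
        score (the paper's exact formula may differ): boost the base score by
        the summed importance of the words making up the candidate term."""
        return base_score * (1 + sum(importance.get(w, 0.0) for w in term_words))
    ```

    Seeding the restart distribution on a few domain words is what makes the PageRank "personalised": words reachable from the seeds accumulate importance, so candidate terms built from them get boosted relative to candidates whose words sit in unrelated parts of the graph. In the full method the graph edges would come from word-embedding similarity rather than being hand-written as here.
    
    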

    The LODIE team (University of Sheffield) Participation at the TAC2015 Entity Discovery Task of the Cold Start KBP Track

    This paper describes the participation of the LODIE team (from the OAK lab of the University of Sheffield) at TAC-KBP 2015 in the Entity Discovery task of the Cold Start KBP track. We take a cross-document coreference resolution approach that starts with Named Entity Recognition to locate and classify mentions of named entities, followed by a clustering procedure that groups mentions referring to the same entity. Our primary interest was studying different features and their effect on the clustering process, as well as scalable methods to cope with very large data. We experimented with several feature combinations and conclude that the best results are obtained using features based on entity surface forms and distributed word embeddings. To cope with large-scale data, the clustering process takes a two-step approach that breaks the data into smaller batches. On the 2015 evaluation dataset, our method obtains a best CEAF mention F-measure of 63.21.
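    The "two-step approach to break the data into smaller batches" is described only at a high level; a minimal sketch of the general pattern (a cheap blocking key first, then pairwise linking within each batch) might look like the following. The blocking key and similarity function here are hypothetical stand-ins, not the surface-form and embedding features the paper actually uses.

    ```python
    from collections import defaultdict

    def two_step_cluster(mentions, batch_key, similar):
        """Two-step clustering sketch: step 1 partitions mentions into batches
        by a cheap key; step 2 does greedy single-link clustering within each
        (much smaller) batch, avoiding all-pairs comparison over the full data."""
        batches = defaultdict(list)
        for m in mentions:
            batches[batch_key(m)].append(m)
        clusters = []
        for batch in batches.values():
            local = []
            for m in batch:
                # Attach the mention to the first cluster containing a similar
                # mention; otherwise start a new singleton cluster.
                for c in local:
                    if any(similar(m, x) for x in c):
                        c.append(m)
                        break
                else:
                    local.append([m])
            clusters.extend(local)
        return clusters
    ```

    The trade-off is that two mentions placed in different batches can never be merged, so the quality of the blocking key bounds recall; the batching only buys scalability when the key keeps likely coreferent mentions together.
    
    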