10 research outputs found

    Projecting named entity tags from a resource rich language to a resource poor language

    Named Entities (NE) are the prominent entities appearing in textual documents. Automatic classification of NE in a textual corpus is a vital process in Information Extraction and Information Retrieval research. Named Entity Recognition (NER) is the identification of words in text that correspond to a pre-defined taxonomy such as person, organization, location, date, time, etc. This article focuses on the person (PER), organization (ORG) and location (LOC) entities for a Malay journalistic corpus of terrorism. A projection algorithm, using the Dice Coefficient function and a bigram scoring method with domain-specific rules, is suggested to map the NE information from the English corpus to the Malay corpus of terrorism. The English corpus is the translated version of the Malay corpus; hence, these two corpora are treated as parallel corpora. The method computes the string similarity between the English words and the list of available lexemes in a pre-built lexicon that approximates the best NE mapping. The algorithm has been effectively evaluated using our own terrorism-tagged corpus; it achieved satisfactory results in terms of precision, recall, and F-measure. An evaluation of the selected open-source NER tool for English is also presented.
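The bigram-based Dice similarity at the heart of such a projection step can be sketched as follows (a minimal Python illustration; the multiset-overlap variant and the function names are assumptions for the example, and the paper's domain-specific rules are not reproduced):

```python
def bigrams(s):
    """Character bigrams of a string, e.g. 'night' -> ['ni', 'ig', 'gh', 'ht']."""
    return [s[i:i + 2] for i in range(len(s) - 1)]

def dice(a, b):
    """Dice coefficient over character bigrams: 2*|overlap| / (|A| + |B|).
    Bigrams are treated as a multiset, so a repeated bigram is matched
    at most once per occurrence."""
    ba, bb = bigrams(a.lower()), bigrams(b.lower())
    if not ba or not bb:
        return 0.0
    remaining = list(bb)
    overlap = 0
    for g in ba:
        if g in remaining:
            overlap += 1
            remaining.remove(g)
    return 2 * overlap / (len(ba) + len(bb))

# 'night' and 'nacht' share only the bigram 'ht': 2*1 / (4+4) = 0.25
```

In a projection setting, each English word would be scored against the lexemes of the pre-built lexicon and the highest-scoring lexeme above some threshold taken as the NE mapping.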

    Generalisation in named entity recognition: A quantitative analysis

    Named Entity Recognition (NER) is a key NLP task, which is all the more challenging on Web and user-generated content with their diverse and continuously changing language. This paper aims to quantify how this diversity impacts state-of-the-art NER methods, by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall. In particular, our findings indicate that NER approaches struggle to generalise in diverse genres with limited training data. Unseen NEs in particular play an important role; they have a higher incidence in diverse genres such as social media than in more regular genres such as newswire. Coupled with a higher incidence of unseen features more generally and the lack of large training corpora, this leads to significantly lower F1 scores for diverse genres compared to more regular ones. We also find that leading systems rely heavily on surface forms found in training data and have problems generalising beyond these, and we offer explanations for this observation.
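One of the diversity measures described above, the incidence of unseen NEs, reduces to a set comparison between training- and test-time entity surface forms (a sketch; the function name and inputs are illustrative, not the paper's code):

```python
def unseen_ne_rate(train_surface_forms, test_surface_forms):
    """Fraction of test-time entity surface forms never seen in training.
    A higher rate (typical of social media) goes with lower F1 for systems
    that rely heavily on memorised surface forms."""
    seen = set(train_surface_forms)
    test = list(test_surface_forms)
    if not test:
        return 0.0
    return sum(1 for form in test if form not in seen) / len(test)
```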

    Improving Robustness and Scalability of Available NER Systems

    The focus of this research is to study and develop techniques to adapt existing NER resources to serve the needs of a broad range of organizations without expert NLP manpower. My methods emphasize usability, robustness and scalability of existing NER systems to ensure maximum functionality to a broad range of organizations. Usability is facilitated by ensuring that the methodologies are compatible with any available open-source NER tagger or data set, thus allowing organizations to choose resources that are easy to deploy and maintain and fit their requirements. One way of making use of available tagged data would be to aggregate a number of different tagged sets in an effort to increase the coverage of the NER system. Though, generally, more tagged data can mean a more robust NER model, extra data also introduces a significant amount of noise and complexity into the model. Because adding in additional training data to scale up an NER system presents a number of challenges in terms of scalability, this research aims to address these difficulties and provide a means for multiple available training sets to be aggregated while reducing noise, model complexity and training times. In an effort to maintain usability, increase robustness and improve scalability, I designed an approach that merges document clustering of the training data with open-source or available NER software packages and tagged data that can be easily acquired and implemented. Here, a tagged training set is clustered into smaller data sets, and models are then trained on these smaller clusters. This is designed not only to reduce noise by creating more focused models, but also to increase scalability and robustness. Document clustering is used extensively in information retrieval, but has never been used in conjunction with NER.
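The cluster-then-train idea, partitioning aggregated training data into topically focused subsets before fitting one NER model per subset, can be sketched with a simple vocabulary-overlap clustering (an illustration under assumed inputs; a real system would more likely use TF-IDF vectors and an off-the-shelf clusterer):

```python
def route_to_cluster(doc_tokens, cluster_vocabs):
    """Assign a tokenised document to the cluster whose vocabulary it
    overlaps most, measured by Jaccard similarity."""
    doc = set(doc_tokens)
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    scores = [jaccard(doc, vocab) for vocab in cluster_vocabs]
    return max(range(len(scores)), key=scores.__getitem__)

def cluster_training_sets(docs, k, iters=5):
    """Partition tokenised training documents into k clusters so that a
    separate NER model can be trained on each; per-cluster models are
    smaller and more topically focused than one model on all the data."""
    clusters = [set(docs[i]) for i in range(k)]  # seed vocabularies
    assignment = [0] * len(docs)
    for _ in range(iters):
        assignment = [route_to_cluster(d, clusters) for d in docs]
        clusters = [set() for _ in range(k)]     # rebuild vocabularies
        for d, c in zip(docs, assignment):
            clusters[c] |= set(d)
    return assignment
```

At inference time, `route_to_cluster` would pick which per-cluster model tags an incoming document, keeping each individual model's size and training time down as more tagged sets are aggregated.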

    Pré-processamento de tweets visando melhorar resultados de NERD

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Sistemas de Informação. The semantic enrichment of posts in social media can bring several benefits to applications. However, the information-extraction techniques and tools currently available in the literature are not prepared to work with data from these sources, which are heavily affected by noise. This work proposes a method for filtering tweets based on lexical normalization, aiming to reduce noise and obtain better results in named entity recognition and disambiguation (NERD) tasks. To accomplish this, the work presents a state-of-the-art review of named entity recognition and disambiguation with a focus on social media, and also reviews proposals for a preliminary tweet-filtering step. To verify the quality of the proposed method, experiments were performed with the FOX tool, and a 5% increase in the number of named entities recognized was observed after the lexical normalization of the tweets.
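A lexical-normalization filter of the kind described can be sketched as follows (a minimal Python illustration; the slang lexicon and the specific rules shown are assumptions for the example, not the rules used with FOX in the experiments):

```python
import re

# Hypothetical slang lexicon; a real system would use a curated or learned one.
LEXICON = {"u": "you", "r": "are", "gr8": "great", "pls": "please"}

def normalise_tweet(text):
    """Lexically normalise a tweet before NERD: drop URLs, strip the '#'
    from hashtags, collapse runs of repeated letters, and expand known
    slang tokens via the lexicon."""
    text = re.sub(r"https?://\S+", "", text)     # remove URLs
    text = re.sub(r"#(\w+)", r"\1", text)        # '#Paris' -> 'Paris'
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)   # 'sooooo' -> 'soo'
    tokens = [LEXICON.get(t.lower(), t) for t in text.split()]
    return " ".join(tokens)
```

The normalised text is then handed to the NERD tool; the idea is that reducing noise in the surface forms lets the recogniser match more entity mentions.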

    Domain adaptive bootstrapping for named entity recognition

    EMNLP 2009 - Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: A Meeting of SIGDAT, a Special Interest Group of ACL, Held in Conjunction with ACL-IJCNLP 2009, pp. 1523-153

    Automatic Extraction and Assessment of Entities from the Web

    The search for information about entities, such as people or movies, plays an increasingly important role on the Web. This information is still scattered across many Web pages, making it time-consuming for a user to find all relevant information about an entity. This thesis describes techniques to extract entities and information about these entities from the Web, such as facts, opinions, questions and answers, interactive multimedia objects, and events. The findings of this thesis are that it is possible to create a large knowledge base automatically using a manually-crafted ontology. The precision of the extracted information was found to be between 75–90% (facts and entities, respectively) after using assessment algorithms. The algorithms from this thesis can be used to create such a knowledge base, which can be used in various research fields, such as question answering, named entity recognition, and information retrieval.