837 research outputs found

    Indirectly Named Entity Recognition

    We define indirectly named entities as a term denoting multiword expressions that refer to known named entities by means of periphrasis. While named entity recognition is a classical task in natural language processing, little attention has been paid to indirectly named entities and their treatment. In this paper, we try to address this gap, describing issues related to the detection and understanding of indirectly named entities in texts. We introduce a proof of concept for retrieving both lexicalised and non-lexicalised indirectly named entities in French texts. We also show example cases where this proof of concept is applied, and discuss future perspectives. We have initiated the creation of a first lexicon of 712 indirectly named entity entries that is available for future research.
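    To make the lexicon-based part of the approach concrete, the sketch below shows how a small lexicon of periphrases might be used to retrieve lexicalised indirectly named entities by surface matching. The example entries, function name, and matching strategy are illustrative assumptions only, not the authors' proof of concept.

```python
# Minimal sketch: lexicon-based lookup of indirectly named entities (INEs).
# The entries below are illustrative placeholders; the actual resource
# contains 712 French entries mapping periphrases to the entities they denote.
import re

INE_LEXICON = {
    "la ville lumière": "Paris",
    "l'hexagone": "France",
    "la grosse pomme": "New York",
}

def find_indirect_entities(text: str) -> list[tuple[str, str, int]]:
    """Return (surface form, resolved entity, character offset) for each match."""
    hits = []
    lowered = text.lower()
    for periphrasis, entity in INE_LEXICON.items():
        for m in re.finditer(re.escape(periphrasis), lowered):
            hits.append((text[m.start():m.end()], entity, m.start()))
    return hits

if __name__ == "__main__":
    sample = "Le sommet s'est tenu dans la Ville Lumière avant de quitter l'Hexagone."
    for surface, entity, pos in find_indirect_entities(sample):
        print(f"{surface!r} -> {entity} (offset {pos})")
```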

    Sentence Similarity and Machine Translation

    Neural machine translation (NMT) systems encode an input sentence into an intermediate representation and then decode that representation into the output sentence. Translation requires deep understanding of language; as a result, NMT models trained on large amounts of data develop a semantically rich intermediate representation. We leverage this rich intermediate representation of NMT systems (in particular, multilingual NMT systems, which learn to map many languages into and out of a joint space) for bitext curation, paraphrasing, and automatic machine translation (MT) evaluation. At a high level, all of these tasks are rooted in similarity: sentence and document alignment require measuring the similarity of sentences and documents, respectively; paraphrasing requires producing output which is similar to an input; and automatic MT evaluation requires measuring the similarity between MT system outputs and corresponding human reference translations. We use multilingual NMT for similarity in two ways. First, we use a multilingual NMT model with a fixed-size intermediate representation (Artetxe and Schwenk, 2018) to produce multilingual sentence embeddings, which we use in both sentence and document alignment. Second, we train a multilingual NMT model and show that it generalizes to the task of generative paraphrasing (i.e., "translating" from Russian to Russian) when used in conjunction with a simple generation algorithm that discourages copying from the input to the output. We also use this model for automatic MT evaluation, to force-decode and score MT system outputs conditioned on their respective human reference translations. Since we leverage multilingual NMT models, each method works in many languages using a single model. We show that simple methods which leverage the intermediate representation of multilingual NMT models trained on large amounts of bitext outperform prior work in paraphrasing, sentence alignment, document alignment, and automatic MT evaluation. This finding is consistent with recent trends in the natural language processing community, where large language models trained on huge amounts of unlabeled text have achieved state-of-the-art results on tasks such as question answering, named entity recognition, and parsing.
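    As a concrete illustration of the similarity computations described above, the following sketch scores candidate sentence pairs with the margin-normalised cosine similarity of Artetxe and Schwenk (2018) over fixed-size multilingual sentence embeddings. The random placeholder embeddings, function names, and greedy alignment step are assumptions for illustration, not the thesis's implementation.

```python
# Sketch: margin-based similarity for sentence alignment over precomputed
# multilingual sentence embeddings. Embeddings are random placeholders here;
# in practice they would come from a multilingual NMT encoder with a
# fixed-size intermediate representation (Artetxe and Schwenk, 2018).
import numpy as np

def cosine_matrix(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return src @ tgt.T

def margin_scores(src: np.ndarray, tgt: np.ndarray, k: int = 4) -> np.ndarray:
    """Score each (src, tgt) pair by its cosine similarity divided by the average
    similarity to the k nearest neighbours in both directions (ratio margin)."""
    sim = cosine_matrix(src, tgt)
    k = min(k, sim.shape[0], sim.shape[1])
    fwd = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)  # src -> tgt neighbours
    bwd = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)  # tgt -> src neighbours
    return sim / ((fwd + bwd) / 2.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src_emb = rng.normal(size=(5, 1024))  # 5 source-language sentence embeddings
    tgt_emb = rng.normal(size=(6, 1024))  # 6 target-language sentence embeddings
    scores = margin_scores(src_emb, tgt_emb)
    best = scores.argmax(axis=1)          # greedy alignment: best target per source
    print(best)
```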

    Scalable and Quality-Aware Training Data Acquisition for Conversational Cognitive Services

    Dialog Systems (or simply bots) have recently become a popular human-computer interface for performing users' tasks by invoking the appropriate back-end APIs (Application Programming Interfaces) based on the user's request in natural language. Building task-oriented bots, which aim at performing real-world tasks (e.g., booking flights), has become feasible with the continuous advances in Natural Language Processing (NLP), Artificial Intelligence (AI), and the countless devices that allow third-party software systems to invoke their back-end APIs. Nonetheless, bot development technologies are still in their preliminary stages, with several unsolved theoretical and technical challenges stemming from the ambiguous nature of human languages. Given the richness of natural language, supervised models require a large number of user utterances paired with their corresponding tasks -- called intents. To build a bot, developers need to manually translate APIs to utterances (called canonical utterances) and paraphrase them to obtain a diverse set of utterances. Crowdsourcing has been widely used to obtain such datasets, by paraphrasing the initial utterances generated by the bot developers for each task. However, there are several unsolved issues. First, generating canonical utterances requires manual effort, making bot development both expensive and hard to scale. Second, since crowd workers may be anonymous and are asked to provide open-ended text (paraphrases), crowdsourced paraphrases may be noisy and incorrect (not conveying the same intent as the given task). This thesis first surveys the state-of-the-art approaches for collecting large sets of training utterances for task-oriented bots. Next, we conduct an empirical study to identify quality issues in crowdsourced utterances (e.g., grammatical errors, semantic completeness). Moreover, we propose novel approaches for identifying unqualified crowd workers and eliminating malicious workers from crowdsourcing tasks. In particular, we propose a novel technique to promote the diversity of crowdsourced paraphrases by dynamically generating word suggestions while crowd workers are paraphrasing a particular utterance. Moreover, we propose a novel technique to automatically translate APIs to canonical utterances. Finally, we present our platform to automatically generate bots out of API specifications. We also conduct thorough experiments to validate the proposed techniques and models.
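    As a toy illustration of the canonical-utterance step described above, the sketch below derives a template utterance from a simplified API operation; crowd workers would then paraphrase such utterances into diverse training data. The spec format, template, and function name are assumptions for illustration, not the approach proposed in the thesis.

```python
# Toy sketch: deriving a canonical utterance from a simplified API operation,
# which crowd workers would then paraphrase into diverse training utterances.
# The spec format and template are illustrative assumptions only.

def canonical_utterance(operation: dict) -> str:
    """Build a template utterance like 'book a flight from <origin> to <destination>'."""
    verb = operation["verb"]    # e.g. "book"
    obj = operation["object"]   # e.g. "flight"
    slots = " ".join(f"{prep} <{name}>" for name, prep in operation["parameters"])
    return f"{verb} a {obj} {slots}".strip()

if __name__ == "__main__":
    book_flight = {
        "verb": "book",
        "object": "flight",
        "parameters": [("origin", "from"), ("destination", "to"), ("date", "on")],
    }
    print(canonical_utterance(book_flight))
    # -> book a flight from <origin> to <destination> on <date>
```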

    Integrating State-of-the-art NLP Tools into Existing Methods to Address Current Challenges in Plagiarism Detection

    Paraphrase plagiarism occurs when text is deliberately obfuscated to evade detection; such deliberate alteration increases the complexity of plagiarism and the difficulty of detecting it. In paraphrase plagiarism, copied texts often contain little or no matching words, and conventional plagiarism detectors, most of which are designed to detect matching strings, are ineffective under such conditions. The problem of plagiarism detection has been widely researched in recent years, with significant progress made particularly on the platform of the PAN@CLEF competition on plagiarism detection. However, further research is required specifically in the area of paraphrase and translation (obfuscation) plagiarism detection, as studies show that the state of the art is unsatisfactory. A rational solution to the problem is to apply models that detect plagiarism using semantic features in texts rather than matching strings. Deep contextualised learning models (DCLMs) have the ability to learn deep textual features that can be used to compare texts for semantic similarity. They have been remarkably effective in many natural language processing (NLP) tasks, but have not yet been tested in paraphrase plagiarism detection. The second problem facing conventional plagiarism detection is translation plagiarism, which occurs when copied text is translated to a different language, sometimes paraphrased, and used without acknowledging the original sources. The most common methods for detecting cross-lingual plagiarism (CLP) require internet translation services, which limits the detection process in many ways. A rational solution to the problem is to use detection models that do not rely on internet translation services. In this thesis we address these ongoing challenges facing conventional plagiarism detection by applying some of the most advanced methods in NLP, including contextualised and non-contextualised deep learning models. To address the problem of paraphrase plagiarism, we propose a novel paraphrase plagiarism detector that integrates deep contextualised learning (DCL) into a generic plagiarism detection framework. Evaluation results revealed that our proposed paraphrase detector outperformed a state-of-the-art model and a number of standard baselines in the task of paraphrase plagiarism detection. With respect to CLP detection, we propose a novel multilingual translation model (MTM) based on the Word2Vec (word embedding) model that can effectively translate text across a number of languages; it is independent of the internet and performs comparably, and in many cases better, than a common cross-lingual plagiarism detection model that relies on an online machine translator. The MTM does not require parallel or comparable corpora and is therefore designed to resolve the problem of cross-lingual plagiarism detection in low-resource languages. The solutions provided in this research advance the state of the art and contribute to the existing body of knowledge in plagiarism detection, and should also have a positive impact on academic integrity, which has been under threat from plagiarism for some time.
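    To illustrate the shift from string matching to semantic comparison described above, the following sketch scores a suspicious passage against a source passage by the cosine similarity of averaged word embeddings. The toy embedding table, threshold, and function names are stand-ins for illustration; the thesis itself builds on Word2Vec and deep contextualised models.

```python
# Sketch: semantic (rather than string-matching) comparison of a suspicious
# passage against a source passage, using cosine similarity of averaged word
# embeddings. The toy embedding table and threshold are illustrative only;
# real systems would use Word2Vec or contextualised encoders.
import numpy as np

TOY_EMBEDDINGS = {
    "car": np.array([0.90, 0.10, 0.00]),
    "automobile": np.array([0.85, 0.15, 0.05]),
    "fast": np.array([0.10, 0.90, 0.00]),
    "quick": np.array([0.12, 0.88, 0.02]),
    "red": np.array([0.00, 0.10, 0.90]),
}

def passage_vector(text: str) -> np.ndarray:
    """Average the embeddings of all known words in the passage."""
    vecs = [TOY_EMBEDDINGS[w] for w in text.lower().split() if w in TOY_EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def semantic_similarity(a: str, b: str) -> float:
    va, vb = passage_vector(a), passage_vector(b)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

if __name__ == "__main__":
    source = "fast car"
    suspicious = "quick automobile"   # paraphrase sharing no surface words with the source
    score = semantic_similarity(source, suspicious)
    print(f"similarity = {score:.3f}", "-> flag" if score > 0.8 else "-> ok")
```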

    Semantic Parsing in Limited Resource Conditions

    This thesis explores challenges in semantic parsing, specifically focusing on scenarios with limited data and computational resources. It offers solutions using techniques like automatic data curation, knowledge transfer, active learning, and continual learning. For tasks with no parallel training data, the thesis proposes generating synthetic training examples from structured database schemas. When there is abundant data in a source domain but limited parallel data in a target domain, knowledge from the source is leveraged to improve parsing in the target domain. For multilingual situations with limited data in the target languages, the thesis introduces a method to adapt parsers using a limited human translation budget. Active learning is applied to select source-language samples for manual translation, maximizing parser performance in the target language. In addition, an alternative method is proposed that utilizes machine translation services, supplemented by human-translated data, to train a more effective parser. When computational resources are limited, a continual learning approach is introduced to minimize training time and computational memory. This maintains the parser's efficiency on previously learned tasks while adapting it to new tasks, mitigating the problem of catastrophic forgetting. Overall, the thesis provides a comprehensive set of methods to improve semantic parsing in resource-constrained conditions.
    Comment: PhD thesis, year of award 2023, 172 pages
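    As a toy illustration of the synthetic-data idea mentioned above, the sketch below generates (utterance, SQL) training pairs directly from a table schema using fixed templates. The schema, templates, and function name are assumptions for illustration; the thesis's actual generation procedure from database schemas is more involved.

```python
# Toy sketch: template-based synthesis of (utterance, SQL) pairs from a table
# schema, illustrating how parallel training data can be created when none
# exists. Schema and templates are illustrative assumptions only.

def synthesize_examples(table: str, columns: list[str]) -> list[tuple[str, str]]:
    """Generate simple question/SQL pairs for each column of the table."""
    examples = []
    for col in columns:
        examples.append((
            f"show the {col} of every {table}",
            f"SELECT {col} FROM {table};",
        ))
        examples.append((
            f"how many distinct {col} values are there in {table}?",
            f"SELECT COUNT(DISTINCT {col}) FROM {table};",
        ))
    return examples

if __name__ == "__main__":
    for utterance, sql in synthesize_examples("employee", ["name", "salary"]):
        print(f"{utterance}  ->  {sql}")
```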