103 research outputs found

    Korreferentzia-ebazpena euskarazko testuetan.

    Today, automatic coreference resolution can be regarded as a key step towards text understanding; as a consequence, it is essential in many Natural Language Processing (NLP) tasks that require a deep understanding of discourse. When two textual expressions in a text denote or refer to the same object, the two expressions are said to stand in a coreference relation. The task of resolving the coreference relations between the textual expressions that may appear in a text is called coreference resolution. This thesis is situated in the field of computational linguistics and targets automatic coreference resolution for texts written in Basque; more precisely, its goal is to fill the gap in resources and tools available for automatic coreference resolution in Basque. The thesis first describes the rule-based tool we developed to automatically identify the textual expressions that may appear in Basque texts. It then presents how the rule-based coreference resolution system designed for English at Stanford University was adapted to the characteristics of Basque and how we improved it using semantic knowledge bases. Finally, it describes the work carried out to adapt the machine-learning-based coreference resolution system BART to Basque and to improve it.
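    The kind of rule-based mention identification described above can be illustrated with a minimal sketch: the function below marks pronouns and contiguous nominal spans in a POS-tagged sentence as candidate mentions. The tag set, the rules and the example sentence are illustrative assumptions, not the actual rules or Basque analysis used in the thesis.

        # Minimal sketch of rule-based mention detection over POS-tagged tokens.
        # Tag set, rules and the example sentence are illustrative, not those of the thesis.

        def detect_mentions(tagged_tokens):
            """Return (start, end) spans of candidate mentions.

            tagged_tokens: list of (word, pos) pairs, e.g. from a morphological
            analyzer; 'NOUN', 'PROPN', 'PRON', 'ADJ' and 'DET' are assumed coarse tags.
            """
            mentions = []
            i = 0
            while i < len(tagged_tokens):
                pos = tagged_tokens[i][1]
                if pos == "PRON":                           # a pronoun is a mention on its own
                    mentions.append((i, i + 1))
                    i += 1
                elif pos in ("NOUN", "PROPN"):              # grow a nominal span to the right
                    j = i + 1
                    while j < len(tagged_tokens) and tagged_tokens[j][1] in ("NOUN", "PROPN", "ADJ", "DET"):
                        j += 1
                    mentions.append((i, j))
                    i = j
                else:
                    i += 1
            return mentions

        if __name__ == "__main__":
            sample = [("Jon", "PROPN"), ("etorri", "VERB"), ("da", "AUX"),
                      ("eta", "CONJ"), ("bera", "PRON"), ("pozik", "ADJ"), ("dago", "VERB")]
            print(detect_mentions(sample))                  # [(0, 1), (4, 5)]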

    Tex2kor: sekuentziatik sekuentziarako euskararako korreferentzia-ebazpena

    Coreference resolution is the task of identifying the mentions that refer to the same real-world entity. In this work, we present a novel sequence-to-sequence approach to coreference resolution, for which we use the Transformer neural architecture. To limit the length of the sequences used to train the Transformer, we created an algorithm that splits and merges the labeled documents. Since our aim is coreference resolution for Basque, we added data augmentation techniques and BPE segmentation to the approach and built the tex2kor system. The system, which converts raw text into coreference chains, obtains an F1 of 37.14 points on the CoNLL metric. Thus, although we did not improve on the existing state-of-the-art results for coreference resolution in Basque, we present a new general approach to coreference resolution.
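    The idea of bounding the training sequence length can be sketched as follows: consecutive sentences of a labeled document are greedily merged into chunks of at most a fixed number of tokens. The chunking policy and the length limit are illustrative assumptions; the actual tex2kor split-and-merge algorithm may differ.

        # Minimal sketch of length-bounded chunking of a labeled document at sentence
        # boundaries; the policy and the limit are illustrative, not the tex2kor algorithm.

        def chunk_sentences(sentences, max_len=128):
            """Greedily merge consecutive sentences into chunks of at most max_len tokens.

            sentences: list of token lists (each token may carry a coreference label).
            A single sentence longer than max_len becomes its own oversized chunk.
            """
            chunks, current = [], []
            for sent in sentences:
                if current and len(current) + len(sent) > max_len:
                    chunks.append(current)
                    current = []
                current = current + sent
            if current:
                chunks.append(current)
            return chunks

        if __name__ == "__main__":
            doc = [["Jon", "etorri", "da", "."], ["Bera", "pozik", "dago", "."]]
            print(chunk_sentences(doc, max_len=6))          # two chunks of four tokens each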

    Neurona-sareetan oinarritutako korreferentzia-ebazpen automatikoa

    This bachelor's thesis presents the first neural-network-based coreference resolution system for Basque. A system recently built for Polish was taken as the starting point and adapted to the characteristics of Basque. The coreference-annotated portion of EPEC (the Basque reference corpus), comprising 45,000 words, was used to train and evaluate the neural network. Several experiments were then carried out to improve the system: increasing the dimensionality of the word embeddings, adding new features, and changing the hyperparameters of the neural network. The best result obtained on the CoNLL metric was an F-measure of 54.66% with gold mentions and 41.20% with an automatic mention detector.
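    A system of this kind typically scores candidate antecedent-anaphor pairs with a small feed-forward network over mention embeddings plus a few hand-crafted features. The following sketch shows the general shape of such a scorer; the dimensions and features are illustrative assumptions, not the architecture adapted from the Polish system.

        # Minimal sketch of a mention-pair scorer: a small feed-forward network over the
        # embeddings of a candidate antecedent and an anaphor plus a few pair features.
        # Dimensions and features are illustrative assumptions.

        import torch
        import torch.nn as nn

        class MentionPairScorer(nn.Module):
            def __init__(self, emb_dim=300, feat_dim=3, hidden=150):
                super().__init__()
                self.ff = nn.Sequential(
                    nn.Linear(2 * emb_dim + feat_dim, hidden),
                    nn.ReLU(),
                    nn.Linear(hidden, 1),
                    nn.Sigmoid(),               # probability that the pair is coreferent
                )

            def forward(self, antecedent_emb, anaphor_emb, pair_feats):
                x = torch.cat([antecedent_emb, anaphor_emb, pair_feats], dim=-1)
                return self.ff(x)

        if __name__ == "__main__":
            scorer = MentionPairScorer()
            antecedent = torch.randn(1, 300)         # embedding of the candidate antecedent
            anaphor = torch.randn(1, 300)            # embedding of the anaphor
            feats = torch.tensor([[1.0, 0.0, 2.0]])  # e.g. sentence distance, string match
            print(scorer(antecedent, anaphor, feats).item())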

    Apports des analyses syntaxiques pour la détection automatique de mentions dans un corpus de français oral

    We present three experiments in detecting entity mentions in ANCOR, a corpus of spoken French, using publicly available parsing tools and state-of-the-art mention detection techniques from coreference resolution, anaphora resolution, and Entity Detection and Tracking systems. Although the tools we use are not specifically designed for spoken French, our results are comparable to those of state-of-the-art end-to-end systems for other languages. We also outline several ways to improve these results in future work on an end-to-end coreference resolution system for French, for which these experiments can serve as a mention detection baseline.
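    One of the simplest parse-based strategies compared in such experiments is to take every noun phrase of a constituency parse as a candidate mention. The sketch below illustrates this with NLTK and a Penn-style English parse; the label set and the example are assumptions and do not reflect the French parsers or the ANCOR annotation.

        # Minimal sketch of parse-based mention detection: every NP subtree of a
        # constituency parse is taken as a candidate mention. Labels and the example
        # parse are illustrative (Penn-style English, not the French tools or ANCOR).

        from nltk import Tree

        def np_mentions(parse_str):
            """Return the token sequences of all NP subtrees as candidate mentions."""
            tree = Tree.fromstring(parse_str)
            return [st.leaves() for st in tree.subtrees(lambda t: t.label() == "NP")]

        if __name__ == "__main__":
            parse = "(S (NP (DT the) (NN speaker)) (VP (VBZ repeats) (NP (PRP$ her) (NN answer))))"
            print(np_mentions(parse))           # [['the', 'speaker'], ['her', 'answer']]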

    Deep learning methods for knowledge base population

    Knowledge bases store structured information about entities or concepts of the world and can be used in various applications, such as information retrieval or question answering. A major drawback of existing knowledge bases is their incompleteness. In this thesis, we explore deep learning methods for automatically populating them from text, addressing the following tasks: slot filling, uncertainty detection and type-aware relation extraction.

    Slot filling aims at extracting information about entities from a large text corpus. The Text Analysis Conference yearly provides new evaluation data in the context of an international shared task. We develop a modular system to address this challenge; it was one of the top-ranked systems in the shared task evaluations in 2015. For its slot filler classification module, we propose contextCNN, a convolutional neural network based on context splitting. It improves the performance of the slot filling system by 5.0% micro F1 and 2.9% macro F1. To train our binary and multiclass classification models, we create a dataset using distant supervision and reduce the number of noisy labels with a self-training strategy. For model optimization and evaluation, we automatically extract a labeled benchmark for slot filler classification from the manual shared task assessments from 2012-2014. We show that results on this benchmark are correlated with slot filling pipeline results, with a Pearson's correlation coefficient of 0.89 (0.82) on data from 2013 (2014). The combination of patterns, support vector machines and contextCNN achieves the best results on the benchmark, with a micro (macro) F1 of 51% (53%) on test. Finally, we analyze the results of the slot filling pipeline and the impact of its components.

    For knowledge base population, it is essential to assess the factuality of the statements extracted from text. From the sentence "Obama was rumored to be born in Kenya", a system should not conclude that Kenya is the place of birth of Obama. Therefore, we address uncertainty detection in the second part of this thesis. We investigate attention-based models and make a first attempt to systematize the attention design space. Moreover, we propose novel attention variants: external attention, which incorporates an external knowledge source; k-max average attention, which only considers the vectors with the k maximum attention weights; and sequence-preserving attention, which retains order information. Our convolutional neural network with external k-max average attention sets the new state of the art on a Wikipedia benchmark dataset with an F1 score of 68%. To the best of our knowledge, we are the first to integrate an uncertainty detection component into a slot filling pipeline. It improves precision by 1.4% and micro F1 by 0.4%.

    In the last part of the thesis, we investigate type-aware relation extraction with neural networks. We compare different models for joint entity and relation classification: pipeline models, jointly trained models and globally normalized models based on structured prediction. First, we show that using entity class prediction scores instead of binary decisions helps relation classification. Second, joint training clearly outperforms pipeline models on a large-scale distantly supervised dataset with fine-grained entity classes, improving the area under the precision-recall curve from 0.53 to 0.66. Third, we propose a model with a structured prediction output layer, which globally normalizes the score of a triple consisting of the classes of two entities and the relation between them. It improves relation extraction results by 4.4% F1 on a manually labeled benchmark dataset. Our analysis shows that the model learns correct correlations between entity and relation classes. Finally, we are the first to use neural networks for joint entity and relation classification in a slot filling pipeline. The jointly trained model achieves the best micro F1 with a score of 22%, while the neural structured prediction model performs best in terms of macro F1 with a score of 25%.
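    The k-max average attention variant described above can be sketched in a few lines: compute ordinary attention weights, keep only the k largest, renormalize them, and average the corresponding hidden vectors. The dot-product scoring and the shapes below are illustrative assumptions, not the exact formulation used in the thesis.

        # Minimal sketch of k-max average attention: compute attention weights, keep
        # only the k largest, renormalize, and average the corresponding vectors.
        # The dot-product scoring and the shapes are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def k_max_average_attention(hidden, query, k=3):
            """hidden: (seq_len, dim) states, query: (dim,). Returns a (dim,) context vector."""
            scores = hidden @ query                   # (seq_len,) unnormalized attention scores
            weights = F.softmax(scores, dim=0)        # (seq_len,) attention weights
            k = min(k, weights.shape[0])
            top_w, top_idx = weights.topk(k)          # keep only the k largest weights
            top_w = top_w / top_w.sum()               # renormalize over the kept positions
            return (top_w.unsqueeze(1) * hidden[top_idx]).sum(dim=0)

        if __name__ == "__main__":
            torch.manual_seed(0)
            hidden = torch.randn(10, 8)               # e.g. CNN feature vectors over a sentence
            query = torch.randn(8)
            print(k_max_average_attention(hidden, query, k=3).shape)   # torch.Size([8])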

    Towards Multilingual Coreference Resolution

    The current work investigates the problems that occur when coreference resolution is considered as a multilingual task. We assess the issues that arise in a framework that uses the mention-pair coreference resolution model and memory-based learning for the resolution process. Along the way, we revise three essential subtasks of coreference resolution: mention detection, mention head detection and feature selection. For each of these aspects we propose various multilingual solutions, including heuristic, rule-based and machine learning methods. We carry out a detailed analysis that includes eight different languages (Arabic, Catalan, Chinese, Dutch, English, German, Italian and Spanish), for which datasets were provided by the only two multilingual shared tasks on coreference resolution held so far: SemEval-2 and CoNLL-2012. Our investigation shows that, although complex, the coreference resolution task can be targeted in a multilingual and even language-independent way. We propose machine learning methods for each of the subtasks affected by the transition, and evaluate and compare them to rule-based and heuristic approaches. Our results confirm that machine learning provides the needed flexibility for the multilingual task and that the minimal requirement for a language-independent system is a part-of-speech annotation layer provided for each of the approached languages. We also show that the performance of the system can be improved by introducing other layers of linguistic annotation, such as syntactic parses (in the form of either constituency or dependency parses), named entity information, predicate-argument structure, etc. Additionally, we discuss the problems occurring in the proposed approaches and suggest possibilities for their improvement.
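    In a mention-pair model of the kind investigated here, training instances are commonly built in the style of Soon et al.: each anaphor is paired with its closest preceding coreferent mention as a positive instance, and with the mentions in between as negatives. The sketch below illustrates that instance generation; the data layout and the heuristic are illustrative assumptions rather than the exact procedure used in this work.

        # Minimal sketch of mention-pair training instance generation in the style of
        # Soon et al.: the closest coreferent antecedent of each anaphor yields a
        # positive pair, the mentions in between yield negatives. The data layout is
        # an illustrative assumption, not the exact procedure used in this work.

        def mention_pair_instances(mentions, chain_of):
            """mentions: mention ids in textual order; chain_of: mention id -> chain id.

            Returns (antecedent, anaphor, label) triples with label 1 for coreferent pairs.
            """
            instances = []
            for j, anaphor in enumerate(mentions):
                chain = chain_of.get(anaphor)
                if chain is None:
                    continue
                for i in range(j - 1, -1, -1):        # search backwards for the closest antecedent
                    if chain_of.get(mentions[i]) == chain:
                        instances.append((mentions[i], anaphor, 1))
                        for k in range(i + 1, j):     # intervening mentions become negatives
                            instances.append((mentions[k], anaphor, 0))
                        break
            return instances

        if __name__ == "__main__":
            mentions = ["m1", "m2", "m3", "m4"]
            chains = {"m1": "e1", "m2": "e2", "m3": "e1", "m4": "e1"}
            print(mention_pair_instances(mentions, chains))
            # [('m1', 'm3', 1), ('m2', 'm3', 0), ('m3', 'm4', 1)]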