304 research outputs found
Entitate izendunen desanbiguazioa ezagutza-base erraldoien arabera (Named-Entity Disambiguation Using Large-Scale Knowledge Bases)
130 p.

Nowadays, search engines are all but indispensable for navigating the internet, and the best known of them all is Google. Search engines owe a large part of their current success to the exploitation of knowledge bases. Indeed, through semantic search they are able to enrich plain queries with information from knowledge bases. For example, when searching for information about a music band, they offer additional links to its discography or its members; when searching for information about the president of a country, they offer links to former presidents or additional information about that country. However, there is a problem that threatens the success of today's much-discussed semantic search: ambiguous terms determine how appropriate the information retrieved from knowledge bases will be. Above all, the biggest problems are caused by mentions of proper names, that is, named entities.

The main goal of this thesis is to study named-entity disambiguation (NED) and to propose new techniques for carrying it out. NED systems disambiguate name mentions in text and link them to entities in knowledge bases. Because name mentions are ambiguous by nature, they may denote several entities; moreover, the same entity may be referred to by several different names, so correctly disambiguating these mentions is the key to the thesis.

To that end, the two disambiguation models underlying the state of the art are analyzed first: on the one hand, the global model, which exploits the structure of knowledge bases, and on the other, the local model, which exploits the information of the words in the mention's context. The two sources of information are then combined in a complementary way. The combination outperforms the state of the art on several datasets and obtains comparable results on the rest.

Second, with the aim of improving any disambiguation system, novel ideas are proposed, analyzed and evaluated. On the one hand, the behavior of entities is studied at the discourse, collection and co-occurrence levels, confirming that entities follow a certain pattern. Building on that pattern, the results of the global model, the local model and another NED system are significantly improved. On the other hand, the local model is fed with knowledge acquired from external corpora. With this contribution the quality of that external knowledge is evaluated, justifying the contribution it makes to the system. Moreover, the results of the local model are improved, once again reaching state-of-the-art values.

The thesis is presented as a collection of articles. After the introduction and the review of the state of the art, the four English-language articles on which the thesis is based are appended. Finally, overall conclusions are drawn, bringing together the topics addressed in the four articles.
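The "local" model described above, which scores candidate entities by the words in the mention's context, can be sketched very simply. This is a minimal illustration, not the thesis's actual system; the entity names, descriptions and context are invented, and the overlap measure (Jaccard) is one common choice among many.

```python
# Minimal sketch of a local disambiguation model: each candidate entity
# is scored by the overlap between the mention's context words and the
# words of the entity's knowledge-base description. Toy data only.

def local_disambiguate(context_words, candidates):
    """Return the candidate entity whose description best matches the context.

    context_words: iterable of words around the mention.
    candidates: dict mapping entity id -> description string.
    """
    context = set(w.lower() for w in context_words)
    best_entity, best_score = None, -1.0
    for entity, description in candidates.items():
        desc_words = set(description.lower().split())
        # Jaccard overlap between context and description vocabularies.
        overlap = len(context & desc_words) / len(context | desc_words)
        if overlap > best_score:
            best_entity, best_score = entity, overlap
    return best_entity

candidates = {
    "Queen_(band)": "british rock band formed in london freddie mercury",
    "Queen_Elizabeth_II": "queen of the united kingdom monarch royal",
}
context = ["the", "band", "played", "rock", "music", "in", "london"]
print(local_disambiguate(context, candidates))  # → Queen_(band)
```

A global model would instead (or additionally) score candidates by how well they fit the entities chosen for the other mentions in the document, using the knowledge-base graph.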
Exposing Digital Content as Linked Data, and Linking them using StoryBlink
Abstract. Digital publications host a large amount of data that currently goes unharvested due to its unstructured nature. However, manually annotating these publications is tedious, and current tools that automatically analyze unstructured text are too fine-grained for larger amounts of text such as books. A workable machine-interpretable version of larger bodies of text is thus necessary. In this paper, we suggest a workflow to automatically create and publish a machine-interpretable version of digital publications as linked data via DBpedia Spotlight. Furthermore, we make use of the Everything is Connected Engine on top of this published linked data to link digital publications using a Web application dubbed "StoryBlink". StoryBlink shows the added value of publishing machine-interpretable content of unstructured digital publications by finding relevant books that are connected to selected classic works. Currently, the time to find a connecting path can be quite long, but this can be overcome by using caching mechanisms, and the relevancy of the found paths can be improved by better denoising the DBpedia Spotlight results or by using alternative disambiguation engines.
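The first step of such a workflow, annotating raw text with DBpedia Spotlight and keeping only confident entity links, might look like the sketch below. The endpoint URL and the JSON field names follow Spotlight's public REST API, but the sample response is hand-made for illustration, not real Spotlight output, and the denoising here is just a similarity-score threshold.

```python
# Sketch: parse a DBpedia Spotlight /annotate JSON response and keep
# (surface form, DBpedia URI) pairs above a similarity threshold.
# A real run would POST the text to SPOTLIGHT_ENDPOINT with an
# "Accept: application/json" header; here we parse an embedded sample.
import json

SPOTLIGHT_ENDPOINT = "https://api.dbpedia-spotlight.org/en/annotate"

def extract_links(spotlight_json, min_score=0.5):
    """Return (surface form, DBpedia URI) pairs above a similarity threshold."""
    resources = spotlight_json.get("Resources", [])
    return [
        (r["@surfaceForm"], r["@URI"])
        for r in resources
        if float(r["@similarityScore"]) >= min_score
    ]

sample = json.loads("""
{"@text": "Moby Dick was written by Herman Melville.",
 "Resources": [
   {"@URI": "http://dbpedia.org/resource/Moby-Dick",
    "@surfaceForm": "Moby Dick", "@similarityScore": "0.99"},
   {"@URI": "http://dbpedia.org/resource/Herman_Melville",
    "@surfaceForm": "Herman Melville", "@similarityScore": "0.97"}
 ]}
""")
print(extract_links(sample))
```

The extracted URIs can then be published as linked data and traversed by a path-finding engine such as the Everything is Connected Engine.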
Where are you talking about? Advances and Challenges of Geographic Analysis of Text with Application to Disease Monitoring
The Natural Language Processing task we focus on in this thesis is Geoparsing. Geoparsing is the process of extraction and grounding of toponyms (place names). Consider this sentence: "The victims of the Spanish earthquake off the coast of Malaga were of American and Mexican origin." Four toponyms will be extracted (called Geotagging) and grounded to their geographic coordinates (called Toponym Resolution). However, our research goes further than any previous work by showing how to distinguish the literal place(s) of the event (Spain, Malaga) from other linguistic types/uses such as nationalities (Mexican, American), improving downstream task accuracy. We consolidate and extend the Standard Evaluation Framework, discuss key research problems, then present concrete solutions in order to advance each stage of geoparsing. For geotagging, as well as training a SOTA neural Location-NER tagger, we simplify Metonymy Resolution with a novel minimalist feature extraction combined with an LSTM-based classifier, matching SOTA results. For toponym resolution, we deploy the latest deep learning methods to achieve SOTA performance by augmenting neural models with hitherto unused geographic features called Map Vectors. With each research project, we provide high-quality datasets and system prototypes, further building resources in this field. We then show how these geoparsing advances coupled with our proposed Intra-Document Analysis can be used to associate news articles with locations in order to monitor the spread of public health threats. To this end, we evaluate our research contributions with production data from a real-time downstream application to improve geolocation of news events for disease monitoring. 
The data was made available to us by the Joint Research Centre (JRC), which operates one such system, called MediSys, that processes incoming news articles in order to monitor threats to public health and makes these reports available to a variety of governmental, business and non-profit organisations. We also discuss steps towards an end-to-end, automated news monitoring system and make actionable recommendations for future work. In summary, the thesis aims are twofold: (1) generate original geoparsing research aimed at advancing each stage of the pipeline by addressing pertinent challenges with concrete solutions and actionable proposals; (2) demonstrate how this research can be applied to news event monitoring to increase the efficacy of existing biosurveillance systems, e.g. the European Commission's MediSys. I was generously funded by the DREAM CDT, which was funded by NERC of UKRI.
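The two stages described above, filtering out non-literal uses such as nationalities and grounding the remaining toponyms to coordinates, can be caricatured with a gazetteer lookup. This is a toy sketch only: the gazetteer entries are illustrative, and the thesis's actual taggers and resolvers are neural models, not dictionaries.

```python
# Toy sketch of toponym resolution: extracted mentions come with a
# metonymy/type label; literal place-of-event mentions are grounded to
# (lat, lon) via a gazetteer, non-literal uses are filtered out.
# Coordinates below are rough illustrative values.

GAZETTEER = {
    "Spain": (40.4, -3.7),
    "Malaga": (36.7, -4.4),
}

def resolve_toponyms(tagged_mentions):
    """tagged_mentions: list of (mention, type) pairs, where type is
    'literal' for place-of-event uses and 'non-literal' otherwise.
    Returns {mention: (lat, lon)} for literal mentions in the gazetteer."""
    return {
        mention: GAZETTEER[mention]
        for mention, kind in tagged_mentions
        if kind == "literal" and mention in GAZETTEER
    }

mentions = [("Spain", "literal"), ("Malaga", "literal"),
            ("American", "non-literal"), ("Mexican", "non-literal")]
print(resolve_toponyms(mentions))  # keeps only Spain and Malaga
```

Distinguishing the literal from the non-literal uses before grounding is exactly what prevents an earthquake report like the example sentence from being geolocated to the United States or Mexico.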
Empirical studies on word representations
One of the most fundamental tasks in natural language processing is representing words with mathematical objects (such as vectors). The word representations, which are most often estimated from data, allow capturing the meaning of words. They enable comparing words according to their semantic similarity, and have been shown to work extremely well when included in complex real-world applications. A large part of our work deals with ways of estimating word representations directly from large quantities of text. Our methods exploit the idea that words which occur in similar contexts have a similar meaning. How we define the context is an important focus of our thesis. The context can consist of a number of words to the left and to the right of the word in question, but, as we show, obtaining context words via syntactic links (such as the link between the verb and its subject) often works better. We furthermore investigate word representations that accurately capture multiple meanings of a single word. We show that the translation of a word in context contains information that can be used to disambiguate the meaning of that word.
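The distributional idea stated above, that words occurring in similar contexts have similar meanings, can be sketched with count-based context vectors and cosine similarity. The tiny corpus is invented for illustration; the thesis works with far larger corpora and, as noted, also with syntactic rather than purely positional contexts.

```python
# Sketch: build count vectors from each word's context window and
# compare words by cosine similarity over those vectors.
from collections import Counter
import math

def context_vectors(sentences, window=2):
    """Map each word to a Counter over words within +/-window positions."""
    vectors = {}
    for tokens in sentences:
        for i, word in enumerate(tokens):
            ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(word, Counter()).update(ctx)
    return vectors

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks fell on the market".split(),
]
vecs = context_vectors(corpus)
# "cat" and "dog" share the contexts (the, sat, on), so they come out
# more similar to each other than to "stocks".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["stocks"]))  # True
```

Swapping the positional window for dependency-based contexts changes only how `ctx` is collected; the comparison machinery stays the same.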
Automatic Context Pattern Generation for Entity Set Expansion
Entity Set Expansion (ESE) is a valuable task that aims to find entities of
the target semantic class described by given seed entities. Various NLP and IR
downstream applications have benefited from ESE due to its ability to discover
knowledge. Although existing bootstrapping methods have achieved great
progress, most of them still rely on manually pre-defined context patterns. A
non-negligible shortcoming of the pre-defined context patterns is that they
cannot be flexibly generalized to all kinds of semantic classes, and we call
this phenomenon as "semantic sensitivity". To address this problem, we devise a
context pattern generation module that utilizes autoregressive language models
(e.g., GPT-2) to automatically generate high-quality context patterns for
entities. In addition, we propose the GAPA, a novel ESE framework that
leverages the aforementioned GenerAted PAtterns to expand target entities.
Extensive experiments and detailed analyses on three widely used datasets
demonstrate the effectiveness of our method. All the codes of our experiments
will be available for reproducibility.Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessibl
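How context patterns drive expansion can be illustrated with a toy ranker: candidates that fit the class's patterns in a corpus score higher. The hand-written patterns below are stand-ins for the automatically generated ones the abstract describes, and the corpus and entities are invented.

```python
# Sketch: rank candidate entities by how many class context patterns
# they match in a small corpus. "{}" marks the entity slot.

def expand(seeds, candidates, patterns, corpus):
    """Rank candidates by the number of patterns they instantiate in the corpus."""
    text = " ".join(corpus)
    scores = {}
    for cand in candidates:
        if cand in seeds:
            continue
        scores[cand] = sum(pattern.format(cand) in text for pattern in patterns)
    return sorted(scores, key=scores.get, reverse=True)

patterns = ["{} is a country", "capital of {}"]
corpus = [
    "france is a country in europe",
    "the capital of france is paris",
    "japan is a country in asia",
    "paris is a city",
]
seeds = {"spain", "italy"}
candidates = ["france", "japan", "paris"]
print(expand(seeds, candidates, patterns, corpus))  # ['france', 'japan', 'paris']
```

The semantic-sensitivity problem shows up here directly: patterns written for countries are useless for, say, diseases or companies, which is what motivates generating patterns per class with a language model instead.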
Enhancing knowledge acquisition systems with user generated and crowdsourced resources
This thesis is on leveraging knowledge acquisition systems with collaborative data and crowdsourced work from the internet. We propose two strategies and apply them to building effective entity linking and question answering (QA) systems.

The first strategy is to integrate an information extraction system with online collaborative knowledge bases, such as Wikipedia and Freebase. We construct a Cross-Lingual Entity Linking (CLEL) system to connect Chinese entities, such as people and locations, with their corresponding English pages in Wikipedia. The main focus is to break the language barrier between Chinese entities and the English KB, and to resolve the synonymy and polysemy of Chinese entities. To address these problems, we create a cross-lingual taxonomy and a Chinese knowledge base (KB). We investigate two methods of connecting the query representation with the KB representation. Based on our CLEL system's participation in the TAC KBP 2011 evaluation, we finally propose a simple and effective generative model, which achieved much better performance.

The second strategy is to create annotations for QA systems with the help of crowdsourcing. Crowdsourcing distributes a task via the internet and recruits many people to complete it simultaneously. Various annotated data are required to train the data-driven statistical machine learning algorithms for the underlying components of our QA system. This thesis demonstrates how to convert the annotation task into crowdsourcing micro-tasks, investigates different statistical methods for enhancing the quality of crowdsourced annotation, and finally uses the enhanced annotation to train learning-to-rank models for passage
ranking algorithms for QA.
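One of the simplest statistical methods for enhancing crowdsourced annotation quality, of the kind discussed above, is aggregating each item's labels from several workers by majority vote. The item ids and labels below are invented for illustration; more refined methods additionally model per-worker reliability.

```python
# Sketch: aggregate redundant crowd labels per item by majority vote.
from collections import Counter

def majority_vote(annotations):
    """annotations: dict mapping item id -> list of labels from workers.
    Returns the most frequent label per item."""
    return {
        item: Counter(labels).most_common(1)[0][0]
        for item, labels in annotations.items()
    }

crowd = {
    "q1-passage3": ["relevant", "relevant", "irrelevant"],
    "q2-passage1": ["irrelevant", "irrelevant", "relevant"],
}
print(majority_vote(crowd))
# {'q1-passage3': 'relevant', 'q2-passage1': 'irrelevant'}
```

The aggregated labels can then serve as training data for the learning-to-rank passage ranking models.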