Combining Visual and Textual Features for Semantic Segmentation of Historical Newspapers
The massive amounts of digitized historical documents acquired over the last decades naturally lend themselves to automatic processing and exploration. Research efforts seeking to automatically process facsimiles and extract information from them are multiplying, with document layout analysis as a first essential step. While the identification and categorization of segments of interest in document images have seen significant progress in recent years thanks to deep learning techniques, many challenges remain, among them the use of finer-grained segmentation typologies and the handling of complex, heterogeneous documents such as historical newspapers. Moreover, most approaches consider visual features only, ignoring the textual signal. In this context, we introduce a multimodal approach for the semantic segmentation of historical newspapers that combines visual and textual features. Based on a series of experiments on diachronic Swiss and Luxembourgish newspapers, we investigate, among others, the predictive power of visual and textual features and their capacity to generalize across time and sources. Results show a consistent improvement of multimodal models over a strong visual baseline, as well as better robustness to high material variance.
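To make the fusion idea concrete, here is a minimal sketch, assuming a PyTorch setup in which a CNN backbone produces a visual feature map and OCR-token embeddings have been rasterised into a text-embedding map of the same spatial size; the module simply concatenates the two before a per-pixel classifier. The class name, channel sizes and layers are illustrative assumptions, not the architecture used in the paper.

```python
# Minimal sketch of visual-textual fusion for page segmentation (PyTorch).
# It only shows the general idea of concatenating a visual feature map with
# a rasterised text-embedding map before predicting a per-pixel class.
import torch
import torch.nn as nn

class MultimodalSegHead(nn.Module):
    def __init__(self, vis_channels=256, txt_channels=300, n_classes=5):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(vis_channels + txt_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, n_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, visual_feats, text_embedding_map):
        # visual_feats:       (B, vis_channels, H, W) from any CNN backbone
        # text_embedding_map: (B, txt_channels, H, W) OCR token embeddings
        #                     painted onto the pixels they cover
        fused = torch.cat([visual_feats, text_embedding_map], dim=1)
        return self.fuse(fused)

# Example with random tensors standing in for real features.
head = MultimodalSegHead()
logits = head(torch.randn(1, 256, 64, 64), torch.randn(1, 300, 64, 64))
print(logits.shape)  # torch.Size([1, 5, 64, 64])
```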
Pattern based fact extraction from Estonian free-texts
Free-text processing is one of the most difficult problems in computer science. Precise analysis of texts is often difficult or impossible for computers because of ambiguity. Nevertheless, certain facts can be extracted. In this work we study pattern-based methods for deriving facts from Estonian-language texts. We apply our methodology to real texts and analyse the results. We briefly describe an active learning methodology that allows large corpora to be annotated faster. In addition, we have implemented a prototype solution for annotating corpora and performing pattern-based fact extraction.

Natural language processing is one of the most difficult problems, since words and language constructions often have ambiguous meanings that cannot be resolved without extensive cultural background. However, some facts are easier to deduce than others. In this work, we consider unary, binary and ternary relations between words that can be deduced from a single sentence. The relations, represented by sets of patterns, are combined with basic machine learning methods that are used to train and deploy patterns for fact extraction. We also describe the process of active learning, which helps to speed up the annotation of relations in large corpora. Other contributions include a prototype implementation with a plain-text preprocessor, corpus annotator, pattern miner and fact extractor. Additionally, we provide an empirical study of the efficiency of the prototype implementation with several relations and corpora.
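As a toy illustration of the pattern idea, the sketch below matches hand-written lexical patterns against text to extract a binary relation. The example is in English rather than Estonian, and the pattern format, relation name and helper function are invented for this sketch rather than taken from the prototype described above.

```python
# Toy sketch of pattern-based binary fact extraction.
# Patterns and sentences are illustrative; the prototype described in the
# abstract works on Estonian text and uses its own pattern format.
import re

# Each pattern captures the two arguments of a hypothetical "works_for" relation.
PATTERNS = [
    (re.compile(r"(?P<person>[A-Z][a-z]+) works at (?P<org>[A-Z][a-z]+)"), "works_for"),
    (re.compile(r"(?P<person>[A-Z][a-z]+), an employee of (?P<org>[A-Z][a-z]+)"), "works_for"),
]

def extract_facts(text):
    facts = []
    for pattern, relation in PATTERNS:
        for match in pattern.finditer(text):
            facts.append((relation, match.group("person"), match.group("org")))
    return facts

print(extract_facts("Anna works at Tartu. Priit, an employee of Cybernetica, agreed."))
# [('works_for', 'Anna', 'Tartu'), ('works_for', 'Priit', 'Cybernetica')]
```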
Effective Instance Matching for Heterogeneous Structured Data
One of the main problems in making effective use of structured data is instance matching, where the goal is to find instance representations that refer to the same real-world thing. In this book we investigate how to effectively match heterogeneous structured data. We evaluate our approaches against the latest baselines, and the results show advances beyond the state of the art.
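To make the task concrete, the following is a minimal sketch of attribute-based instance matching, in which two records are taken to refer to the same real-world thing when their shared attribute values are sufficiently similar. The records, similarity measure and threshold are assumptions for illustration and do not reflect the approaches evaluated in the book.

```python
# Toy sketch of instance matching between two structured records.
# Records, attribute names and the 0.7 threshold are illustrative only.
from difflib import SequenceMatcher

def attr_similarity(a, b):
    # Normalised string similarity between two attribute values.
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def is_match(record_a, record_b, threshold=0.7):
    # Average the similarity over attributes present in both records.
    shared = set(record_a) & set(record_b)
    if not shared:
        return False
    score = sum(attr_similarity(record_a[k], record_b[k]) for k in shared) / len(shared)
    return score >= threshold

record_1 = {"label": "Barack Obama", "birthPlace": "Honolulu"}
record_2 = {"label": "Barack H. Obama", "birthPlace": "Honolulu, Hawaii"}
print(is_match(record_1, record_2))  # True with these illustrative records
```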
Transformer-based Subject Entity Detection in Wikipedia Listings
In tasks like question answering or text summarisation, it is essential to
have background knowledge about the relevant entities. The information about
entities - in particular, about long-tail or emerging entities - in publicly
available knowledge graphs like DBpedia or CaLiGraph is far from complete. In
this paper, we present an approach that exploits the semi-structured nature of
listings (like enumerations and tables) to identify the main entities of the
listing items (i.e., of entries and rows). These entities, which we call
subject entities, can be used to increase the coverage of knowledge graphs. Our
approach uses a transformer network to identify subject entities at the
token-level and surpasses an existing approach in terms of performance while
being bound by fewer limitations. Due to a flexible input format, it is
applicable to any kind of listing and is, unlike prior work, not dependent on
entity boundaries as input. We demonstrate our approach by applying it to the
complete Wikipedia corpus and extracting 40 million mentions of subject
entities with an estimated precision of 71% and recall of 77%. The results are
incorporated in the most recent version of CaLiGraph.
Comment: Published at the Deep Learning for Knowledge Graphs workshop (DL4KG) at the International Semantic Web Conference 2022 (ISWC 2022).
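Token-level subject entity detection can be sketched as a standard transformer token-classification setup. The snippet below uses the Hugging Face Transformers library with a generic BERT checkpoint and a binary O/SUBJECT-ENTITY labelling; the model choice, label scheme and listing serialisation are assumptions for illustration, not the configuration of the paper.

```python
# Sketch of token-level subject-entity detection with a transformer
# (Hugging Face Transformers). Model name and the two-label scheme
# (0 = O, 1 = SUBJECT-ENTITY) are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "bert-base-cased"  # placeholder, not the paper's fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

# A listing item serialised as plain text, e.g. a row of a Wikipedia table.
listing_item = "1997 | Titanic | James Cameron | Paramount Pictures"

inputs = tokenizer(listing_item, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits      # shape (1, seq_len, 2)
predictions = logits.argmax(dim=-1)[0]   # 0 = O, 1 = SUBJECT-ENTITY

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
subject_tokens = [t for t, p in zip(tokens, predictions) if p == 1]
print(subject_tokens)  # meaningless until the model is fine-tuned on labelled listings
```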
Where are you talking about? Advances and Challenges of Geographic Analysis of Text with Application to Disease Monitoring
The Natural Language Processing task we focus on in this thesis is Geoparsing. Geoparsing is the process of extraction and grounding of toponyms (place names). Consider this sentence: "The victims of the Spanish earthquake off the coast of Malaga were of American and Mexican origin." Four toponyms will be extracted (called Geotagging) and grounded to their geographic coordinates (called Toponym Resolution). However, our research goes further than any previous work by showing how to distinguish the literal place(s) of the event (Spain, Malaga) from other linguistic types/uses such as nationalities (Mexican, American), improving downstream task accuracy.

We consolidate and extend the Standard Evaluation Framework, discuss key research problems, and then present concrete solutions to advance each stage of geoparsing. For geotagging, as well as training a SOTA neural Location-NER tagger, we simplify Metonymy Resolution with a novel minimalist feature extraction combined with an LSTM-based classifier, matching SOTA results. For toponym resolution, we deploy the latest deep learning methods to achieve SOTA performance by augmenting neural models with hitherto unused geographic features called Map Vectors. With each research project, we provide high-quality datasets and system prototypes, further building resources in this field.

We then show how these geoparsing advances, coupled with our proposed Intra-Document Analysis, can be used to associate news articles with locations in order to monitor the spread of public health threats. To this end, we evaluate our research contributions with production data from a real-time downstream application to improve geolocation of news events for disease monitoring. The data was made available to us by the Joint Research Centre (JRC), which operates one such system, MediSys, that processes incoming news articles in order to monitor threats to public health and makes these available to a variety of governmental, business and non-profit organisations. We also discuss steps towards an end-to-end, automated news monitoring system and make actionable recommendations for future work.

In summary, the thesis aims are twofold: (1) generate original geoparsing research aimed at advancing each stage of the pipeline by addressing pertinent challenges with concrete solutions and actionable proposals; (2) demonstrate how this research can be applied to news event monitoring to increase the efficacy of existing biosurveillance systems, e.g. the European Commission's MediSys.

I was generously funded by the DREAM CDT, which was funded by NERC of UKRI.
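As a minimal illustration of the two geoparsing stages, the sketch below tags location mentions with an off-the-shelf spaCy NER model (geotagging) and resolves them against a tiny hand-made gazetteer (toponym resolution). The gazetteer, the spaCy model and the naive most-populous-candidate disambiguation are placeholders standing in for, and are much simpler than, the neural taggers and Map Vector resolver developed in the thesis.

```python
# Toy geoparsing pipeline: geotagging with spaCy NER, then toponym
# resolution against a tiny in-memory gazetteer. Everything here is a
# simplified stand-in for the thesis's neural components.
import spacy

# Hypothetical mini-gazetteer: toponym -> list of (lat, lon, population).
GAZETTEER = {
    "Malaga": [(36.72, -4.42, 578_000)],
    "Spain": [(40.46, -3.74, 47_000_000)],
}

# Requires `python -m spacy download en_core_web_sm`; generic model,
# not the thesis's Location-NER tagger.
nlp = spacy.load("en_core_web_sm")

def geoparse(text):
    results = []
    for ent in nlp(text).ents:
        if ent.label_ in ("GPE", "LOC"):                # geotagging
            candidates = GAZETTEER.get(ent.text, [])
            if candidates:                               # toponym resolution:
                lat, lon, _ = max(candidates, key=lambda c: c[2])  # most populous
                results.append((ent.text, lat, lon))
    return results

print(geoparse("The victims of the Spanish earthquake off the coast of Malaga "
               "were of American and Mexican origin."))
# expected to include ('Malaga', 36.72, -4.42) with a typical English model
```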