Character-level and syntax-level models for low-resource and multilingual natural language processing
There are more than 7000 languages in the world, but only a small portion of them benefit from Natural Language Processing resources and models. Although languages generally present different characteristics, “cross-lingual bridges” can be exploited, such as transliteration signals and word alignment links. Such information, together with the availability of multiparallel corpora and the urge to overcome language barriers, motivates us to build models that represent more of the world’s languages.
This thesis investigates cross-lingual links for improving the processing of low-resource languages with language-agnostic models at the character and syntax level. Specifically, we propose to (i) use orthographic similarities and transliteration between Named Entities and rare words in different languages to improve the construction of Bilingual Word Embeddings (BWEs) and named entity resources, and (ii) exploit multiparallel corpora for projecting labels from high- to low-resource languages, thereby gaining access to weakly supervised processing methods for the latter.
In the first publication, we describe our approach for improving the translation of rare words and named entities for the Bilingual Dictionary Induction (BDI) task, using orthography and transliteration information. In our second work, we tackle BDI by enriching BWEs with orthography embeddings and a number of other features, using our classification-based system to overcome script differences among languages. The third publication describes cheap cross-lingual signals that should be considered when building mapping approaches for BWEs, since they are simple to extract, effective for bootstrapping the mapping of BWEs, and able to overcome the failure of unsupervised methods. The fourth paper shows our approach for extracting a named entity resource for 1340 languages, including very low-resource languages from all major areas of linguistic diversity. We exploit parallel corpus statistics and transliteration models and obtain improved performance over prior work. Lastly, the fifth work models annotation projection as a graph-based label propagation problem for the part-of-speech tagging task. Part-of-speech models trained on our labeled sets outperform prior work for low-resource languages like Bambara (an African language spoken in Mali), Erzya (a Uralic language spoken in Russia’s Republic of Mordovia), Manx (the Celtic language of the Isle of Man), and Yoruba (a Niger-Congo language spoken in Nigeria and surrounding countries).
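As a minimal illustration of the orthographic signal used above (an editorial sketch, not the thesis's actual system; the word pairs and scoring are hypothetical), normalized edit distance can serve as a cheap similarity score for candidate translation pairs in BDI:

    # Minimal sketch: normalized edit-distance similarity as a cheap
    # orthographic signal for Bilingual Dictionary Induction (BDI).
    # The word pairs below are illustrative only.

    def edit_distance(a: str, b: str) -> int:
        """Standard Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def orthographic_similarity(src: str, tgt: str) -> float:
        """1.0 for identical strings, 0.0 for maximally different ones."""
        if not src and not tgt:
            return 1.0
        return 1.0 - edit_distance(src, tgt) / max(len(src), len(tgt))

    if __name__ == "__main__":
        # Hypothetical named-entity candidates (e.g. English-German).
        candidates = [("london", "london"), ("munich", "muenchen"), ("paris", "haus")]
        for src, tgt in candidates:
            print(f"{src} -> {tgt}: {orthographic_similarity(src, tgt):.2f}")

In practice such a score is only one signal among several (embedding similarity, transliteration models), but it is cheap to compute and works across related orthographies.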
Analyzing Handwritten and Transcribed Symbols in Disparate Corpora
Cuneiform tablets are among the oldest textual artifacts, in use for more than three millennia, and are comparable in amount and relevance to texts written in Latin or ancient Greek. These tablets are typically found in the Middle East and were written by imprinting wedge-shaped impressions into wet clay.
Motivated by the increased demand for computerized analysis of documents within
the Digital Humanities, we develop the foundation for quantitative processing
of cuneiform script.
Acquiring a cuneiform tablet with a 3D scanner and manually creating line tracings yield two completely different representations of the same type of text source. Each representation is typically processed with its own tool set, and the textual analysis is therefore limited to a certain type of digital representation. To homogenize these data sources, a unifying minimal wedge feature description is introduced. It is extracted by pattern matching and subsequent conflict resolution, as cuneiform is written densely with highly overlapping wedges.
Similarity metrics for cuneiform signs based on distinct
assumptions are presented. (i) An implicit model represents cuneiform signs
using undirected mathematical graphs and measures the similarity of
signs with graph kernels.
(ii) An explicit model approaches the problem of recognition by an optimal
assignment between the wedge configurations of two signs.
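As an illustrative sketch of the explicit model's idea (an editorial example using simplified 2-D wedge positions as descriptors, not the thesis's actual minimal wedge features or cost function), an optimal assignment between the wedge sets of two signs can be computed with the Hungarian algorithm:

    # Illustrative sketch: similarity of two cuneiform signs via an optimal
    # assignment between their wedge feature descriptors.  The 2-D positions
    # used here are a simplification of the minimal wedge features.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def sign_similarity(wedges_a: np.ndarray, wedges_b: np.ndarray) -> float:
        """wedges_a, wedges_b: arrays of shape (n, d), one descriptor per wedge."""
        # Pairwise Euclidean costs between every wedge of sign A and sign B.
        cost = np.linalg.norm(wedges_a[:, None, :] - wedges_b[None, :, :], axis=-1)
        rows, cols = linear_sum_assignment(cost)       # minimal-cost matching
        matched_cost = cost[rows, cols].sum()
        # Penalize unmatched wedges when the signs differ in wedge count.
        unmatched = abs(len(wedges_a) - len(wedges_b))
        return 1.0 / (1.0 + matched_cost + unmatched)

    if __name__ == "__main__":
        a = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])  # hypothetical sign A
        b = np.array([[0.1, 0.0], [1.1, 0.4]])              # hypothetical sign B
        print(f"similarity: {sign_similarity(a, b):.3f}")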
Further, methods for spotting cuneiform script are developed, combining
the feature descriptors for cuneiform wedges with prior work on
segmentation-free word spotting using part-structured models.
The ink-ball model is adapted by treating wedge feature descriptors as
individual parts.
The similarity metrics and the adapted spotting model are both evaluated on a real-world dataset, outperforming the state of the art in cuneiform sign similarity and spotting.
To prove the applicability of these methods for computational cuneiform
analysis, a novel approach is presented for mining frequent
constellations of wedges resulting in spatial n-grams. Furthermore,
a method for automated transliteration of tablets is evaluated by employing structured and sequential learning on a dataset of parallel sentences. Finally, the conclusion outlines how the presented methods enable the development of new tools and computational analyses, which are objective and reproducible, for quantitative processing of cuneiform script.
Evaluating Scoped Meaning Representations
Semantic parsing offers many opportunities to improve natural language
understanding. We present a semantically annotated parallel corpus for English,
German, Italian, and Dutch where sentences are aligned with scoped meaning
representations in order to capture the semantics of negation, modals,
quantification, and presupposition triggers. The semantic formalism is based on
Discourse Representation Theory, but concepts are represented by WordNet
synsets and thematic roles by VerbNet relations. Translating scoped meaning
representations to sets of clauses enables us to compare them for the purpose
of semantic parser evaluation and checking translations. This is done by
computing precision and recall on matching clauses, in a similar way as is done
for Abstract Meaning Representations. We show that our matching tool for
evaluating scoped meaning representations is both accurate and efficient.
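As a minimal sketch of the clause-level scoring idea (assuming a fixed variable mapping and hypothetical clause tuples; a full matching tool must also handle differences in variable naming, much as Smatch does for AMR), precision and recall over matched clauses combine into an F-score:

    # Minimal sketch: precision/recall/F over matching clauses, assuming the
    # variable mapping between the two representations is already fixed.

    def clause_f_score(produced: set, gold: set) -> tuple:
        matched = len(produced & gold)
        precision = matched / len(produced) if produced else 0.0
        recall = matched / len(gold) if gold else 0.0
        f = (2 * precision * recall / (precision + recall)
             if precision + recall > 0 else 0.0)
        return precision, recall, f

    if __name__ == "__main__":
        # Hypothetical clauses in (head, relation, dependent) form.
        produced = {("b1", "REF", "x1"), ("e1", "Agent", "x1"), ("e1", "work.v.01", "e1")}
        gold     = {("b1", "REF", "x1"), ("e1", "Agent", "x1"), ("e1", "work.v.02", "e1")}
        p, r, f = clause_f_score(produced, gold)
        print(f"P={p:.2f} R={r:.2f} F={f:.2f}")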
Applying this matching tool to three baseline semantic parsers yields F-scores
between 43% and 54%. A pilot study is performed to automatically find changes
in meaning by comparing meaning representations of translations. This
comparison turns out to be an additional way of (i) finding annotation mistakes
and (ii) finding instances where our semantic analysis needs to be improved.Comment: Camera-ready for LREC 201
Exploiting Cross-Lingual Representations For Natural Language Processing
Traditional approaches to supervised learning require a generous amount of labeled data for good generalization. While such annotation-heavy approaches have proven useful for some Natural Language Processing (NLP) tasks in high-resource languages (like English), they are unlikely to scale to languages where collecting labeled data is difficult and time-consuming. Translating supervision available in English is also not a viable solution, because developing a good machine translation system requires expensive-to-annotate resources which are not available for most languages.
In this thesis, I argue that cross-lingual representations are an effective means of extending NLP tools to languages beyond English without resorting to generous amounts of annotated data or expensive machine translation. These representations can be learned in an inexpensive manner, often from signals completely unrelated to the task of interest. I begin with a review of different ways of inducing such representations using a variety of cross-lingual signals and study algorithmic approaches to using them in a diverse set of downstream tasks. Examples of such tasks covered in this thesis include learning representations to transfer a trained model across languages for document classification, to assist in monolingual lexical semantics tasks like word sense induction, to identify asymmetric lexical relationships like hypernymy between words in different languages, or to combine supervision across languages through a shared feature space for cross-lingual entity linking. In all these applications, the representations make information expressed in other languages available in English, while requiring minimal additional supervision in the language of interest.
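As a minimal sketch of the model-transfer idea (the toy shared-space vectors, documents, and labels below are hypothetical placeholders for real pre-aligned cross-lingual embeddings), a classifier trained only on English documents can be applied unchanged to another language:

    # Illustrative sketch: zero-shot document classification through a shared
    # cross-lingual embedding space.  The toy vectors stand in for real
    # pre-aligned embeddings (e.g. mapped bilingual word embeddings).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical shared space: words from both languages live in one space.
    shared_vectors = {
        "good": np.array([1.0, 0.1]),  "bad": np.array([-1.0, 0.1]),
        "gut":  np.array([0.9, 0.2]),  "schlecht": np.array([-0.9, 0.2]),
        "film": np.array([0.0, 1.0]),
    }

    def embed(doc: list) -> np.ndarray:
        """Average the word vectors of a document (ignoring unknown words)."""
        return np.mean([shared_vectors[w] for w in doc if w in shared_vectors], axis=0)

    # Train on English documents only ...
    en_docs, en_labels = [["good", "film"], ["bad", "film"]], [1, 0]
    clf = LogisticRegression().fit([embed(d) for d in en_docs], en_labels)

    # ... and apply the same model to German documents, with no German labels.
    de_docs = [["gut", "film"], ["schlecht", "film"]]
    print(clf.predict([embed(d) for d in de_docs]))  # should come out as [1 0]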
Improving Data Quality in Customer Relationship Management Systems: a method for cleaning personal information
In recent years, data literacy has become critical in areas such as marketing, sales, and more generally in businesses. These companies rely on software such as Customer Relationship Management (CRM) systems to derive useful information from the vast amount of data collected. However, lack of data quality undermines the effectiveness of this approach, as it directly impacts overall business performance. This thesis investigates the various issues and challenges related to data quality in CRM systems, focusing particularly on datasets with attributes such as first name, last name, and e-mail address. In addition, an algorithm for cleaning such datasets is proposed.
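As an illustration of the kind of cleaning step such an algorithm might contain (an editorial sketch, not the method proposed in the thesis; the records and e-mail pattern are hypothetical), names can be normalized, e-mail addresses validated, and duplicate contacts removed:

    # Illustrative sketch of record cleaning for (first name, last name, e-mail)
    # attributes: whitespace/case normalization, a basic e-mail check, and
    # deduplication by e-mail.  A production CRM cleaner would do much more.
    import re

    EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def clean_records(records: list) -> list:
        seen, cleaned = set(), []
        for first, last, email in records:
            first, last = first.strip().title(), last.strip().title()
            email = email.strip().lower()
            if not EMAIL_RE.match(email):
                continue              # drop records with invalid e-mails
            if email in seen:
                continue              # drop duplicates of the same contact
            seen.add(email)
            cleaned.append((first, last, email))
        return cleaned

    if __name__ == "__main__":
        raw = [("  mario ", "ROSSI", "Mario.Rossi@example.com"),
               ("Mario", "Rossi", "mario.rossi@example.com"),  # duplicate
               ("", "Bianchi", "not-an-email")]                # invalid e-mail
        print(clean_records(raw))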
SilverAlign: MT-Based Silver Data Algorithm For Evaluating Word Alignment
Word alignments are essential for a variety of NLP tasks. Therefore, choosing
the best approaches for their creation is crucial. However, the scarce
availability of gold evaluation data makes the choice difficult. We propose
SilverAlign, a new method to automatically create silver data for the
evaluation of word aligners by exploiting machine translation and minimal
pairs. We show that performance on our silver data correlates well with gold
benchmarks for 9 language pairs, making our approach a valid resource for
evaluating different domains and languages when gold data are not available. This addresses the important scenario of missing gold alignment data for low-resource languages.
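A hedged reconstruction of how machine translation and minimal pairs can yield silver alignments (the exact SilverAlign procedure may differ; the translate function below is a hypothetical stand-in for an MT system and its outputs are hand-written for the demo):

    # Hedged sketch: deriving a silver word alignment from a minimal pair.
    # Translate the original sentence and a copy in which one source word is
    # masked; target positions that change are attributed to that source word.

    def translate(tokens: list) -> list:
        """Placeholder MT system (hypothetical German outputs for the demo)."""
        table = {
            ("the", "cat", "sleeps"): ["die", "Katze", "schläft"],
            ("the", "[MASK]", "sleeps"): ["die", "[MASK]", "schläft"],
        }
        return table[tuple(tokens)]

    def silver_alignment(src: list, position: int) -> list:
        """Align src[position] to every target position that changes when it is masked."""
        base = translate(src)
        masked_src = src[:position] + ["[MASK]"] + src[position + 1:]
        masked = translate(masked_src)
        return [(position, j) for j, (t1, t2) in enumerate(zip(base, masked)) if t1 != t2]

    if __name__ == "__main__":
        print(silver_alignment(["the", "cat", "sleeps"], position=1))  # [(1, 1)]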