Automatic generation of named entity taggers leveraging parallel corpora
The lack of hand-curated data is a major impediment to developing statistical semantic
processors for many of the world's languages. A central issue for semantic processors in
Natural Language Processing (NLP) is that they require manually annotated data to perform
accurately. Our work addresses this issue by leveraging existing annotations and semantic
processors from multiple source languages, projecting their annotations via the statistical
word alignments traditionally used in Machine Translation. Taking the Named Entity
Recognition (NER) task as a use case of semantic processing, this work presents a method
to automatically induce Named Entity taggers from parallel data, without any manual
intervention. The intuition is to transfer, or project, semantic annotations from multiple
source languages to a target language via statistical word alignment methods applied to
parallel texts (Och and Ney, 2000; Liang et al., 2006). The projected annotations can then
be used to automatically generate semantic processors for the target language, thereby
providing NLP processors even when no training data exists for that language. The
experiments focus on four languages: German, English, Spanish and Italian, and our
empirical evaluation shows that our method obtains competitive results when compared with
models trained on gold-standard out-of-domain data. This shows that our projection
algorithm is effective in transferring NER annotations across languages via parallel data,
thus providing a fully automatic method to obtain NER taggers for as many languages as can
be aligned via parallel corpora.
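The core projection step described in this abstract can be sketched in a few lines. The BIO tag format, the (source index, target index) alignment pairs, and the function name below are illustrative assumptions for exposition, not the paper's actual implementation (which relies on statistical aligners from the Machine Translation literature).

```python
# Sketch: project NER tags from a source sentence onto a word-aligned target sentence.

def project_annotations(source_tags, alignment, target_len):
    """Project BIO-style NER tags across a sentence pair.

    source_tags: tags for each source token, e.g. ["B-PER", "O", ...]
    alignment:   list of (src_idx, tgt_idx) word-alignment pairs
    target_len:  number of tokens in the target sentence
    """
    target_tags = ["O"] * target_len
    for src_idx, tgt_idx in alignment:
        # Copy any entity tag over the alignment link; unaligned tokens stay "O".
        if source_tags[src_idx] != "O":
            target_tags[tgt_idx] = source_tags[src_idx]
    return target_tags

# Example: a five-token source tagged for PER and LOC, with a one-to-one alignment.
src = ["B-PER", "O", "O", "B-LOC", "I-LOC"]
align = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
print(project_annotations(src, align, 5))
# -> ['B-PER', 'O', 'O', 'B-LOC', 'I-LOC']
```

The projected tags would then serve as (noisy) training data for a target-language tagger.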
YATO: Yet Another deep learning based Text analysis Open toolkit
We introduce YATO, an open-source, easy-to-use toolkit for text analysis with
deep learning. Unlike existing, heavily engineered toolkits and platforms, YATO
is lightweight and user-friendly for researchers from cross-disciplinary areas.
Designed with a hierarchical structure, YATO supports free combinations of
three types of widely used features: 1) traditional neural networks (CNN, RNN,
etc.); 2) pre-trained language models (BERT, RoBERTa, ELECTRA, etc.); and 3)
user-customized neural features, all via a simple configuration file.
Benefiting from its flexibility and ease of use, YATO facilitates fast
reproduction and refinement of state-of-the-art NLP models, and promotes
cross-disciplinary applications of NLP techniques. The code, examples, and
documentation are publicly available at https://github.com/jiesutd/YATO. A
demo video is also available at https://youtu.be/tSjjf5BzfQg
A Survey on Arabic Named Entity Recognition: Past, Recent Advances, and Future Trends
As more and more Arabic text emerges on the Internet, extracting important
information from it becomes especially useful. As a fundamental technology,
Named Entity Recognition (NER) serves as the core component of information
extraction, while also playing a critical role in many other Natural Language
Processing (NLP) systems, such as question answering and knowledge graph
building. In this paper, we provide a comprehensive review of the development
of Arabic NER, especially the recent advances in deep learning and pre-trained
language models. Specifically, we first introduce the background of Arabic
NER, including the characteristics of Arabic and existing resources for Arabic
NER. Then, we systematically review the development of Arabic NER methods.
Traditional Arabic NER systems focus on feature engineering and the design of
domain-specific rules. In recent years, deep learning methods have achieved
significant progress by representing texts via continuous vector
representations. With the growth of pre-trained language models, Arabic NER
performance has improved further. Finally, we discuss the gap between Arabic
NER methods and NER methods for other languages, which helps outline future
directions for Arabic NER.
Comment: Accepted by IEEE TKD
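As background to the methods this survey covers, NER is typically cast as sequence labeling over BIO tags, and entity spans are decoded from the predicted tag sequence. The decoder below is a generic, minimal sketch; the tag scheme and the example tokens are illustrative and not taken from the survey.

```python
# Sketch: decode BIO-tagged tokens into (entity_text, entity_type) spans.

def decode_bio(tokens, tags):
    """Collect entity spans from parallel lists of tokens and BIO tags."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A new entity begins; flush any span in progress.
            if current:
                spans.append((" ".join(current), ctype))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            # Continuation of the current entity.
            current.append(tok)
        else:
            # "O" tag (or inconsistent "I-") ends the current span.
            if current:
                spans.append((" ".join(current), ctype))
            current, ctype = [], None
    if current:
        spans.append((" ".join(current), ctype))
    return spans

tokens = ["Ahmed", "visited", "Cairo", "University"]
tags = ["B-PER", "O", "B-ORG", "I-ORG"]
print(decode_bio(tokens, tags))
# -> [('Ahmed', 'PER'), ('Cairo University', 'ORG')]
```

Whether produced by hand-crafted rules or by a neural tagger, systems of the kinds surveyed ultimately emit tag sequences decoded in this way.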
A Survey on Semantic Processing Techniques
Semantic processing is a fundamental research domain in computational
linguistics. In the era of powerful pre-trained language models and large
language models, the advancement of research in this domain appears to be
decelerating. However, the study of semantics is multi-dimensional in
linguistics. The research depth and breadth of computational semantic
processing can be largely improved with new technologies. In this survey, we
analyze five semantic processing tasks, namely word sense disambiguation,
anaphora resolution, named entity recognition, concept extraction, and
subjectivity detection. We study relevant theoretical research in these
fields, advanced methods, and downstream applications. We connect the surveyed
tasks with downstream applications because this may inspire future scholars to
fuse these low-level semantic processing tasks with high-level natural
language processing tasks. The review of theoretical research may also inspire
new tasks and technologies in the semantic processing domain. Finally, we
compare the different semantic processing techniques and summarize their
technical trends, application trends, and future directions.
Comment: Published in Information Fusion, Volume 101, 2024, 101988, ISSN
1566-2535. The equal-contribution mark is missing in the published version due
to publication policies. Please contact Prof. Erik Cambria for details.
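To make one of the five surveyed tasks concrete, a classic baseline for word sense disambiguation is the simplified Lesk algorithm: choose the sense whose dictionary gloss shares the most words with the surrounding context. The toy sense inventory below is invented for illustration and is not part of the survey.

```python
# Sketch: simplified-Lesk word sense disambiguation over a toy sense inventory.

SENSES = {
    "bank": {
        "bank.n.01": "financial institution that accepts deposits and lends money",
        "bank.n.02": "sloping land beside a body of water such as a river",
    }
}

def simplified_lesk(word, context):
    """Return the sense of `word` whose gloss overlaps most with `context`."""
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in SENSES[word].items():
        overlap = len(ctx & set(gloss.split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

print(simplified_lesk("bank", "he sat on the bank of the river"))
# -> 'bank.n.02'
```

Modern approaches replace the bag-of-words overlap with contextual embeddings, but the task definition is the same: select one sense per occurrence given its context.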