Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing
Linguistic typology aims to capture structural and semantic variation across
the world's languages. A large-scale typology could provide excellent guidance
for multilingual Natural Language Processing (NLP), particularly for languages
that suffer from the lack of human labeled resources. We present an extensive
literature survey on the use of typological information in the development of
NLP techniques. Our survey demonstrates that to date, the use of information in
existing typological databases has resulted in consistent but modest
improvements in system performance. We show that this is due to both intrinsic
limitations of databases (in terms of coverage and feature granularity) and
under-employment of the typological features included in them. We advocate for
a new approach that adapts the broad and discrete nature of typological
categories to the contextual and continuous nature of machine learning
algorithms used in contemporary NLP. In particular, we suggest that such an
approach could be facilitated by recent developments in data-driven induction
of typological knowledge.
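The survey's central suggestion, adapting discrete typological categories to the continuous representations used by neural models, can be illustrated with a toy embedding lookup. This is only a sketch of the general idea: the feature names and values below are hypothetical WALS-style examples, and the embeddings are randomly initialized stand-ins for trainable parameters.

```python
import numpy as np

# Hypothetical discrete typological features (WALS-style categories).
FEATURE_VALUES = {
    "word_order": ["SOV", "SVO", "VSO"],
    "case_marking": ["none", "suffix", "prefix"],
}

rng = np.random.default_rng(0)
EMB_DIM = 4

# One embedding table per feature: each discrete category maps to a
# continuous vector that a neural model can consume (and fine-tune).
tables = {
    feat: {val: rng.normal(size=EMB_DIM) for val in vals}
    for feat, vals in FEATURE_VALUES.items()
}

def typology_vector(lang_features):
    """Concatenate the continuous embeddings of a language's discrete features."""
    return np.concatenate([tables[f][v] for f, v in sorted(lang_features.items())])

# A language is described by its (discrete) typological profile.
japanese = {"word_order": "SOV", "case_marking": "suffix"}
vec = typology_vector(japanese)
print(vec.shape)  # (8,)
```

The resulting vector could be concatenated to a model's input representation, replacing the hard one-hot encoding of database categories with a learnable, continuous signal.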
Cross-Lingual Adaptation for Type Inference
Deep learning-based techniques have been widely applied to program
analysis tasks in fields such as type inference, fault localization, and code
summarization. To date, deep learning-based software engineering systems rely
heavily on supervised learning approaches, which require laborious manual
effort to collect and label a prohibitively large amount of data. However, most
Turing-complete imperative languages share similar control- and data-flow
structures, which make it possible to transfer knowledge learned from one
language to another. In this paper, we propose cross-lingual adaptation of
program analysis, which allows us to leverage prior knowledge learned from the
labeled dataset of one language and transfer it to the others. Specifically, we
implemented a cross-lingual adaptation framework, PLATO, to transfer a deep
learning-based type inference procedure across weakly typed languages, e.g.,
Python to JavaScript and vice versa. PLATO incorporates a novel joint graph
kernelized attention based on abstract syntax tree and control flow graph, and
applies anchor word augmentation across different languages. Moreover, by
leveraging data from strongly typed languages, PLATO improves the perplexity of
the backbone cross-programming-language model and the performance of downstream
cross-lingual transfer for type inference. Experimental results show that
our framework improves transferability over the baseline method by a large
margin.
Neural Unsupervised Domain Adaptation in NLP—A Survey
Deep neural networks excel at learning from labeled data and achieve
state-of-the-art results on a wide array of Natural Language Processing tasks.
In contrast, learning from unlabeled data, especially under domain shift,
remains a challenge. Motivated by the latest advances, in this survey we review
neural unsupervised domain adaptation techniques that do not require labeled
target domain data. This is a more challenging yet more widely applicable
setup. We outline methods, from early traditional non-neural approaches to
pre-trained model transfer. We also revisit the notion of domain, and we
uncover a bias in the types of Natural Language Processing tasks that have
received the most attention. Lastly, we outline future directions, particularly
the broader need for out-of-distribution generalization of future intelligent
NLP.
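One classic recipe in the family this survey covers is self-training (pseudo-labeling): label confident target-domain examples with a source-trained model, then retrain on them. The sketch below uses a toy nearest-centroid classifier and synthetic shifted data; everything here is illustrative, not a method from the survey itself.

```python
import numpy as np

def fit_centroids(X, y):
    """Toy classifier: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_conf(centroids, X):
    """Predict by nearest centroid; smaller distance = higher confidence."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)], d.min(axis=1)

rng = np.random.default_rng(0)
# Labeled source domain: two well-separated classes.
Xs = np.concatenate([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
ys = np.array([0] * 20 + [1] * 20)
# Unlabeled target domain: same classes under a small covariate shift.
Xt = np.concatenate([rng.normal(0.5, 1, (20, 2)), rng.normal(4.5, 1, (20, 2))])

centroids = fit_centroids(Xs, ys)
pred, conf = predict_with_conf(centroids, Xt)
keep = conf < np.median(conf)  # pseudo-label only the most confident half
centroids = fit_centroids(np.concatenate([Xs, Xt[keep]]),
                          np.concatenate([ys, pred[keep]]))
print(len(centroids))  # 2
```

The confidence threshold (here, the median distance) is the crucial knob: too loose and noisy pseudo-labels reinforce errors, too tight and the model never adapts.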
Explainable Misinformation Detection Across Multiple Social Media Platforms
In this work, the integration of two machine learning approaches, namely
domain adaptation and explainable AI, is proposed to address the two issues
of generalized detection and explainability. First, a Domain Adversarial
Neural Network (DANN) is used to develop a generalized misinformation detector
across multiple social media platforms; the DANN generates the classification
results for test domains with relevant but unseen data. The DANN-based model, a
traditional black-box model, cannot justify its outcome, i.e., the labels for
the target domain. Hence, a Local Interpretable Model-Agnostic Explanations
(LIME) explainable AI model is applied to explain the outcome of the DANN model.
To demonstrate these two approaches and their integration for effective
explainable generalized detection, COVID-19 misinformation is considered as a
case study. We experimented with two datasets, namely CoAID and MiSoVac, and
compared results with and without DANN implementation. DANN significantly
improves the F1 classification score and increases accuracy and AUC
performance. The results obtained show that the proposed framework performs
well under domain shift and can learn domain-invariant features, while the
LIME implementation explains the target labels, enabling trustworthy
information processing and extraction to combat misinformation effectively.
Comment: 28 pages, 4 figures
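The distinctive component of a DANN is its gradient reversal layer: an identity on the forward pass whose backward pass negates (and scales) the domain classifier's gradient, pushing the feature extractor toward domain-invariant features. Below is a minimal standalone sketch of that one mechanism, not the paper's implementation; `lam` is the usual adaptation trade-off coefficient.

```python
import numpy as np

class GradientReversal:
    """Forward: identity. Backward: flip and scale the gradient, so the
    feature extractor is trained to *confuse* the domain classifier."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between task loss and domain confusion

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad  # reversed gradient reaches the extractor

grl = GradientReversal(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
grad_from_domain_head = np.array([0.2, 0.4, -0.6])

print(grl.forward(feats))                 # unchanged features
print(grl.backward(grad_from_domain_head))  # negated, scaled gradient
```

In a full DANN, this layer sits between the shared feature extractor and the domain classifier, while the task classifier receives ordinary (unreversed) gradients.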
A Survey of Cross-Lingual Sentiment Analysis Based on Pre-Trained Models
With the development of natural language processing technology, monolingual Sentiment Analysis (SA) based on Machine Learning (ML) and Deep Learning (DL) has been widely studied. However, there is not much work on Cross-Lingual SA (CLSA), although it is beneficial when dealing with low-resource languages (e.g., Tamil, Malayalam, Hindi, and Arabic). This paper surveys the main challenges and issues of CLSA based on pre-trained language models and reviews the leading methods for CLSA. In particular, we compare and analyze their pros and cons. Moreover, we summarize the valuable cross-lingual resources and point out the main problems researchers need to solve in the future.
Example-based Hypernetworks for Out-of-Distribution Generalization
As Natural Language Processing (NLP) algorithms continually achieve new
milestones, out-of-distribution generalization remains a significant challenge.
This paper addresses the issue of multi-source adaptation for unfamiliar
domains: we leverage labeled data from multiple source domains to generalize to
target domains that are unknown at training time. Our innovative framework employs
example-based Hypernetwork adaptation: a T5 encoder-decoder initially generates
a unique signature from an input example, embedding it within the source
domains' semantic space. This signature is subsequently utilized by a
Hypernetwork to generate the task classifier's weights. We evaluated our method
across two tasks - sentiment classification and natural language inference - in
29 adaptation scenarios, where it outpaced established algorithms. In an
advanced version, the signature also enriches the input example's
representation. We also compare our finetuned architecture to few-shot GPT-3,
demonstrating its effectiveness in essential use cases. To our knowledge, this
marks the first application of Hypernetworks to adaptation for unknown domains.
Comment: First two authors contributed equally to this work. Our code and data
are available at: https://github.com/TomerVolk/Hyper-PAD
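The core mechanism described above, a hypernetwork that emits the task classifier's weights from a per-example signature, can be sketched in a few lines. This is a toy linear version under stated assumptions: the T5-derived signature is replaced by a random vector, and all dimensions and parameter names are hypothetical, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
SIG_DIM, IN_DIM, N_CLASSES = 8, 16, 3

# Hypernetwork parameters: a single linear map from the example signature
# to the flattened weights and bias of the downstream task classifier.
W_hyper = rng.normal(scale=0.1, size=(SIG_DIM, IN_DIM * N_CLASSES + N_CLASSES))

def hypernetwork(signature):
    """Generate per-example classifier parameters from a signature vector."""
    flat = signature @ W_hyper
    W = flat[: IN_DIM * N_CLASSES].reshape(IN_DIM, N_CLASSES)
    b = flat[IN_DIM * N_CLASSES:]
    return W, b

def classify(example, signature):
    """Apply the classifier whose weights were generated for this example."""
    W, b = hypernetwork(signature)
    return int(np.argmax(example @ W + b))

sig = rng.normal(size=SIG_DIM)  # stand-in for the encoder-derived signature
x = rng.normal(size=IN_DIM)     # stand-in for the input representation
print(classify(x, sig))         # a class index in {0, 1, 2}
```

Because the classifier's weights are a function of the signature, two examples from different (even unseen) domains are effectively classified by two different, signature-conditioned models.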