Optimal Transport Posterior Alignment for Cross-lingual Semantic Parsing
Cross-lingual semantic parsing transfers parsing capability from a
high-resource language (e.g., English) to low-resource languages with scarce
training data. Previous work has primarily considered silver-standard data
augmentation or zero-shot methods; exploiting few-shot gold data, however,
remains comparatively unexplored. We propose a new approach to cross-lingual semantic
parsing by explicitly minimizing cross-lingual divergence between probabilistic
latent variables using Optimal Transport. We demonstrate how this direct
guidance improves parsing from natural languages using fewer examples and less
training. We evaluate our method on two datasets, MTOP and MultiATIS++SQL,
establishing state-of-the-art results under a few-shot cross-lingual regime.
Ablation studies further reveal that our method improves performance even
without parallel input translations. In addition, we show that our model better
captures cross-lingual structure in the latent space to improve semantic
representation similarity.

Comment: Accepted to TACL 2023. Pre-MIT Press publication. 17 pages, 3
figures, 6 tables.
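The posterior-alignment idea above rests on computing an Optimal Transport distance between latent representations from the two languages. A minimal sketch of entropic-regularized OT solved with Sinkhorn iterations (this is a generic illustration, not the paper's actual model; the toy latent vectors and all variable names are made up for the example):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropic-regularized optimal transport via Sinkhorn iterations.

    a, b : marginal weights over the two point sets (each sums to 1)
    C    : pairwise cost matrix between the sets
    Returns the transport plan P and the transport cost <P, C>.
    """
    K = np.exp(-C / eps)              # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):          # alternating marginal scaling
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan
    return P, float((P * C).sum())

# Toy "source" and "target" latent encodings (illustrative only):
# the target is a slightly perturbed copy, mimicking near-aligned spaces.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 8))
tgt = src + 0.05 * rng.normal(size=(4, 8))

C = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=-1) ** 2
a = np.full(4, 0.25)                  # uniform marginals
b = np.full(4, 0.25)
P, cost = sinkhorn(a, b, C)
print(round(cost, 4))                 # small cost: the spaces are nearly aligned
```

Minimizing such a cost as a training objective is what pulls the cross-lingual latent variables together; the entropic regularizer `eps` trades off sharpness of the plan against numerical stability.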
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural
language processing (NLP). Yet, what `good generalisation' entails and how it
should be evaluated is not well understood, nor are there any common standards
to evaluate it. In this paper, we aim to lay the groundwork to improve both of
these issues. We present a taxonomy for characterising and understanding
generalisation research in NLP, we use that taxonomy to present a comprehensive
map of published generalisation studies, and we make recommendations for which
areas might deserve attention in the future. Our taxonomy is based on an
extensive literature review of generalisation research, and contains five axes
along which studies can differ: their main motivation, the type of
generalisation they aim to solve, the type of data shift they consider, the
source by which this data shift is obtained, and the locus of the shift within
the modelling pipeline. We use our taxonomy to classify over 400 previous
papers that test generalisation, for a total of more than 600 individual
experiments. Considering the results of this review, we present an in-depth
analysis of the current state of generalisation research in NLP, and make
recommendations for the future. Along with this paper, we release a webpage
where the results of our review can be dynamically explored, and which we
intend to update as new NLP generalisation studies are published. With this
work, we aim to make steps towards making state-of-the-art generalisation
testing the new status quo in NLP.

Comment: 35 pages of content + 53 pages of references.
Dialogue Agents 101: A Beginner's Guide to Critical Ingredients for Designing Effective Conversational Systems
Sharing ideas through communication with peers is the primary mode of human
interaction. Consequently, extensive research has been conducted in the area of
conversational AI, leading to an increase in the availability and diversity of
conversational tasks, datasets, and methods. However, with numerous tasks being
explored simultaneously, the current landscape of conversational AI becomes
fragmented. Therefore, designing a well-thought-out dialogue agent can pose
significant challenges for a practitioner. Towards highlighting the
critical ingredients needed for a practitioner to design a dialogue agent from
scratch, the current study provides a comprehensive overview of the primary
characteristics of a dialogue agent, the supporting tasks, their corresponding
open-domain datasets, and the methods used to benchmark these datasets. We
observe that different methods have been used to tackle distinct dialogue
tasks. However, building separate models for each task is costly and does not
leverage the correlation among the several tasks of a dialogue agent. As a
result, recent trends suggest a shift towards building unified foundation
models. To this end, we propose UNIT, a UNified dIalogue dataseT constructed
from conversations in existing datasets for different dialogue tasks, capturing
the nuances of each. We also examine the evaluation strategies used to
measure the performance of dialogue agents and highlight the scope for future
research in the area of conversational AI.

Comment: 21 pages, 3 figures, 3 tables.
Japanese word prediction
This report describes the implementation of a Japanese word prediction engine written by the author. As no such software appears to exist for Japanese at the time of writing, it could prove useful in Japanese augmentative and alternative communication (AAC) as a tool for improving typing speed and reducing the number of keystrokes needed to produce text. Word prediction, in contrast to the word completion software commonly found in mobile phones, word-processor intellisense engines, and the like, is a technique for suggesting a follow-up word after a word has just been completed. This is usually done by presenting the user with a list of the most probable next words, sorted by commonality (general and user-specific frequency). Combined with good word completion software and a responsive user interface, word prediction is one of the most powerful assistive tools available to movement-impaired users today.

The main goals of the thesis are to:
1. Answer as many of the questions raised by the language differences as possible.
2. Investigate further avenues of research in the subject.
3. Make a functional word prediction prototype for Japanese.

All project code is in the public domain and is currently hosted at: http://www.mediafire.com/?rrhqtqsgp6ei6m
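The frequency-sorted suggestion scheme described above can be sketched with a simple bigram model. This is an illustrative sketch, not the thesis implementation: it tokenizes on whitespace, whereas real Japanese text is written without spaces and would first need morphological segmentation (e.g. with a tokenizer such as MeCab), which is exactly one of the language-difference questions the report raises.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count, for each word, which words follow it and how often."""
    follows = defaultdict(Counter)
    tokens = corpus.split()           # placeholder: Japanese needs a morphological analyzer
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word, k=3):
    """Suggest the k most frequent followers of `word`, sorted by frequency."""
    return [w for w, _ in model[word].most_common(k)]

# Toy corpus standing in for general + user-specific history.
corpus = "i want to eat i want to sleep i want coffee"
model = build_bigram_model(corpus)
print(predict_next(model, "want"))    # → ['to', 'coffee']
```

A production system would blend general-corpus counts with per-user counts and fall back to unigram frequency when a word has never been seen as a left context.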
- …