Encoders Help You Disambiguate Word Senses in Neural Machine Translation
Neural machine translation (NMT) has achieved new state-of-the-art performance in translating ambiguous words. However, it is still unclear which component dominates the process of disambiguation. In this paper, we explore the ability of NMT encoders and decoders to disambiguate word senses by evaluating hidden states and investigating the distributions of self-attention. We train a classifier to predict whether a translation is correct given the representation of an ambiguous noun. We find that encoder hidden states significantly outperform word embeddings, which indicates that encoders adequately encode relevant disambiguation information into hidden states. In contrast to encoders, the effect of the decoder differs across model architectures. Moreover, the attention weights and attention entropy show that self-attention can detect ambiguous nouns and distribute more attention to the context.
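The probing setup described above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the paper probes real NMT encoder states, whereas here synthetic vectors simulate the contrast between static embeddings (one vector per word type, blind to sense) and contextualised hidden states (which carry sense information).

```python
# Illustrative probe: linear classifier on "embeddings" vs "hidden states".
# All vectors here are synthetic stand-ins for real NMT representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 400, 16

# Static "word embeddings": one vector per ambiguous word type, so the two
# senses of the same word look identical to the probe (plus tiny noise).
word_vec = rng.normal(size=d)
emb = np.tile(word_vec, (n, 1)) + rng.normal(scale=0.01, size=(n, d))

# "Encoder hidden states": contextualised, so the sense selected by the
# context shifts the representation in a consistent direction.
sense = rng.integers(0, 2, size=n)  # which sense the context selects
hid = emb + np.outer(sense * 2 - 1, rng.normal(size=d))

def probe_accuracy(X, y):
    """Train a linear probe and report held-out accuracy."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    return clf.score(Xte, yte)

acc_emb = probe_accuracy(emb, sense)  # near chance: no sense signal
acc_hid = probe_accuracy(hid, sense)  # high: sense is linearly decodable
```

A large gap between the two probe accuracies is the kind of evidence the paper reads as "encoders encode disambiguation information into hidden states".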
Code-Switching with Word Senses for Pretraining in Neural Machine Translation
Lexical ambiguity is a significant and pervasive challenge in Neural Machine Translation (NMT), with many state-of-the-art (SOTA) NMT systems struggling to handle polysemous words (Campolungo et al., 2022). The same holds for the NMT pretraining paradigm of denoising synthetic "code-switched" text (Pan et al., 2021; Iyer et al., 2023), where word senses are ignored in the noising stage, leading to harmful sense biases in the pretraining data that are subsequently inherited by the resulting models. In this work, we introduce Word Sense Pretraining for Neural Machine Translation (WSP-NMT), an end-to-end approach for pretraining multilingual NMT models leveraging word sense-specific information from Knowledge Bases. Our experiments show significant improvements in overall translation quality. Then, we show the robustness of our approach to scale to various challenging data and resource-scarce scenarios and, finally, report fine-grained accuracy improvements on the DiBiMT disambiguation benchmark. Our studies yield interesting and novel insights into the merits and challenges of integrating word sense information and structured knowledge in multilingual pretraining for NMT.
Comment: EMNLP (Findings) 2023 Long Paper
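The contrast between sense-blind and sense-aware code-switching can be sketched as below. The toy sense inventory and the `toy_wsd` function are invented placeholders; WSP-NMT itself drives disambiguation from knowledge bases rather than a hand-written rule.

```python
# Minimal sketch of sense-aware code-switching for pretraining data, in the
# spirit of WSP-NMT. The inventory and WSD step are toy placeholders.
import random

# Hypothetical inventory: word -> sense -> translation.
SENSE_INVENTORY = {
    "bank": {"finance": "banco", "river": "orilla"},
    "spring": {"season": "primavera", "coil": "resorte"},
}

def naive_switch(tokens, ratio=0.5, seed=0):
    """Sense-blind baseline: pick an arbitrary translation for each word,
    which is the noising behaviour that introduces sense biases."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        senses = SENSE_INVENTORY.get(tok)
        if senses and rng.random() < ratio:
            out.append(rng.choice(sorted(senses.values())))
        else:
            out.append(tok)
    return out

def sense_aware_switch(tokens, disambiguate, ratio=0.5, seed=0):
    """Sense-aware variant: a WSD function chooses the sense first, so the
    substituted translation matches the context."""
    rng = random.Random(seed)
    out = []
    for i, tok in enumerate(tokens):
        senses = SENSE_INVENTORY.get(tok)
        if senses and rng.random() < ratio:
            out.append(senses[disambiguate(tokens, i)])
        else:
            out.append(tok)
    return out

def toy_wsd(tokens, i):
    # Toy rule: "bank" near "money" is the finance sense.
    return "finance" if "money" in tokens else "river"

sent = ["deposit", "money", "in", "the", "bank"]
switched = sense_aware_switch(sent, toy_wsd, ratio=1.0)
# "bank" is replaced by "banco" (finance sense), never "orilla"
```

The design point is that the choice of substitute translation is conditioned on a sense prediction instead of being arbitrary, so the pretraining data no longer encodes a random or frequency-driven sense bias.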
Towards Effective Disambiguation for Machine Translation with Large Language Models
Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate "ambiguous sentences" - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.
Comment: WMT 2023
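The in-context learning idea above can be sketched as a prompt builder: a few-shot prompt assembled from example translations chosen to exercise the relevant sense of the ambiguous word. The template and demonstration pairs here are illustrative assumptions, not the paper's exact format.

```python
# Sketch of few-shot prompt construction for disambiguation-aware
# translation. Template and examples are invented for illustration.
def build_prompt(examples, source, src_lang="English", tgt_lang="Spanish"):
    """Assemble a few-shot translation prompt from (source, target) pairs
    chosen to pin down the intended sense of the ambiguous word."""
    lines = [f"Translate from {src_lang} to {tgt_lang}."]
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}\n{tgt_lang}: {tgt}")
    lines.append(f"{src_lang}: {source}\n{tgt_lang}:")
    return "\n\n".join(lines)

# Demonstrations fixing the financial sense of "bank".
demos = [
    ("She works at a bank downtown.", "Ella trabaja en un banco en el centro."),
    ("The bank approved the loan.", "El banco aprobó el préstamo."),
]
prompt = build_prompt(demos, "He opened an account at the bank.")
```

The prompt is then sent to the LLM; the demonstrations bias decoding toward the sense they exemplify, which is what distinguishes this setup from zero-shot translation.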
Understanding Pure Character-Based Neural Machine Translation: The Case of Translating Finnish into English
Recent work has shown that deeper character-based neural machine translation (NMT) models can outperform subword-based models. However, it is still unclear what makes deeper character-based models successful. In this paper, we conduct an investigation into pure character-based models in the case of translating Finnish into English, including exploring their ability to learn word senses and morphological inflections, and the attention mechanism. We demonstrate that word-level information is distributed over the entire character sequence rather than over a single character, and that characters at different positions play different roles in learning linguistic knowledge. In addition, character-based models need more layers to encode word senses, which explains why only deeper models outperform subword-based models. The attention distribution pattern shows that separators attract a lot of attention, and we explore sparse word-level attention to force character hidden states to capture the full word-level information. Experimental results show that word-level attention with a single head results in a drop of 1.2 BLEU points.
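The attention-concentration analyses in this abstract and the encoder study above both rest on a simple diagnostic: the Shannon entropy of an attention distribution, which is low when attention concentrates on a few positions (e.g. separators) and high when it is spread over the context. A minimal sketch, with invented weights:

```python
# Entropy of an attention distribution as a concentration diagnostic.
# The weight vectors below are illustrative, not model outputs.
import numpy as np

def attention_entropy(weights):
    """Shannon entropy (in nats) of one attention distribution."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise defensively
    return float(-(w * np.log(np.clip(w, 1e-12, None))).sum())

peaked = attention_entropy([0.94, 0.02, 0.02, 0.02])   # concentrated
uniform = attention_entropy([0.25, 0.25, 0.25, 0.25])  # spread out
# uniform entropy equals log(4); the peaked distribution scores much lower
```

Averaging this quantity over heads and positions gives a single number for how diffusely a model attends, which is how claims like "separators attract a lot of attention" can be quantified.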
Word Sense Disambiguation for clinical abbreviations
Abbreviations are extensively used in patients' electronic health records (EHR) as well as in medical documentation, reaching 30-50% of the words in clinical narrative. There are more than 197,000 unique medical abbreviations found in clinical text, and their meanings vary depending on the context in which they are used. Since data in electronic health records can be shared across health information systems (hospitals, primary care centers, etc.) as well as with others such as insurance companies' information systems, it is essential to determine the correct meaning of abbreviations to avoid misunderstandings.
Clinical abbreviations have the specific characteristic that they do not follow any standard rules for their creation. This makes it complicated to find these abbreviations and their corresponding meanings. Furthermore, privacy concerns add difficulty to working with clinical data, since access to such data is essential for developing and testing algorithms.
Word sense disambiguation (WSD) is an essential task in natural language processing (NLP) applications such as information extraction, chatbots and summarization systems, among others. WSD aims to identify the correct meaning of an ambiguous word, i.e. one that has more than one meaning. Disambiguating clinical abbreviations is a type of lexical sample WSD task. Previous research adopted supervised, unsupervised and knowledge-based (KB) approaches to disambiguate clinical abbreviations. This thesis aims to propose a classification model that, apart from disambiguating well-known abbreviations, also disambiguates rare and unseen abbreviations using the most recent deep neural network architectures for language modeling.
Several resources and disambiguation models for clinical abbreviation disambiguation were surveyed, and the different classification approaches used to disambiguate clinical abbreviations were investigated in this thesis. Considering that computers do not directly understand text, different data representations were implemented to capture the meaning of words. Since it is also necessary to measure the performance of algorithms, the evaluation measures used are discussed.
As solutions to clinical WSD, we explored static word embedding representations for 13 English clinical abbreviations of the UMN data set (from the University of Minnesota), testing traditional supervised machine learning algorithms separately for each abbreviation. Moreover, we fine-tuned a transformer-based pretrained model as a multi-class classifier for the whole data set (75 abbreviations of the UMN data set). The aim of implementing a single multi-class classifier is to predict rare and unseen abbreviations, which are common in clinical narrative. Additionally, other experiments were conducted for a different type of abbreviation (scientific abbreviations and acronyms) by defining a hybrid approach composed of supervised and knowledge-based approaches.
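The single multi-class formulation described above (one model mapping abbreviation-in-context to an expansion, instead of one classifier per abbreviation) can be sketched as follows. A TF-IDF plus logistic-regression pipeline stands in for the fine-tuned transformer used in the thesis, and the clinical snippets and labels are invented for illustration.

```python
# Sketch of one multi-class disambiguator over all abbreviations.
# TF-IDF + logistic regression is a simplified stand-in for the
# fine-tuned transformer; the training examples are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus: context sentences with the abbreviation marked.
train_texts = [
    "pt with elevated [RA] factor and joint pain",
    "[RA] positive, methotrexate started",
    "pressure in the [RA] of the heart was normal",
    "echo showed dilated [RA] and right ventricle",
]
train_labels = [
    "rheumatoid arthritis", "rheumatoid arthritis",
    "right atrium", "right atrium",
]

# One pipeline serves every abbreviation: the label space is the union
# of all expansions, so new contexts route to the right sense by wording.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

pred = model.predict(["[RA] flare, joint swelling noted"])[0]
```

Because every abbreviation shares one classifier over contextual features, the model can in principle generalise to expansions of abbreviations seen rarely, which is the motivation the thesis gives for the single multi-class design.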
Most previous works tend to build a separate classifier for each clinical abbreviation, leveraging different data resources to overcome the data acquisition bottleneck. However, those models were restricted to disambiguating terms that had been seen in the training data. Meanwhile, based on our results, transfer learning by fine-tuning a transformer-based model can predict rare and unseen abbreviations. A remaining challenge for future work is to improve the model to automate the disambiguation of clinical abbreviations in run-time systems by implementing self-supervised learning models.
Abbreviations are widely used in patients' electronic health records and in much medical documentation, making up 30-50% of the words used in clinical narrative. There are more than 197,000 unique abbreviations used in clinical texts; they are highly ambiguous terms, and the meaning of an abbreviation varies depending on the context in which it is used. Since data from electronic health records can be shared between services, hospitals, primary care centers and other organizations such as, for example, insurance companies, it is essential to determine the correct meaning of abbreviations, also in order to avoid adverse events related to patient safety. New clinical abbreviations appear constantly and have the specific characteristic of not following any standard for their creation. This makes it very difficult to have a resource with all abbreviations and all their meanings. Added to this is the difficulty of working with clinical data for privacy reasons, when having such data is essential in order to develop algorithms to process it.
Word sense disambiguation (WSD) is an essential task in natural language processing (NLP) applications such as information extraction, chatbots or summarization, among others. WSD aims to identify the correct meaning of an ambiguous word (one that has more than one meaning). This task has previously been addressed using supervised, unsupervised and knowledge-based approaches. This thesis aims to define a classification model that, in addition to disambiguating known abbreviations, also disambiguates less frequent abbreviations that have not previously appeared in the training sets, using the most recent deep neural network architectures related to language models.
Clinical abbreviation disambiguation employs various resources and disambiguation models. The different classification approaches used to disambiguate clinical abbreviations were investigated. Since a computer does not directly understand text, different text representations were implemented to capture the meaning of words. Since it is also necessary to measure the performance of any algorithm, the evaluation measures used are also described.
Most previous works have been based on building a separate classifier for each clinical abbreviation, thereby tending to leverage different data resources to overcome the data acquisition bottleneck. However, these models were limited to disambiguating the data on which the system had been trained.
Representations based on static word embeddings were also explored for 13 clinical abbreviations in the English UMN corpus (from the University of Minnesota) using traditional supervised machine learning classification algorithms (one classifier per abbreviation). A second experiment used a multi-class classifier over the whole set of 75 abbreviations of the UMN corpus, based on a pretrained Transformer model. The aim was to implement a multi-class classifier that also predicts rare and unseen abbreviations. An additional experiment was carried out on scientific acronyms in open-domain documents by applying a hybrid approach composed of supervised and knowledge-based approaches.
Thus, based on the results of this thesis, transfer learning by fine-tuning a pretrained language model could predict rare and unseen abbreviations without needing to train on them beforehand. A remaining challenge for future work is to improve the model to automate the disambiguation of clinical abbreviations at run time by implementing self-supervised learning models.
PhD Program in Computer Science and Technology, Universidad Carlos III de Madrid. President: Israel González Carrasco. Secretary: Leonardo Campillos Llanos. Vocal: Ana María García Serran