Cleansing Jewel: A Neural Spelling Correction Model Built On Google OCR-ed Tibetan Manuscripts
Scholars in the humanities rely heavily on ancient manuscripts to study
history, religion, and past socio-political structures. Many efforts have been
devoted to digitizing these precious manuscripts with Optical Character
Recognition (OCR) technology, but most manuscripts have been blemished over the
centuries, so an OCR program cannot be expected to recognize faded graphs or
handle stains on the pages. This work presents a neural spelling correction
model, built on Google OCR-ed Tibetan manuscripts, that auto-corrects noisy OCR
output. This paper is divided into four sections: dataset, model architecture,
training, and analysis. First, we feature-engineered our raw Tibetan e-text
corpus into two sets of structured data frames -- a set of paired toy data and
a set of paired real data. Then, we implemented a Confidence Score mechanism
into the Transformer architecture to perform spelling correction. According to
the Loss and the Character Error Rate, our Transformer + Confidence Score
architecture proves superior to the Transformer, LSTM-2-LSTM, and GRU-2-GRU
architectures. Finally, to examine the robustness of our model, we analyzed
erroneous tokens and visualized Attention and Self-Attention heatmaps in our
model.
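Since the comparison above is scored with Character Error Rate (CER), a small self-contained sketch of the metric may be helpful; this is a generic edit-distance implementation, not the paper's own evaluation code, and the example strings are made up.

```python
def character_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over characters, normalized by the reference length."""
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))  # DP row for the empty reference prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # delete a reference character
                        dp[j - 1] + 1,    # insert a hypothesis character
                        prev + (reference[i - 1] != hypothesis[j - 1]))  # substitute
            prev = cur
    return dp[n] / max(m, 1)

print(character_error_rate("spelling", "speling"))  # 0.125: one edit over 8 characters
```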
Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS
Existing sentence textual similarity (STS) benchmark datasets use only a single
number to summarize how closely a sentence encoder's judgments align with humans'.
However, it is unclear what kind of sentence pairs a sentence encoder (SE)
would consider similar. Moreover, existing SE benchmarks mainly consider
sentence pairs with low lexical overlap, so it is unclear how the SEs behave
when two sentences have high lexical overlap. We introduce a high-quality SE
diagnostic dataset, HEROS. HEROS is constructed by transforming an original
sentence into a new sentence based on certain rules to form a minimal pair, and
each minimal pair has high lexical overlap. The rules include
replacing a word with a synonym, an antonym, a typo, a random word, and
converting the original sentence into its negation. Different rules yield
different subsets of HEROS. By systematically comparing the performance of over
60 supervised and unsupervised SEs on HEROS, we reveal that most unsupervised
sentence encoders are insensitive to negation. We find that the datasets used
to train an SE are the main determinant of what kinds of sentence pairs it
considers similar. We also show that even if two SEs have similar performance
on STS benchmarks, they can have very different behavior on HEROS. Our result
reveals the blind spot of traditional STS benchmarks when evaluating SEs.
Comment: ACL 2023 repl4nlp (representation learning for NLP) workshop poster paper. Dataset at https://huggingface.co/datasets/dcml0714/Hero
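To make the rule-based construction concrete, here is a toy sketch of how such high-overlap minimal pairs could be generated; the WordNet lookup, the adjacent-character typo, and the single-"is" negation rule are illustrative assumptions, not the dataset's actual pipeline.

```python
import random
from nltk.corpus import wordnet  # assumes nltk with the WordNet data installed

def synonym_pair(sentence: str, word: str):
    """Replace `word` with a WordNet synonym to form a high-overlap minimal pair."""
    candidates = {l.name().replace("_", " ")
                  for s in wordnet.synsets(word) for l in s.lemmas()} - {word}
    return sentence.replace(word, random.choice(sorted(candidates))) if candidates else None

def typo_pair(sentence: str, word: str):
    """Swap two adjacent characters in `word` to simulate a typo."""
    i = random.randrange(1, len(word))
    return sentence.replace(word, word[:i - 1] + word[i] + word[i - 1] + word[i + 1:])

def negation_pair(sentence: str):
    """Negate the sentence by inserting 'not' after the first 'is' (toy rule)."""
    return sentence.replace(" is ", " is not ", 1)

original = "The movie is good"
print(synonym_pair(original, "good"), typo_pair(original, "good"), negation_pair(original))
```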
DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models
Despite their impressive capabilities, large language models (LLMs) are prone
to hallucinations, i.e., generating content that deviates from facts seen
during pretraining. We propose a simple decoding strategy for reducing
hallucinations with pretrained LLMs that requires neither conditioning on
retrieved external knowledge nor additional fine-tuning. Our approach obtains
the next-token distribution by contrasting the differences in logits obtained
from projecting the later layers versus earlier layers to the vocabulary space,
exploiting the fact that factual knowledge in LLMs has generally been shown
to be localized to particular transformer layers. We find that this Decoding by
Contrasting Layers (DoLa) approach is able to better surface factual knowledge
and reduce the generation of incorrect facts. DoLa consistently improves
truthfulness across multiple-choice tasks and open-ended generation tasks, for
example improving the performance of LLaMA family models on TruthfulQA by
12-17% absolute points, demonstrating its potential in making LLMs reliably
generate truthful facts.
Comment: ICLR 2024 main conference paper. The source code is available at https://github.com/voidism/DoL
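The official implementation is linked above; the following is only a minimal sketch of the layer-contrasting idea using Hugging Face Transformers. The model name, the fixed choice of "premature" layer, and the plain greedy pick (DoLa additionally uses dynamic layer selection and an adaptive plausibility constraint) are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "huggyllama/llama-7b"  # assumption: any LLaMA-style causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def contrastive_next_token_logits(input_ids, early_layer: int = 16):
    """Contrast the final layer's vocabulary logits against an earlier ('premature') layer."""
    out = model(input_ids, output_hidden_states=True)
    lm_head = model.get_output_embeddings()
    norm = model.model.norm  # final RMSNorm, so both layers go through the same head
    final_logits = lm_head(out.hidden_states[-1][:, -1, :])          # already normed
    early_logits = lm_head(norm(out.hidden_states[early_layer][:, -1, :]))
    # Tokens whose probability grows between the early and final layer are favored.
    return torch.log_softmax(final_logits, -1) - torch.log_softmax(early_logits, -1)

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
print(tokenizer.decode(contrastive_next_token_logits(ids).argmax(-1)))
```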
Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
How can we perform computations over natural language representations to
solve tasks that require symbolic and numeric reasoning? We propose natural
language embedded programs (NLEP) as a unifying framework for addressing
math/symbolic reasoning, natural language understanding, and instruction
following tasks. Our approach prompts a language model to generate full Python
programs that define functions over data structures which contain natural
language representations of structured knowledge. A Python interpreter then
executes the generated code and prints the output. Despite using a task-general
prompt, we find that this approach can improve upon strong baselines across a
range of different tasks including math and symbolic reasoning, text
classification, question answering, and instruction following. We further find
the generated programs are often interpretable and enable post-hoc verification
of the intermediate reasoning steps.
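As a rough illustration of the generate-then-execute loop described above, the sketch below prompts a chat model for a full Python program and runs it in a subprocess. The prompt wording, the OpenAI client, the model name, and the absence of sandboxing are all assumptions; the actual NLEP prompt is task-general and more carefully designed.

```python
import subprocess
import sys
import tempfile
from openai import OpenAI  # assumption: any chat-completion client would do

client = OpenAI()

def solve_with_program(task: str) -> str:
    """Ask the LM for a complete Python program, execute it, and return its stdout."""
    prompt = (
        "Write a complete Python program that solves the task below. "
        "Put any needed knowledge into Python data structures, define functions "
        "over them, and print only the final answer.\n\nTask: " + task
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    fence = "```"
    if fence in reply:  # crude extraction of the fenced code block, if any
        reply = reply.split(fence)[1].removeprefix("python").lstrip("\n")
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(reply)
    run = subprocess.run([sys.executable, f.name], capture_output=True, text=True, timeout=60)
    return run.stdout.strip()

print(solve_with_program("How many prime numbers are there between 10 and 50?"))
```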
C2KD: Cross-Lingual Cross-Modal Knowledge Distillation for Multilingual Text-Video Retrieval
Multilingual text-video retrieval methods have improved significantly in
recent years, but the performance for other languages lags behind English. We
propose a Cross-Lingual Cross-Modal Knowledge Distillation method to improve
multilingual text-video retrieval. Inspired by the fact that English text-video
retrieval outperforms other languages, we train a student model using input
text in different languages to match the cross-modal predictions from teacher
models using input text in English. We propose a cross-entropy-based objective
which forces the distribution over the student's text-video similarity scores
to be similar to that of the teacher models. We introduce a new multilingual
video dataset, Multi-YouCook2, by translating the English captions in the
YouCook2 video dataset to 8 other languages. Our method improves multilingual
text-video retrieval performance on Multi-YouCook2 and several other datasets
such as Multi-MSRVTT and VATEX. We also analyze the effectiveness of different
multilingual text models as teachers.
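The distillation objective described above can be sketched as a cross-entropy between the teacher's and the student's text-to-video similarity distributions over a batch. The embedding shapes, cosine normalization, and temperature below are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cross_modal_distillation_loss(student_text, teacher_text, video, tau: float = 0.05):
    """Cross-entropy between teacher and student text->video similarity distributions.

    student_text: (B, D) embeddings of non-English captions from the student.
    teacher_text: (B, D) embeddings of the English captions from the frozen teacher.
    video:        (B, D) video embeddings shared by both sides.
    """
    s = F.normalize(student_text, dim=-1) @ F.normalize(video, dim=-1).t() / tau
    t = F.normalize(teacher_text, dim=-1) @ F.normalize(video, dim=-1).t() / tau
    teacher_dist = F.softmax(t, dim=-1).detach()       # teacher provides soft targets
    student_logprob = F.log_softmax(s, dim=-1)
    return -(teacher_dist * student_logprob).sum(dim=-1).mean()

# Toy usage with random embeddings standing in for real encoder outputs.
loss = cross_modal_distillation_loss(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512))
print(loss.item())
```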
SemStamp: A Semantic Watermark with Paraphrastic Robustness for Text Generation
Existing watermarking algorithms are vulnerable to paraphrase attacks because
of their token-level design. To address this issue, we propose SemStamp, a
robust sentence-level semantic watermarking algorithm based on
locality-sensitive hashing (LSH), which partitions the semantic space of
sentences. The algorithm encodes and LSH-hashes a candidate sentence generated
by an LLM, and conducts sentence-level rejection sampling until the sampled
sentence falls in a watermarked partition of the semantic embedding space. A
margin-based constraint is used to enhance its robustness. To show the
advantages of our algorithm, we propose a "bigram" paraphrase attack using the
paraphrase that has the fewest bigram overlaps with the original sentence. This
attack is shown to be effective against the existing token-level watermarking
method. Experimental results show that our novel semantic watermarking
algorithm is not only more robust than the previous state-of-the-art method
under both common and bigram paraphrase attacks, but is also better at
preserving generation quality.
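A toy sketch of the LSH-partition-and-reject idea follows. The embedding dimension, the number of hyperplanes, the fixed pseudo-random choice of "watermarked" partitions, and the hypothetical `generate_candidate` and `embed` callables are simplifying assumptions; in the actual algorithm the valid partitions are derived from the preceding sentence's signature and a margin-based constraint is enforced.

```python
import numpy as np

rng = np.random.default_rng(seed=42)          # seed shared by generator and detector
hyperplanes = rng.standard_normal((8, 384))   # 8 LSH hyperplanes over 384-d sentence embeddings

def lsh_signature(embedding: np.ndarray) -> int:
    """Map a sentence embedding to one of 2**8 semantic partitions."""
    bits = (hyperplanes @ embedding > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def is_watermarked(signature: int, valid_fraction: float = 0.5) -> bool:
    """Toy rule marking a fixed fraction of partitions as valid."""
    return (signature * 2654435761) % 1000 < valid_fraction * 1000

def generate_watermarked_sentence(generate_candidate, embed, max_tries: int = 32) -> str:
    """Rejection-sample candidate sentences until one lands in a watermarked partition."""
    sentence = generate_candidate()
    for _ in range(max_tries):
        if is_watermarked(lsh_signature(embed(sentence))):
            return sentence
        sentence = generate_candidate()
    return sentence  # give up after max_tries and keep the last candidate
```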
