Fine-Tuning Sign Language Translation Systems Through Deep Reinforcement Learning

Abstract

Sign language is an important communication tool for a vast majority of deaf and hard-of-hearing (DHH) people. Data collected by the World Health Organization indicate that 466 million people currently live with hearing loss, a number that could rise to 630 million by 2030 and over 930 million by 2050 \cite{DHH1}. Millions of sign language users around the world rely on this skill on a daily basis. Bridging the gap between those who communicate solely in a spoken language and the DHH community is an ever-growing and omnipresent need. Unfortunately, within the field of natural language processing, sign language recognition and translation lag far behind their spoken-language counterparts. This research seeks to leverage Deep Reinforcement Learning (DRL) to make a significant improvement in the task of Sign Language Translation (SLT), translating German Sign Language videos into German text sentences. To do this, three major experiments are conducted. The first experiment examines the effects of Self-Critical Sequence Training (SCST) when fine-tuning a simple Recurrent Neural Network (RNN) Long Short-Term Memory (LSTM) based sequence-to-sequence model. The second experiment applies the same SCST algorithm to a more powerful transformer-based model. The final experiment uses the Proximal Policy Optimization (PPO) algorithm alongside a novel fine-tuning process on the same transformer model. By estimating and normalizing the reward signal while optimizing for the model's test-time greedy inference procedure, we aim to establish a new or comparable state-of-the-art (SOTA) result on the RWTH-PHOENIX-Weather 2014T German sign language dataset.
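To make the fine-tuning objective concrete, the following is a minimal sketch of the SCST loss described above, assuming a PyTorch sequence-to-sequence translation model. The function name `scst_loss` and the tensor names are illustrative placeholders, not identifiers from this work; the reward would typically be a sentence-level metric such as BLEU computed on the sampled and greedy decodes.

```python
# Minimal SCST (self-critical REINFORCE) loss sketch -- hypothetical names,
# not the thesis implementation.
import torch

def scst_loss(sample_log_probs: torch.Tensor,
              sample_reward: torch.Tensor,
              greedy_reward: torch.Tensor,
              mask: torch.Tensor) -> torch.Tensor:
    """REINFORCE with a self-critical baseline.

    sample_log_probs: (batch, seq_len) log-probabilities of tokens drawn by
                      multinomial sampling from the decoder.
    sample_reward:    (batch,) sentence-level reward (e.g. BLEU) of the
                      sampled translations.
    greedy_reward:    (batch,) reward of the greedy (test-time) decode,
                      which serves as the baseline.
    mask:             (batch, seq_len) 1 for real tokens, 0 for padding.
    """
    # Advantage: how much better the sampled sequence did than greedy decoding.
    advantage = (sample_reward - greedy_reward).unsqueeze(1)  # (batch, 1)
    # Policy-gradient loss: increase log-probs of samples that beat the baseline.
    loss = -(advantage * sample_log_probs * mask).sum() / mask.sum()
    return loss
```

Because the greedy decode acts as the baseline, gradients push the model to produce samples that outscore its own test-time inference, which is the same property the PPO-based experiment targets with an estimated, normalized reward signal.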
