The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system is affected by the performance of Machine Translation (MT) based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system. It was able to answer 42.6% of the questions correctly in a monolingual run. These questions were then translated manually from English into Arabic and back into English using an MT system, and then re-applied to the QA system. The system was able to answer 10.2% of the translated questions. An analysis of which types of translation error affected which questions was conducted, concluding that factoid-type questions are less prone to translation error than others.