In recent years, artificial intelligence has played an important role in
medicine and disease diagnosis, with many notable applications, one of
which is Medical Visual Question Answering (MedVQA). By combining computer
vision and natural language processing, MedVQA systems can assist experts in
extracting relevant information from medical images based on a given question
and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023
challenge organized a visual question answering task in the gastrointestinal
domain, covering gastroscopy and colonoscopy images. Our team approached
Task 1 of the challenge by proposing a multimodal learning method with image
enhancement to improve VQA performance on gastrointestinal images. The
multimodal architecture combines a BERT encoder with different pre-trained
vision models based on convolutional neural network (CNN) and Transformer
architectures to extract features from the question and the endoscopy image. The
results of this study highlight the dominance of Transformer-based vision
models over CNNs and demonstrate the effectiveness of the image enhancement
process, with six out of the eight vision models achieving a better F1-Score.
Our best method, which takes advantage of BERT+BEiT fusion and image
enhancement, achieves up to 87.25% accuracy and 91.85% F1-Score on the
development test set, while also producing a good result on the private test
set with an accuracy of 82.01%.

Comment: ImageCLEF2023 published version: https://ceur-ws.org/Vol-3497/paper-129.pd
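To make the fusion architecture concrete, below is a minimal PyTorch sketch of a BERT+BEiT late-fusion VQA classifier in the spirit of the method described above. The checkpoint names (bert-base-uncased, microsoft/beit-base-patch16-224), the concatenation-based fusion, and the MLP classification head are illustrative assumptions, not the authors' exact configuration.

```python
# A minimal sketch of a BERT+BEiT fusion model for VQA classification.
# Fusion strategy (concatenation + MLP head) and checkpoints are assumptions
# for illustration; the paper's exact setup may differ.
import torch
import torch.nn as nn
from transformers import BertModel, BeitModel

class BertBeitVQA(nn.Module):
    def __init__(self, num_answers: int, hidden: int = 512):
        super().__init__()
        # Pre-trained text and vision encoders.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.image_encoder = BeitModel.from_pretrained("microsoft/beit-base-patch16-224")
        fused_dim = (self.text_encoder.config.hidden_size
                     + self.image_encoder.config.hidden_size)
        # Classification head over the candidate answer set.
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_answers),
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        # Question features: BERT's pooled [CLS] representation.
        q = self.text_encoder(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        # Image features: BEiT's pooled patch representation.
        v = self.image_encoder(pixel_values=pixel_values).pooler_output
        # Late fusion by concatenation, then classify over answers.
        return self.classifier(torch.cat([q, v], dim=-1))
```

In practice, `input_ids` and `attention_mask` would come from a `BertTokenizer`, `pixel_values` from a `BeitImageProcessor` applied to the (optionally enhanced) endoscopy image, and the model would be trained as a classifier over the challenge's candidate answers.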