Enhancing Multimodal Patterns in Neuroimaging by Siamese Neural Networks with Self-Attention Mechanism.

Abstract

The combination of different sources of information is currently one of the most relevant aspects in the diagnostic process of several diseases. In the field of neurological disorders, different imaging modalities providing structural and functional information are frequently available. These modalities are usually analyzed separately, although a joint analysis of the features extracted from both sources can improve the classification performance of computer-aided diagnosis (CAD) tools. Previous studies have computed independent models from each individual modality and combined them in a subsequent stage, which is not an optimal solution. In this work, we propose a method based on the principles of siamese neural networks to fuse information from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). This framework quantifies the similarities between the two modalities and relates them to the diagnostic label during the training process. The resulting latent space at the output of this network is then fed into an attention module in order to evaluate the relevance of each brain region and modality at different stages of the development of Alzheimer's disease. The excellent results obtained and the high flexibility of the proposed method allow fusing more than two modalities, leading to a scalable methodology that can be used in a wide range of contexts.

This work was supported by projects PGC2018-098813-B-C32 and RTI2018-098913-B100 (Spanish "Ministerio de Ciencia, Innovación y Universidades"), UMA20-FEDERJA-086, A-TIC-080-UGR18 and P20 00525 (Consejería de Economía y Conocimiento, Junta de Andalucía), and by European Regional Development Funds (ERDF); and by the Spanish "Ministerio de Universidades" through a Margarita Salas grant to J. E. Arco.
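The core idea described above — a shared ("siamese") projection that embeds both modalities into one latent space, a cross-modal similarity measure, and attention weights over the fused latent representation — can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the dimensions (116 regions, 32-d latent space), the tanh projection, and the toy per-dimension attention are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 116 brain regions per modality, 32-d latent space.
n_regions, latent_dim = 116, 32

# Shared ("siamese") projection: the SAME weights embed both modalities,
# which is what ties the two branches of a siamese network together.
W = rng.standard_normal((n_regions, latent_dim)) / np.sqrt(n_regions)

def embed(x):
    """Project one modality's regional features into the shared latent space."""
    return np.tanh(x @ W)

def cosine_similarity(a, b):
    """Cross-modal similarity between two latent vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def attention_weights(z):
    """Toy attention: softmax over per-dimension salience scores,
    standing in for the paper's self-attention module."""
    scores = z * z
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Simulated subject: random MRI and PET regional features.
mri = rng.standard_normal(n_regions)
pet = rng.standard_normal(n_regions)

z_mri, z_pet = embed(mri), embed(pet)
sim = cosine_similarity(z_mri, z_pet)      # latent cross-modal similarity

# Attention over the fused latent vector highlights the most relevant
# dimensions; in the paper these weights are related to regions/modalities.
fused = np.concatenate([z_mri, z_pet])
attn = attention_weights(fused)
print(f"cross-modal similarity: {sim:.3f}")
```

In a full model, `sim` would enter a contrastive or label-supervised loss during training, and the attention weights would be inspected post hoc to rank brain regions and modalities by relevance.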
