This paper focuses on the detection of Parkinson's disease based on the
analysis of a patient's gait. The growing popularity and success of Transformer
networks in natural language processing and image recognition motivated us to
develop a novel method for this problem based on automatic feature
extraction via Transformers. Transformers are not yet widely applied to 1D
signals, but we show in this paper that they are effective at extracting
relevant features from them. Since Transformers require a large amount of memory, we
decoupled temporal and spatial information to make the model smaller. Our
architecture uses temporal Transformers, dimension reduction layers to reduce
the dimensionality of the data, a spatial Transformer, two fully connected layers,
and an output layer for the final prediction. Our model outperforms the current
state-of-the-art algorithm, reaching 95.2\% accuracy in distinguishing
Parkinsonian patients from healthy subjects on the PhysioNet dataset. A key
finding of this work is that Transformers allow for greater stability in the
results. The source code and pre-trained models are available at
https://github.com/DucMinhDimitriNguyen/Transformers-for-1D-signals-in-Parkinson-s-disease-detection-from-gait.git

Comment: International Conference on Pattern Recognition (ICPR 2022)
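As a rough illustration of the decoupled design described above, the following is a minimal PyTorch sketch, not the authors' released implementation: per-sensor temporal Transformer encoders, a dimension reduction layer, a spatial Transformer across sensors, then two fully connected layers and an output layer. All layer sizes, head counts, and the pooling choice are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class GaitTransformer(nn.Module):
    """Sketch of a decoupled temporal/spatial Transformer classifier.

    Hypothetical setup: n_sensors gait force sensors, each producing a
    1D time series of length seq_len (sizes are illustrative, not the paper's).
    """

    def __init__(self, n_sensors=18, seq_len=100, d_model=32, d_reduced=8):
        super().__init__()
        # Embed each scalar time step so the Transformer has a model dimension.
        self.embed = nn.Linear(1, d_model)
        # Temporal Transformer: attends over the time steps of each sensor signal.
        temporal_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(temporal_layer, num_layers=2)
        # Dimension reduction layer: compresses each sensor's temporal summary.
        self.reduce = nn.Linear(d_model, d_reduced)
        # Spatial Transformer: attends across the sensors.
        spatial_layer = nn.TransformerEncoderLayer(
            d_model=d_reduced, nhead=2, batch_first=True)
        self.spatial = nn.TransformerEncoder(spatial_layer, num_layers=1)
        # Two fully connected layers and a binary output layer (one logit).
        self.fc = nn.Sequential(
            nn.Linear(n_sensors * d_reduced, 64), nn.ReLU(),
            nn.Linear(64, 16), nn.ReLU(),
            nn.Linear(16, 1))

    def forward(self, x):
        # x: (batch, n_sensors, seq_len)
        b, s, t = x.shape
        h = self.embed(x.reshape(b * s, t, 1))   # (b*s, t, d_model)
        h = self.temporal(h).mean(dim=1)         # pool over time steps
        h = self.reduce(h).reshape(b, s, -1)     # (b, s, d_reduced)
        h = self.spatial(h)                      # attend across sensors
        return self.fc(h.reshape(b, -1))         # (b, 1) logit

model = GaitTransformer()
logits = model(torch.randn(2, 18, 100))
print(logits.shape)  # torch.Size([2, 1])
```

Processing each sensor's series independently through the temporal encoder, then attending over the much shorter sensor axis, keeps the attention matrices small, which is the memory motivation the abstract gives for decoupling temporal and spatial information.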