Deep learning has revolutionized computer vision by leveraging the increased
availability of big data and the power of parallel computational units such as
graphics processing units. The vast majority of deep learning research is
conducted using images as training data; however, the biomedical domain is rich
in physiological signals that are used for diagnosis and prediction problems.
How to best utilize such signals to train deep neural networks remains an open
research question.
In this paper we define Signal2Image (S2I) modules as trainable or
non-trainable prefix modules that convert signals, such as
Electroencephalography (EEG), to image-like representations, making them
suitable for training image-based deep neural networks, which we define as
`base models'. We compare the accuracy and time performance of four S2Is
(`signal as image', spectrogram, and one- and two-layer Convolutional Neural
Networks (CNNs)) combined with a set of `base models' (LeNet, AlexNet, VGGnet,
ResNet, DenseNet), along with the depth-wise and 1D variations of the latter.
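To make the S2I concept concrete, the following is a minimal sketch of a
trainable one-layer CNN S2I in PyTorch. It is an illustrative assumption, not
the exact implementation from the paper: the kernel size, channel count, and
signal length are hypothetical choices.

    import torch
    import torch.nn as nn

    class OneLayerCnnS2I(nn.Module):
        # Hypothetical one-layer CNN S2I: a single 1D convolution whose
        # stacked feature maps are treated as a one-channel 2D image.
        def __init__(self, out_channels=64):
            super().__init__()
            self.conv = nn.Conv1d(1, out_channels, kernel_size=3, padding=1)

        def forward(self, x):
            # x: (batch, 1, signal_length)
            y = torch.relu(self.conv(x))  # (batch, out_channels, signal_length)
            # Treat the channel axis as image height for a 2D `base model'.
            return y.unsqueeze(1)         # (batch, 1, out_channels, signal_length)

    s2i = OneLayerCnnS2I()
    eeg = torch.randn(8, 1, 178)  # batch of EEG epochs; length 178 is an assumption
    image_like = s2i(eeg)         # torch.Size([8, 1, 64, 178])

The image-like output can then be fed to a 2D `base model'; in practice the
channel count and spatial size would need to be adapted to the base model's
expected input.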
We also provide empirical evidence that the one-layer CNN S2I outperforms the
non-trainable S2Is in eleven out of fifteen tested models for classifying EEG
signals, and we present visual comparisons of the S2I outputs.
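For contrast, a non-trainable S2I such as the spectrogram can be computed
directly with SciPy. This is a sketch only; the sampling rate and STFT window
settings below are assumptions made for illustration.

    import numpy as np
    from scipy import signal

    def spectrogram_s2i(x, fs=178.0):
        # Non-trainable spectrogram S2I: a 1D signal becomes a
        # time-frequency image. fs, nperseg, and noverlap are assumptions.
        f, t, sxx = signal.spectrogram(x, fs=fs, nperseg=64, noverlap=32)
        return np.log(sxx + 1e-10)  # log scaling compresses the dynamic range

    eeg = np.random.randn(178)    # placeholder EEG epoch
    img = spectrogram_s2i(eeg)    # 2D array: (frequencies, time frames)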