4 research outputs found

    Signal2Image Modules in Deep Neural Networks for EEG Classification

    Deep learning has revolutionized computer vision by exploiting the increased availability of big data and the power of parallel computational units such as graphics processing units. The vast majority of deep learning research is conducted using images as training data; however, the biomedical domain is rich in physiological signals that are used for diagnosis and prediction problems. How to best utilize signals to train deep neural networks is still an open research question. In this paper we define the term Signal2Image (S2I) to denote trainable or non-trainable prefix modules that convert signals, such as Electroencephalography (EEG), to image-like representations, making them suitable for training image-based deep neural networks, which we call `base models'. We compare the accuracy and time performance of four S2Is (`signal as image', spectrogram, one- and two-layer Convolutional Neural Networks (CNNs)) combined with a set of `base models' (LeNet, AlexNet, VGGnet, ResNet, DenseNet), along with the depth-wise and 1D variations of the latter. We also provide empirical evidence that the one-layer CNN S2I outperforms the non-trainable S2Is in eleven out of fifteen tested models for classifying EEG signals, and we present visual comparisons of the outputs of the S2Is.
    Comment: 4 pages, 2 figures, 1 table, EMBC 201
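    The spectrogram is one of the non-trainable S2Is compared in the abstract above. A minimal sketch of such a module, assuming a plain short-time Fourier magnitude with a Hann window (an illustration, not the paper's actual code or parameters):

    ```python
    import numpy as np

    def spectrogram_s2i(signal, window=64, hop=32):
        """Non-trainable Signal2Image module: slice the 1D signal into
        overlapping Hann-windowed frames and stack FFT magnitudes into a
        2D image-like array of shape (freq_bins, n_frames)."""
        n_frames = 1 + (len(signal) - window) // hop
        win = np.hanning(window)
        frames = np.stack([signal[i * hop:i * hop + window] * win
                           for i in range(n_frames)])
        return np.abs(np.fft.rfft(frames, axis=1)).T

    # Example: a 1-second synthetic trace sampled at 256 Hz with a
    # 10 Hz (alpha-band) component; real EEG would replace this.
    t = np.linspace(0, 1, 256, endpoint=False)
    x = np.sin(2 * np.pi * 10 * t)
    img = spectrogram_s2i(x)
    print(img.shape)  # (33, 7): 33 frequency bins x 7 time frames
    ```

    The resulting 2D array can then be fed to any image-based `base model`; a trainable one-layer CNN S2I would instead learn this signal-to-image mapping end to end.
    
    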

    Biometric data and machine learning methods in the diagnosis and monitoring of neurodegenerative diseases: a review

    ΠŸΡ€Π΅Π΄ΡΡ‚Π°Π²Π»Π΅Π½ ΠΎΠ±Π·ΠΎΡ€ Π½Π΅ΠΈΠ½Π²Π°Π·ΠΈΠ²Π½Ρ‹Ρ… биомСтричСских ΠΌΠ΅Ρ‚ΠΎΠ΄ΠΎΠ² выявлСния ΠΈ прогнозирования развития Π½Π΅ΠΉΡ€ΠΎΠ΄Π΅Π³Π΅Π½Π΅Ρ€Π°Ρ‚ΠΈΠ²Π½Ρ‹Ρ… Π·Π°Π±ΠΎΠ»Π΅Π²Π°Π½ΠΈΠΉ. Π”Π°Π½ Π°Π½Π°Π»ΠΈΠ· Ρ€Π°Π·Π»ΠΈΡ‡Π½Ρ‹Ρ… ΠΌΠΎΠ΄Π°Π»ΡŒΠ½ΠΎΡΡ‚Π΅ΠΉ, ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅ΠΌΡ‹Ρ… для диагностики ΠΈ ΠΌΠΎΠ½ΠΈΡ‚ΠΎΡ€ΠΈΠ½Π³Π°. РассмотрСны Ρ‚Π°ΠΊΠΈΠ΅ ΠΌΠΎΠ΄Π°Π»ΡŒΠ½ΠΎΡΡ‚ΠΈ, ΠΊΠ°ΠΊ рукописныС Π΄Π°Π½Π½Ρ‹Π΅, элСктроэнцСфалограмма, Ρ€Π΅Ρ‡ΡŒ, ΠΏΠΎΡ…ΠΎΠ΄ΠΊΠ°, Π΄Π²ΠΈΠΆΠ΅Π½ΠΈΠ΅ Π³Π»Π°Π·, Π° Ρ‚Π°ΠΊΠΆΠ΅ использованиС ΠΊΠΎΠΌΠΏΠΎΠ·ΠΈΡ†ΠΈΠΉ Π΄Π°Π½Π½Ρ‹Ρ… ΠΌΠΎΠ΄Π°Π»ΡŒΠ½ΠΎΡΡ‚Π΅ΠΉ. ΠŸΡ€ΠΎΠ²Π΅Π΄Π΅Π½ ΠΏΠΎΠ΄Ρ€ΠΎΠ±Π½Ρ‹ΠΉ Π°Π½Π°Π»ΠΈΠ· соврСмСнных ΠΌΠ΅Ρ‚ΠΎΠ΄ΠΎΠ² ΠΈ систСм принятия Ρ€Π΅ΡˆΠ΅Π½ΠΈΠΉ, основанных Π½Π° машинном ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠΈ. ΠŸΡ€Π΅Π΄ΡΡ‚Π°Π²Π»Π΅Π½Ρ‹ Π½Π°Π±ΠΎΡ€Ρ‹ Π΄Π°Π½Π½Ρ‹Ρ…, ΠΌΠ΅Ρ‚ΠΎΠ΄Ρ‹ ΠΏΡ€Π΅Π΄ΠΎΠ±Ρ€Π°Π±ΠΎΡ‚ΠΊΠΈ, ΠΌΠΎΠ΄Π΅Π»ΠΈ машинного обучСния, ΠΎΡ†Π΅Π½ΠΊΠΈ точности ΠΏΡ€ΠΈ диагностикС Π·Π°Π±ΠΎΠ»Π΅Π²Π°Π½ΠΈΠΉ. Π’ Π·Π°ΠΊΠ»ΡŽΡ‡Π΅Π½ΠΈΠΈ рассмотрСны Ρ‚Π΅ΠΊΡƒΡ‰ΠΈΠ΅ ΠΎΡ‚ΠΊΡ€Ρ‹Ρ‚Ρ‹Π΅ ΠΏΡ€ΠΎΠ±Π»Π΅ΠΌΡ‹ ΠΈ Π±ΡƒΠ΄ΡƒΡ‰ΠΈΠ΅ пСрспСктивы исслСдований Π² Π΄Π°Π½Π½ΠΎΠΌ Π½Π°ΠΏΡ€Π°Π²Π»Π΅Π½ΠΈΠΈ.Π Π°Π±ΠΎΡ‚Π° Π²Ρ‹ΠΏΠΎΠ»Π½Π΅Π½Π° ΠΏΡ€ΠΈ ΠΏΠΎΠ΄Π΄Π΅Ρ€ΠΆΠΊΠ΅ Российского Π½Π°-ΡƒΡ‡Π½ΠΎΠ³ΠΎ Ρ„ΠΎΠ½Π΄Π° (ΠΏΡ€ΠΎΠ΅ΠΊΡ‚ β„– 22-21-00021)

    Sparsely Activated Networks: A new method for decomposing and compressing data

    Recent literature on unsupervised learning has focused on designing structural priors with the aim of learning meaningful features, but without considering the description length of the representations. In this thesis, we first introduce the Ο† metric, which evaluates unsupervised models based on their reconstruction accuracy and the degree of compression of their internal representations. We then define two activation functions (Identity, ReLU) as baselines and three sparse activation functions (top-k absolutes, Extrema-Pool indices, Extrema) as candidate structures that minimize the previously defined metric Ο†. Lastly, we present Sparsely Activated Networks (SANs), which consist of kernels with shared weights that, during encoding, are convolved with the input and then passed through a sparse activation function. During decoding, the same weights are convolved with the sparse activation map, and the partial reconstructions from each weight are summed to reconstruct the input. We compare SANs using the five previously defined activation functions on a variety of datasets (Physionet, UCI-epilepsy, MNIST, FMNIST) and show that models selected using Ο† have small representation description length and consist of interpretable kernels.
    Comment: PhD Thesis in Greek, 158 pages for the main text, 23 supplementary pages for presentation, arXiv:1907.06592, arXiv:1904.13216, arXiv:1902.1112
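    The encode/decode round trip described above can be sketched for the simplest case of a single 1D kernel and the top-k absolutes activation. This is an assumed simplification for illustration (one kernel, `np.convolve` with `mode='same'`), not the thesis implementation:

    ```python
    import numpy as np

    def topk_absolutes(x, k):
        """Sparse activation: keep the k entries with the largest
        absolute value and zero out the rest."""
        out = np.zeros_like(x)
        idx = np.argsort(np.abs(x))[-k:]
        out[idx] = x[idx]
        return out

    def san_reconstruct(signal, kernel, k):
        """One-kernel SAN round trip: encoding convolves the input with
        the kernel and sparsifies the result; decoding convolves the
        sparse activation map with the same shared weights."""
        similarity = np.convolve(signal, kernel, mode='same')  # encode
        sparse_map = topk_absolutes(similarity, k)             # activate
        return np.convolve(sparse_map, kernel, mode='same')    # decode

    # Two isolated spikes are recovered from a 2-entry sparse code:
    sig = np.zeros(20)
    sig[5], sig[12] = 1.0, -1.0
    rec = san_reconstruct(sig, np.array([0.5, 1.0, 0.5]), k=2)
    ```

    With several kernels, each would produce its own sparse map and partial reconstruction, and the partial reconstructions would be summed; the Ο† metric then trades the reconstruction error against how few nonzero activations were kept.
    
    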