Evaluating Graph Signal Processing for Neuroimaging Through Classification and Dimensionality Reduction
Graph Signal Processing (GSP) is a promising framework for analyzing
multi-dimensional neuroimaging datasets while taking into account both the
spatial and functional dependencies between brain signals. In the present work,
we apply dimensionality reduction techniques based on graph representations of
the brain to decode brain activity from real and simulated fMRI datasets. We
introduce seven graphs obtained from a) geometric structure and/or b)
functional connectivity between brain areas at rest, and compare them when
performing dimension reduction for classification. We show that mixed graphs
using both a) and b) offer the best performance. We also show that graph
sampling methods perform better than classical dimension reduction methods,
including Principal Component Analysis (PCA) and Independent Component Analysis (ICA).
Comment: 5 pages, GlobalSIP 201
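As a rough sketch of graph-based dimension reduction of this kind, the snippet below projects brain signals onto the lowest-frequency eigenvectors of a graph Laplacian built from a connectivity matrix (the graph Fourier analogue of keeping the smoothest components). It is a generic NumPy illustration under stated assumptions, not the paper's exact method; the toy graph and data are random.

```python
import numpy as np

def graph_fourier_reduce(X, W, k):
    """Project signals onto the k lowest-frequency graph Fourier modes.

    X : (n_samples, n_nodes) signals defined on the graph nodes
    W : (n_nodes, n_nodes) symmetric adjacency, e.g., geometric and/or
        functional connectivity between brain areas
    k : number of spectral components to keep
    """
    L = np.diag(W.sum(axis=1)) - W          # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvalues in ascending order
    U_k = eigvecs[:, :k]                    # low-frequency eigenvectors
    return X @ U_k                          # (n_samples, k) reduced features

# Toy usage: 100 "fMRI" samples on a random 30-node connectivity graph.
rng = np.random.default_rng(0)
A = rng.random((30, 30))
W = (A + A.T) / 2
np.fill_diagonal(W, 0)
X = rng.standard_normal((100, 30))
print(graph_fourier_reduce(X, W, k=5).shape)  # (100, 5)
```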
Learning Audio Features with Metadata and Contrastive Learning
Methods based on supervised learning using annotations in an end-to-end
fashion have been the state-of-the-art for classification problems. However,
they may be limited in their generalization capability, especially in the low
data regime. In this study, we address this issue using supervised contrastive
learning combined with available metadata to solve multiple pretext tasks that
learn a good representation of the data. We apply our approach to ICBHI, a
respiratory sound classification dataset suited to this setting. We show that
learning representations using only metadata, without class labels, yields
performance similar to using cross-entropy with those labels alone. In
addition, we obtain a state-of-the-art score when combining class labels with
metadata using multiple supervised contrastive objectives. This work suggests
the potential of using multiple metadata sources in supervised contrastive
settings, in particular in settings with class imbalance and scarce data. Our
code is released at https://github.com/ilyassmoummad/scl_icbhi201
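A minimal sketch of a supervised contrastive loss of the kind used here (following the general SupCon formulation of Khosla et al.), where the "labels" can just as well be metadata attributes as class labels; combining several sources then amounts to summing one loss per labelling. The batch contents and attribute names below are illustrative.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    features : (batch, dim) embeddings from the encoder
    labels   : (batch,) integer labels; with metadata-driven pretext tasks
               these can be metadata attributes rather than class labels.
    """
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                       # pairwise similarities
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, float("-inf"))   # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0                            # anchors with positives
    mean_log_prob_pos = (log_prob.masked_fill(~pos, 0.0).sum(dim=1)[valid]
                         / pos_counts[valid])
    return -mean_log_prob_pos.mean()

# Combining sources: one loss on class labels, one on a metadata attribute
# (a hypothetical recording-device id here), summed into a single objective.
z = torch.randn(16, 128)
class_labels = torch.randint(0, 4, (16,))
device_ids = torch.randint(0, 3, (16,))
loss = supcon_loss(z, class_labels) + supcon_loss(z, device_ids)
```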
Regularized Contrastive Pre-training for Few-shot Bioacoustic Sound Detection
Bioacoustic sound event detection allows for a better understanding of animal
behavior and for better monitoring of biodiversity using audio. Deep learning
systems can help achieve this goal; however, it is difficult to acquire
sufficient annotated data to train these systems from scratch. To address this
limitation, the Detection and Classification of Acoustic Scenes and Events
(DCASE) community has recast the problem within the framework of few-shot
learning and organizes an annual challenge on learning to detect animal sounds
from only five annotated examples. In this work, we regularize supervised
contrastive pre-training to learn features that transfer well to new target
tasks with animal sounds unseen during training, achieving a high F-score of
61.52% (0.48) when no feature adaptation is applied, and an F-score of
68.19% (0.75) when we further adapt the learned features to each new target
task. This work aims to lower the barrier to entry for few-shot bioacoustic
sound event detection by proposing a simple yet effective framework and by
providing open-source code.
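The evaluation setting can be pictured with a nearest-prototype sketch: embed the audio frames with the pre-trained encoder, form a positive prototype from the five annotated events and a negative prototype from presumed-negative frames, then threshold the resulting per-frame probabilities. This mirrors the prototypical baselines common in this DCASE task rather than the authors' exact decision rule; all names are illustrative.

```python
import numpy as np

def five_shot_detect(frame_emb, pos_idx, neg_idx, threshold=0.5):
    """Nearest-prototype detection from five annotated events.

    frame_emb : (n_frames, dim) frame embeddings from the pre-trained encoder
    pos_idx   : indices of frames inside the five annotated events
    neg_idx   : indices of frames taken to contain no target event
    Returns a boolean detection mask over frames.
    """
    pos_proto = frame_emb[pos_idx].mean(axis=0)
    neg_proto = frame_emb[neg_idx].mean(axis=0)
    d_pos = np.linalg.norm(frame_emb - pos_proto, axis=1)
    d_neg = np.linalg.norm(frame_emb - neg_proto, axis=1)
    p_pos = np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-d_neg))  # softmax over -distances
    return p_pos > threshold

# Toy usage with random embeddings standing in for encoder outputs.
rng = np.random.default_rng(0)
emb = rng.standard_normal((200, 32))
mask = five_shot_detect(emb, pos_idx=np.arange(5), neg_idx=np.arange(100, 150))
```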
Pretraining Representations for Bioacoustic Few-shot Detection using Supervised Contrastive Learning
Deep learning has been widely used recently for sound event detection and
classification. Its success is linked to the availability of sufficiently large
datasets, possibly with corresponding annotations when supervised learning is
considered. In bioacoustic applications, most tasks come with little labelled
training data, because annotating long recordings is time-consuming and costly.
Supervised learning is therefore not the best-suited approach for bioacoustic
tasks. The bioacoustic community has recast the problem of sound event
detection within the framework of few-shot learning, i.e., training a system
with only a few labelled examples. The few-shot bioacoustic sound event
detection task in the DCASE challenge focuses on detecting events in long audio
recordings given only five annotated examples for each class of interest. In
this paper, we show that a rich feature extractor can be learned from scratch
by leveraging data augmentation within a supervised contrastive learning
framework. We highlight the ability of this framework to transfer well to
five-shot event detection on classes unseen during training. We obtain an
F-score of 63.46% on the validation set and 42.7% on the test set, ranking
second in the DCASE challenge. We provide an ablation study of the critical
choices of data augmentation techniques as well as of the learning strategy
applied to the training set.
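Since the ablation centres on data augmentation choices, here is a hedged sketch of one common audio augmentation, SpecAugment-style time and frequency masking, used to create the two "views" of a clip that form a positive pair in contrastive pre-training. The masking parameters are illustrative and not necessarily those studied in the paper.

```python
import numpy as np

def spec_augment(spec, rng, max_f=8, max_t=16):
    """Random time/frequency masking on a log-mel spectrogram
    (SpecAugment-style); mask sizes here are illustrative.

    spec : (n_mels, n_frames) spectrogram
    """
    out = spec.copy()
    f0 = rng.integers(0, spec.shape[0] - max_f)
    t0 = rng.integers(0, spec.shape[1] - max_t)
    out[f0:f0 + rng.integers(1, max_f + 1), :] = 0.0   # frequency mask
    out[:, t0:t0 + rng.integers(1, max_t + 1)] = 0.0   # time mask
    return out

# Two independently augmented views of the same clip, forming a positive
# pair for the contrastive objective.
rng = np.random.default_rng(0)
spec = rng.standard_normal((64, 128))
view1, view2 = spec_augment(spec, rng), spec_augment(spec, rng)
```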
A dataset of acoustic measurements from soundscapes collected worldwide during the COVID-19 pandemic.
Political responses to the COVID-19 pandemic led to changes in city soundscapes around the globe. From March to October 2020, a consortium of 261 contributors from 35 countries brought together by the Silent Cities project built a unique collection of soundscape recordings to report on local acoustic changes in urban areas. We present this collection here, along with metadata including the contributors' observational descriptions of the local areas, open-source environmental data, open-source confinement levels, and computed acoustic descriptors. We performed a technical validation of the dataset using statistical models run on a subset of manually annotated soundscapes. Results confirmed the large-scale usability of ecoacoustic indices and automatic sound event recognition in the Silent Cities soundscape collection. We expect this dataset to be useful for research in the multidisciplinary field of environmental sciences.
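As a rough illustration of the kind of acoustic descriptor such a pipeline computes, the sketch below estimates normalised spectral entropy, a classic ecoacoustic index, from a raw signal with NumPy. It is a generic sketch, not the Silent Cities processing chain, and the framing parameters are illustrative.

```python
import numpy as np

def spectral_entropy(x, n_fft=512):
    """Normalised spectral entropy of a 1-D audio signal in [0, 1];
    high for broadband noise, low for tonal sounds.
    """
    n_frames = len(x) // n_fft
    frames = x[:n_frames * n_fft].reshape(n_frames, n_fft)
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    mean_spec = spec.mean(axis=0)
    p = mean_spec / mean_spec.sum()              # normalised mean spectrum
    H = -(p * np.log2(p + 1e-12)).sum()          # Shannon entropy (bits)
    return H / np.log2(len(p))                   # scale to [0, 1]

# Toy usage: white noise scores near 1; a pure tone scores much lower.
sr = 22050
t = np.arange(sr) / sr
print(spectral_entropy(np.random.randn(sr)))
print(spectral_entropy(np.sin(2 * np.pi * 1000 * t)))
```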
Capturing the speed of music in our heads: Developing methods for measuring the tempo of musical imagery
The experience of imagining music is a common phenomenon. Musicians use mental rehearsal to help them memorize and prepare for performances, and even non-musicians regularly experience “earworms”, i.e., having a tune stuck in one’s head on repeat. Voluntarily imagined music is highly accurate in terms of pitch, rhythm, and timbre and recruits brain regions that are remarkably similar to those recruited in perceiving music.

In terms of tempo, it has been found that even non-musicians can sing familiar pop songs very close to the original recorded tempo. This implies that the tempo of imagery is quite accurate, as participants must generate an image of a song before singing it aloud. However, this has not previously been tested in such a way that the imagery remains purely imagined, without becoming a sound production task. As such, the first aim of the present study is to test the accuracy of tempo judgments for purely imagined songs. The second aim is to explore the influence of individual differences on these tempo judgments, including previous musical training, musical engagement, general auditory imagery abilities, and familiarity with the stimuli.

We utilized three methods of measuring each participant’s memory for the tempo of 12 familiar pop songs: 1) tapping to the beat of each song whilst imagining the song (hereafter the Imagery (motor) task), 2) adjusting the speed of a click track to the beat of each song, again whilst imagining (hereafter the Imagery (non-motor) task), and 3) adjusting the speed of each song whilst hearing the actual songs aloud (hereafter the Perceived Music task). It was hypothesized that participants would perform most accurately in the Perceived Music condition, where all musical cues were present, but that motor engagement with musical imagery (in the Imagery (motor) task) would also improve performance, in line with previous literature.

Significant differences were found in performance between all three tasks, such that performance on the Perceived Music task was significantly more accurate than in the Imagery tasks, and performance in the Imagery (motor) task was significantly more accurate than in the Imagery (non-motor) task. Performance in the Imagery tasks was also influenced by individual differences in musical training and/or engagement, whilst performance on the Perceived Music task was only influenced by previous familiarity with the musical stimuli.

The results of the study help to inform us as to precisely how accurately tempo is preserved within musical imagery, and how this is modulated by other factors such as musical training and familiarity. The findings also have implications within the domain of mental music rehearsal.

Keywords: musical imagery, tempo, musical memory
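A minimal sketch of how the tapping-based measure could be scored, assuming tap timestamps are collected in seconds: the median inter-tap interval gives a tempo estimate that is robust to occasional missed or doubled taps, and the deviation from the original recording's tempo is expressed as a percentage. Function and variable names are illustrative, not from the study.

```python
import numpy as np

def tapped_tempo_bpm(tap_times):
    """Estimate tempo (BPM) from tap timestamps in seconds, as might be
    recorded in the Imagery (motor) task; the median inter-tap interval
    is robust to occasional missed or double taps."""
    iti = np.diff(np.sort(tap_times))
    return 60.0 / np.median(iti)

def tempo_error_percent(judged_bpm, original_bpm):
    """Signed deviation from the original recorded tempo, in percent."""
    return 100.0 * (judged_bpm - original_bpm) / original_bpm

# Toy usage: taps roughly every 0.5 s (~120 BPM) against a 118 BPM original.
rng = np.random.default_rng(0)
taps = np.cumsum(np.full(20, 0.5) + rng.normal(0, 0.01, 20))
bpm = tapped_tempo_bpm(taps)
print(bpm, tempo_error_percent(bpm, 118.0))
```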
A Strong and Simple Deep Learning Baseline for BCI MI Decoding
We propose EEG-SimpleConv, a straightforward 1D convolutional neural network
for Motor Imagery decoding in BCI. Our main motivation is to propose a very
simple baseline to compare to, using only very standard ingredients from the
literature. We evaluate its performance on four EEG Motor Imagery datasets,
including simulated online setups, and compare it to recent Deep Learning and
Machine Learning approaches. EEG-SimpleConv is at least as good as, or far
more efficient than, the other approaches, showing strong knowledge-transfer
capabilities across subjects while keeping inference time low. We advocate
that using off-the-shelf ingredients rather than devising ad hoc solutions can
significantly help the adoption of Deep Learning approaches for BCI. We make
the code of the models and the experiments accessible.
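A minimal sketch of what a straightforward 1D convolutional MI decoder can look like, in PyTorch: stacked Conv1d/BatchNorm/ReLU/pooling stages followed by global average pooling over time and a linear classifier. Layer sizes and kernel widths here are illustrative, not the exact EEG-SimpleConv architecture.

```python
import torch
import torch.nn as nn

class SimpleConv1D(nn.Module):
    """A minimal 1-D convolutional EEG classifier; expects
    (batch, n_channels, n_times) epochs."""
    def __init__(self, n_channels=22, n_classes=4, width=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, width, kernel_size=8, padding=4),
            nn.BatchNorm1d(width), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(width, width, kernel_size=8, padding=4),
            nn.BatchNorm1d(width), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(width, n_classes)

    def forward(self, x):
        h = self.features(x).mean(dim=-1)   # global average pool over time
        return self.head(h)

# Toy usage: a batch of eight 4 s epochs at 250 Hz from 22 electrodes.
model = SimpleConv1D()
logits = model(torch.randn(8, 22, 1000))
print(logits.shape)  # torch.Size([8, 4])
```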
Spatial Graph Signal Interpolation with an Application for Merging BCI Datasets with Various Dimensionalities
BCI Motor Imagery datasets are usually small and use different electrode
setups. When training a Deep Neural Network, one may want to capitalize on all
these datasets to increase the amount of data available and hence obtain good
generalization results. To this end, we introduce a spatial graph signal
interpolation technique that allows multiple electrodes to be interpolated
efficiently. We conduct a set of experiments with five BCI Motor Imagery
datasets, comparing the proposed interpolation with spherical spline
interpolation. We believe that this work provides novel ideas on how to
leverage graphs to interpolate electrodes and on how to homogenize multiple
datasets.
Comment: Submitted to the 2023 IEEE International Conference on Acoustics,
Speech, and Signal Processing (ICASSP 2023)
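A generic way to picture spatial graph signal interpolation: treat each full-montage electrode position as a graph node, and fill in the electrodes missing from a given dataset by minimising the Laplacian quadratic form subject to the observed values (harmonic interpolation). This is a standard GSP construction sketched below, not necessarily the paper's exact scheme.

```python
import numpy as np

def laplacian_interpolate(W, known_idx, x_known):
    """Interpolate a graph signal at unknown nodes by minimising
    x^T L x subject to the known values (harmonic interpolation).

    W         : (n, n) symmetric adjacency over all electrode positions
    known_idx : indices of electrodes present in this dataset
    x_known   : signal values at those electrodes
    """
    x_known = np.asarray(x_known, dtype=float)
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W
    unknown_idx = np.setdiff1d(np.arange(n), known_idx)
    L_uu = L[np.ix_(unknown_idx, unknown_idx)]
    L_uk = L[np.ix_(unknown_idx, known_idx)]
    x = np.empty(n)
    x[known_idx] = x_known
    x[unknown_idx] = np.linalg.solve(L_uu, -L_uk @ x_known)  # harmonic fill-in
    return x

# Toy usage: a 5-node path graph with only the endpoints observed.
W = np.diag(np.ones(4), 1)
W = W + W.T
print(laplacian_interpolate(W, [0, 4], [0.0, 1.0]))  # [0. 0.25 0.5 0.75 1.]
```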
Mutation of the Protein Kinase C Site in Borna Disease Virus Phosphoprotein Abrogates Viral Interference with Neuronal Signaling and Restores Normal Synaptic Activity
Understanding the pathogenesis of infection by neurotropic viruses represents a major challenge and may improve our knowledge of many human neurological diseases for which viruses are thought to play a role. Borna disease virus (BDV) represents an attractive model system to analyze the molecular mechanisms whereby a virus can persist in the central nervous system (CNS) and lead to altered brain function, in the absence of overt cytolysis or inflammation. Recently, we showed that BDV selectively impairs neuronal plasticity through interfering with protein kinase C (PKC)–dependent signaling in neurons. Here, we tested the hypothesis that BDV phosphoprotein (P) may serve as a PKC decoy substrate when expressed in neurons, resulting in an interference with PKC-dependent signaling and impaired neuronal activity. Using a recombinant BDV with a mutated PKC phosphorylation site on P, we demonstrate the central role of this protein in BDV pathogenesis. We first showed that the kinetics of dissemination of this recombinant virus were strongly delayed, suggesting that phosphorylation of P by PKC is required for optimal viral spread in neurons. Moreover, neurons infected with this mutant virus exhibited a normal pattern of phosphorylation of the PKC endogenous substrates MARCKS and SNAP-25. Finally, activity-dependent modulation of synaptic activity was restored, as assessed by measuring calcium dynamics in response to depolarization and the electrical properties of neuronal networks grown on microelectrode arrays. Therefore, preventing P phosphorylation by PKC abolishes viral interference with neuronal activity in response to stimulation. Our findings illustrate a novel example of viral interference with a differentiated neuronal function, mainly through competition with the PKC signaling pathway. In addition, we provide the first evidence that a viral protein can specifically interfere with stimulus-induced synaptic plasticity in neurons.