Spatial Filtering Pipeline Evaluation of Cortically Coupled Computer Vision System for Rapid Serial Visual Presentation
Rapid Serial Visual Presentation (RSVP) is a paradigm that supports the
application of cortically coupled computer vision to rapid image search. In
RSVP, images are presented to participants in a rapid serial sequence which can
evoke Event-related Potentials (ERPs) detectable in their Electroencephalogram
(EEG). The contemporary approach applies supervised spatial filtering
techniques to enhance the discriminative information in the EEG data. In this
paper we make two primary contributions to that field: 1) we propose a novel
spatial filtering method, which we call the Multiple Time Window LDA Beamformer
(MTWLB) method; 2) we provide a comprehensive comparison of nine spatial
filtering pipelines built from three spatial filtering schemes, namely MTWLB,
xDAWN, and Common Spatial Pattern (CSP), and three linear classification
methods, namely Linear Discriminant Analysis (LDA), Bayesian Linear Regression
(BLR), and Logistic Regression (LR). Three pipelines without spatial filtering
serve as baselines. Area Under the Curve (AUC) is used as the evaluation
metric. The results reveal that the MTWLB and xDAWN spatial filtering
techniques enhance the classification performance of the pipeline, whereas CSP
does not. The results also support the conclusion that LR can be effective for
RSVP-based BCI when discriminative features are available.
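As a rough illustration of the kind of pipeline this abstract describes, the
sketch below builds a single-window, LDA-beamformer-style spatial filter
(w ∝ Σ⁻¹(μ₁ − μ₀)) over channels, projects each epoch onto it, and scores a
Logistic Regression classifier by AUC. All data, dimensions, and the injected
"ERP" pattern are synthetic assumptions for the example; this is a simplified
stand-in, not the paper's MTWLB method.

```python
# Minimal sketch: LDA-beamformer-style spatial filter + LR, scored by AUC.
# Synthetic epochs stand in for real RSVP EEG data (an assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 400, 16, 50
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)

# Inject a weak "ERP" into target epochs on a fixed channel mixture.
pattern = rng.standard_normal(n_channels)
erp = np.hanning(n_times)
X[y == 1] += 0.5 * np.outer(pattern, erp)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Spatial filter over channels: w proportional to inv(cov) @ (mu1 - mu0).
mu_diff = Xtr[ytr == 1].mean(axis=(0, 2)) - Xtr[ytr == 0].mean(axis=(0, 2))
cov = np.cov(Xtr.transpose(1, 0, 2).reshape(n_channels, -1))
w = np.linalg.solve(cov + 1e-6 * np.eye(n_channels), mu_diff)

# Project each epoch onto the filter -> one "virtual channel" time course.
feat_tr = np.einsum("c,ect->et", w, Xtr)
feat_te = np.einsum("c,ect->et", w, Xte)

clf = LogisticRegression(max_iter=1000).fit(feat_tr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(feat_te)[:, 1])
print(f"AUC: {auc:.2f}")
```

The spatial filter collapses 16 channels to a single time course before
classification, which is the role the abstract assigns to MTWLB and xDAWN in
the full nine-pipeline comparison.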
Locate and Beamform: Two-dimensional Locating All-neural Beamformer for Multi-channel Speech Separation
Recently, neural beamformers have achieved striking improvements in
multi-channel speech separation when direction information is available.
However, most of them neglect the speakers' two-dimensional (2D) location cues
contained in the mixture signal, which limits performance when two sources
arrive from close directions. In this paper, we propose an end-to-end
beamforming network for 2D-location-guided speech separation given only the
mixture signal.
It first estimates discriminable direction and 2D location cues, i.e., the
directions from which the sources arrive as seen from the microphones'
multiple views, together with the sources' 2D coordinates. These cues are then
integrated into a location-aware neural beamformer, allowing accurate
reconstruction of both sources' speech signals. Experiments show that our
proposed model not only achieves a comprehensive improvement over baseline
systems, but also avoids degraded performance in spatially overlapping cases.
Comment: Accepted by Interspeech 2023. arXiv admin note: substantial text
overlap with arXiv:2212.0340
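For context on what a beamformer does with direction cues, here is a classic
narrowband MVDR beamformer in NumPy, shown as a conventional stand-in for the
paper's learned ("all-neural") beamformer. The steering vectors, source count,
and noise level are all synthetic assumptions, not details from the paper.

```python
# Minimal sketch: narrowband MVDR beamformer steered at one of two sources.
# Directions are assumed known here; the paper instead estimates 2D cues.
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_frames = 4, 2000

# Steering vectors for two sources at two (assumed) arrival angles.
d1 = np.exp(1j * np.pi * np.arange(n_mics) * np.sin(0.3))
d2 = np.exp(1j * np.pi * np.arange(n_mics) * np.sin(1.0))

s1 = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
s2 = rng.standard_normal(n_frames) + 1j * rng.standard_normal(n_frames)
noise = 0.1 * (rng.standard_normal((n_mics, n_frames))
               + 1j * rng.standard_normal((n_mics, n_frames)))
mix = np.outer(d1, s1) + np.outer(d2, s2) + noise

# MVDR weights: w = inv(Sigma) d / (d^H inv(Sigma) d), Sigma = mixture cov.
cov = mix @ mix.conj().T / n_frames
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(n_mics))
w = cov_inv @ d1 / (d1.conj() @ cov_inv @ d1)

est = w.conj() @ mix  # beamformed estimate of source 1
# Distortionless toward d1; the interferer from d2 should be attenuated.
print(f"gain toward d1: {abs(w.conj() @ d1):.3f}, "
      f"leakage toward d2: {abs(w.conj() @ d2):.3f}")
```

When the two steering vectors become nearly parallel (close directions), this
spatial null degrades, which is exactly the regime the paper's 2D location
cues are meant to address.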