Evidence of Task-Independent Person-Specific Signatures in EEG using Subspace Techniques
Electroencephalography (EEG) signals are promising as alternatives to other
biometrics owing to their protection against spoofing. Previous studies have
focused on capturing individual variability by analyzing
task/condition-specific EEG. This work attempts to model biometric signatures
independent of task/condition by normalizing the associated variance. Toward
this goal, the paper extends ideas from subspace-based text-independent speaker
recognition and proposes novel modifications for modeling multi-channel EEG
data. The proposed techniques assume that biometric information is present in
the entire EEG signal and accumulate statistics across time in a high
dimensional space. These high dimensional statistics are then projected to a
lower dimensional space where the biometric information is preserved. The lower
dimensional embeddings obtained using the proposed approach are shown to be
task-independent. The best subspace system identifies individuals with
accuracies of 86.4% and 35.9% on datasets with 30 and 920 subjects,
respectively, using just nine EEG channels. The paper also provides insights
into the subspace model's scalability to unseen tasks and individuals during
training and the number of channels needed for subspace modeling.Comment: \copyright 2021 IEEE. Personal use of this material is permitted.
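The abstract describes the pipeline only at a high level: accumulate statistics over time in a high-dimensional space, project them to a lower-dimensional space that preserves biometric information, and identify the subject from the resulting embedding. A minimal Python sketch of such a pipeline follows, under assumptions not stated in the abstract: per-window mean/standard-deviation statistics stand in for the paper's accumulated statistics, PCA stands in for the learned subspace projection, and cosine scoring stands in for the identification step; all function names and shapes are hypothetical.

```python
# Illustrative sketch only: statistics accumulation + PCA projection + cosine
# scoring as stand-ins for the paper's subspace technique (assumed, not exact).
import numpy as np
from sklearn.decomposition import PCA

def accumulate_stats(eeg, frame=128):
    """eeg: (channels, samples). Accumulate per-window mean/std statistics
    over time and stack them into one high-dimensional vector."""
    _, n_samples = eeg.shape
    feats = []
    for start in range(0, n_samples - frame + 1, frame):
        seg = eeg[:, start:start + frame]
        feats.append(np.concatenate([seg.mean(axis=1), seg.std(axis=1)]))
    feats = np.asarray(feats)                      # (n_windows, 2 * n_channels)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

def train_subspace(recordings, dim=32):
    """recordings: list of (eeg_array, subject_id) pairs (hypothetical format).
    Fit the projection and return it with one embedding per recording."""
    X = np.stack([accumulate_stats(x) for x, _ in recordings])
    proj = PCA(n_components=dim).fit(X)            # stand-in for the learned subspace
    return proj, proj.transform(X)

def identify(proj, enrolled, labels, test_eeg):
    """Cosine-score a test embedding against enrolled subject embeddings."""
    e = proj.transform(accumulate_stats(test_eeg)[None, :])[0]
    scores = enrolled @ e / (np.linalg.norm(enrolled, axis=1) * np.linalg.norm(e) + 1e-9)
    return labels[int(np.argmax(scores))]
```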
Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email
This paper describes a novel method by which a dialogue agent can learn to choose an optimal dialogue strategy. While it is widely agreed that dialogue strategies should be formulated in terms of communicative intentions, there has been little work on automatically optimizing an agent's choices when there are multiple ways to realize a communicative intention. Our method is based on a combination of learning algorithms and empirical evaluation techniques. The learning component of our method is based on algorithms for reinforcement learning, such as dynamic programming and Q-learning. The empirical component uses the PARADISE evaluation framework (Walker et al., 1997) to identify the important performance factors and to provide the performance function needed by the learning algorithm. We illustrate our method with a dialogue agent named ELVIS (EmaiL Voice Interactive System), which supports access to email over the phone. We show how ELVIS can learn to choose among alternate strategies…
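The abstract names Q-learning as one learning component and a PARADISE-derived performance function as the source of reward. The sketch below is a minimal tabular Q-learning loop under those assumptions; the strategy names, state representation, and cost weights are illustrative placeholders, not ELVIS's actual design.

```python
# Minimal tabular Q-learning sketch for choosing among alternate dialogue
# strategies, with a PARADISE-style scalar reward (task success minus costs).
# States, actions, and weights are assumed for illustration only.
import random
from collections import defaultdict

ACTIONS = ["system_initiative", "mixed_initiative", "summarize_first"]  # hypothetical
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def paradise_reward(task_success, n_turns, n_asr_errors):
    """PARADISE-style performance function: reward success, penalize dialogue
    costs. The weights here are placeholders, not fitted regression weights."""
    return 1.0 * task_success - 0.05 * n_turns - 0.2 * n_asr_errors

def choose_action(state):
    """Epsilon-greedy choice among the candidate dialogue strategies."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, terminal):
    """Standard one-step Q-learning update."""
    target = reward if terminal else reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```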
An Empirical Study of Speech Processing in the Brain by Analyzing the Temporal Syllable Structure in Speech-input Induced EEG
Clinical applicability of electroencephalography (EEG) is well established; however, the use of EEG for constructing brain-computer interfaces as communication platforms is relatively recent. To provide more natural means of communication, there is an increasing focus on bringing together speech and EEG signal processing. Quantifying the way our brain processes speech is one way of approaching the problem of speech recognition using brain waves. This paper analyses the feasibility of recognizing syllable-level units by studying the temporal structure of speech reflected in EEG signals. The slowly varying component of the delta-band EEG (0.3-3 Hz) is present in all other EEG frequency bands. Analysis shows that removing the delta trend in EEG signals yields signals that reveal syllable-like structure. Using a 25-syllable framework, classification of EEG data obtained from 13 subjects yields promising results, underscoring the potential of revealing speech-related temporal structure in EEG.
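As a rough illustration of the detrending step described above, the sketch below removes the slowly varying delta component with a high-pass filter above 3 Hz and forms simple windowed features; the filter design, window length, and RMS features are assumptions for demonstration, not the paper's exact processing.

```python
# Illustrative sketch only: delta-trend removal via high-pass filtering and a
# crude windowed feature, standing in for the paper's unspecified pipeline.
import numpy as np
from scipy.signal import butter, filtfilt

def remove_delta_trend(eeg, fs, cutoff=3.0, order=4):
    """High-pass filter each channel above the delta band (0.3-3 Hz).
    eeg: (channels, samples); fs: sampling rate in Hz."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="highpass")
    return filtfilt(b, a, eeg, axis=1)

def segment_features(eeg, fs, win_s=0.2):
    """Split detrended EEG into fixed windows and use per-channel RMS as a
    placeholder syllable-level feature (window length is an assumption)."""
    win = int(win_s * fs)
    segs = [eeg[:, i:i + win] for i in range(0, eeg.shape[1] - win + 1, win)]
    return np.stack([np.sqrt((s ** 2).mean(axis=1)) for s in segs])
```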