15 research outputs found

    Visual evoked potential estimation by eigendecomposition

    In this paper an eigendecomposition method is presented to estimate evoked potentials (EPs). Taking into account the characteristics of evoked potentials, the method uses two observations, both of which contain the desired EP signal and an undesired EEG signal. If the desired and undesired signals are uncorrelated (i.e. orthogonal) and the signal-to-noise ratios (SNRs) of the two observations differ, the eigendecomposition method can separate the EP signal from the EEG. Visual evoked potentials (VEPs) of humans have been estimated, and good results were obtained with this method.
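    The abstract does not give the algorithm in full; a minimal synthetic sketch of the idea (not the paper's exact method) is below. The EP is the component common to both channels, so with uncorrelated noise of differing power, the dominant eigenvector of the 2x2 observation covariance yields a weighted sum that emphasises the shared EP. All signals and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
ep = np.sin(2 * np.pi * np.arange(n) / 200)   # stand-in evoked-potential waveform
x1 = ep + 0.3 * rng.standard_normal(n)        # higher-SNR observation
x2 = ep + 0.6 * rng.standard_normal(n)        # lower-SNR observation

X = np.vstack([x1, x2])                       # 2 x n observation matrix
R = (X @ X.T) / n                             # 2 x 2 sample covariance
_, evecs = np.linalg.eigh(R)                  # eigenvalues in ascending order
w = evecs[:, -1]                              # dominant eigenvector
est = w @ X                                   # estimated EP (up to scale/sign)
est *= np.sign(est @ ep)                      # resolve the sign ambiguity

corr = np.corrcoef(est, ep)[0, 1]             # agreement with the true EP
```

    The eigenvector weighting favours the channel combination with the largest shared variance, which is where the EP lives under the uncorrelated-noise assumption.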

    Blind signal separation for convolutive mixing environments using spatial-temporal processing

    In this paper we extend the infomax technique [1] for blind signal separation from the instantaneous mixing case to the convolutive mixing case. Separation in the convolutive case requires an unmixing system which uses present and past values of the observation vector, when the mixing system is causal. Thus, in developing an infomax process, both temporal and spatial dependence of the observations must be considered. We propose a stochastic-gradient-based structure which accomplishes this task. Performance of the proposed method is verified by subjective listening tests and quantitative measurements.
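    The convolutive extension itself is not reproduced here; the sketch below shows only the instantaneous natural-gradient infomax update that the paper starts from, with a logistic nonlinearity (suited to super-Gaussian sources). The mixing matrix, learning rate, and batch size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
S = rng.laplace(size=(2, n))                # two super-Gaussian sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                  # hypothetical instantaneous mixing
X = A @ S

W = np.eye(2)                               # unmixing matrix to be learned
lr, batch = 0.01, 100
for _ in range(20):                         # stochastic-gradient passes over the data
    for i in range(0, n, batch):
        U = W @ X[:, i:i + batch]
        Y = 1.0 / (1.0 + np.exp(-U))        # logistic nonlinearity
        # natural-gradient infomax update: W += lr * (I + (1 - 2Y) U^T / B) W
        W += lr * (np.eye(2) + (1.0 - 2.0 * Y) @ U.T / batch) @ W

P = np.abs(W @ A)                           # ~ scaled permutation if separated
```

    In the convolutive case the scalar unmixing matrix W is replaced by a multichannel filter acting on present and past observations, but the gradient has the same information-maximisation form.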

    Robust acoustic beamforming in the presence of channel propagation uncertainties

    Beamforming is a popular multichannel signal processing technique used in conjunction with microphone arrays to spatially filter a sound field. Conventional optimal beamformers assume that the propagation channel between each source and microphone pair is a deterministic function of the source and microphone geometry. However, in real acoustic environments several mechanisms give rise to unpredictable variations in the phases and amplitudes of the propagation channels, and in the presence of these uncertainties the performance of beamformers degrades. Robust beamformers are designed to reduce this performance degradation; however, they rely on tuning parameters that are not closely related to the array geometry. By modeling the uncertainty in the acoustic channels explicitly we can derive more accurate expressions for the source-microphone channel variability, and hence beamformers that are well suited to realistic acoustic environments. Through experiments we validate the acoustic channel models, and through simulations we show the performance gains of the associated robust beamformer. Furthermore, by modeling the speech short-time Fourier transform coefficients we are able to design a beamformer framework in the power domain. By utilising spectral subtraction we see performance benefits over ideal conventional beamformers, and including the channel uncertainty models in the weight design improves robustness.
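    The abstract does not specify the paper's uncertainty model, so the sketch below illustrates the underlying failure mode with a standard baseline instead: a small steering-vector error (standing in for channel uncertainty) makes a plain MVDR beamformer cancel the desired source, while diagonal loading, a common robustness technique, limits the damage. Array geometry, source power, and loading level are all hypothetical.

```python
import numpy as np

M = 8                                            # uniform line array, half-wavelength spacing

def steer(theta):
    """Narrowband steering vector for arrival angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

a_true = steer(np.deg2rad(12.0))                 # actual propagation channel
a_nom = steer(np.deg2rad(10.0))                  # assumed channel (mismatched)
R = 10.0 * np.outer(a_true, a_true.conj()) + np.eye(M)  # signal-plus-noise covariance

def mvdr(R, a):
    """Distortionless (MVDR) weights for covariance R and steering vector a."""
    v = np.linalg.solve(R, a)
    return v / (a.conj() @ v)

w_plain = mvdr(R, a_nom)                         # conventional MVDR
w_loaded = mvdr(R + 10.0 * np.eye(M), a_nom)     # diagonally loaded MVDR

def sig_gain(w):
    """Desired-signal gain relative to white noise at the beamformer output."""
    return abs(w.conj() @ a_true) ** 2 / (w.conj() @ w).real

g_plain, g_loaded = sig_gain(w_plain), sig_gain(w_loaded)
```

    With the strong source present in R, the mismatched plain MVDR treats the true steering vector as interference and nulls it; loading inflates the noise floor so the self-cancellation is much weaker.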

    Characterizing neural mechanisms of attention-driven speech processing


    The perceptual flow of phonetic feature processing


    Cross-spectral synergy and consonant identification (A)
