Audio-Visual Target Speaker Enhancement on Multi-Talker Environment using Event-Driven Cameras
We propose a method for audio-visual target speaker enhancement in multi-talker environments using event-driven cameras. State-of-the-art audio-visual speech separation methods show that the movement of facial landmarks related to speech production carries crucial information. However, all approaches proposed so far work offline on frame-based video input, making it difficult to process an audio-visual signal with the low latency required by online applications. To overcome this limitation, we propose the use of event-driven cameras, exploiting their inherent data compression, high temporal resolution, and low latency for low-cost motion feature extraction, a step towards online embedded audio-visual speech processing. We use the event-driven optical flow estimated at the facial landmarks as input to a stacked bidirectional LSTM trained to predict an Ideal Amplitude Mask, which is then used to filter the noisy audio and recover the audio signal of the target speaker. The presented approach performs almost on par with the frame-based approach, with very low latency and computational cost.

Comment: Accepted at ISCAS 202