Efficient Invariant Features for Sensor Variability Compensation in Speaker Recognition
In this paper, we investigate the use of invariant features for speaker recognition. Owing to their characteristics, these features are introduced to cope with the difficult and challenging problem of sensor variability, a source of performance degradation inherent to speaker recognition systems. Our experiments show: (1) the effectiveness of these features in matched conditions; (2) the benefit of combining them with mel-frequency cepstral coefficients (MFCC) to exploit their discriminative power under uncontrolled conditions (mismatched cases). Consequently, the proposed invariant features yield a performance improvement, demonstrated by a reduction in the equal error rate and the minimum decision cost function compared to GMM-UBM speaker recognition systems based on MFCC features.
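As an illustration of the evaluation metrics mentioned above, the following minimal Python sketch computes the equal error rate and the minimum decision cost function from hypothetical target/non-target score arrays, and fuses the scores of an MFCC-based GMM-UBM subsystem with those of an invariant-feature subsystem by a simple weighted sum. The weight alpha and the cost parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def equal_error_rate(target_scores, nontarget_scores):
    """EER: sweep a decision threshold over all observed scores."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    fnr = np.array([(target_scores < t).mean() for t in thresholds])      # miss rate
    fpr = np.array([(nontarget_scores >= t).mean() for t in thresholds])  # false-alarm rate
    idx = np.argmin(np.abs(fnr - fpr))
    return (fnr[idx] + fpr[idx]) / 2.0

def min_dcf(target_scores, nontarget_scores, p_target=0.01, c_miss=10.0, c_fa=1.0):
    """Minimum decision cost function over all thresholds (NIST-style parameters assumed)."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    costs = []
    for t in thresholds:
        p_miss = (target_scores < t).mean()
        p_fa = (nontarget_scores >= t).mean()
        costs.append(c_miss * p_miss * p_target + c_fa * p_fa * (1.0 - p_target))
    return min(costs)

def fuse(mfcc_scores, invariant_scores, alpha=0.5):
    """Hypothetical score-level fusion of the MFCC and invariant-feature subsystems."""
    return alpha * mfcc_scores + (1.0 - alpha) * invariant_scores
```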
Multimodal person recognition for human-vehicle interaction
Next-generation vehicles will undoubtedly feature biometric person recognition as part of an effort to improve the driving experience. Today's technology prevents such systems from operating satisfactorily under adverse conditions. The proposed framework for person recognition successfully combines different biometric modalities, as borne out in two case studies.
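The abstract does not spell out the combination scheme; a common choice is score-level fusion, sketched below in Python with min-max normalization and a weighted sum over hypothetical face and speaker scores. The modalities, weights, and decision threshold are assumptions for illustration only.

```python
import numpy as np

def minmax_norm(scores):
    """Map raw matcher scores to [0, 1] so modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-9)

def fuse_modalities(face_scores, voice_scores, w_face=0.6, w_voice=0.4):
    """Weighted score-level fusion of two biometric matchers (weights assumed)."""
    return w_face * minmax_norm(face_scores) + w_voice * minmax_norm(voice_scores)

# Toy usage: scores against a gallery of 5 enrolled drivers; accept the best
# match only if the fused score clears an illustrative threshold.
face = np.array([0.81, 0.22, 0.35, 0.10, 0.40])
voice = np.array([0.77, 0.30, 0.25, 0.15, 0.33])
fused = fuse_modalities(face, voice)
best = int(np.argmax(fused))
print(best if fused[best] >= 0.5 else None)
```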
Detecting User Engagement in Everyday Conversations
This paper presents a novel application of speech emotion recognition:
estimation of the level of conversational engagement between users of a voice
communication system. We begin by using machine learning techniques, such as
the support vector machine (SVM), to classify users' emotions as expressed in
individual utterances. However, this alone fails to model the temporal and
interactive aspects of conversational engagement. We therefore propose the use
of a multilevel structure based on coupled hidden Markov models (HMM) to
estimate engagement levels in continuous natural speech. The first level
comprises SVM-based classifiers that recognize emotional states, such as
discrete emotion types or arousal/valence levels. A high-level HMM
then uses these emotional states as input, estimating users' engagement in
conversation by decoding the internal states of the HMM. We report experimental
results obtained by applying our algorithms to the LDC Emotional Prosody and
CallFriend speech corpora.
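To make the two-level structure concrete, here is a minimal Python sketch: an SVM (scikit-learn) would label each utterance with an emotion class, and a small hand-rolled Viterbi decoder recovers a sequence of hidden engagement states from those labels. It uses a single discrete-observation HMM rather than the coupled HMMs of the paper, and all arrays, state counts, and probabilities are illustrative placeholders.

```python
import numpy as np
from sklearn.svm import SVC

# Low level: an SVM maps per-utterance acoustic features to discrete emotion labels.
# X_train and y_train below are hypothetical training arrays, not data from the paper.
emotion_clf = SVC(kernel="rbf")
# emotion_clf.fit(X_train, y_train); emotions = emotion_clf.predict(X_dialog)

def viterbi(obs, start_p, trans_p, emit_p):
    """Decode the most likely hidden engagement-state sequence for a sequence
    of discrete emotion observations (standard Viterbi in the log domain)."""
    n_states, T = len(start_p), len(obs)
    logd = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logd[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logd[t - 1] + np.log(trans_p[:, s])
            back[t, s] = np.argmax(scores)
            logd[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logd[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# High level: an HMM over two engagement states (low/high) whose observations are
# the emotion labels; the probabilities below are illustrative placeholders.
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2], [0.2, 0.8]])
emit = np.array([[0.6, 0.3, 0.1],    # low engagement over 3 emotion classes
                 [0.1, 0.3, 0.6]])   # high engagement
emotions = [0, 0, 1, 2, 2, 2, 1, 0]  # toy emotion-label sequence
print(viterbi(emotions, start, trans, emit))
```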
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction and a motivation for fluid
human-robot communication, ten desiderata are proposed that provide an
organizing axis for both recent and future research on human-robot
communication. The ten desiderata are then examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
Harnessing AI for Speech Reconstruction using Multi-view Silent Video Feed
Speechreading or lipreading is the technique of understanding and getting
phonetic features from a speaker's visual features such as movement of lips,
face, teeth and tongue. It has a wide range of multimedia applications such as
in surveillance, Internet telephony, and as an aid to a person with hearing
impairments. However, most of the work in speechreading has been limited to
text generation from silent videos. Recently, research has started venturing
into generating (audio) speech from silent video sequences, but there have been
no developments thus far in dealing with divergent views and poses of a
speaker. Thus, although multiple camera feeds of a user's speech may be
available, they have not yet been used to handle different poses. To this end,
this paper presents the first multi-view speech reading and reconstruction
system. This work pushes the boundaries of multimedia research by putting
forth a model that leverages silent video feeds from multiple cameras
recording the same subject to generate intelligible speech for a speaker.
Initial results confirm the usefulness of
exploiting multiple camera views in building an efficient speech reading and
reconstruction system. It further shows the optimal placement of cameras which
would lead to the maximum intelligibility of speech. Next, it lays out various
innovative applications for the proposed system, focusing on its potential
impact not just in the security arena but in many other multimedia analytics
problems.
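A rough sketch of what a multi-view reconstruction model could look like is given below (PyTorch): a shared CNN encodes the mouth region from each camera view, the per-view embeddings are fused, and a recurrent decoder emits acoustic features such as mel-spectrogram frames. The architecture, layer sizes, and number of views are assumptions for illustration, not the system described in the paper.

```python
import torch
import torch.nn as nn

class MultiViewSpeechReconstructor(nn.Module):
    """Illustrative multi-view model: a shared CNN encodes each camera view,
    per-view embeddings are fused, and a GRU decoder emits mel-spectrogram
    frames. Sizes and structure are assumptions, not the paper's model."""
    def __init__(self, n_views=2, emb_dim=256, n_mels=80):
        super().__init__()
        self.encoder = nn.Sequential(             # frame-level lip encoder
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(64 * 4 * 4, emb_dim),
        )
        self.fuse = nn.Linear(n_views * emb_dim, emb_dim)
        self.decoder = nn.GRU(emb_dim, emb_dim, batch_first=True)
        self.to_mel = nn.Linear(emb_dim, n_mels)

    def forward(self, views):
        # views: (batch, n_views, time, 1, H, W) grayscale mouth crops per camera
        b, v, t, c, h, w = views.shape
        emb = self.encoder(views.reshape(b * v * t, c, h, w)).reshape(b, v, t, -1)
        fused = self.fuse(emb.permute(0, 2, 1, 3).reshape(b, t, -1))
        out, _ = self.decoder(fused)
        return self.to_mel(out)                   # (batch, time, n_mels)

# Example: two views, 25 video frames of 64x64 mouth crops.
model = MultiViewSpeechReconstructor()
mel = model(torch.randn(2, 2, 25, 1, 64, 64))     # -> (2, 25, 80)
```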
Continuous Authentication for Voice Assistants
Voice has become an increasingly popular User Interaction (UI) channel,
mainly contributing to the ongoing trend of wearables, smart vehicles, and home
automation systems. Voice assistants such as Siri, Google Now, and Cortana have
become everyday fixtures, especially in scenarios where touch interfaces
are inconvenient or even dangerous to use, such as driving or exercising.
Nevertheless, the open nature of the voice channel makes voice assistants
difficult to secure and exposed to various attacks as demonstrated by security
researchers. In this paper, we present VAuth, the first system that provides
continuous and usable authentication for voice assistants. We design VAuth to
fit in various widely-adopted wearable devices, such as eyeglasses,
earphones/buds, and necklaces, where it collects the body-surface vibrations of
the user and matches them with the speech signal received by the voice
assistant's microphone. VAuth guarantees that the voice assistant executes only
the commands that originate from the voice of the owner. We have evaluated
VAuth with 18 users and 30 voice commands and find it to achieve an almost
perfect matching accuracy with less than 0.1% false positive rate, regardless
of VAuth's position on the body and the user's language, accent or mobility.
VAuth successfully thwarts different practical attacks, such as replay
attacks, mangled-voice attacks, and impersonation attacks. It also has low
energy and latency overheads and is compatible with most existing voice
assistants.
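As a loose illustration of the matching idea (not VAuth's actual pipeline), the sketch below compares the body-surface vibration signal with the microphone signal using a peak normalized cross-correlation and accepts the command only if the two agree; the sampling assumptions and the threshold are illustrative.

```python
import numpy as np

def normalized_xcorr_peak(vibration, audio):
    """Peak normalized cross-correlation between an accelerometer (body-surface
    vibration) signal and the microphone signal, assumed to share a sample rate
    and rough alignment. A simple proxy for the matching step, not VAuth itself."""
    v = (vibration - vibration.mean()) / (vibration.std() + 1e-9)
    a = (audio - audio.mean()) / (audio.std() + 1e-9)
    corr = np.correlate(a, v, mode="full") / len(v)
    return np.max(np.abs(corr))

def command_is_from_owner(vibration, audio, threshold=0.6):
    """Accept the voice command only if vibration and audio agree; the
    threshold is an illustrative value, not a measured operating point."""
    return normalized_xcorr_peak(vibration, audio) >= threshold

# Toy usage with a synthetic shared component plus independent noise.
t = np.linspace(0, 1, 8000)
speech = np.sin(2 * np.pi * 220 * t)
print(command_is_from_owner(speech + 0.1 * np.random.randn(t.size),
                            speech + 0.1 * np.random.randn(t.size)))
```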