Robust ASR using Support Vector Machines
The theoretical advantages that Support Vector Machines offer over other machine learning alternatives, thanks to their max-margin training paradigm, have led us to propose them as a good technique for robust speech recognition. However, important shortcomings have had to be circumvented, the most important being the normalisation of the time duration of different realisations of the acoustic speech units.
In this paper, we have compared two approaches in noisy environments: first, a hybrid HMM–SVM solution where a fixed number of frames is selected by means of an HMM segmentation and, second, a normalisation kernel called the Dynamic Time Alignment Kernel (DTAK), first introduced in Shimodaira et al. [Shimodaira, H., Noma, K., Nakai, M., Sagayama, S., 2001. Support vector machine with dynamic time-alignment kernel for speech recognition. In: Proc. Eurospeech, Aalborg, Denmark, pp. 1841–1844] and based on DTW (Dynamic Time Warping). Special attention has been paid to the adaptation of both alternatives to noisy environments, comparing two types of parameterisation and performing suitable feature normalisation operations. The results show that the DTA Kernel provides important advantages over the baseline HMM system in medium to bad noise conditions, also outperforming the hybrid system.
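For readers unfamiliar with alignment kernels, the sketch below is a minimal, hypothetical illustration of a DTW-style kernel between two variable-length frame sequences. It assumes a linear frame-level kernel and simple path-length normalisation; it shows the general idea rather than the exact recursion of Shimodaira et al. (2001), and all names and dimensions are made up.

```python
import numpy as np

def dtw_alignment_kernel(X, Y):
    """Sketch of a DTW-style alignment kernel (DTAK-like).

    X, Y: arrays of shape (n_frames, n_features). The score accumulates
    frame-level inner products along the best monotonic alignment path
    and is normalised by the path length, so utterances of different
    durations become comparable.
    """
    n, m = len(X), len(Y)
    local = X @ Y.T                      # frame-pair similarities (linear kernel)
    score = np.full((n + 1, m + 1), -np.inf)
    length = np.zeros((n + 1, m + 1))
    score[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            candidates = [
                (score[i - 1, j - 1], length[i - 1, j - 1]),  # diagonal step
                (score[i - 1, j],     length[i - 1, j]),      # advance in X only
                (score[i,     j - 1], length[i,     j - 1]),  # advance in Y only
            ]
            best_score, best_len = max(candidates, key=lambda c: c[0])
            score[i, j] = best_score + local[i - 1, j - 1]
            length[i, j] = best_len + 1
    return score[n, m] / length[n, m]

# usage: kernel value between two utterances of different lengths
x = np.random.randn(40, 13)   # e.g. 40 frames of 13 cepstral features
y = np.random.randn(55, 13)
print(dtw_alignment_kernel(x, y))
```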
Automatic Environmental Sound Recognition: Performance versus Computational Cost
In the context of the Internet of Things (IoT), sound sensing applications are required to run on embedded platforms where notions of product pricing and form factor impose hard constraints on the available computing power. Whereas Automatic Environmental Sound Recognition (AESR) algorithms are most often developed with limited consideration for computational cost, this article seeks to determine which AESR algorithm can make the most of a limited amount of computing power by comparing sound classification performance as a function of computational cost. Results suggest that Deep Neural Networks yield the best trade-off between sound classification accuracy and computational cost across a range of costs, while Gaussian Mixture Models offer reasonable accuracy at a consistently small cost, and Support Vector Machines stand between the two in terms of the compromise between accuracy and computational cost.
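As a rough illustration of how classification cost can be put on a common scale across such models, the hypothetical sketch below counts multiply-accumulate operations per classified frame for a diagonal-covariance GMM and a fully connected DNN. The component counts and layer sizes are invented examples, not configurations from the article.

```python
def gmm_macs_per_frame(n_components, n_features):
    """Approximate multiply-accumulates to score one frame with a
    diagonal-covariance GMM: roughly two multiplications per feature
    dimension and per component (a simplifying assumption)."""
    return n_components * n_features * 2

def dnn_macs_per_frame(layer_sizes):
    """Multiply-accumulates for one forward pass through a fully
    connected network with the given layer sizes (input .. output)."""
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

# hypothetical configurations, not the ones benchmarked in the article
print(gmm_macs_per_frame(n_components=64, n_features=40))   # 5,120 MACs
print(dnn_macs_per_frame([40, 128, 128, 2]))                # 21,760 MACs
```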
On the Depth of Deep Neural Networks: A Theoretical View
People believe that depth plays an important role in the success of deep neural networks (DNN). However, as far as we know, this belief lacks solid theoretical justification. We investigate the role of depth from the perspective of the margin bound. In the margin bound, the expected error is upper bounded by the empirical margin error plus a Rademacher Average (RA) based capacity term. First, we derive an upper bound for the RA of DNN and show that it increases with depth, which indicates a negative impact of depth on test performance. Second, we show that deeper networks tend to have larger representation power (measured by Betti-numbers-based complexity) than shallower networks in the multi-class setting, and thus can lead to smaller empirical margin error, which implies a positive impact of depth. The combination of these two results shows that for a DNN with a restricted number of hidden units, increasing depth is not always good, since there is a trade-off between the positive and negative impacts. These results inspire us to seek alternative ways to achieve the positive impact of depth, e.g., imposing margin-based penalty terms on the cross-entropy loss so as to reduce the empirical margin error without increasing depth. Our experiments show that in this way we achieve significantly better test performance.
Comment: AAAI 201