A Computational Model of Auditory Feature Extraction and Sound Classification
This thesis introduces a computer model that incorporates responses similar to
those found in the cochlea, in sub-cortical auditory processing, and in auditory
cortex. The principal aim of this work is to show that this can form the basis
for a biologically plausible mechanism of auditory stimulus classification. We will
show that this classification is robust to stimulus variation and time compression.
In addition, the response of the system is shown to support multiple, concurrent,
behaviourally relevant classifications of natural stimuli (speech).
The model incorporates transient enhancement, an ensemble of spectro-temporal
filters, and a simple measure analogous to the idea of visual salience
to produce a quasi-static description of the stimulus suitable either for classification
with an analogue artificial neural network or, using appropriate rate coding,
a classifier based on artificial spiking neurons. We also show that the spectro-temporal
ensemble can be derived from a limited class of 'formative' stimuli, consistent
with a developmental interpretation of ensemble formation. In addition,
ensembles chosen on information theoretic grounds consist of filters with relatively
simple geometries, which is consistent with reports of responses in mammalian
thalamus and auditory cortex.
A powerful feature of this approach is that the ensemble response, from
which salient auditory events are identified, amounts to a stimulus-ensemble-driven
method of segmentation that respects the envelope of the stimulus and leads
to a quasi-static representation of auditory events suitable for spike-rate
coding.
We also present evidence that the encoded auditory events may form the
basis of a representation-of-similarity, or second order isomorphism, which implies
a representational space that respects similarity relationships between stimuli,
including novel stimuli.
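The spike-rate coding step described above can be illustrated with a minimal sketch, assuming Poisson-firing model neurons whose rates are set by a normalised quasi-static feature vector; the function and parameter names (`rate_code`, `max_rate`, `window`) are our own illustrative choices, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(features, max_rate=100.0, window=0.5):
    """Encode a quasi-static feature vector as spike counts.

    Each feature (assumed normalised to [0, 1]) sets the firing rate of one
    model neuron; spikes are drawn from a Poisson process over a fixed
    integration window (in seconds).
    """
    rates = np.clip(features, 0.0, 1.0) * max_rate   # spikes per second
    return rng.poisson(rates * window)               # spike count per neuron

# hypothetical description of one auditory event, one value per ensemble filter
event = np.array([0.1, 0.8, 0.4])
counts = rate_code(event)
print(counts)   # one non-negative spike count per feature
```

Because the representation is quasi-static over the event, a single window of counts suffices for a downstream spiking classifier.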
Acoustic Scene Classification
This work was supported by the Centre for Digital Music Platform (grant EP/K009559/1) and a Leadership Fellowship
(EP/G007144/1), both from the United Kingdom Engineering and Physical Sciences Research Council.
A Compact and Discriminative Feature Based on Auditory Summary Statistics for Acoustic Scene Classification
One of the biggest challenges of acoustic scene classification (ASC) is to
find proper features to better represent and characterize environmental sounds.
Environmental sounds generally involve more sound sources while exhibiting
less structure in their temporal-spectral representations. However, the background of an
acoustic scene exhibits temporal homogeneity in acoustic properties, suggesting
it could be characterized by distribution statistics rather than temporal
details. In this work, we investigated using auditory summary statistics as the
feature for ASC tasks. The inspiration comes from a recent neuroscience study,
which shows that the human auditory system tends to perceive sound textures through
time-averaged statistics. We further proposed using linear discriminant
analysis to eliminate redundancies among these statistics while keeping the
discriminative information, providing an extremely compact representation for
acoustic scenes. Experimental results show the outstanding performance of the
proposed feature over conventional handcrafted features.
Comment: Accepted as a conference paper of Interspeech 201
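As a rough sketch of this pipeline, one can compute simple per-band distribution statistics and project them onto a Fisher linear discriminant. The statistics below (per-band mean, standard deviation and skewness) and the two-class toy data are our own illustrative assumptions, not the paper's actual feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

def summary_statistics(signal, n_bands=8):
    """Toy auditory summary statistics: mean, std and skewness per band."""
    stats = []
    for band in np.array_split(signal, n_bands):
        m, s = band.mean(), band.std() + 1e-12
        stats += [m, s, ((band - m) ** 3).mean() / s ** 3]
    return np.array(stats)

# hypothetical dataset: two scene classes with different band statistics
X = np.stack([summary_statistics(rng.normal(loc=c, scale=1 + c, size=1024))
              for c in (0, 1) for _ in range(50)])
y = np.repeat([0, 1], 50)

# Fisher's linear discriminant, w proportional to Sw^{-1} (mu1 - mu0):
# it keeps the discriminative direction and discards redundant dimensions,
# collapsing 24 statistics into a single scalar per scene
mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)
w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
Z = X @ w    # 100 scalars: an extremely compact scene descriptor
print(Z.shape)
```

With more than two classes, the same idea generalises to at most (number of classes - 1) discriminant directions, which is what keeps the representation so compact.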
Automatic Environmental Sound Recognition: Performance versus Computational Cost
In the context of the Internet of Things (IoT), sound sensing applications
are required to run on embedded platforms where notions of product pricing and
form factor impose hard constraints on the available computing power. Whereas
Automatic Environmental Sound Recognition (AESR) algorithms are most often
developed with limited consideration for computational cost, this article seeks
to determine which AESR algorithm can make the most of a limited amount of
computing power by comparing sound classification performance as a function of
computational cost. Results suggest that Deep Neural Networks yield the best
classification accuracy across a range of computational costs, while Gaussian
Mixture Models offer reasonable accuracy at a consistently small cost, and
Support Vector Machines stand between the two in their compromise between
accuracy and computational cost.
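A back-of-the-envelope cost model makes this kind of comparison concrete; the multiply-accumulate (MAC) estimates per classified frame below are our own rough sketches, not figures from the article.

```python
def dnn_macs(layer_sizes):
    """Fully connected DNN: one MAC per weight, summed over layers."""
    return sum(a * b for a, b in zip(layer_sizes[:-1], layer_sizes[1:]))

def gmm_macs(n_components, dim):
    """Diagonal-covariance GMM: roughly 3*dim MACs per mixture component."""
    return n_components * 3 * dim

def svm_macs(n_support_vectors, dim):
    """RBF-kernel SVM: one kernel evaluation per support vector."""
    return n_support_vectors * (2 * dim + 2)

# hypothetical models classifying 13-dimensional frames
for name, macs in [("DNN 13-64-64-10", dnn_macs([13, 64, 64, 10])),
                   ("GMM, 32 components", gmm_macs(32, 13)),
                   ("SVM, 500 support vectors", svm_macs(500, 13))]:
    print(f"{name}: ~{macs} MACs per frame")
```

Under these assumptions the GMM is cheapest and the SVM's cost scales with its support-vector count, which is consistent with the article's framing of accuracy versus computational cost as the quantity to optimise on embedded platforms.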