    Data mining on urban sound sensor networks

    ICA 2016, 22nd International Congress on Acoustics, Buenos Aires, Argentina, 05/09/2016 - 09/09/2016

    Urban sound sensor networks deliver megabytes of data on a daily basis, so the question of how to extract useful knowledge from this overwhelming dataset is a pressing one. This paper presents and compares two very different approaches. The first uses as much expert knowledge as possible about how people perceive the sonic environment; the second simply treats the spectra obtained at every time step as meaningless numbers and tries to structure them in a meaningful way. The expert-knowledge approach starts by extracting features that a human listener might use to detect salient sounds and to recognize them. These features are fed to a recurrent neural network that learns, in an unsupervised way, to structure and group them based on co-occurrence and typical sequences. The network is constructed to mimic human auditory processing and includes inhibition and adaptation processes. Its outcome is the activation of a set of several hundred neurons. The second approach collects one-minute sequences of sound spectra (1/8-second time step) and summarizes each sequence with a Gaussian mixture model in frequency-amplitude space. The means and standard deviations of the Gaussians are used for further analysis. In both cases, the outcome is clustered to analyze similarities over space and time and to detect outliers. Both approaches are applied to a dataset obtained from 25 measurement nodes over approximately one and a half years in Paris, France. Although the approach based on human listening models is expected to be much more precise for analyzing and clustering soundscapes, it is also much slower than the blind data analysis.
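
    The "blind" second approach is concrete enough to sketch. The snippet below is a minimal illustration, assuming 1/8-second spectra (480 frames per minute) summarized per minute with scikit-learn's GaussianMixture and then clustered with KMeans, with large distance to the nearest centroid used as a simple outlier flag. The band layout, number of mixture components, cluster count, and outlier rule are illustrative assumptions, not the settings reported in the paper.

    # Minimal sketch of the GMM summarization and clustering step
    # (assumed band layout, component and cluster counts; not the paper's exact settings).
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.cluster import KMeans


    def summarize_minute(spectra, freqs, n_components=3):
        """Summarize one minute of spectra (480 frames x n_bands)
        with a Gaussian mixture fitted in frequency-amplitude space."""
        # Turn every (frequency band, level) pair of every frame into a 2-D point.
        points = np.column_stack([
            np.tile(freqs, spectra.shape[0]),   # frequency axis
            spectra.reshape(-1),                # amplitude axis (dB)
        ])
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag", random_state=0).fit(points)
        # Means and standard deviations of the Gaussians form the feature vector.
        return np.concatenate([gmm.means_.ravel(),
                               np.sqrt(gmm.covariances_).ravel()])


    # Example: random stand-in data for one node and one hour (60 one-minute blocks).
    rng = np.random.default_rng(0)
    freqs = np.log10(np.geomspace(20, 20000, 31))   # 31 assumed bands on a log axis
    features = np.array([
        summarize_minute(rng.normal(50, 10, size=(480, freqs.size)), freqs)
        for _ in range(60)
    ])

    # Cluster the per-minute summaries; minutes far from every centroid are
    # flagged as potential outliers (unusual sound events).
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
    dist_to_centroid = np.min(km.transform(features), axis=1)
    outliers = np.argsort(dist_to_centroid)[-3:]
    print("Most atypical minutes:", outliers)

    In this sketch the per-minute feature vector is fixed-length (means plus standard deviations of the mixture components), which is what makes the subsequent clustering over space and time straightforward; the expert-knowledge approach would replace these features with the activations of the auditory-inspired recurrent network.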