297 research outputs found

    Cough Monitoring Through Audio Analysis

    The detection of cough events in audio recordings requires the analysis of a significant amount of data, as cough is typically monitored continuously over several hours to capture naturally occurring cough events. The recorded data is mostly composed of undesired sound events such as silence, background noise, and speech. To reduce computational costs and to address the ethical concerns raised by the collection of audio data in public environments, the data requires pre-processing prior to any further analysis. Current cough detection algorithms typically use pre-processing methods to remove undesired audio segments from the collected data, but they do not preserve the privacy of the individuals being recorded while respiratory events are monitored. This study reveals the need for an automatic pre-processing method that removes sensitive data from the recording prior to any further analysis, ensuring that the privacy of individuals is preserved. Specific characteristics of cough sounds can be used to discard sensitive data from audio recordings at the pre-processing stage, improving privacy preservation and reducing the ethical concerns associated with cough monitoring through audio analysis. We propose a pre-processing algorithm that increases privacy preservation and significantly decreases the amount of data to be analysed by separating cough segments from non-cough segments, including speech, in audio recordings. Our method verifies the presence of signal energy in both the lower and higher frequency regions and discards segments whose energy is concentrated in only one of them. The method is applied iteratively to the same data to increase the percentage of data reduction and privacy preservation. We evaluated the performance of our algorithm using several hours of audio recordings with manually pre-annotated cough and speech events. Our results showed that five iterations of the proposed method can discard up to 88.94% of the speech content present in the recordings, allowing for strong privacy preservation while considerably reducing the amount of data to be further analysed, by 91.79%. The data reduction and privacy preservation achieved by the proposed pre-processing algorithm offer the possibility of using larger datasets captured in public environments and would benefit all cough detection algorithms by preserving the privacy of subjects and of bystander conversations recorded during cough monitoring.
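    The core test described above — keep a segment only when appreciable signal energy is present in both the low- and high-frequency regions, and repeat the test over several passes — can be sketched roughly as below. This is a minimal illustration rather than the authors' implementation: the 2 kHz band split, the 10% energy-ratio threshold, the 0.5 s segment length, and the five-pass loop are assumed values chosen only for the example.

        import numpy as np

        def keep_segment(segment, sr, split_hz=2000.0, ratio_thresh=0.1):
            # Keep a segment only if neither the low- nor the high-frequency
            # band holds less than ratio_thresh of the total spectral energy.
            # split_hz and ratio_thresh are illustrative assumptions.
            spectrum = np.abs(np.fft.rfft(segment)) ** 2
            freqs = np.fft.rfftfreq(len(segment), d=1.0 / sr)
            low = spectrum[freqs < split_hz].sum()
            high = spectrum[freqs >= split_hz].sum()
            total = low + high
            return total > 0 and min(low, high) / total >= ratio_thresh

        def prefilter(audio, sr, seg_len_s=0.5, n_iter=5):
            # Split the recording into fixed-length segments, discard those
            # that fail the band-energy test, and re-apply the test iteratively.
            hop = int(seg_len_s * sr)
            kept = np.asarray(audio, dtype=float)
            for _ in range(n_iter):
                segments = [kept[i:i + hop] for i in range(0, len(kept), hop)]
                passed = [s for s in segments if len(s) > 1 and keep_segment(s, sr)]
                if not passed:
                    return np.array([], dtype=float)
                kept = np.concatenate(passed)
            return kept

    Applied to a long recording, only the surviving samples would be passed on to the actual cough detector, which is where the reported reductions in speech content and total data volume come from.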

    Automated bioacoustics: methods in ecology and conservation and their potential for animal welfare monitoring

    Vocalizations carry emotional, physiological and individual information, which suggests that they may serve as useful indicators for inferring animal welfare. At the same time, automated methods for analysing and classifying sound have developed rapidly, particularly in the fields of ecology, conservation and sound scene classification. These methods are already used to automatically classify animal vocalizations, for example to identify animal species and to estimate numbers of individuals. Despite this potential, they have not yet found widespread application in animal welfare monitoring. In this review, we first discuss current trends in sound analysis for ecology, conservation and sound scene classification. Following this, we detail the vocalizations produced by three of the most important farm livestock species: chickens (Gallus gallus domesticus), pigs (Sus scrofa domesticus) and cattle (Bos taurus). Finally, we describe how these methods can be applied to monitor animal welfare, with new potential for developing automated monitoring methods for large-scale farming.
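    As an illustration of the kind of pipeline these fields rely on, a common approach is to summarise each recording with spectral features (for example MFCCs) and train a standard classifier to assign a species or call-type label. The sketch below is a generic example under that assumption, not a method taken from the review; librosa and scikit-learn are used for brevity, and train_paths / train_labels are hypothetical lists of labelled clips.

        import numpy as np
        import librosa
        from sklearn.ensemble import RandomForestClassifier

        def clip_features(path, sr=22050, n_mfcc=20):
            # Summarise a clip as the mean and standard deviation of its MFCCs,
            # a compact feature set often used for vocalization classification.
            y, _ = librosa.load(path, sr=sr, mono=True)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

        def train_vocalization_classifier(train_paths, train_labels):
            # train_paths and train_labels are hypothetical labelled data.
            X = np.stack([clip_features(p) for p in train_paths])
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X, train_labels)
            return clf

    In principle, the same features and classifier could be retrained on welfare-relevant call categories rather than species labels, which mirrors the kind of reuse the review discusses.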

    Technology for Hearing Evaluation


    Modeling huge sound sources in a room acoustical calculation program
