5 research outputs found

    Automatic detection of cow/calf vocalizations in free-stall barn

    Precision livestock farming dictates the use of advanced technologies to understand, analyze, assess, and ultimately optimize a farm's production collectively, as well as the contribution of each individual animal. This work is part of a research project aiming to steer dairy producers toward more ethical rearing systems. To study cow welfare, we focus on reciprocal vocalizations, including mother-offspring contact calls. We present the setup of a suitable audio capturing system composed of automated recording units and propose an algorithm to automatically detect cow vocalizations in an indoor farm setting. More specifically, the algorithm has a two-level structure: a) first, the Hilbert follower is applied to segment the raw audio signals, and b) second, the detected blocks of acoustic activity are refined via a classification scheme based on hidden Markov models. After thorough evaluation, we demonstrate excellent detection results in terms of false positives, false negatives, and the confusion matrix.
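
    The first stage, envelope-based segmentation, can be illustrated with a short sketch. Below is a minimal Python example, assuming a mono signal sampled at sr Hz; the smoothing window and threshold are illustrative, not the paper's settings. It tracks the amplitude envelope via the analytic signal (Hilbert transform) and returns candidate activity segments, standing in for the Hilbert follower stage.

        import numpy as np
        from scipy.signal import hilbert

        def segment_activity(signal, sr, smooth_ms=50, threshold_db=-30):
            """Return (start, end) sample indices of candidate acoustic activity."""
            # Amplitude envelope via the analytic signal (Hilbert transform).
            envelope = np.abs(hilbert(signal))
            # Smooth the envelope with a moving average (window is illustrative).
            win = max(1, int(sr * smooth_ms / 1000))
            envelope = np.convolve(envelope, np.ones(win) / win, mode="same")
            # Threshold relative to the peak envelope level.
            level = 10 ** (threshold_db / 20) * envelope.max()
            active = envelope > level
            # Collect contiguous supra-threshold runs as candidate segments.
            edges = np.diff(active.astype(int))
            starts = np.where(edges == 1)[0] + 1
            ends = np.where(edges == -1)[0] + 1
            if active[0]:
                starts = np.r_[0, starts]
            if active[-1]:
                ends = np.r_[ends, len(active)]
            return list(zip(starts, ends))

    Each returned block would then be passed to the second stage, where the hidden Markov model classifier accepts or rejects it as a cow vocalization.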

    A Classification Scheme Based on Directed Acyclic Graphs for Acoustic Farm Monitoring

    Intelligent farming, as part of the green revolution, is advancing the world of agriculture in such a way that farms become evolving systems, with the goal of optimizing animal production in an eco-friendly way. In this direction, we propose exploiting the acoustic modality for farm monitoring. Such information could be used in a stand-alone or complementary mode to constantly monitor animal population and behavior. To this end, we designed a scheme classifying the vocalizations produced by farm animals. More precisely, we propose a directed acyclic graph, where each node carries out a binary classification task using hidden Markov models. The topological ordering follows a criterion derived from the Kullback-Leibler divergence. During the experimental phase, we employed a publicly available dataset including vocalizations of seven animals typically encountered in farms, and we report promising recognition rates outperforming state-of-the-art classifiers.
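
    The decision procedure of such a graph can be sketched generically. The minimal Python example below assumes each class has a trained hidden Markov model exposing a log-likelihood score(features) method (as, e.g., hmmlearn's GaussianHMM does); each node performs a binary test between two candidate classes and eliminates the loser, so K-1 tests yield a single label. For brevity, the node ordering here simply follows the candidate list rather than the paper's Kullback-Leibler criterion.

        def classify_dag(models, candidates, features):
            """Descend the DAG: each node eliminates one class until one remains."""
            remaining = list(candidates)
            while len(remaining) > 1:
                # Current node: binary test between the two outermost candidates.
                a, b = remaining[0], remaining[-1]
                if models[a].score(features) >= models[b].score(features):
                    remaining.pop()       # b loses this node, eliminate it
                else:
                    remaining.pop(0)      # a loses this node, eliminate it
            return remaining[0]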

    Transfer Learning for Improved Audio-Based Human Activity Recognition

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple-input, multiple-output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted out of signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes.
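
    The three steps can be illustrated under simplifying assumptions. In the Python sketch below, each class is summarized by a Gaussian over its feature vectors (stats maps each class to an estimated mean and full-rank covariance), statistical closeness is a symmetrized Kullback-Leibler divergence between those Gaussians, and the learned multiple-input, multiple-output transformation is approximated by a whiten-then-recolor linear map. All names are illustrative; the paper's actual transformation is learned, not closed-form.

        import numpy as np
        from scipy.linalg import inv, sqrtm

        def gaussian_kl(mu0, cov0, mu1, cov1):
            """KL(N0 || N1) between two multivariate Gaussians."""
            k = len(mu0)
            cov1_inv = inv(cov1)
            diff = mu1 - mu0
            return 0.5 * (np.trace(cov1_inv @ cov0) + diff @ cov1_inv @ diff
                          - k + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

        def closest_class(stats, target):
            """Step (a): the class statistically closest to the data-scarce one."""
            mu_t, cov_t = stats[target]
            others = (c for c in stats if c != target)
            return min(others, key=lambda c: gaussian_kl(*stats[c], mu_t, cov_t)
                                             + gaussian_kl(mu_t, cov_t, *stats[c]))

        def transform(X_src, stats, src, target):
            """Steps (b)-(c): map source-class features onto target statistics."""
            mu_s, cov_s = stats[src]
            mu_t, cov_t = stats[target]
            # Whiten with the source covariance, recolor with the target's.
            W = np.real(sqrtm(cov_t)) @ inv(np.real(sqrtm(cov_s)))
            return (X_src - mu_s) @ W.T + mu_t

    The transformed vectors can then augment the training set of the data-scarce class before the recognition models are trained.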

    Automatic Classification of Cat Vocalizations Emitted in Different Contexts

    Cats employ vocalizations to communicate information; thus, their sounds can carry a wide range of meanings. Concerning vocalization, an aspect of increasing relevance, directly connected with the welfare of such animals, is its emotional interpretation and the recognition of the production context. To this end, this work presents a proof of concept facilitating the automatic analysis of cat vocalizations based on signal processing and pattern recognition techniques, aimed at demonstrating whether the emission context can be identified from meow vocalizations, even if recorded in sub-optimal conditions. We rely on a dataset including vocalizations of Maine Coon and European Shorthair breeds emitted in three different contexts: waiting for food, isolation in an unfamiliar environment, and brushing. Towards capturing the emission context, we extract two sets of acoustic parameters, i.e., mel-frequency cepstral coefficients and temporal modulation features. Subsequently, these are modeled using a classification scheme based on a directed acyclic graph dividing the problem space. The experiments we conducted demonstrate the superiority of such a scheme over a series of generative and discriminative classification solutions. These results open up new perspectives for deepening our knowledge of acoustic communication between humans and cats and, in general, between humans and animals.
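
    Extracting the two feature sets can be sketched as follows, assuming librosa is available. The modulation features shown here are a crude stand-in (a low-order magnitude spectrum of each mel band's temporal trajectory), not the paper's exact parameterization, and the dimensions are illustrative.

        import numpy as np
        import librosa

        def extract_features(path, n_mfcc=13, n_mels=40, n_mod=8):
            """Per-recording vector: time-averaged MFCCs + temporal modulation."""
            y, sr = librosa.load(path, sr=None, mono=True)
            # Set 1: mel-frequency cepstral coefficients, averaged over time.
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
            # Set 2: temporal modulation -- spectrum of each mel band's
            # trajectory, keeping the n_mod lowest modulation bins.
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
            mod = np.abs(np.fft.rfft(np.log1p(mel), axis=1))[:, :n_mod]
            return np.concatenate([mfcc.mean(axis=1), mod.ravel()])

    The resulting vectors would then feed a directed acyclic graph classifier of the kind described above.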

    A Novel Holistic Modeling Approach for Generalized Sound Recognition
