87 research outputs found

    A Concept Drift-Aware DAG-Based Classification Scheme for Acoustic Monitoring of Farms

    Intelligent farming, as part of the green revolution, is advancing the world of agriculture in such a way that farms become dynamic, with the overall aim of optimizing animal production in an eco-friendly way. In this direction, this study proposes exploiting the acoustic modality for farm monitoring. Such information could be used in a stand-alone or complementary mode to monitor the farm constantly and in great detail. To this end, the authors designed a scheme classifying the vocalizations produced by farm animals. More precisely, a directed acyclic graph was proposed, where each node carries out a binary classification task using hidden Markov models. The topological ordering follows a criterion derived from the Kullback-Leibler divergence. In addition, a transfer learning-based module for handling concept drift was proposed. During the experimental phase, the authors employed a publicly available dataset including vocalizations of seven animals typically encountered on farms, and promising recognition rates were reported.
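    A Kullback-Leibler-based node ordering like the one described can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: it assumes univariate Gaussian class models and a symmetric KL criterion, and the function names and per-class statistics are my own choices.

```python
import numpy as np

def gaussian_kl(mu0, var0, mu1, var1):
    # Closed-form KL(N0 || N1) for univariate Gaussians.
    return 0.5 * (var0 / var1 + (mu1 - mu0) ** 2 / var1 - 1.0 + np.log(var1 / var0))

def order_classes(features_by_class):
    # Fit a Gaussian to each class's features (illustrative stand-in for
    # the HMM-based class models in the paper).
    stats = {c: (np.mean(x), np.var(x)) for c, x in features_by_class.items()}
    separability = {}
    for c, (mu0, v0) in stats.items():
        # Symmetrized KL divergence against every other class, averaged.
        divs = [gaussian_kl(mu0, v0, mu1, v1) + gaussian_kl(mu1, v1, mu0, v0)
                for d, (mu1, v1) in stats.items() if d != c]
        separability[c] = float(np.mean(divs))
    # Most separable class first: its binary classifier sits nearest the DAG root.
    return sorted(separability, key=separability.get, reverse=True)
```

    Under this criterion, easy-to-separate classes are peeled off first, so later binary nodes face progressively harder decisions.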

    Transfer Learning for Improved Audio-Based Human Activity Recognition

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple-input, multiple-output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted from signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes.
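    The transform-the-closest-class idea can be sketched with a simple moment-matching map: whiten samples of a statistically close donor class, then recolor them with the minority class's statistics. This is an assumed stand-in for the paper's learned multiple-input, multiple-output transformation, kept deliberately simple; the function name is hypothetical.

```python
import numpy as np

def augment_minority(source, target):
    # source: (n_src, n_features) samples of a statistically close donor class
    # target: (n_tgt, n_features) scarce samples of the minority class
    # Whiten the donor features, then recolor with the minority statistics,
    # yielding synthetic samples whose per-feature mean/std match the target.
    src_mu, src_sd = source.mean(axis=0), source.std(axis=0)
    tgt_mu, tgt_sd = target.mean(axis=0), target.std(axis=0)
    return (source - src_mu) / src_sd * tgt_sd + tgt_mu
```

    The synthetic samples inherit the donor class's higher-order structure while matching the minority class's first two moments, which is often enough to stabilize a generative model trained on the augmented pool.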

    Resolving the identification of weak-flying insects during flight: a coupling between rigorous data processing and biology

    1. Bioacoustic methods play an increasingly important role in the detection of insects in a range of surveillance and monitoring programs. 2. Weak-flying insects evade detection because they do not yield sufficient audio information to capture wingbeat and harmonic frequencies. These inaudible insects often pose a significant threat to food security as pests of key agricultural crops worldwide. 3. Automatic detection of such insects is crucial to the future of crop protection, providing critical information to assess the risk to a crop and the need for preventative measures. 4. We describe an experimental setup designed to derive audio recordings from a range of weak-flying aphids and beetles using an LED array. 5. A rigorous data-processing pipeline was developed to extract meaningful features, linked to morphological characteristics, from the audio and harmonic series for six aphid and two beetle species. 6. An ensemble of over 50 bioacoustic parameters was used to achieve species discrimination with a success rate of 80%. The inclusion of the dominant and fundamental frequencies improved discrimination between beetles and aphids due to large differences in wingbeat frequencies. 7. At the species level, error rates were minimised when harmonic features were supplemented by features indicative of differences in species’ flight energies.
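    Extracting the dominant frequency from such a recording can be sketched in a few lines; this is a minimal illustration assuming a clean mono signal and NumPy, not the paper's full 50-plus-parameter pipeline.

```python
import numpy as np

def dominant_frequency(signal, fs):
    # Hann window to reduce spectral leakage, then pick the frequency bin
    # with the most energy in the magnitude spectrum.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

    Because beetle and aphid wingbeat frequencies differ substantially, even this single scalar already separates the two groups coarsely; the species-level features in the study refine it with harmonic structure.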

    A Multispectral Backscattered Light Recorder of Insects’ Wingbeats

    Most reported optical recorders of the wingbeat of insects are based on so-called extinction light, which is the variation of light at the receiver due to the cast shadow of the insect’s wings and main body. In this type of recording device, the emitter uses light and is placed opposite the receiver, which is usually a single photodiode (or several). In this work, we present a different kind of wingbeat sensor and its associated recorder that aims to extract a deeper representational signal of the wingbeat event and a color characterization of the main body of the insect, namely: a) we record the backscattered light, which is richer in harmonics than the extinction light; b) we use three different spectral bands, i.e., a multispectral approach that aims to grasp the melanization, microstructural, and color features of the wing and body of the insects; and c) we average at the receiver’s level the backscattered signal from many LEDs that illuminate the wingbeating insect from multiple orientations, and thus offer a smoother and more complete signal than one based on a single snapshot. We present all the necessary details to reproduce the device, and we analyze many insects of interest like the bee Apis mellifera, the wasp Polistes gallicus, and some insects whose wingbeat characteristics are absent from the current literature, like Drosophila suzukii and Zaprionus, another member of the Drosophilidae family.

    Classifying Flies Based on Reconstructed Audio Signals

    Advancements in sensor technology and processing power have made it possible to create recording equipment that can reconstruct the audio signal of insects passing through a directed infrared beam. The widespread deployment of such devices would allow for a range of applications previously not practical. A sensor net of detectors could be used to help model population dynamics, assess the efficiency of interventions and serve as an early warning system. At the core of any such system is a classification problem: given a segment of audio collected as something passes through a sensor, can we classify it? We examine the case of detecting the presence of fly species, with a particular focus on mosquitoes. This gives rise to a range of problems such as: can we discriminate between species of fly? Can we detect different species of mosquito? Can we detect the sex of the insect? Automated classification would significantly improve the effectiveness and efficiency of vector monitoring using these sensor nets. We assess a range of time series classification (TSC) algorithms on data from two projects working in this area. We assess our prior belief that spectral features are most effective, and we remark on all approaches with respect to whether they can be considered “real-time”.
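    A toy illustration of spectral features for time series classification is sketched below: a 1-nearest-neighbour classifier of my own over log power spectra, not one of the benchmarked TSC algorithms, and assuming all segments share the same length.

```python
import numpy as np

def spectral_features(x):
    # Log power spectrum as a fixed-length, phase-invariant feature vector.
    return np.log1p(np.abs(np.fft.rfft(x)) ** 2)

def classify_1nn(train_X, train_y, query):
    # Nearest neighbour in spectral-feature space (Euclidean distance).
    feats = np.array([spectral_features(x) for x in train_X])
    q = spectral_features(query)
    return train_y[int(np.argmin(np.linalg.norm(feats - q, axis=1)))]
```

    Spectral features discard the exact timing of the wingbeat within the segment, which is one reason they are a natural prior for this kind of beam-crossing data.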

    Preservation and Promotion of Opera Cultural Heritage: The Experience of La Scala Theatre

    This paper focuses on music and music-related cultural heritage typically preserved by opera houses, starting from the experience achieved during the long-lasting collaboration between La Scala theater and the Laboratory of Music Informatics of the University of Milan. First, we will mention the most significant results achieved by the project in the fields of preservation, information retrieval and dissemination of cultural heritage through computer-based approaches. Moreover, we will discuss the possibilities offered by new technologies applied to the conservative context of an opera house, including: the multi-layer representation of music information to foster the accessibility of musical content also by non-experts; the adoption of 5G networks to deliver spherical videos of live events, thus opening new scenarios for cultural heritage enjoyment and dissemination; and deep learning approaches both to improve internal processes (e.g., back-office applications for music information retrieval) and to offer advanced services to users (e.g., highly customized experiences).

    BNN27, a 17-Spiroepoxy Steroid Derivative, Interacts With and Activates p75 Neurotrophin Receptor, Rescuing Cerebellar Granule Neurons from Apoptosis

    Neurotrophin receptors mediate a plethora of signals affecting neuronal survival. The p75 pan-neurotrophin receptor controls neuronal cell fate after its selective activation by immature and mature isoforms of all neurotrophins. It also exerts pleiotropic effects, interacting with a variety of ligands in different neuronal or non-neuronal cells. In the present study, we explored the biophysical and functional interactions of a blood-brain-barrier (BBB)-permeable, C17-spiroepoxy steroid derivative, BNN27, with the p75NTR receptor. BNN27 was recently shown to bind to the NGF high-affinity receptor, TrkA. We now tested the p75NTR-mediated effects of BNN27 in mouse Cerebellar Granule Neurons (CGNs), expressing p75NTR, but not TrkA, receptors. Our findings show that BNN27 physically interacts with p75NTR receptors at specific amino acid residues of its extracellular domain, inducing the recruitment of the p75NTR receptor to its effector protein RIP2 and the simultaneous release of RhoGDI in primary neuronal cells. Activation of the p75NTR receptor by BNN27 reverses serum-deprivation-induced apoptosis of CGNs, resulting in decreased phosphorylation of the pro-apoptotic JNK kinase and decreased cleavage of Caspase-3, effects completely abolished in CGNs isolated from p75NTR-null mice. In conclusion, BNN27 represents a lead molecule for the development of novel p75NTR ligands controlling specific p75NTR-mediated signaling of neuronal cell fate, with potential applications in therapeutics of neurodegenerative diseases and brain trauma.

    Call recognition and individual identification of fish vocalizations based on automatic speech recognition: An example with the Lusitanian toadfish

    The study of acoustic communication in animals often requires not only the recognition of species-specific acoustic signals but also the identification of individual subjects, all in a complex acoustic background. Moreover, when very long recordings are to be analyzed, automatic recognition and identification processes are invaluable tools for extracting the relevant biological information. A pattern recognition methodology based on hidden Markov models is presented, inspired by successful results obtained in the most widely known and complex acoustical communication signal: human speech. This methodology was applied here for the first time to the detection and recognition of fish acoustic signals, specifically in a stream of round-the-clock recordings of Lusitanian toadfish (Halobatrachus didactylus) in their natural estuarine habitat. The results show that this methodology is able not only to detect the mating sounds (boatwhistles) but also to identify individual male toadfish, reaching an identification rate of ca. 95%. Moreover, this method also proved to be a powerful tool for assessing signal durations in large data sets. However, the system failed to recognize other sound types. Funded by Fundação para a Ciência e a Tecnologia (FCT).
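    The competing-models idea behind such HMM recognizers (one model per call type or individual, highest likelihood wins) can be sketched as follows. For brevity this assumes discrete observation symbols; speech-style systems, presumably including the one described, use continuous emissions over cepstral features, and all model parameters below are invented for illustration.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    # Log-likelihood of a discrete observation sequence under an HMM,
    # computed with the scaled forward algorithm to avoid underflow.
    alpha = start * emit[:, obs[0]]
    s = alpha.sum()
    loglik = np.log(s)
    alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha /= s
    return loglik

def recognise(obs, models):
    # models: name -> (start probs, transition matrix, emission matrix).
    # Pick the model (call type / individual) with the highest likelihood.
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

    Detection then amounts to comparing the best call model against a background model, and individual identification to comparing per-fish models over the detected boatwhistles.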