
    SMILE: Smart Monitoring IoT Learning Ecosystem

    In industrial contexts, several solutions already exist to monitor systems and intervene in case of anomalies and/or failures. With a classic approach, covering all the requirements of the industrial field end-to-end means implementing a different solution for each monitoring platform. Moreover, the classic cause-effect association process in industrial monitoring requires a thorough understanding of the monitored ecosystem and of the main characteristics that trigger the detected anomalies; the complex decision-making systems put in place for this purpose often provide poor results. This paper introduces a new approach based on an innovative industrial monitoring platform, named SMILE, which offers an automatic service for monitoring the overall performance of a modern industrial system. By setting goals through a web dashboard, users can create their own machine/deep learning models and view the collected data and the produced results. Thanks to an unsupervised approach, the SMILE platform can learn which linear and non-linear correlations represent the overall state of the system in order to predict, and therefore report, abnormal behavior.
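
    The abstract describes this idea only at a high level and includes no code. The sketch below is a minimal, hypothetical illustration of the same principle: an unsupervised model (a bottlenecked MLP used autoencoder-style) learns the linear and non-linear correlations among sensor channels during normal operation, and new readings with a high reconstruction error are flagged as anomalous. All shapes, layer sizes, and the threshold are assumptions for illustration, not SMILE's actual implementation.

```python
# Illustrative sketch only: unsupervised anomaly detection via reconstruction
# error of a bottlenecked MLP trained to reproduce its own input.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder "normal operation" data: 1000 samples of 10 correlated sensor channels.
latent = rng.normal(size=(1000, 3))
X_train = latent @ rng.normal(size=(3, 10)) + 0.05 * rng.normal(size=(1000, 10))

scaler = StandardScaler().fit(X_train)
Xs = scaler.transform(X_train)

# Bottlenecked MLP trained autoencoder-style (input reproduced as target).
ae = MLPRegressor(hidden_layer_sizes=(8, 3, 8), max_iter=3000, random_state=0)
ae.fit(Xs, Xs)

# Reconstruction error on normal data defines an (assumed) anomaly threshold.
err = np.mean((ae.predict(Xs) - Xs) ** 2, axis=1)
threshold = np.quantile(err, 0.99)

def is_anomalous(x_new):
    """Flag new sensor readings whose reconstruction error exceeds the threshold."""
    xs = scaler.transform(np.atleast_2d(x_new))
    e = np.mean((ae.predict(xs) - xs) ** 2, axis=1)
    return e > threshold
```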

    Self-supervised and semi-supervised learning for road condition estimation from distributed road-side cameras

    Monitoring road conditions, e.g., water build-up due to intense rainfall, plays a fundamental role in ensuring road safety while increasing resilience to the effects of climate change. Distributed cameras provide an easy and affordable alternative to instrumented weather stations, enabling widespread, fine-grained road monitoring. Here, we propose a deep learning-based solution to automatically detect wet road events in continuous video streams acquired by road-side surveillance cameras. Our contribution is two-fold: first, we employ a convolutional Long Short-Term Memory model (convLSTM) to detect subtle changes in the road appearance, introducing a novel temporally consistent data augmentation to increase robustness to outdoor illumination conditions. Second, we present a contrastive self-supervised framework that is uniquely tailored to surveillance camera networks. The proposed technique was validated on a large-scale dataset comprising roughly 2000 full-day sequences (roughly 400K video frames, of which 300K are unlabelled), acquired from several road-side cameras over a span of two years. Experimental results show the effectiveness of self-supervised and semi-supervised learning, increasing the frame classification performance (measured by the area under the ROC curve) from 0.86 to 0.92. From the standpoint of event detection, we show that incorporating temporal features through a convLSTM model both improves the detection rate of wet road events (+10%) and reduces false positive alarms (–45%). The proposed techniques could also benefit other tasks related to weather analysis from road-side and vehicle-mounted cameras.
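
    As a rough illustration of the two ingredients named above, the sketch below builds a small convLSTM clip classifier and a temporally consistent brightness augmentation that applies the same random shift to every frame of a clip. It is a hypothetical Keras sketch, not the authors' model: the input resolution, clip length, layer sizes, and augmentation range are all assumed.

```python
# Hypothetical sketch (not the authors' code): convLSTM clip classifier for
# wet/dry road classification plus a temporally consistent brightness jitter.
import tensorflow as tf

def temporally_consistent_brightness(clip, max_delta=0.2, seed=None):
    """Apply the *same* random brightness shift to every frame of a clip
    (shape: [time, height, width, channels], values in [0, 1])."""
    delta = tf.random.uniform([], -max_delta, max_delta, seed=seed)
    return tf.clip_by_value(clip + delta, 0.0, 1.0)

def build_convlstm_classifier(seq_len=8, height=64, width=64, channels=3):
    """Binary wet-road classifier over short clips of road-camera frames."""
    inputs = tf.keras.Input(shape=(seq_len, height, width, channels))
    x = tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

model = build_convlstm_classifier()
model.summary()
```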

    Speech Quality Classifier Model based on DBN that Considers Atmospheric Phenomena

    Current implementations of 5G networks consider a higher frequency range of operation than previous telecommunication networks, making it possible to offer higher data rates for different applications. On the other hand, atmospheric phenomena can have a more negative impact on transmission quality at these frequencies. Thus, studying the quality of signals transmitted at high frequencies is relevant to guarantee the user's quality of experience. In this research, Recommendations ITU-R P.838-3 and ITU-R P.676-11, which describe methodologies to estimate the signal degradation caused by rainfall and atmospheric gases, respectively, are implemented in a network scenario. Speech signals are encoded with the AMR-WB codec and transmitted, and the perceptual speech quality is evaluated using the algorithm described in ITU-T Rec. P.863, better known as POLQA. The novelty of this work is a non-intrusive speech quality classifier that considers atmospheric phenomena. The classifier is based on Deep Belief Networks (DBN) and uses a Support Vector Machine with a radial basis function kernel (RBF-SVM) as the final classification stage to identify five predefined speech quality classes. Experimental results show that the proposed speech quality classifier reaches an accuracy between 92% and 95% for each quality class, outperforming the non-intrusive standard described in ITU-T Recommendation P.563. Furthermore, subjective tests carried out to validate the proposed classifier confirm its performance, with an accuracy of 94.8%.
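
    The final classification stage described above is an RBF-kernel SVM applied on top of DBN-derived features. The sketch below shows that stage only, with randomly generated placeholder features standing in for the DBN outputs and for the POLQA-based class labels; it is an illustrative sketch under those assumptions, not the authors' pipeline.

```python
# Illustrative sketch: five-class quality classification with an RBF-kernel SVM
# over placeholder features that stand in for DBN-extracted representations.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 500 degraded-speech samples, 64 features each,
# labelled with one of five predefined speech quality classes (0..4).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
y = rng.integers(0, 5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM as the final classification stage; C and gamma are assumed values.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```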

    Automatic Detection and Compression for Passive Acoustic Monitoring of the African Forest Elephant

    In this work, we consider applying machine learning to the analysis and compression of audio signals in the context of monitoring elephants in sub-Saharan Africa. Earth's biodiversity is increasingly under threat from sources of anthropogenic change (e.g. resource extraction, land use change, and climate change), and surveying animal populations is critical for developing conservation strategies. However, manually monitoring tropical forests or deep oceans is intractable. For species that communicate acoustically, researchers have argued for placing audio recorders in their habitats as a cost-effective and non-invasive method, a strategy known as passive acoustic monitoring (PAM). In collaboration with conservation efforts, we construct a large labeled dataset of passive acoustic recordings of the African Forest Elephant via crowdsourcing, comprising thousands of hours of recordings in the wild. Using state-of-the-art techniques in artificial intelligence, we improve upon previously proposed methods for classification and segmentation in passive acoustic monitoring. In real-time detection of elephant calls, network bandwidth quickly becomes a bottleneck, and efficient ways to compress the data are needed. Most audio compression schemes are aimed at human listeners and are unsuitable for low-frequency elephant calls. To remedy this, we provide a novel end-to-end differentiable method for compression of audio signals that can be adapted to acoustic monitoring of any species and dramatically improves over naive coding strategies.
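
    As a loose illustration of the detection side of such a pipeline (the differentiable compression method is not sketched here), the code below computes a low-frequency mel spectrogram of a recording window and defines a small CNN that scores it for the presence of a call. The sample rate, frequency band, and architecture are assumptions for illustration only, not the paper's model.

```python
# Hypothetical sketch (not the paper's model): low-frequency spectrogram
# features plus a small CNN call detector for passive acoustic recordings.
import numpy as np
import librosa
import tensorflow as tf

def low_freq_spectrogram(waveform, sr=2000, fmax=250.0, n_mels=32):
    """Mel spectrogram restricted to a low-frequency band where elephant
    rumbles carry most of their energy (band chosen here for illustration)."""
    mel = librosa.feature.melspectrogram(y=waveform, sr=sr, n_mels=n_mels, fmax=fmax)
    return librosa.power_to_db(mel, ref=np.max)

def build_call_detector(n_mels=32, n_frames=128):
    """Small CNN scoring a spectrogram window for the presence of a call."""
    inputs = tf.keras.Input(shape=(n_mels, n_frames, 1))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```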