
    Project RISE: Recognizing Industrial Smoke Emissions

    Industrial smoke emissions pose a significant concern to human health. Prior works have shown that using Computer Vision (CV) techniques to identify smoke as visual evidence can influence the attitude of regulators and empower citizens to pursue environmental justice. However, existing datasets are of neither sufficient quality nor sufficient quantity to train the robust CV models needed to support air quality advocacy. We introduce RISE, the first large-scale video dataset for Recognizing Industrial Smoke Emissions. We adopted a citizen science approach to collaborate with local community members to annotate whether a video clip has smoke emissions. Our dataset contains 12,567 clips from 19 distinct views from cameras that monitored three industrial facilities. These daytime clips span 30 days over two years, including all four seasons. We ran experiments using deep neural networks to establish a strong performance baseline and reveal smoke recognition challenges. Our survey study collected community feedback, and our data analysis revealed opportunities for integrating citizen scientists and crowd workers into the application of Artificial Intelligence for social good. (Comment: Technical report)
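    As a rough illustration of the kind of deep-neural-network baseline the abstract mentions, the sketch below trains a binary smoke / no-smoke classifier on short video clips. It assumes PyTorch and torchvision; the 3D-ResNet backbone and the hypothetical "SmokeClipDataset"-style loader are illustrative choices, not the authors' actual setup.

```python
# Hedged sketch: binary smoke recognition on video clips.
# Assumes clips arrive as tensors shaped (C, T, H, W); the data loader is hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.models.video import r3d_18

def build_model(num_classes: int = 2) -> nn.Module:
    model = r3d_18()                                        # 3D ResNet over space-time
    model.fc = nn.Linear(model.fc.in_features, num_classes) # smoke vs. no-smoke head
    return model

def train_one_epoch(model, loader: DataLoader, optimizer, device: str = "cuda"):
    model.train()
    criterion = nn.CrossEntropyLoss()
    for clips, labels in loader:                            # clips: (B, C, T, H, W)
        clips, labels = clips.to(device), labels.to(device)
        optimizer.zero_grad()
        criterion(model(clips), labels).backward()
        optimizer.step()
```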

    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object and event positions in the BIM model can be generated, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, its generic and modular setup, and its multi-modal strategy, which make it possible to automatically create situational awareness, improve localization and facilitate overall fire understanding.
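    To make the semantic-linking idea concrete, here is a toy sketch of matching labels coming out of an object detector against the metadata of BIM elements in order to vote for the building zone a camera most likely observes. The data structures and field names are hypothetical placeholders, standing in for the richer BIM and semantic descriptors the paper refers to.

```python
# Toy sketch of linking detected object labels with BIM element metadata.
# BimElement and its fields are hypothetical, not an actual IFC/BIM API.
from dataclasses import dataclass
from collections import Counter
from typing import List, Optional

@dataclass
class BimElement:
    guid: str
    category: str   # e.g. "Door", "FireExtinguisher"
    zone: str       # room or fire compartment the element belongs to

def locate_camera(detected_labels: List[str], bim_elements: List[BimElement]) -> Optional[str]:
    """Return the BIM zone whose elements best explain the detected objects."""
    labels = {lbl.lower() for lbl in detected_labels}
    votes = Counter(el.zone for el in bim_elements if el.category.lower() in labels)
    return votes.most_common(1)[0][0] if votes else None
```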

    Deep Convolutional Generative Adversarial Networks Based Flame Detection in Video

    Real-time flame detection is crucial in video-based surveillance systems. We propose a vision-based method to detect flames using Deep Convolutional Generative Adversarial Networks (DCGANs). Many existing supervised learning approaches using convolutional neural networks do not take temporal information into account and require a substantial amount of labeled data. In order to have a robust representation of sequences with and without flame, we propose a two-stage training of a DCGAN that exploits spatio-temporal flame evolution. Our training framework includes the regular training of a DCGAN with real spatio-temporal images, namely temporal slice images, and noise vectors, followed by training the discriminator separately on the temporal flame images without the generator. Experimental results show that the proposed method effectively detects flame in video with negligible false positive rates in real time.
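    The second stage described above, training the discriminator separately on temporal flame images, effectively turns the discriminator into a flame scorer. The sketch below shows one plausible way such a discriminator could be used at inference time; the network layout and the 0.5 decision threshold are assumptions for illustration, not the paper's exact architecture.

```python
# Hedged sketch: DCGAN-style discriminator used to score temporal slice images.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def is_flame(disc: Discriminator, temporal_slice: torch.Tensor, thresh: float = 0.5) -> bool:
    # temporal_slice: (1, 3, H, W) image assembled from a short frame sequence
    return disc(temporal_slice).item() > thresh
```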

    Fully Convolutional Variational Autoencoder For Feature Extraction Of Fire Detection System

    This paper proposes a fully convolutional variational autoencoder (VAE) for feature extraction from a large-scale dataset of fire images. The dataset will be used to train the deep learning algorithm to detect fire and smoke. Feature extraction is used to tackle the curse of dimensionality, a common issue when training deep learning models on huge datasets. Feature extraction aims to reduce the dimension of the dataset significantly without losing too much essential information. Variational autoencoders (VAEs) are powerful generative models that can be used for dimensionality reduction. VAEs perform better than other available methods for this purpose because they can explore variations in the data along specific directions.
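    Since the abstract hinges on how a fully convolutional VAE compresses images, a minimal sketch is given below: the encoder's latent mean serves as the extracted feature map, and the loss combines reconstruction error with a KL term. Layer sizes and the MSE reconstruction loss are illustrative guesses, not the paper's architecture.

```python
# Minimal fully convolutional VAE sketch (no dense layers), used purely as a
# feature extractor for fire images. All dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_ch: int = 16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_mu = nn.Conv2d(64, latent_ch, 3, padding=1)
        self.to_logvar = nn.Conv2d(64, latent_ch, 3, padding=1)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

In this sketch the downstream features would simply be the mu map, which is spatially four times smaller than the input and therefore much lower-dimensional.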

    Development and Application of Fire Video Image Detection Technology in China’s Road Tunnels

    A large number of highway tunnels, urban road tunnels and underwater tunnels have been constructed throughout China over the last two decades. With the rapid increase in vehicle traffic, the number of fire incidents in road tunnels has also substantially increased. This paper aims to review the development and application of fire video image detection (VID) technology and its impact on fire safety in China’s road tunnels. The challenges of fire safety in China’s road tunnels are analyzed. The capabilities and limitations of fire detection technologies currently used in China’s road tunnels are discussed. The research and development of fire VID technology in road tunnels, including various detection algorithms, the evolution of VID systems and the evaluation of their performance in various tunnel tests, are reviewed. Some cases involving VID applications in China’s road tunnels are reported. The studies show that fire VID systems have unique strengths in providing fire protection, and that their detection capability and reliability have been enhanced over the decades with advances in detection algorithms, hardware and integration with other tunnel systems. They have become an important safety system in China’s road tunnels.

    An intelligent video fire detection approach based on object detection technology

    Fire, one of the most serious accidents in chemical factories, may lead to considerable product losses, equipment damage and casualties. With the rapid development of computer vision technology, intelligent fire detection has been proposed and applied in various scenarios. This paper presents a new intelligent video fire detection approach based on object detection technology using convolutional neural networks (CNN). First, a CNN model is trained for the fire detection task, which is framed as a regression problem to predict bounding boxes and associated probabilities. In the application phase, videos from surveillance cameras are analyzed frame by frame. Once fire appears in the current frame, the model outputs the coordinates of the fire region. Simultaneously, the frame in which the fire region is localized is immediately sent to safety supervisors as a fire alarm. This helps detect fire at an early stage, prevent fire spread and improve the emergency response.
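    The application phase described above is essentially a detection loop over surveillance frames. The sketch below shows that loop; "detect_fire" and "send_alarm" are placeholders for the paper's trained CNN detector and its notification channel, and the 0.8 confidence threshold is an assumption.

```python
# Hedged sketch of frame-by-frame video fire detection with an alarm hook.
# detect_fire(frame) -> list of (x1, y1, x2, y2, score); send_alarm is hypothetical.
import cv2

def monitor(video_source, detect_fire, send_alarm, score_thresh: float = 0.8):
    cap = cv2.VideoCapture(video_source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fire_boxes = [d for d in detect_fire(frame) if d[4] >= score_thresh]
            if fire_boxes:
                send_alarm(frame, fire_boxes)   # forward the frame with localized fire regions
    finally:
        cap.release()
```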