8,238 research outputs found

    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning-based visual object recognition and classification networks. Based on these matches, estimates of camera, object and event positions in the BIM model can be generated, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, generic and modular setup and multi-modal strategy, which make it possible to automatically create situational awareness, improve localization and facilitate overall fire understanding.
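The semantic linking step described in this abstract can be sketched, under strong simplifying assumptions, as a lookup between detector output labels and an index of BIM elements. All names and data below are hypothetical illustrations, not the paper's actual data model:

```python
# Toy "BIM" index: element label -> list of (room, position) entries.
# A real BIM model would carry far richer semantics (IFC classes, geometry).
BIM_INDEX = {
    "door":         [("corridor_1", (2.0, 0.0)), ("room_101", (5.5, 3.0))],
    "extinguisher": [("corridor_1", (2.5, 0.2))],
    "window":       [("room_101", (6.0, 4.5))],
}

def locate_detections(detections, bim_index, min_conf=0.5):
    """Map detected object labels to candidate BIM positions.

    detections: list of (label, confidence) pairs from a recognition network.
    Returns {label: candidate (room, position) entries} for labels that are
    confident enough and present in the BIM model.
    """
    matches = {}
    for label, conf in detections:
        if conf >= min_conf and label in bim_index:
            matches[label] = bim_index[label]
    return matches

camera_view = [("door", 0.92), ("extinguisher", 0.81), ("chair", 0.60)]
print(locate_detections(camera_view, BIM_INDEX))
```

In the paper's workflow the resulting candidate positions would then be fused across cameras to estimate where the camera and events are inside the building.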

    On the Analysis and Detection of Flames with an Asynchronous Spiking Image Sensor

    We have investigated the capabilities of a custom asynchronous spiking image sensor operating in the Near-Infrared (NIR) band to study flame radiation emissions, monitor their transient activity, and detect their presence. Asynchronous sensors have inherent capabilities, i.e., good temporal resolution, high dynamic range, and low data redundancy. This makes them competitive against infrared (IR) cameras and CMOS frame-based NIR imagers. In this paper, we analyze, discuss, and compare the experimental data measured with our sensor against results obtained with conventional devices. A set of measurements has been taken to study the flame emission levels and their transient variations. Moreover, a flame detection algorithm, adapted to our sensor's asynchronous outputs, has been developed. Results show that asynchronous spiking sensors have excellent potential for flame analysis and monitoring.
    Funding: Universidad de Cádiz PR2016-07; Ministerio de Economía y Competitividad TEC2015-66878-C3-1-R; Junta de Andalucía TIC 2012-2338; Office of Naval Research (USA) N00014141035.
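The abstract does not disclose the paper's detection algorithm, but a minimal sketch of detection on asynchronous spike streams is per-pixel event-rate thresholding: flames produce dense spike activity, so pixels whose event rate exceeds a threshold are flagged. All thresholds and the event format below are illustrative assumptions:

```python
import numpy as np

def event_rate_map(events, shape, window):
    """Accumulate asynchronous (x, y, t) spike events that fall inside the
    time window into a per-pixel event-rate map (events per second)."""
    rate = np.zeros(shape)
    for x, y, t in events:
        if t <= window:
            rate[y, x] += 1
    return rate / window

def detect_flame(events, shape, window=0.1, rate_thresh=50.0):
    """Boolean mask of pixels whose spike rate exceeds rate_thresh."""
    return event_rate_map(events, shape, window) >= rate_thresh
```

A frame-based camera would have to capture and difference whole frames to get the same signal; the event stream delivers it directly, which is the temporal-resolution advantage the abstract highlights.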

    Evaluation of Deep Learning-Based Segmentation Methods for Industrial Burner Flames

    The energetic usage of fuels from renewable sources or waste material is associated with controlled combustion processes using industrial burner equipment. For the observation of such processes, camera systems are increasingly being used. Combined with an appropriate image processing system, camera observation of controlled combustion can be used for closed-loop process control, giving leverage for optimization and more efficient usage of fuels. A key element of a camera-based control system is the robust segmentation of each burner's flame. However, flame instance segmentation in an industrial environment imposes specific problems for image processing, such as overlapping flames, blurry object borders, occlusion, and irregular image content. In this research, we investigate the capability of a deep learning approach for the instance segmentation of industrial burner flames based on example image data from a special waste incineration plant. We evaluate the segmentation quality and robustness in challenging situations with several convolutional neural networks and demonstrate that a deep learning-based approach is capable of producing satisfying results for instance segmentation in an industrial environment.
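Evaluating instance-segmentation quality, as this paper does, typically reduces to matching predicted masks to ground-truth masks by intersection-over-union (IoU). The greedy matcher below is a generic sketch of that evaluation step, not the paper's specific protocol:

```python
import numpy as np

def mask_iou(a, b):
    """IoU of two boolean masks of equal shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_instances(preds, gts, iou_thresh=0.5):
    """Greedily match predicted masks to ground-truth masks.

    Returns (true positives, false positives, false negatives); each
    ground-truth mask may be matched at most once.
    """
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, 0.0
        for i, g in enumerate(gts):
            if i in matched:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best, best_iou = i, iou
        if best is not None and best_iou >= iou_thresh:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp, fp, fn
```

Overlapping flames are exactly where such matching becomes ambiguous, which is why the paper stresses robustness in those situations.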

    Deep Learning approach applied to drone imagery for the automatic detection of forest fire

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    Wildfires are one of the world's most costly and deadly natural disasters, damaging millions of hectares of vegetation and threatening the lives of people and animals. The risks to civilian agents and task forces are particularly high, which emphasizes the value of leveraging technology to minimize their impacts on nature and people. The use of drone imagery coupled with deep learning for automated fire detection can provide new solutions to this problem, limiting the resulting damage. In this context, our work aims to implement a solution for the automatic detection of forest fires in real time by exploiting convolutional neural networks (CNN) on drone images, based on classification and segmentation models. The methodological approach followed in this study can be broken down into three main steps. First, two models, namely Xception Network and EfficientNetB2, are compared for the classification of images captured during a forest burn into 'Fire' or 'No_Fire' classes. Second, the images belonging to the 'Fire' class are segmented, comparing the U-Net architecture with Attention U-Net and Trans U-Net in order to choose the best performing model. The EfficientNetB2 architecture for classification gave satisfactory results with an accuracy of 71.72%. Concerning segmentation, we adopted the U-Net model, which offers a segmentation accuracy that reaches 98%. As for the deployment, a fire detection application was designed using Android Studio software, integrating the drone's camera.
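The two-stage design in this dissertation (classify first, segment only 'Fire' images) can be sketched with toy stand-ins for the trained networks. The pixel-threshold "classifier" and "segmenter" below are hypothetical placeholders for EfficientNetB2 and U-Net, kept only to show the control flow:

```python
import numpy as np

def classify_fire(image, frac_thresh=0.01):
    """Toy stand-in for the EfficientNetB2 classifier: label an RGB image
    'Fire' when the fraction of saturated red pixels exceeds frac_thresh."""
    hot = (image[..., 0] > 200) & (image[..., 1] < 100)
    return "Fire" if hot.mean() > frac_thresh else "No_Fire"

def segment_fire(image):
    """Toy stand-in for the U-Net segmenter: boolean per-pixel fire mask."""
    return (image[..., 0] > 200) & (image[..., 1] < 100)

def detect(image):
    """Two-stage pipeline: segment only images classified as 'Fire'."""
    label = classify_fire(image)
    mask = segment_fire(image) if label == "Fire" else None
    return label, mask
```

The staging matters for real-time use on a drone: the cheap classifier filters out most 'No_Fire' frames so the more expensive segmenter runs only when needed.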

    Experimental Exploration of Compact Convolutional Neural Network Architectures for Non-temporal Real-time Fire Detection

    In this work we explore different Convolutional Neural Network (CNN) architectures and their variants for non-temporal binary fire detection and localization in video or still imagery. We consider the performance of experimentally defined, reduced-complexity deep CNN architectures for this task and evaluate the effects of different optimization and normalization techniques applied to different CNN architectures (spanning the Inception, ResNet and EfficientNet architectural concepts). Contrary to contemporary trends in the field, our work illustrates a maximum overall accuracy of 0.96 for full-frame binary fire detection and 0.94 for superpixel localization using an experimentally defined reduced CNN architecture based on the concept of InceptionV4. We notably achieve a lower false positive rate of 0.06 compared to prior work in the field, presenting an efficient, robust and real-time solution for fire region detection.
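Superpixel localization as described here classifies image regions rather than whole frames. As an illustrative simplification, the sketch below partitions the frame into a fixed grid of blocks instead of true superpixels, and scores each block with a mean-intensity stand-in for the per-region CNN; both simplifications are assumptions, not the paper's method:

```python
import numpy as np

def cell_scores(frame, cell=8):
    """Score each cell x cell block of a greyscale frame. The mean-intensity
    score is a placeholder for a per-region CNN classifier output."""
    h, w = frame.shape
    gh, gw = h // cell, w // cell
    blocks = frame[:gh * cell, :gw * cell].reshape(gh, cell, gw, cell)
    return blocks.mean(axis=(1, 3))

def localize_fire(frame, cell=8, thresh=180.0):
    """Boolean grid marking blocks whose score exceeds thresh,
    i.e. candidate fire regions within the frame."""
    return cell_scores(frame, cell) >= thresh
```

A true superpixel approach (e.g. SLIC-style over-segmentation) would follow image boundaries instead of a fixed grid, at the cost of an extra segmentation pass per frame.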