
    Multi-sensor fire detection by fusing visual and non-visual flame features

    This paper proposes a feature-based multi-sensor fire detector operating on ordinary video and long wave infrared (LWIR) thermal images. The detector automatically extracts hot objects from the thermal images by dynamic background subtraction and histogram-based segmentation. Analogously, moving objects are extracted from the ordinary video by intensity-based dynamic background subtraction. These hot and moving objects are then further analyzed using a set of flame features which focus on the distinctive geometric, temporal and spatial disorder characteristics of flame regions. By combining the probabilities of these fast retrievable visual and thermal features, we are able to detect fire at an early stage. Experiments with video and LWIR sequences of fire and non-fire real-case scenarios show good results and indicate that multi-sensor fire analysis is very promising.
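    As a rough illustration of the two extraction steps above, the following Python sketch uses OpenCV background subtraction and a histogram-based threshold; the function names, the MOG2 background model and the 95% percentile cut-off are our own assumptions, not the paper's exact method.

        import cv2
        import numpy as np

        bg_thermal = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
        bg_visual = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

        def hot_objects(lwir_frame):
            """Hot-object extraction: dynamic background subtraction followed by
            histogram-based segmentation of the brightest foreground intensities."""
            fg = bg_thermal.apply(lwir_frame)                      # foreground mask
            hist = cv2.calcHist([lwir_frame], [0], fg, [256], [0, 256]).ravel()
            cdf = np.cumsum(hist) / max(hist.sum(), 1.0)
            t = int(np.searchsorted(cdf, 0.95))                    # keep hottest ~5%
            return cv2.bitwise_and(fg, cv2.inRange(lwir_frame, t, 255))

        def moving_objects(visual_frame):
            """Moving-object extraction: intensity-based background subtraction."""
            gray = cv2.cvtColor(visual_frame, cv2.COLOR_BGR2GRAY)
            return bg_visual.apply(gray)

        def fire_probability(feature_probs):
            """Combine the per-feature flame probabilities; a plain average is
            used here as a stand-in for the paper's combination rule."""
            return float(np.mean(feature_probs))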

    Silhouette coverage analysis for multi-modal video surveillance

    In order to improve the accuracy of video-based object detection, the proposed multi-modal video surveillance system takes advantage of the different kinds of information represented by visual, thermal and/or depth imaging sensors. The multi-modal object detector of the system can be split into two consecutive parts: the registration and the coverage analysis. The multi-modal image registration is performed using a three-step silhouette-mapping algorithm which detects the rotation, scale and translation between moving objects in the visual, (thermal) infrared and/or depth images. First, moving object silhouettes are extracted to separate the calibration objects, i.e., the foreground, from the static background. Key components are dynamic background subtraction, foreground enhancement and automatic thresholding. Then, 1D contour vectors are generated from the resulting multi-modal silhouettes using silhouette boundary extraction, Cartesian-to-polar transformation and radial vector analysis. Next, to retrieve the rotation angle and the scale factor between the multi-sensor images, these contours are mapped onto each other using circular cross correlation and contour scaling. Finally, the translation between the images is calculated by maximizing the binary correlation. The silhouette coverage analysis also starts with moving object silhouette extraction. Then, it uses the registration information, i.e., rotation angle, scale factor and translation vector, to map the thermal, depth and visual silhouette images onto each other. Finally, the coverage of the resulting multi-modal silhouette map is computed and analyzed over time to reduce false alarms and to improve object detection. Prior experiments on real-world multi-sensor video sequences indicate that automated multi-modal video surveillance is promising. This paper shows that merging information from multi-modal video further improves the detection results.
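    The rotation-estimation step lends itself to a compact sketch: sample a 1D radial contour vector per silhouette and match the two signatures by circular cross correlation. The sampling resolution and helper names below are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def polar_signature(silhouette, n_angles=360):
            """Cartesian-to-polar contour vector: the distance from the silhouette
            centroid to its outermost pixel, sampled per angular bin."""
            ys, xs = np.nonzero(silhouette)
            cy, cx = ys.mean(), xs.mean()
            angles = np.arctan2(ys - cy, xs - cx)                  # [-pi, pi]
            radii = np.hypot(ys - cy, xs - cx)
            bins = ((angles + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
            sig = np.zeros(n_angles)
            np.maximum.at(sig, bins, radii)                        # outer boundary radius
            return sig

        def rotation_between(sig_a, sig_b):
            """Circular cross correlation via FFT; the lag of the correlation
            peak is the rotation angle (in degrees at 1-degree sampling)."""
            corr = np.fft.ifft(np.fft.fft(sig_a) * np.conj(np.fft.fft(sig_b))).real
            return int(np.argmax(corr))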

    Hot topics in video fire analysis


    Multi-modal video analysis for early fire detection

    This dissertation investigates several aspects of an intelligent video-based fire detection system. The first part focuses on the multi-modal processing of visual, infrared and time-of-flight video images, which improves purely visual detection. To keep the processing cost as low as possible, with real-time detection in mind, a set of low-cost fire characteristics that uniquely describe fire and flames has been selected for each sensor type. By fusing the different types of information, the number of missed detections and false alarms can be reduced, resulting in a significant improvement of video-based fire detection. To combine the multi-modal detection results, however, the multi-modal images need to be registered (i.e., aligned). The second part of this dissertation therefore focuses on this fusion of multi-modal data and presents a novel silhouette-based registration method. The third and final part of this dissertation proposes methods for video-based fire analysis and, at a later stage, fire modeling. Each of the proposed techniques for multi-modal detection and multi-view localization has been extensively tested in practice, including successful tests for the early detection of car fires in underground parking garages.

    Fireground location understanding by semantic linking of visual objects and building information models

    This paper presents an outline for improved localization and situational awareness in fire emergency situations based on semantic technology and computer vision techniques. The novelty of our methodology lies in the semantic linking of video object recognition results from visual and thermal cameras with Building Information Models (BIM). The current limitations and possibilities of certain building information streams in the context of fire safety or fire incident management are addressed in this paper. Furthermore, our data management tools match higher-level semantic metadata descriptors of BIM with deep-learning based visual object recognition and classification networks. Based on these matches, estimates can be generated of camera, object and event positions in the BIM model, transforming it from a static source of information into a rich, dynamic data provider. Previous work has already investigated the possibilities of linking BIM and low-cost point sensors for fireground understanding, but these approaches did not take into account the benefits of video analysis and recent developments in semantics and feature learning research. Finally, the strengths of the proposed approach compared to the state of the art are its (semi-)automatic workflow, generic and modular setup, and multi-modal strategy, which make it possible to automatically create situational awareness, to improve localization and to facilitate overall fire understanding.
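    The linking idea can be illustrated with a small sketch (not the authors' implementation): recognized object labels are matched against the semantic metadata of BIM elements to rank candidate positions. All names, fields and thresholds below are hypothetical.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class BimElement:
            element_id: str
            room: str
            labels: frozenset        # semantic metadata, e.g. {"door", "extinguisher"}

        def candidate_rooms(detections, bim_elements, min_matches=2):
            """Rank rooms by how many recognized object labels their BIM
            metadata explains; detections are (label, confidence) pairs."""
            detected = {label for label, conf in detections if conf > 0.5}
            scores = {}
            for el in bim_elements:
                scores[el.room] = scores.get(el.room, 0) + len(detected & el.labels)
            return sorted((r for r, s in scores.items() if s >= min_matches),
                          key=lambda r: -scores[r])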

    A multi-modal video analysis approach for car park fire detection

    In this paper, a novel multi-modal flame and smoke detector is proposed for the detection of fire in large open spaces such as car parks. The flame detector is based on the visual and amplitude images of a time-of-flight camera. Using this multi-modal information, flames can be detected very accurately by visual flame feature analysis and amplitude disorder detection. In order to detect the low-cost flame-related features, moving objects in visual images are analyzed over time. If an object possesses a high probability for each of the flame characteristics, it is labeled as a candidate flame region. Simultaneously, the amplitude disorder is investigated: regions with high accumulative amplitude differences and high values in all detail images of the amplitude image's discrete wavelet transform are also labeled as candidate flame regions. Finally, when at least one visual and one amplitude candidate flame region overlap, a fire alarm is raised. The smoke detector, on the other hand, focuses on global changes in the depth images of the time-of-flight camera which do not have a significant impact on the amplitude images. It was found that this behavior is unique for smoke. Experiments show that the proposed detectors improve the accuracy of fire detection in car parks. The flame detector has an average flame detection rate of 93%, with hardly any false positive detections, and the smoke detection rate of the TOF-based smoke detector is 88%.
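    The amplitude-disorder test can be sketched as follows, assuming PyWavelets (pywt) for the discrete wavelet transform; the wavelet choice and both thresholds are illustrative, not taken from the paper.

        import numpy as np
        import pywt

        def amplitude_disorder(amp_prev, amp_curr, accum,
                               diff_frac=0.2, detail_thresh=0.1):
            """Flag regions with high accumulative amplitude differences AND
            high energy in all detail images of the amplitude DWT."""
            accum += np.abs(amp_curr.astype(float) - amp_prev.astype(float))
            _, details = pywt.dwt2(amp_curr.astype(float), 'haar')
            detail_ok = all(np.mean(np.abs(d)) > detail_thresh for d in details)
            high_diff = (accum > diff_frac * accum.max()) if accum.max() > 0 \
                        else np.zeros(accum.shape, bool)
            return high_diff & detail_ok, accum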

    Video fire detection - Review

    This is a review article describing recent developments in video-based fire detection (VFD). Video surveillance cameras and computer vision methods are widely used in many security applications. It is also possible to use security cameras and special-purpose infrared surveillance cameras for fire detection. This requires intelligent video processing techniques for the detection and analysis of uncontrolled fire behavior. VFD may help reduce the detection time compared to the currently available sensors both indoors and outdoors, because cameras can monitor “volumes” and do not suffer from the transport delay of traditional “point” sensors. It is possible to cover an area of 100 km² using a single pan-tilt-zoom camera placed on a hilltop for wildfire detection. Another benefit of VFD systems is that they can provide crucial information about the size and growth of the fire and the direction of smoke propagation.

    Selective combination of visual and thermal imaging for resilient localization in adverse conditions: Day and night, smoke and fire

    Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
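    The selective-combination idea can be sketched in a few lines: score each modality's frame quality and pass only sufficiently reliable frames to the localization front end. The scoring heuristics and the threshold below are assumptions for illustration, not the paper's quality metrics.

        import numpy as np

        def visual_quality(gray):
            """Low contrast (night, dense smoke) gives a low score."""
            return float(gray.std()) / 255.0

        def thermal_quality(thermal):
            """A largely saturated image (fire, extreme heat) gives a low score."""
            saturated = float(np.mean(thermal >= 0.98 * thermal.max()))
            return 1.0 - saturated

        def select_modalities(gray, thermal, q_min=0.1):
            """Keep only the modalities whose quality score exceeds the
            threshold; low-quality data is discarded before combination."""
            selected = []
            if visual_quality(gray) > q_min:
                selected.append(("visual", gray))
            if thermal_quality(thermal) > q_min:
                selected.append(("thermal", thermal))
            return selected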