Verification of Smoke Detection in Video Sequences Based on Spatio-temporal Local Binary Patterns
Abstract: Early smoke detection in outdoor scenes using video sequences is one of the crucial tasks of modern surveillance systems. Real scenes may include objects with dynamic behaviour that look similar to smoke because of low-resolution cameras, blurring, or weather conditions. Verification of smoke detection is therefore a necessary stage in such systems: it confirms the true smoke regions once regions similar to smoke have been detected in a video sequence. The contributions are two-fold. First, many types of Local Binary Patterns (LBPs) in 2D and 3D variants were investigated experimentally with respect to the changing properties of smoke as a fire develops. Second, the map of brightness differences, the edge map, and the Laplacian map were studied within the Spatio-Temporal LBP (STLBP) framework. The descriptors are histogram-based, and classification into three classes (dense smoke, transparent smoke, and non-smoke) was implemented using Kullback-Leibler divergence. Recognition accuracy reached 96–99% and 86–94% for dense smoke, depending on the type of LBP and on shooting artifacts including noise.
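As a rough illustration of the pipeline described above, the following sketch computes a basic 8-neighbour LBP histogram per region and assigns the class whose reference histogram minimises the Kullback-Leibler divergence. All function names and the uniform non-smoke reference are illustrative assumptions; the paper's actual spatio-temporal LBP variants and maps are more elaborate.

```python
import numpy as np

def lbp_code(patch):
    """8-neighbour LBP code for the centre pixel of a 3x3 patch."""
    centre = patch[1, 1]
    # Clockwise ring of the 8 neighbours around the centre pixel.
    ring = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    return int(np.sum((ring >= centre).astype(np.int64) << np.arange(8)))

def lbp_histogram(image, eps=1e-9):
    """Normalised (lightly smoothed) histogram of LBP codes over a grey image."""
    h, w = image.shape
    codes = [lbp_code(image[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    hist = hist.astype(float) + eps  # smoothing keeps the KL divergence finite
    return hist / hist.sum()

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) between two histograms."""
    return float(np.sum(p * np.log(p / q)))

def classify(region_hist, reference_hists):
    """Assign the class whose reference histogram minimises the divergence."""
    return min(reference_hists,
               key=lambda c: kl_divergence(region_hist, reference_hists[c]))
```

In practice one reference histogram per class (dense smoke, transparent smoke, non-smoke) would be learned from training regions; the nearest-reference rule above is the simplest matching scheme consistent with the abstract.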
Dynamic texture recognition using time-causal and time-recursive spatio-temporal receptive fields
This work presents a first evaluation of using spatio-temporal receptive
fields from a recently proposed time-causal spatio-temporal scale-space
framework as primitives for video analysis. We propose a new family of video
descriptors based on regional statistics of spatio-temporal receptive field
responses and evaluate this approach on the problem of dynamic texture
recognition. Our approach generalises a previously used method, based on joint
histograms of receptive field responses, from the spatial to the
spatio-temporal domain and from object recognition to dynamic texture
recognition. The time-recursive formulation enables computationally efficient
time-causal recognition. The experimental evaluation demonstrates competitive
performance compared to the state of the art. In particular, it is shown that
binary versions of our dynamic texture descriptors achieve improved performance
compared to a large range of similar methods using different primitives, either
handcrafted or learned from data. Further, our qualitative and quantitative
investigation into parameter choices and the use of different sets of receptive
fields highlights the robustness and flexibility of our approach. Together,
these results support the descriptive power of this family of time-causal
spatio-temporal receptive fields, validate our approach for dynamic texture
recognition and point towards the possibility of designing a range of video
analysis methods based on these new time-causal spatio-temporal primitives.
Comment: 29 pages, 16 figures
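The idea of building descriptors from joint statistics of receptive field responses can be sketched as follows: correlate a video volume with a small bank of 3D kernels and pool the signs of the responses into a joint (binary) histogram. The kernel bank, the sign binarisation, and all function names are simplifying assumptions, not the paper's time-causal scale-space filters.

```python
import numpy as np
from itertools import product

def receptive_field_responses(frames, kernels):
    """Correlate a (T, H, W) video volume with a bank of 3D kernels (valid mode)."""
    T, H, W = frames.shape
    kt, kh, kw = kernels[0].shape
    out = np.empty((len(kernels), T - kt + 1, H - kh + 1, W - kw + 1))
    for n, k in enumerate(kernels):
        for t, i, j in product(range(T - kt + 1), range(H - kh + 1), range(W - kw + 1)):
            out[n, t, i, j] = np.sum(frames[t:t + kt, i:i + kh, j:j + kw] * k)
    return out

def binary_joint_histogram(responses):
    """Joint histogram over the signs of N filter responses (2**N bins)."""
    n = responses.shape[0]
    codes = np.zeros(responses.shape[1:], dtype=int)
    for b in range(n):
        codes |= (responses[b] > 0).astype(int) << b  # one bit per filter
    hist = np.bincount(codes.ravel(), minlength=2 ** n).astype(float)
    return hist / hist.sum()
```

A descriptor of this kind is computed per region and compared between videos; the abstract's "time-recursive" formulation would replace the brute-force correlation with an incremental per-frame update.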
Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification
Dynamic Textures (DTs) are sequences of images of moving scenes that exhibit
certain stationarity properties in time such as smoke, vegetation and fire. The
analysis of DT is important for recognition, segmentation, synthesis or
retrieval for a range of applications including surveillance, medical imaging
and remote sensing. Deep learning methods have shown impressive results and are
now the new state of the art for a wide range of computer vision tasks
including image and video recognition and segmentation. In particular,
Convolutional Neural Networks (CNNs) have recently proven to be well suited for
texture analysis with a design similar to a filter bank approach. In this
paper, we develop a new approach to DT analysis based on a CNN method applied
on three orthogonal planes xy, xt and yt. We train CNNs on spatial frames
and temporal slices extracted from the DT sequences and combine their outputs
to obtain a competitive DT classifier. Our results on a wide range of commonly
used DT classification benchmark datasets prove the robustness of our approach.
Significant improvement of the state of the art is shown on the larger
datasets.
Comment: 19 pages, 10 figures
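The three-orthogonal-planes decomposition the paper builds on is simply a matter of slicing the (T, H, W) video volume; a minimal sketch, with a hypothetical averaging late fusion of the per-plane classifier scores (the paper's actual fusion scheme may differ):

```python
import numpy as np

def orthogonal_slices(video, t, y, x):
    """Extract the xy, xt and yt slices of a (T, H, W) video volume through a
    chosen point -- the three plane types on which the per-plane CNNs train."""
    xy = video[t, :, :]   # spatial frame: appearance
    xt = video[:, y, :]   # horizontal temporal slice: motion along x
    yt = video[:, :, x]   # vertical temporal slice: motion along y
    return xy, xt, yt

def fuse_scores(scores_xy, scores_xt, scores_yt):
    """Late fusion of per-plane class scores (simple averaging, one choice)."""
    return (scores_xy + scores_xt + scores_yt) / 3.0
```

In a full system, many slices per plane would be extracted and each plane's CNN scores pooled before fusion.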
Project RISE: Recognizing Industrial Smoke Emissions
Industrial smoke emissions pose a significant concern to human health. Prior
works have shown that using Computer Vision (CV) techniques to identify smoke
as visual evidence can influence the attitude of regulators and empower
citizens to pursue environmental justice. However, existing datasets are not of
sufficient quality nor quantity to train the robust CV models needed to support
air quality advocacy. We introduce RISE, the first large-scale video dataset
for Recognizing Industrial Smoke Emissions. We adopted a citizen science
approach to collaborate with local community members to annotate whether a
video clip has smoke emissions. Our dataset contains 12,567 clips from 19
distinct views from cameras that monitored three industrial facilities. These
daytime clips span 30 days over two years, including all four seasons. We ran
experiments using deep neural networks to establish a strong performance
baseline and reveal smoke recognition challenges. Our survey study gathered
community feedback, and our data analysis revealed opportunities for
integrating citizen scientists and crowd workers into the application of
Artificial Intelligence for social good.
Comment: Technical report
BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis
Emergency events involving fire are potentially harmful, demanding fast and
precise decision making. Crowdsourced images and videos in crisis management
systems can aid in these situations by providing more information than
verbal/textual descriptions. Due to the usually high volume of data, automatic
solutions need to discard non-relevant content without losing relevant
information. There are several methods for fire detection in video using
color-based models. However, they are not adequate for still-image processing
because they can suffer from high false-positive rates. These methods also rely
on parameters with little physical meaning, which makes fine-tuning a difficult
task. In this context, we propose a novel fire detection method for still
images that combines classification based on color features with texture
classification on superpixel regions. Our method uses fewer parameters than
previous works, easing the process of fine-tuning. Results show the
effectiveness of our method in reducing false positives while its precision
remains comparable with state-of-the-art methods.
Comment: 8 pages, Proceedings of the 28th SIBGRAPI Conference on Graphics,
Patterns and Images, IEEE Press
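A minimal sketch of the colour-plus-texture idea, using a hand-set colour rule, square blocks in place of superpixels, and per-block variance as the texture cue (all simplifying assumptions; the actual BoWFire method learns its colour and texture classifiers from data):

```python
import numpy as np

def fire_color_mask(rgb):
    """Crude fire-colour rule (stand-in for a learned colour classifier):
    red channel dominant and sufficiently bright."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (r > 120) & (r >= g) & (g > b)

def texture_energy(gray, block=8):
    """Per-block grey-level variance as a crude texture measure."""
    h, w = gray.shape
    out = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            out[i, j] = gray[i * block:(i + 1) * block,
                             j * block:(j + 1) * block].var()
    return out

def detect(rgb, color_frac=0.5, tex_thresh=50.0, block=8):
    """Flag blocks where the colour and texture evidence agree."""
    mask = fire_color_mask(rgb)
    tex = texture_energy(rgb.mean(axis=-1), block)
    flags = np.zeros_like(tex, dtype=bool)
    for i in range(tex.shape[0]):
        for j in range(tex.shape[1]):
            frac = mask[i * block:(i + 1) * block,
                        j * block:(j + 1) * block].mean()
            flags[i, j] = frac > color_frac and tex[i, j] > tex_thresh
    return flags
```

Requiring both cues to agree is what suppresses the false positives that a colour-only model produces on fire-coloured but texturally flat surfaces.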
Video-based Smoke Detection Algorithms: A Chronological Survey
Over the past decade, several vision-based algorithms proposed in the literature have resulted in the development of a large number of techniques for detecting smoke and fire from video images. Video-based smoke detection approaches are becoming practical alternatives to conventional fire detection methods due to their numerous advantages, such as early fire detection, fast response, non-contact operation, absence of spatial limits, ability to provide live video that conveys fire-progress information, and capability to provide forensic evidence for fire investigations. This paper provides a chronological survey of the different video-based smoke detection methods available in the literature from 1998 to 2014. Though the paper does not aim at a comparative analysis of the surveyed methods, the perceived strengths and weaknesses of the different methods are identified, which will be useful for future research in video-based smoke or fire detection. Keywords: early fire detection, video-based smoke detection, algorithms, computer vision, image processing
Directional Dense-Trajectory-based Patterns for Dynamic Texture Recognition
Representation of dynamic textures (DTs), well known as sequences of moving textures, is a challenging problem in video analysis due to the disorientation of motion features. Analyzing DTs to make them "understandable" plays an important role in different applications of computer vision. In this paper, an efficient approach for DT description is proposed by addressing the following novel concepts. First, the beneficial properties of dense trajectories are exploited for the first time to efficiently describe DTs instead of the whole video. Second, two substantial extensions of the Local Vector Pattern operator are introduced to form a completed model based on complemented components, enhancing its performance in encoding the directional features of motion points in a trajectory. Finally, we present a new framework, called Directional Dense Trajectory Patterns, which takes advantage of directional beams of dense trajectories along with the spatio-temporal features of their motion points in order to construct dense-trajectory-based descriptors with more robustness. Evaluations of DT recognition on different benchmark datasets (i.e., UCLA, DynTex, and DynTex++) have verified the merit of our proposal.
Effective Smoke Detection Using Spatial-Temporal Energy and Weber Local Descriptors in Three Orthogonal Planes (WLD-TOP)
Video-based fire detection (VFD) technologies have received significant attention from both academic and industrial communities recently. However, existing VFD approaches are still susceptible to false alarms due to changes in illumination, camera noise, variability of shape, motion and colour, irregular patterns of smoke and flames, and modelling and training inaccuracies. Hence, this work aimed at developing a video smoke detection (VSD) system with a high detection rate, a low false-alarm rate, and a short response time. Moving blocks in video frames were segmented and analysed in HSI colour space, and a wavelet energy analysis of the smoke-candidate blocks was performed. In addition, dynamic texture descriptors were obtained using the Weber Local Descriptor in Three Orthogonal Planes (WLD-TOP).
These features were combined and used as inputs to a Support Vector Machine classifier with a radial-basis kernel function, while the post-processing stage employed temporal image filtering to reduce false alarms. The algorithm was implemented in MATLAB 8.1.0.604 (R2013a). An accuracy of 99.30%, a detection rate of 99.28%, and a false-alarm rate of 0.65% were obtained when tested on online videos. The output of this work would find applications in early fire detection systems and in other areas such as robot vision and automated inspection.
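The Weber differential-excitation component of such a descriptor can be sketched as follows; taking only the three centre slices of the volume (rather than pooling over all slices, as WLD-TOP does) and the histogram binning are simplifying assumptions:

```python
import numpy as np

def weber_excitation(plane, eps=1e-6):
    """Differential excitation xi = arctan(sum(neighbours - centre) / centre)
    for each interior pixel of a 2D slice."""
    H, W = plane.shape
    c = plane[1:-1, 1:-1]
    s = sum(plane[1 + di:H - 1 + di, 1 + dj:W - 1 + dj] - c
            for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0))
    return np.arctan(s / (c + eps))

def wld_top_descriptor(video, bins=8):
    """Concatenate excitation histograms from the centre xy, xt and yt slices
    of a (T, H, W) volume -- a minimal stand-in for full WLD-TOP pooling."""
    T, H, W = video.shape
    slices = [video[T // 2], video[:, H // 2, :], video[:, :, W // 2]]
    hists = []
    for s in slices:
        xi = weber_excitation(s)
        h, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)
```

The concatenated histogram would then be combined with the colour and wavelet-energy features and fed to the SVM classifier described in the abstract.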