
    CUDA based implementation of flame detection algorithms in day and infrared camera videos

    Automatic fire detection in video is an important but challenging task. Video-based, high-performance fire detection algorithms are important for detecting forest fires, and their use can be extended to sites such as state and heritage buildings where surveillance cameras are installed. In uncontrolled fires, early detection is crucial so that the fire can be extinguished immediately. However, most current fire detection algorithms suffer from either high false alarm rates or low detection rates because of the optimization constraints imposed by real-time performance requirements. The problem is aggravated by the high computational complexity in large areas where multi-camera surveillance is required. In this study, our aim is to speed up existing color-video fire detection algorithms by implementing them in CUDA, which exploits the parallel computational power of Graphics Processing Units (GPUs). Our method not only speeds up the existing algorithms but also relaxes the optimization constraints for real-time performance, increasing the detection probability without affecting the false alarm rate. In addition, we study several methods that detect flames in infrared video and propose an improvement to the algorithm that decreases the false alarm rate and increases the detection rate. (M.S. thesis by Hasan Hamzaçebi, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, 2011.)
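
    The thesis abstract does not spell out its per-pixel rules, but a common chromatic flame test from the literature (red channel above a threshold with R ≥ G > B) is a representative workload for such a CUDA port, since it maps naturally onto one GPU thread per pixel. As a hedged illustration only, the sketch below expresses that generic rule with Numba's CUDA backend; the rule and the threshold value are assumptions, not the algorithms evaluated in the thesis.

```python
# Hypothetical sketch: a generic chromatic flame rule evaluated with one GPU thread
# per pixel. The rule (R > R_T and R >= G > B) is a common heuristic from the
# literature, not necessarily the one used in the thesis.
import numpy as np
from numba import cuda

@cuda.jit
def flame_mask_kernel(rgb, r_threshold, mask):
    i, j = cuda.grid(2)                              # i = row, j = column
    if i < rgb.shape[0] and j < rgb.shape[1]:
        r, g, b = rgb[i, j, 0], rgb[i, j, 1], rgb[i, j, 2]
        if r > r_threshold and r >= g and g > b:
            mask[i, j] = 1
        else:
            mask[i, j] = 0

def flame_mask(frame_rgb, r_threshold=180):
    """Run the kernel on one HxWx3 uint8 frame and return a binary mask."""
    d_frame = cuda.to_device(frame_rgb)
    d_mask = cuda.device_array(frame_rgb.shape[:2], dtype=np.uint8)
    threads = (16, 16)
    blocks = ((frame_rgb.shape[0] + 15) // 16, (frame_rgb.shape[1] + 15) // 16)
    flame_mask_kernel[blocks, threads](d_frame, r_threshold, d_mask)
    return d_mask.copy_to_host()
```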

    Survey Paper of Approaches for Real Time Fire Detection

    Accidental fires always cause great loss; if a fire is detected in time, the loss can be minimized, so more efficient detection systems are needed. Most fire detection systems are based on point sensors, which raise false alarms when, for example, cigarettes or incense sticks are burnt, and such systems are also quite costly. Using video surveillance cameras for fire detection can reduce the cost of the system. The videos acquired by common surveillance cameras are analysed, and different types of information, based respectively on colour and movement, are combined in a multi-expert system in order to increase the overall accuracy of the approach and make it usable in real-time applications. The surveyed systems use the HSV, HSL and YUV colour models, and their detection models are based on colour, motion and shape. The approaches have been tested on a wide database with the aim of assessing their performance in terms of both sensitivity and specificity.
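
    As a rough illustration of the colour-plus-motion combination described above, the sketch below ANDs an HSV colour mask with a simple frame-differencing motion mask using OpenCV. The HSV range and the differencing threshold are assumptions chosen for illustration, not values taken from any of the surveyed systems.

```python
# Hypothetical sketch: combine a colour expert and a motion expert, as in
# colour/motion multi-expert fire detectors. All thresholds are illustrative only.
import cv2
import numpy as np

FIRE_HSV_LOW = np.array([0, 60, 200], dtype=np.uint8)    # assumed hue/sat/val range
FIRE_HSV_HIGH = np.array([35, 255, 255], dtype=np.uint8)

def fire_candidates(prev_gray, frame_bgr, motion_thresh=25):
    """Return (mask, gray): pixels that are both fire-coloured and moving,
    plus the current grayscale frame to reuse as prev_gray on the next call."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    colour_mask = cv2.inRange(hsv, FIRE_HSV_LOW, FIRE_HSV_HIGH)

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion_mask = (cv2.absdiff(gray, prev_gray) > motion_thresh).astype(np.uint8) * 255

    return cv2.bitwise_and(colour_mask, motion_mask), gray
```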

    Video fire detection - Review

    This is a review article describing recent developments in Video-based Fire Detection (VFD). Video surveillance cameras and computer vision methods are widely used in many security applications, and security cameras and special-purpose infrared surveillance cameras can also be used for fire detection. This requires intelligent video processing techniques for the detection and analysis of uncontrolled fire behavior. VFD may help reduce detection time compared with currently available sensors, both indoors and outdoors, because cameras monitor “volumes” and do not suffer from the transport delay of traditional “point” sensors. For wildfire detection, a single pan-tilt-zoom camera placed on a hilltop can cover an area of 100 km². Another benefit of VFD systems is that they can provide crucial information about the size and growth of the fire and the direction of smoke propagation. © 2013 Elsevier Inc. All rights reserved.

    Entropy-Functional-Based Online Adaptive Decision Fusion Framework with Application to Wildfire Detection in Video

    In this paper, an entropy-functional-based online adaptive decision fusion (EADF) framework is developed for image analysis and computer vision applications. In this framework, the compound algorithm is assumed to consist of several subalgorithms, each of which yields its own decision as a real number centered around zero, representing the confidence level of that particular subalgorithm. Decision values are linearly combined with weights that are updated online according to an active fusion method based on performing entropic projections onto convex sets describing the subalgorithms. It is assumed that there is an oracle, usually a human operator, providing feedback to the decision fusion method. A video-based wildfire detection system was developed to evaluate the performance of the decision fusion algorithm. In this application, image data arrive sequentially, and the oracle is the security guard of the forest lookout tower, who verifies the decisions of the combined algorithm. Simulation results are presented.
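
    The abstract does not spell out the weight-update rule, but entropic-projection-based fusion is often realized as an exponentiated-gradient style multiplicative update driven by the error between the fused decision and the oracle's feedback. The sketch below shows that generic form under those assumptions; the learning rate and the exact projection used in the EADF paper may differ.

```python
# Hypothetical sketch of online adaptive decision fusion with oracle feedback.
# A generic exponentiated-gradient (multiplicative, entropy-style) weight update;
# not claimed to be the exact EADF update of the paper.
import numpy as np

class OnlineDecisionFusion:
    def __init__(self, n_subalgorithms, learning_rate=0.2):
        self.w = np.full(n_subalgorithms, 1.0 / n_subalgorithms)
        self.mu = learning_rate

    def fuse(self, decisions):
        """decisions: real numbers centered around zero, one per subalgorithm."""
        return float(np.dot(self.w, decisions))

    def update(self, decisions, oracle_label):
        """oracle_label: +1 (fire confirmed by the operator) or -1 (no fire)."""
        decisions = np.asarray(decisions, dtype=float)
        error = oracle_label - self.fuse(decisions)
        self.w *= np.exp(self.mu * error * decisions)   # multiplicative update
        self.w /= self.w.sum()                          # project back onto the simplex
```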

    Deep Convolutional Generative Adversarial Networks Based Flame Detection in Video

    Real-time flame detection is crucial in video-based surveillance systems. We propose a vision-based method to detect flames using Deep Convolutional Generative Adversarial Networks (DCGANs). Many existing supervised learning approaches based on convolutional neural networks do not take temporal information into account and require a substantial amount of labeled data. In order to obtain a robust representation of sequences with and without flame, we propose a two-stage training of a DCGAN that exploits spatio-temporal flame evolution. Our training framework includes the regular training of a DCGAN with real spatio-temporal images (namely, temporal slice images) and noise vectors, and a second stage that trains the discriminator separately on the temporal flame images without the generator. Experimental results show that the proposed method effectively detects flames in video with negligible false positive rates in real time.
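
    The abstract's "temporal slice images" are described only by name; one common way to build such a slice is to take a fixed row of pixels from each frame and stack those rows over time, so flame flicker appears as texture along the time axis. The sketch below follows that assumption; the actual slicing scheme used in the paper may differ.

```python
# Hypothetical sketch: build a temporal slice image by sampling one row per frame
# and stacking the rows over time. The paper's actual construction may differ.
import numpy as np

def temporal_slice(frames, row=None):
    """frames: array of shape (T, H, W) or (T, H, W, C); returns a (T, W[, C]) slice
    image whose vertical axis is time."""
    frames = np.asarray(frames)
    if row is None:
        row = frames.shape[1] // 2            # default: the middle row of each frame
    return frames[:, row]
```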

    A framework based on Gaussian mixture models and Kalman filters for the segmentation and tracking of anomalous events in shipboard video

    Anomalous indications in monitoring equipment on board U.S. Navy vessels must be handled in a timely manner to prevent catastrophic system failure. The development of sensor data analysis techniques to assist a ship's crew in monitoring machinery and summoning required ship-to-shore assistance is of considerable benefit to the Navy. In addition, the Navy has a strong interest in distance support technology as part of its ongoing efforts to reduce manning on ships. In this thesis, algorithms have been developed for the detection of anomalous events that can be identified from the analysis of monochromatic, stationary shipboard surveillance video streams. The specific anomalies we focus on are the presence and growth of smoke and fire events inside the frames of the video stream. The algorithm consists of the following steps. First, a foreground segmentation algorithm based on adaptive Gaussian mixture models detects the presence of motion in a scene; the algorithm is adapted to emphasize gray-level characteristics related to smoke and fire events in the frame. Next, shape-discriminant features in the foreground are enhanced using morphological operations. The anomalous indication is then tracked between frames using Kalman filtering. Finally, gray-level shape and motion features corresponding to the anomaly are subjected to principal component analysis and classified using a multilayer perceptron neural network. The algorithm is exercised on 68 video streams that include anomalous events (such as fire and smoke) and benign/nuisance events (such as humans walking through the field of view). Initial results show that the algorithm successfully detects anomalies in video streams and is suitable for application in shipboard environments.
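
    The front end of such a pipeline can be assembled from standard building blocks: adaptive-GMM foreground segmentation, morphological cleanup, and a constant-velocity Kalman tracker for the blob centroid. The sketch below uses OpenCV components as stand-ins for the thesis's own implementation; all parameter values are illustrative assumptions.

```python
# Hypothetical sketch: adaptive-GMM foreground segmentation, morphological cleanup,
# and Kalman tracking of the largest foreground blob. Parameters are illustrative.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

kf = cv2.KalmanFilter(4, 2)                   # state (x, y, vx, vy), measurement (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def process_frame(gray_frame):
    """Segment the foreground, clean it up, and track the largest blob's centroid."""
    fg = bg_model.apply(gray_frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)

    prediction = kf.predict()
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        centroid = np.array([[x + w / 2.0], [y + h / 2.0]], dtype=np.float32)
        kf.correct(centroid)
    return fg, prediction[:2].ravel()         # foreground mask and predicted (x, y)
```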

    Flame detection method in video using covariance descriptors

    A video fire detection system that uses a spatio-temporal covariance matrix of video data is proposed. The system divides the video into spatio-temporal blocks and computes covariance features extracted from these blocks to detect fire. Feature vectors taking advantage of both the spatial and the temporal characteristics of flame-colored regions are classified using an SVM classifier, which is trained and tested on video data containing flames and flame-colored objects. Experimental results are presented. © 2011 IEEE.
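
    A covariance descriptor for a spatio-temporal block is typically formed by computing a property vector at every pixel (for example colour channels, intensity, and spatial and temporal derivatives) and taking the covariance of those vectors over the block; the upper-triangular entries are then fed to the classifier. The sketch below follows that generic recipe with an assumed property set; the exact features and block size used in the paper may differ.

```python
# Hypothetical sketch: a covariance descriptor for one spatio-temporal block.
# The property set (R, G, B, intensity, |dI/dx|, |dI/dy|, |dI/dt|) is an assumption.
import numpy as np

def block_covariance_descriptor(block_rgb):
    """block_rgb: array of shape (T, H, W, 3). Returns the upper-triangular part of
    the covariance matrix of the per-pixel property vectors (28 values for 7 properties)."""
    block = np.asarray(block_rgb, dtype=np.float64)
    intensity = block.mean(axis=3)

    dt = np.abs(np.gradient(intensity, axis=0))      # temporal derivative
    dy = np.abs(np.gradient(intensity, axis=1))      # vertical spatial derivative
    dx = np.abs(np.gradient(intensity, axis=2))      # horizontal spatial derivative

    props = np.stack([block[..., 0].ravel(), block[..., 1].ravel(), block[..., 2].ravel(),
                      intensity.ravel(), dx.ravel(), dy.ravel(), dt.ravel()])
    cov = np.cov(props)                              # 7 x 7 covariance matrix
    return cov[np.triu_indices_from(cov)]
```

    Descriptors computed this way from flame and flame-coloured non-flame blocks could then train a standard SVM (for example sklearn.svm.SVC), mirroring the classification stage described in the abstract.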

    Fire detection of Unmanned Aerial Vehicle in a Mixed Reality-based System

    This paper proposes the use of a low-cost micro-electro-mechanical system, including an inertial measurement unit (IMU), together with a consumer-grade digital camera and a fire detection algorithm, on a nano unmanned aerial vehicle for inspection applications. The video stream (monocular camera) and navigation data (IMU) rely on a state-of-the-art indoor/outdoor navigation system. The system combines the Robot Operating System and computer vision techniques to render the metric scale of monocular vision and gravity observable and to provide robust, accurate and novel inter-frame motion estimates. The collected onboard data are communicated to the ground station and processed by a Simultaneous Localisation and Mapping (SLAM) system. A robust and efficient re-localisation mode was used to recover from tracking failure, motion blur and frame loss in the received data. The fire detection algorithm was based on colour, movement attributes, the temporal variation of the fire's intensity and its accumulation around a point. A cumulative time-derivative matrix was used to detect areas with the fire's high-frequency luminance flicker (a random characteristic) by analysing frame-by-frame changes. Colour, surface coarseness, boundary roughness and skewness features were considered while the quadrotor flew autonomously within cluttered and congested areas. A Mixed Reality system was adopted to visualise and test the proposed system in a combined physical/virtual environment. The results showed that the UAV could successfully detect fire and flame, fly towards and hover around it, communicate with the ground station and build the SLAM map.
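
    The "cumulative time derivative matrix" is described only at a high level; a straightforward reading is a per-pixel accumulator of absolute inter-frame luminance differences, so regions with high-frequency flicker build up large values. The sketch below implements that reading; the decay factor and the threshold are assumptions rather than values from the paper.

```python
# Hypothetical sketch: per-pixel accumulation of absolute inter-frame luminance
# differences, so flickering (candidate flame) regions accumulate large values.
# The decay factor and threshold are illustrative assumptions.
import numpy as np

class FlickerAccumulator:
    def __init__(self, frame_shape, decay=0.9):
        self.acc = np.zeros(frame_shape, dtype=np.float32)
        self.prev = None
        self.decay = decay                    # gradually forget old activity

    def update(self, gray_frame):
        gray = gray_frame.astype(np.float32)
        if self.prev is not None:
            self.acc = self.decay * self.acc + np.abs(gray - self.prev)
        self.prev = gray
        return self.acc

    def flicker_mask(self, threshold=200.0):
        """Binary mask of pixels whose accumulated temporal activity exceeds the threshold."""
        return self.acc > threshold
```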