
    Verification of Smoke Detection in Video Sequences Based on Spatio-temporal Local Binary Patterns

    Early smoke detection in outdoor scenes from video sequences is one of the crucial tasks of modern surveillance systems. Real scenes may include objects whose dynamic behavior resembles smoke, owing to low-resolution cameras, blurring, or weather conditions, so verification of smoke detection is a necessary stage in such systems. Verification confirms the true smoke regions once regions similar to smoke have been detected in a video sequence. The contributions are two-fold. First, many types of Local Binary Patterns (LBPs) in 2D and 3D variants were investigated experimentally with respect to the changing properties of smoke as a fire develops. Second, a map of brightness differences, an edge map, and a Laplacian map were studied within the Spatio-Temporal LBP (STLBP) specification. The descriptors are based on histograms, and classification into three classes (dense smoke, transparent smoke, and non-smoke) was implemented using the Kullback-Leibler divergence. The recognition results achieved 96–99% and 86–94% accuracy for dense smoke, depending on the type of LBP and on shooting artifacts, including noise.
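
    As a rough illustration of this verification scheme, the sketch below computes a plain 2D 8-neighbour LBP histogram for a candidate region and assigns it to the class whose prototype histogram is nearest in Kullback-Leibler divergence. The neighbour layout, the epsilon smoothing, and the prototypes dictionary are illustrative assumptions; the paper's actual descriptors are spatio-temporal (STLBP) variants.

```python
import numpy as np

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """8-neighbour LBP codes of a grayscale patch, as a normalized histogram."""
    c = patch[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = patch[1 + dy:patch.shape[0] - 1 + dy, 1 + dx:patch.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << bit)  # set bit if neighbour >= center
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> float:
    # Epsilon smoothing avoids division by zero for empty histogram bins.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def classify(region: np.ndarray, prototypes: dict) -> str:
    """Assign the class whose prototype histogram is closest in KL divergence."""
    h = lbp_histogram(region)
    return min(prototypes, key=lambda k: kl_divergence(h, prototypes[k]))

# Hypothetical usage with precomputed class prototype histograms:
# prototypes = {"dense_smoke": h1, "transparent_smoke": h2, "non_smoke": h3}
# label = classify(candidate_region, prototypes)
```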

    Dynamic texture recognition using time-causal and time-recursive spatio-temporal receptive fields

    This work presents a first evaluation of using spatio-temporal receptive fields from a recently proposed time-causal spatio-temporal scale-space framework as primitives for video analysis. We propose a new family of video descriptors based on regional statistics of spatio-temporal receptive field responses and evaluate this approach on the problem of dynamic texture recognition. Our approach generalises a previously used method, based on joint histograms of receptive field responses, from the spatial to the spatio-temporal domain and from object recognition to dynamic texture recognition. The time-recursive formulation enables computationally efficient time-causal recognition. The experimental evaluation demonstrates competitive performance compared to the state of the art. In particular, it is shown that binary versions of our dynamic texture descriptors achieve improved performance compared to a large range of similar methods using different primitives, either handcrafted or learned from data. Further, our qualitative and quantitative investigation into parameter choices and the use of different sets of receptive fields highlights the robustness and flexibility of our approach. Together, these results support the descriptive power of this family of time-causal spatio-temporal receptive fields, validate our approach for dynamic texture recognition, and point towards the possibility of designing a range of video analysis methods based on these new time-causal spatio-temporal primitives. Comment: 29 pages, 16 figures
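
    A minimal sketch of the joint-histogram idea under stated assumptions: plain spatio-temporal derivatives stand in for the receptive field responses (the paper uses a time-causal scale-space, not simple gradients), each response channel is quantized, and a joint histogram over the volume forms the descriptor. The smoothing scale, the derivative set, and the 8-bin quantization are all illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_histogram_descriptor(video: np.ndarray, bins: int = 8) -> np.ndarray:
    """video: (T, H, W) grayscale volume -> flattened, normalized joint histogram."""
    # Crude stand-in for a scale-space level; the paper's framework is time-causal.
    v = gaussian_filter(video.astype(float), sigma=1.5)
    lt, ly, lx = np.gradient(v)  # simple derivative "receptive field responses"
    responses = np.stack([lx, ly, lt], axis=-1).reshape(-1, 3)
    # Quantize each response channel into roughly equal-population bins.
    digitized = np.empty_like(responses, dtype=np.intp)
    for ch in range(3):
        edges = np.quantile(responses[:, ch], np.linspace(0, 1, bins + 1)[1:-1])
        digitized[:, ch] = np.digitize(responses[:, ch], edges)
    # Joint histogram over the three quantized channels.
    hist, _ = np.histogramdd(digitized, bins=(bins,) * 3, range=[(0, bins)] * 3)
    return (hist / hist.sum()).ravel()
```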

    Convolutional Neural Network on Three Orthogonal Planes for Dynamic Texture Classification

    Dynamic Textures (DTs) are sequences of images of moving scenes, such as smoke, vegetation, and fire, that exhibit certain stationarity properties in time. The analysis of DTs is important for recognition, segmentation, synthesis, and retrieval in a range of applications including surveillance, medical imaging, and remote sensing. Deep learning methods have shown impressive results and are now the new state of the art for a wide range of computer vision tasks, including image and video recognition and segmentation. In particular, Convolutional Neural Networks (CNNs) have recently proven to be well suited for texture analysis, with a design similar to a filter-bank approach. In this paper, we develop a new approach to DT analysis based on a CNN method applied on three orthogonal planes: xy, xt, and yt. We train CNNs on spatial frames and temporal slices extracted from the DT sequences and combine their outputs to obtain a competitive DT classifier. Our results on a wide range of commonly used DT classification benchmark datasets prove the robustness of our approach. Significant improvement over the state of the art is shown on the larger datasets. Comment: 19 pages, 10 figures
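
    The decomposition itself is simple to sketch. Below, a (T, H, W) video yields one representative slice per orthogonal plane; in the paper's setting, each plane's slices would feed a separate 2D CNN and the per-plane predictions would be fused. The mid-volume slice selection and the plain averaging fusion are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

def orthogonal_planes(video: np.ndarray):
    """video: (T, H, W) grayscale volume -> one representative 2D slice per plane."""
    t, h, w = video.shape
    xy = video[t // 2]        # spatial frame (H, W) at the middle of the clip
    xt = video[:, h // 2, :]  # temporal slice (T, W): central row over time
    yt = video[:, :, w // 2]  # temporal slice (T, H): central column over time
    return xy, xt, yt

def fuse_predictions(p_xy, p_xt, p_yt):
    """Late fusion: average the per-plane class-probability vectors."""
    return (np.asarray(p_xy) + np.asarray(p_xt) + np.asarray(p_yt)) / 3.0

# Hypothetical usage with three trained per-plane classifiers:
# xy, xt, yt = orthogonal_planes(frames)
# class_id = int(np.argmax(fuse_predictions(cnn_xy(xy), cnn_xt(xt), cnn_yt(yt))))
```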

    Video-based Smoke Detection Algorithms: A Chronological Survey

    Over the past decade, several vision-based algorithms proposed in the literature have resulted in the development of a large number of techniques for the detection of smoke and fire from video images. Video-based smoke detection approaches are becoming practical alternatives to conventional fire detection methods due to their numerous advantages, such as early fire detection, fast response, non-contact operation, absence of spatial limits, the ability to provide live video that conveys fire progress information, and the capability to provide forensic evidence for fire investigations. This paper provides a chronological survey of the different video-based smoke detection methods available in the literature from 1998 to 2014. Though the paper does not aim at a comparative analysis of the surveyed methods, the perceived strengths and weaknesses of the different methods are identified, as this will be useful for future research in video-based smoke or fire detection. Keywords: Early fire detection, video-based smoke detection, algorithms, computer vision, image processing

    Completed Local Structure Patterns on Three Orthogonal Planes for Dynamic Texture Recognition

    Dynamic texture (DT) is a challenging problem in computer vision because of the chaotic motion of textures. In this paper, we address a new dynamic texture operator by considering local structure patterns (LSP) and completed local binary patterns (CLBP) for static images in three orthogonal planes to capture spatio-temporal texture structures. Since the typical local binary pattern (LBP) operator, which uses the center pixel for thresholding, has limitations such as sensitivity to noise and to near-uniform regions, the proposed approach deals with these drawbacks by using global and local texture information for adaptive thresholding and by using CLBP to exploit complementary texture information in the three orthogonal planes. Evaluations on different dynamic texture datasets (UCLA, DynTex, DynTex++) show that our proposal significantly outperforms recent results from state-of-the-art approaches.
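
    A minimal sketch of the "completed" encoding, assuming the standard CLBP sign/magnitude decomposition: alongside the usual sign code, a magnitude code thresholds |neighbour - center| against a global mean magnitude, so near-uniform regions still yield informative bits. Computing these codes per plane (xy, xt, yt) and concatenating the histograms gives the TOP-style extension; the 8-neighbour layout and the global threshold are illustrative choices, not the paper's exact adaptive thresholding.

```python
import numpy as np

def clbp_codes(img: np.ndarray):
    """Return (sign_code, magnitude_code) maps for an 8-neighbour CLBP."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    diffs = [img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx] - c
             for dy, dx in shifts]
    thr = np.abs(np.stack(diffs)).mean()  # global magnitude threshold
    sign_code = np.zeros(c.shape, dtype=np.uint8)
    mag_code = np.zeros(c.shape, dtype=np.uint8)
    for bit, d in enumerate(diffs):
        sign_code |= ((d >= 0).astype(np.uint8) << bit)           # CLBP sign bits
        mag_code |= ((np.abs(d) >= thr).astype(np.uint8) << bit)  # CLBP magnitude bits
    return sign_code, mag_code

# Per-plane histograms of the sign and magnitude codes would then be
# concatenated into the final spatio-temporal descriptor.
```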

    Unmanned Aerial Systems for Wildland and Forest Fires

    Wildfires represent an important natural risk causing economic losses, human deaths, and significant environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) were proposed. UAS have proven to be useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies, and task planning. They can provide a low-cost alternative for the prevention, detection, and real-time support of firefighting. In this paper, we review previous work related to the use of UAS in wildfires. Onboard sensor instruments, fire perception algorithms, and coordination strategies are considered. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGVs) for a more efficient wildland firefighting strategy at a larger scale. Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    Directional Dense-Trajectory-based Patterns for Dynamic Texture Recognition

    The representation of dynamic textures (DTs), well known as sequences of moving textures, is a challenging problem in video analysis due to the disorientation of motion features. Analyzing DTs to make them "understandable" plays an important role in different computer vision applications. In this paper, an efficient approach for DT description is proposed by addressing the following novel concepts. First, the beneficial properties of dense trajectories are exploited for the first time to efficiently describe DTs instead of the whole video. Second, two substantial extensions of the Local Vector Pattern operator are introduced to form a completed model based on complemented components, enhancing its performance in encoding the directional features of motion points in a trajectory. Finally, we present a new framework, called Directional Dense Trajectory Patterns, which takes advantage of directional beams of dense trajectories along with the spatio-temporal features of their motion points in order to construct dense-trajectory-based descriptors with more robustness. Evaluations of DT recognition on different benchmark datasets (i.e., UCLA, DynTex, and DynTex++) have verified the effectiveness of our proposal.
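
    A minimal sketch of the dense-trajectory backbone this descriptor builds on: points sampled on a regular grid are carried frame to frame by an optical flow field, yielding short tracks whose motion points would then be encoded by the local-pattern operators (omitted here). The grid step and the nearest-pixel flow lookup are illustrative assumptions.

```python
import numpy as np

def track_dense_points(flows, step: int = 8):
    """flows: list of (H, W, 2) frame-to-frame flow fields, channels = (dx, dy).
    Returns trajectories as an array of shape (num_points, len(flows) + 1, 2)."""
    h, w, _ = flows[0].shape
    # Seed points on a regular grid, offset to the cell centers.
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (N, 2) as (x, y)
    tracks = [pts.copy()]
    for flow in flows:
        # Nearest-pixel flow lookup at each current point position.
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]
        pts[:, 0] = np.clip(pts[:, 0], 0, w - 1)  # keep tracks inside the frame
        pts[:, 1] = np.clip(pts[:, 1], 0, h - 1)
        tracks.append(pts.copy())
    return np.stack(tracks, axis=1)
```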

    Action Recognition in Videos Based on the Fusion of Visual Rhythm Representations

    Advisors: Hélio Pedrini, David Menotti Gomes. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Advances in video acquisition and storage technologies have promoted a great demand for the automatic recognition of actions. The use of cameras for security and surveillance purposes has applications in several scenarios, such as airports, parks, banks, stations, roads, hospitals, supermarkets, industries, stadiums, and schools. An inherent difficulty of the problem is the complexity of the scene under usual recording conditions, which may contain a complex and moving background, multiple people in the scene, interactions with other actors or objects, and camera motion. The most recent databases are built primarily from recordings shared on YouTube and from movie snippets, situations in which these obstacles are not restricted. Another difficulty is the impact of the temporal dimension, since it expands the size of the data, increasing computational cost and storage space. In this work, we present a volume description methodology using the Visual Rhythm (VR) representation. This technique reshapes the original video volume into an image on which two-dimensional descriptors are computed.
We investigated different strategies for constructing the representation by combining configurations across several image domains and frame-scanning directions. From this, we propose two novel feature extraction methods, Naïve Visual Rhythm (Naïve VR) and Visual Rhythm Trajectory Descriptor (VRTD). The first approach is the straightforward application of the technique to the original video volume, forming a holistic descriptor that considers action events as patterns and shapes in the visual rhythm image. The second variation focuses on the analysis of small neighborhoods obtained from the dense trajectory process, which allows the algorithm to capture details unnoticed by the global description. We tested our methods on eight public databases: one of hand gestures (SKIG), two in first person (DogCentric and JPL), and five in third person (Weizmann, KTH, MuHAVi, UCF11, and HMDB51). The results show that the developed techniques are able to extract motion elements along with shape and appearance information, achieving competitive accuracy rates compared to state-of-the-art action recognition approaches. Doctorate in Computer Science; grant 2015/03156-7, FAPESP.
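
    A minimal sketch of the Visual Rhythm construction, assuming grayscale input: one pixel line is sampled from every frame and the lines are stacked over time, reshaping the (T, H, W) volume into a single 2D image on which ordinary 2D descriptors can be computed. The three scan directions shown (central row, central column, main diagonal) are illustrative choices among the strategies investigated in the thesis.

```python
import numpy as np

def visual_rhythm(video: np.ndarray, direction: str = "horizontal") -> np.ndarray:
    """video: (T, H, W) grayscale volume -> (T, L) visual rhythm image."""
    t, h, w = video.shape
    if direction == "horizontal":  # central row of each frame, stacked over time
        return video[:, h // 2, :]
    if direction == "vertical":    # central column of each frame
        return video[:, :, w // 2]
    if direction == "diagonal":    # main diagonal of each frame
        n = min(h, w)
        return video[:, np.arange(n), np.arange(n)]
    raise ValueError(f"unknown direction: {direction}")

# rhythm = visual_rhythm(frames)  # then compute any 2D descriptor on the image
```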