EARLY FOREST FIRE DETECTION USING TEXTURE, BLOB THRESHOLD, AND MOTION ANALYSIS OF PRINCIPAL COMPONENTS
Forest fires constantly threaten ecological systems, infrastructure, and human lives. The purpose of this study is to minimize the devastating damage caused by forest fires. Since it is impossible to completely avoid their occurrence, a fast and appropriate intervention is essential to minimize their destructive consequences. The most traditional method for detecting forest fires is human-based surveillance from lookout towers. This study presents a more modern technique: land-based real-time multispectral video processing to identify and determine the possibility of fire occurring within the camera's field of view. The temporal, spectral, and spatial signatures of the fire are exploited. The methods discussed include: (1) range filtering followed by entropy filtering of the infrared (IR) video data, and (2) Principal Component Analysis of visible-spectrum video data followed by motion analysis and adaptive intensity thresholding. The two schemes presented are tailored to detect the fire core and the smoke plume, respectively.
A cooled midwave infrared (IR) camera captures the heat distribution within the field of view. The fire core is then isolated using texture-analysis techniques: range filtering is first applied to two consecutive IR frames, followed by entropy filtering of their absolute difference.
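The two-stage texture step above can be sketched as follows. This is a minimal illustration using scipy, not the authors' implementation: the abstract gives no window sizes or thresholds, so the filter sizes, histogram bin count, and synthetic frames here are all assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, generic_filter

def range_filter(img, size=3):
    # Local range: max minus min over a sliding window, which responds
    # to the rapidly varying texture of a fire core.
    return maximum_filter(img, size) - minimum_filter(img, size)

def local_entropy(img, size=9, bins=16):
    # Local Shannon entropy over a sliding window.
    def entropy(window):
        hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
        p = hist / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    return generic_filter(img, entropy, size=size)

# Two consecutive (synthetic) IR frames, normalized to [0, 1]
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = rng.random((32, 32))

# Step 1: range-filter each frame.
# Step 2: entropy-filter the absolute difference of the results.
diff = np.abs(range_filter(frame_a) - range_filter(frame_b))
diff = diff / diff.max()
heat_texture = local_entropy(diff)
```

Pixels with high local entropy in the filtered difference would then be candidate fire-core regions; how the paper thresholds them is not stated in the abstract.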
Since smoke is the earliest sign of fire, this study also explores multiple techniques for detecting smoke plumes in a given scene. The spatial and temporal variance of the smoke plume is captured using temporal Principal Component Analysis (PCA). The results show that a smoke plume is readily segmented via PCA applied to the visible blue band over 2 seconds, sampled every 0.2 seconds. The smoke plume appears in the second principal component and is finally identified, segmented, and isolated using either motion analysis or an adaptive intensity threshold.
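The temporal PCA step can be sketched in a few lines of numpy. This is a simplified sketch under assumptions: the sampling scheme (10 frames, i.e. 2 s at 0.2 s intervals) comes from the abstract, but the synthetic frames and the crude mean-based threshold are illustrative stand-ins for the paper's adaptive threshold.

```python
import numpy as np

def temporal_pca(frames):
    # frames: (T, H, W) stack of blue-band frames (T=10 here: 2 s at 0.2 s).
    # Each frame is one observation over H*W pixel variables; PCA over
    # time captures the spatio-temporal variance of the drifting plume.
    T, H, W = frames.shape
    X = frames.reshape(T, -1).astype(float)
    X -= X.mean(axis=0)  # center each pixel's time series
    # Thin SVD of the T x (H*W) matrix avoids forming a huge covariance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    # Rows of Vt are principal-component images, ordered by variance
    return Vt.reshape(-1, H, W), S

rng = np.random.default_rng(1)
frames = rng.random((10, 24, 24))          # stand-in for blue-band video
pcs, variances = temporal_pca(frames)

# The abstract reports the smoke plume in the 2nd principal component;
# a simple intensity threshold then yields a candidate smoke mask.
second_pc = pcs[1]
mask = np.abs(second_pc) > np.abs(second_pc).mean()
```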
Experimental results obtained in this study show that the proposed system can detect smoke effectively at a distance of approximately 832 meters, with a low false-alarm rate and a short reaction time. Deployed in practice, such a system would achieve early forest fire detection and minimize fire damage.
Keywords: Image Processing, Principal Component Analysis, PCA, Principal Component, PC, Texture Analysis, Motion Analysis, Multispectral, Visible, Cooled Midwave Infrared, Smoke Signature, Gaussian Mixture Model
PERFORMANCE ANALYSIS OF THE MICROSOFT KINECT DEPTH SENSOR V2.0 FOR A REAL-TIME POSTURE DETECTION BY GESTURE CONFIDENCE LEVEL
A depth sensor is convenient for the systematic and comprehensive data collection needed for real-time posture and gesture detection, which could in turn drive automated interventions. Recently, direct measurement using a depth sensor coupled with modelling software has emerged as an alternative tool for real-time Digital Human Modelling (DHM). Microsoft Kinect is a low-cost motion-sensing device that gathers 3-D human motion data with real-time data and intervention features. However, accurate real-time data collection depends on the object's placement relative to the sensor, which helps decrease measurement errors and increase depth resolution. This study aims to obtain an effective sensor setup by examining the device's performance across three variables (object-to-sensor distance, horizontal field of view (FOV), and light intensity) to reach an acceptable gesture confidence level using the Kinect SDK V2.0. A standing posture with a hand overhead and a seated posture with a lifted hand were selected as the investigated gestures. An ANOVA was performed to determine whether the three variables and their interactions were significant factors in the Kinect's ability to resolve the placement relative to the target. The results showed that distance and horizontal FOV were statistically significant variables. The study therefore proposes placing the sensor 2 or 3 m from the investigated object with the horizontal FOV limited to 0 or 10° for the standing posture, and 1 or 2 m away with the horizontal FOV limited to 0 or 20° for the seated posture. This proposal could serve as a reference for setting up a direct-measurement studio for acquiring human body movement data.
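The statistical test used above can be illustrated on a single factor. The study ran a full factorial analysis over distance, FOV, and light intensity; this sketch shows only a one-way ANOVA on the distance factor, and the confidence-level samples below are synthetic stand-ins for the measured Kinect SDK values.

```python
import numpy as np
from scipy import stats

# Hypothetical gesture-confidence samples (0..1) at three
# object-to-sensor distances; the real study measured these with
# the Kinect SDK V2.0 at controlled placements.
rng = np.random.default_rng(2)
conf_1m = rng.normal(0.70, 0.05, 30)
conf_2m = rng.normal(0.90, 0.05, 30)
conf_3m = rng.normal(0.88, 0.05, 30)

# One-way ANOVA: is object-to-sensor distance a significant factor
# in the achieved confidence level?
f_stat, p_value = stats.f_oneway(conf_1m, conf_2m, conf_3m)
distance_significant = p_value < 0.05
```

On data shaped like this, the test flags distance as significant, mirroring the study's finding; the FOV factor and the interaction terms would need a factorial ANOVA (e.g. via statsmodels) rather than `f_oneway`.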
Image based human body rendering via regression & MRF energy minimization
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. A machine learning method for synthesising human images is explored to create new images without relying on 3D modelling. Machine learning allows the creation of new images through prediction from existing data, based on the use of training images. In the present study, image synthesis is performed at two levels: contour and pixel. A class of learning-based methods is formulated to create object contours from the training images for the synthetic image, which then allow pixel synthesis within the contours at the second level. The methods rely on robust object descriptions, dynamic learning models after appropriate motion segmentation, and machine-learning-based frameworks.
Image-based human image synthesis using machine learning is a research focus that has recently gained considerable attention in computer graphics, making use of techniques from image and motion analysis in computer vision. The problem lies in the estimation of methods for image-based object configuration (i.e. segmentation, contour outline). Using the results of these analysis methods as a basis, the research adopts a machine learning approach in which human images are synthesised by performing contour and pixel synthesis learned from the training images.
Firstly, the thesis shows how an accurate silhouette is distilled using a background-subtraction method developed for accuracy and efficiency. The traditional support vector machine approach is used to avoid ambiguities within the regression process. Images can be represented as a class of accurate and efficient vectors, for single images as well as sequences. Secondly, the framework is explored using a particular class of machine learning methods, support vector regression (SVR), to obtain the convergence result of vectors for contour allocation. The changing relationship between the synthetic image and the training image is expressed as a vector and represented as functions. Finally, pixel synthesis is performed based on belief propagation.
This thesis proposes a novel image-based rendering method for colour image synthesis using SVR and belief propagation, generalised to enable the prediction of contour and colour information from input colour images. The methods rely on appropriately defined, robust input colour images, optimising the input contour images within a sparse SVR framework. Firstly, the thesis shows how contours can be predicted effectively and efficiently from small numbers of input contour images. In addition, the thesis exploits the sparsity properties of SVR for efficiency and uses SVR to estimate the regression function. The image-based rendering method employed in this study enables contour synthesis from small numbers of input source images. This procedure avoids the use of complex models and geometry information. Secondly, the method used for human-body contour colouring is extended to define eight-connected pixels and construct a link-distance field via belief propagation. The link distance, which acts as the message in propagation, is computed by adapting the lower-envelope method from the fast distance transform. Finally, the methodology is tested on human facial and human-body clothing information. The accuracy of the test results for the human body model confirms the efficiency of the proposed method.
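The sparse SVR idea the thesis exploits can be illustrated with scikit-learn's epsilon-SVR. This is not the thesis pipeline: the 1-D curve below is a hypothetical stand-in for one coordinate of a contour vector, and the kernel and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical training data: a contour slice sampled as y = f(x).
x = np.linspace(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel()

# Epsilon-SVR: only points on or outside the epsilon-tube become
# support vectors, which is where the sparsity comes from.
model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(x, y)

predicted = model.predict(x)          # regressed contour coordinate
n_support = len(model.support_)       # typically fewer than len(x)
```

The fitted function depends only on the support vectors, so predicting a new contour point costs a kernel evaluation per support vector rather than per training sample.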
On the Representation of Human Motions and Distance-based Retargeting
Distance-based motion adaptation leads to the formulation of a dynamical Distance Geometry Problem (dynDGP) in which the involved distances simultaneously represent the morphology of the animated character as well as a possible motion. The explicit use of inter-joint distances allows us to easily verify the presence of joint contacts, which one generally wishes to preserve when adapting a given motion to characters having a different morphology. In this work, we focus our attention on suitable representations of human-like animated characters and study the advantages (and disadvantages) of using some of them. In the initial works on distance-based motion adaptation, a 3n-dimensional vector was employed to represent the positions of the n joints of the character at a given frame. Here, we investigate another representation, very popular in computer graphics, that replaces every joint position in three-dimensional space with a set of three ordered Euler angles. We show that the latter can in fact be useful for avoiding some of the artifacts observed in previous computational experiments, but we argue that, from a motion adaptation point of view, this Euler-angle representation does not seem to be the optimal one. By paying particular attention to the degrees of freedom of the studied representations, it turns out that a novel character representation, inspired by representations used in structural biology for molecules, may allow us to reduce the character's degrees of freedom to their minimal value. As a result, statistical analysis of human motion databases in which the motions are given in this new representation can potentially provide important insights into human motion. This study is an initial step towards the identification of a full set of constraints capable of ensuring that unnatural postures cannot be created while tackling motion adaptation problems.
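The inter-joint distance representation at the heart of the dynDGP formulation is easy to sketch: a frame is a matrix of pairwise joint distances rather than a 3n-vector of positions. The toy four-joint "character" and the contact threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def interjoint_distances(pose):
    # pose: (n, 3) joint positions at one frame. The distance-based
    # formulation works with the n x n matrix of pairwise inter-joint
    # distances, which encodes morphology (bone lengths) directly and
    # makes joint contacts trivial to check.
    diff = pose[:, None, :] - pose[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy 4-joint character at one frame
pose = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [1.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0]])
D = interjoint_distances(pose)

# A joint contact is simply a small off-diagonal distance
contact = D[0, 1] < 0.1
```

Preserving a contact while retargeting to a different morphology then amounts to keeping the corresponding entry of D small, a constraint that is awkward to state in joint-angle form.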
Fast human activity recognition based on structure and motion
This is the post-print version of the final paper published in Pattern Recognition Letters. The published article is available from the link below. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Copyright @ 2011 Elsevier B.V. We present a method for the recognition of human activities. The proposed approach is based on the construction of a set of templates for each activity as well as on the measurement of the motion in each activity. Templates are designed so that they capture the structural and motion information that is most discriminative among activities. The direct motion measurements capture the amount of translational motion in each activity. The two features are fused at the recognition stage. Recognition is achieved in two steps by calculating the similarity between the templates and motion features of the test and reference activities. The proposed methodology is experimentally assessed and is shown to yield excellent performance. Funded by the European Commission.
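The fusion step at the recognition stage can be sketched as a weighted combination of two similarity scores. The abstract does not specify the similarity measures or fusion weights, so the correlation-based template score, the motion score, and the weight `alpha` below are all assumptions.

```python
import numpy as np

def activity_similarity(test_template, ref_template,
                        test_motion, ref_motion, alpha=0.5):
    # Structural term: correlation between the two activity templates.
    t_sim = np.corrcoef(test_template.ravel(),
                        ref_template.ravel())[0, 1]
    # Motion term: agreement between the translational-motion
    # measurements, mapped into (0, 1].
    m_sim = 1.0 / (1.0 + abs(test_motion - ref_motion))
    # Late fusion of the two features, as in the recognition stage.
    return alpha * t_sim + (1 - alpha) * m_sim

rng = np.random.default_rng(3)
template = rng.random((16, 16))

same = activity_similarity(template, template, 2.0, 2.0)    # perfect match
other = activity_similarity(template, template, 2.0, 4.0)   # motion differs
```

A test activity would then be assigned the label of the reference activity with the highest fused score.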
Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras
Although personal privacy has become a major concern, surveillance technology is now ubiquitous in modern society, mainly due to the increasing number of crimes and the essential need to provide a secure and safer environment. Recent research studies have confirmed the possibility of recognizing people by the way they walk, i.e. their gait. The aim of this research is to investigate the use of gait for people detection as well as identification across different cameras. We present a new approach for tracking and identifying people between different non-intersecting, uncalibrated, stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed to derive gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. The experimental results affirmed the robustness of our approach in detecting walking people and its ability to extract gait features across different camera viewpoints, achieving an identity recognition rate of 73.6% over 2,270 processed video sequences. Furthermore, the experiments confirmed the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching between two non-overlapping views.
Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets
We present an example-based approach to pose recovery, using histograms of oriented gradients as image descriptors. Tests on the HumanEva-I and HumanEva-II data sets provide insight into the strengths and limitations of an example-based approach. We report mean relative 3D errors of approximately 65 mm per joint on HumanEva-I and 175 mm on HumanEva-II. We discuss our results using single and multiple views. We also perform experiments to assess the algorithm's generalization to unseen subjects, actions, and viewpoints. We plan to incorporate the temporal aspect of human motion analysis to reduce orientation ambiguities and increase pose recovery accuracy.
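The example-based pipeline above pairs an image descriptor with a nearest-neighbour lookup into a pose database. This is a minimal sketch, not the paper's implementation: the simplified HOG below omits block normalization and the overlapping blocks of the standard descriptor, and the pose "labels" are stand-ins for stored 3D joint configurations.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    # Minimal histogram-of-oriented-gradients descriptor: unsigned
    # gradient-orientation histograms, magnitude-weighted, pooled over
    # non-overlapping cells, then L2-normalized as one vector.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi),
                                   weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    return f / (np.linalg.norm(f) + 1e-9)

def nearest_pose(query, train_descs, train_poses):
    # Example-based recovery: return the stored pose whose descriptor
    # is closest to the query descriptor.
    d = np.linalg.norm(train_descs - query, axis=1)
    return train_poses[np.argmin(d)]

rng = np.random.default_rng(4)
img_a = rng.random((32, 32))
img_b = rng.random((32, 32))
descs = np.stack([hog_descriptor(img_a), hog_descriptor(img_b)])
poses = np.array([0, 1])                      # stand-in pose indices
match = nearest_pose(hog_descriptor(img_a), descs, poses)
```

With a real exemplar database, `train_poses` would hold 3D joint positions, and the orientation ambiguities mentioned in the abstract arise when distinct poses produce near-identical descriptors.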
Covariate conscious approach for Gait recognition based upon Zernike moment invariants
Gait recognition, i.e. identification of an individual from his or her walking pattern, is an emerging field. While existing gait recognition techniques perform satisfactorily under normal walking conditions, their performance tends to suffer drastically with variations in clothing and carrying conditions. In this work, we propose a novel covariate-cognizant framework to deal with the presence of such covariates. We describe gait motion by forming a single 2D spatio-temporal template from the video sequence, called the Average Energy Silhouette Image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the parts of the AESI affected by covariates. Following this, features are extracted using the Spatial Distribution of Oriented Gradients (SDOGs) and the novel Mean of Directional Pixels (MDPs) methods. The obtained features are fused together to form the final feature set. Experimental evaluation of the proposed framework on three publicly available datasets, i.e. CASIA dataset B, the OU-ISIR Treadmill dataset B, and the USF Human-ID challenge dataset, against recently published gait recognition approaches demonstrates its superior performance.
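The AESI template that starts this pipeline is simply an average of the binary gait silhouettes. The sketch below shows that collapsing step only; silhouette extraction, alignment, and gait-cycle segmentation, which the full method requires, are omitted, and the toy silhouettes are illustrative.

```python
import numpy as np

def average_energy_silhouette(silhouettes):
    # AESI: average the aligned binary silhouettes of one sequence,
    # collapsing the video into a single 2D spatio-temporal template
    # whose pixel values reflect how often each pixel was foreground.
    stack = np.stack(silhouettes).astype(float)
    return stack.mean(axis=0)

# Toy sequence of three 4x4 binary silhouettes
sils = [np.eye(4), np.eye(4), np.zeros((4, 4))]
aesi = average_energy_silhouette(sils)
```

Zernike moment invariants would then be computed over regions of this template to flag covariate-affected parts (e.g. a carried bag) before feature extraction.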
On gait as a biometric: progress and prospects
There is increasing interest in automatic recognition by gait, given its unique capability to recognize people at a distance when other biometrics are obscured. Application domains are those of any noninvasive biometric, but with particular advantage in surveillance scenarios. Its recognition capability is supported by studies in other domains such as medicine (biomechanics), mathematics, and psychology, which also suggest that gait is unique. Further, examples of recognition by gait can be found in literature, with an early reference by Shakespeare concerning recognition by the way people walk. Many of the current approaches confirm the early results suggesting that gait could be used for identification, now on much larger databases. This has been especially influenced by DARPA's Human ID at a Distance research program, with its wide range of data and approaches. Gait has benefited from developments in other biometrics and has led to new insight, particularly regarding covariates. Equally, gait-recognition approaches concern the extraction and description of moving articulated shapes, and this has wider implications than biometrics alone.