Covariate conscious approach for Gait recognition based upon Zernike moment invariants
Gait recognition, i.e., identification of an individual from his/her walking
pattern, is an emerging field. While existing gait recognition techniques
perform satisfactorily under normal walking conditions, their performance tends
to suffer drastically with variations in clothing and carrying conditions. In this
work, we propose a novel covariate cognizant framework to deal with the
presence of such covariates. We describe gait motion by forming a single 2D
spatio-temporal template from video sequence, called Average Energy Silhouette
image (AESI). Zernike moment invariants (ZMIs) are then computed to screen the
parts of AESI infected with covariates. Following this, features are extracted
from Spatial Distribution of Oriented Gradients (SDOGs) and novel Mean of
Directional Pixels (MDPs) methods. The obtained features are fused together to
form the final feature set. Experimental evaluation of the
proposed framework on three publicly available datasets, i.e., CASIA dataset B,
the OU-ISIR Treadmill dataset B and the USF Human-ID challenge dataset, against
recently published gait recognition approaches demonstrates its superior performance.
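The AESI above condenses a whole walking sequence into one 2D template. The abstract does not spell out the exact construction, but a minimal sketch, assuming it is a per-pixel average of aligned binary silhouettes (so each pixel records how often it is foreground over the cycle), could look like this:

```python
import numpy as np

def average_energy_silhouette(silhouettes):
    """Average a stack of aligned binary silhouettes (T, H, W) into a single
    2D spatio-temporal template, in the spirit of the AESI described above."""
    stack = np.asarray(silhouettes, dtype=np.float64)
    # each pixel = fraction of frames in which it belongs to the silhouette
    return stack.mean(axis=0)

# toy example: three 4x4 binary silhouettes from one gait cycle
frames = [np.array([[0,1,1,0],[0,1,1,0],[0,1,0,0],[0,1,0,0]]),
          np.array([[0,1,1,0],[0,1,1,0],[0,0,1,0],[0,0,1,0]]),
          np.array([[0,1,1,0],[0,1,1,0],[0,1,1,0],[0,1,1,0]])]
aesi = average_energy_silhouette(frames)
```

Moments such as the Zernike moment invariants would then be computed on regions of this template; the silhouette alignment step is assumed here and matters in practice.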
Improved Behavior Monitoring and Classification Using Cues Parameters Extraction from Camera Array Images
Behavior monitoring and classification is a mechanism used to automatically identify or verify individuals based on human detection, tracking and behavior recognition from video sequences captured by a depth camera. In this paper, we designed a system that precisely classifies the nature of 3D body postures obtained by Kinect using an advanced recognizer. We propose novel features that are suitable for depth data. These features are robust to noise, invariant to translation and scaling, and capable of monitoring fast human body part movements. Lastly, an advanced hidden Markov model is used to recognize different activities. In extensive experiments, our system consistently outperforms prior methods on three depth-based behavior datasets, i.e., IM-DailyDepthActivity, MSRDailyActivity3D and MSRAction3D, in both posture classification and behavior recognition. Moreover, our system handles rotation of the subject's body parts, self-occlusion and missing body parts, which significantly improves the tracking of complex activities and the recognition rate. Due to the easy accessibility, low cost and simple deployment of depth cameras, the proposed system can be applied to various consumer applications, including patient-monitoring systems, automatic video surveillance, smart homes/offices and 3D games.
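The abstract does not detail its "advanced" hidden Markov model, but the decoding step shared by HMM-based activity recognizers is Viterbi search over a hidden posture-state sequence. A minimal discrete-observation sketch, with a hypothetical two-posture model (the parameters below are illustrative, not the authors'):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete-observation HMM,
    computed in the log domain for numerical stability."""
    T, N = len(obs), len(start_p)
    logd = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(trans_p)   # (from-state, to-state)
        back[t] = scores.argmax(axis=0)            # best predecessor per state
        logd = scores.max(axis=0) + np.log(emit_p[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# two hypothetical postures (0 = stand, 1 = sit) seen through noisy codewords
start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])
emit  = np.array([[0.9, 0.1], [0.2, 0.8]])
states = viterbi([0, 0, 1, 1, 1], start, trans, emit)  # -> [0, 0, 1, 1, 1]
```

In a full recognizer, one such model per activity class would be trained, and a sequence is assigned to the class whose model gives it the highest likelihood.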
Review of Person Re-identification Techniques
Person re-identification across different surveillance cameras with disjoint
fields of view has become one of the most interesting and challenging subjects
in the area of intelligent video surveillance. Although several methods have
been developed and proposed, certain limitations and unresolved issues remain.
In all of the existing re-identification approaches, feature vectors are
extracted from segmented still images or video frames. Different similarity or
dissimilarity measures have been applied to these vectors. Some methods have
used simple constant metrics, whereas others have utilised models to obtain
optimised metrics. Some have created models based on local colour or texture
information, and others have built models based on the gait of people. In
general, the main objective of all these approaches is to achieve higher
accuracy rates and lower computational costs. This study summarises
several developments in recent literature and discusses the various available
methods used in person re-identification. Specifically, their advantages and
disadvantages are mentioned and compared.
A spatiotemporal deep learning approach for automatic pathological Gait classification
Human motion analysis provides useful information for the diagnosis and recovery assessment of people suffering from pathologies, such as those affecting the way of walking, i.e., gait. With recent developments in deep learning, state-of-the-art performance can now be achieved using a single 2D-RGB-camera-based gait analysis system, offering an objective assessment of gait-related pathologies. Such systems provide a valuable complement/alternative to the current standard practice of subjective assessment. Most 2D-RGB-camera-based gait analysis approaches rely on compact gait representations, such as the gait energy image, which summarize the characteristics of a walking sequence into one single image. However, such compact representations do not
fully capture the temporal information and dependencies between successive gait movements. This limitation is addressed by proposing a spatiotemporal deep learning approach that uses a selection of key frames to represent a gait cycle. Convolutional and recurrent deep neural networks were combined, processing each gait cycle as a collection of silhouette key frames, allowing the system to learn temporal patterns among the spatial features extracted at individual time instants. Trained with gait sequences from the GAIT-IT dataset, the proposed system is able to improve gait pathology classification accuracy, outperforming state-of-the-art solutions and achieving improved generalization on cross-dataset tests.
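The abstract leaves the key-frame selection strategy unspecified; a minimal sketch, assuming uniform temporal sampling of a segmented gait cycle (the paper's actual selection may be more elaborate), could be:

```python
import numpy as np

def select_key_frames(cycle, k=8):
    """Uniformly sample k key frames from one gait cycle (a sequence of
    silhouette frames), preserving temporal order."""
    idx = np.linspace(0, len(cycle) - 1, num=k).round().astype(int)
    return [cycle[i] for i in idx], idx

# 29 dummy 2x2 frames standing in for silhouettes of one gait cycle
frames = [np.full((2, 2), float(t)) for t in range(29)]
keys, idx = select_key_frames(frames, k=5)  # indices [0, 7, 14, 21, 28]
```

Each sampled frame would then pass through the convolutional network, and the recurrent network would model the ordered sequence of the resulting spatial features.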
Carried baggage detection and recognition in video surveillance with foreground segmentation
Security cameras installed in public spaces or in private organizations continuously
record video data with the aim of detecting and preventing crime. For that reason,
video content analysis applications, either for real time (i.e. analytic) or post-event
(i.e. forensic) analysis, have gained high interest in recent years. In this thesis,
the primary focus is on two key aspects of video analysis, reliable moving object
segmentation and carried object detection & identification.
A novel moving object segmentation scheme by background subtraction is presented
in this thesis. The scheme relies on background modelling which is based
on multi-directional gradient and phase congruency. As a post processing step,
the detected foreground contours are refined by classifying the edge segments as
either belonging to the foreground or background. A contour completion
technique based on anisotropic diffusion is introduced in this area for the first time. The proposed
method targets cast shadow removal, gradual illumination change invariance, and
closed contour extraction.
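For contrast with the gradient and phase-congruency model proposed above, the classic background-subtraction baseline such schemes build on can be sketched as a running-average background model with a difference threshold (illustrative parameter values, not those of the thesis):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: a classic baseline,
    simpler than the multi-directional gradient model in the thesis."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Mark as foreground the pixels differing from the background
    by more than `thresh` grey levels."""
    return np.abs(frame.astype(np.float64) - bg) > thresh

bg = np.zeros((4, 4))                       # learned background (empty scene)
frame = np.zeros((4, 4))
frame[1:3, 1:3] = 200.0                     # a bright moving blob
mask = foreground_mask(bg, frame)           # 4 foreground pixels
bg = update_background(bg, frame)           # blob slowly absorbed if static
```

This baseline is sensitive to cast shadows and gradual illumination change, which is precisely what the gradient- and phase-congruency-based model is designed to address.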
A state of the art carried object detection method is employed as a benchmark
algorithm. This method includes silhouette analysis by comparing human temporal
templates with unencumbered human models. The implementation of
the algorithm is improved by automatically estimating the viewing direction of
the pedestrian, and the method is extended with a carried luggage identification module. As
the temporal template is a frequency template and the information that it provides
is not sufficient, a colour temporal template is introduced. The standard
steps followed by the state of the art algorithm are approached from a different
extended (by colour information) perspective, resulting in more accurate carried
object segmentation.
The experiments conducted in this research show that the proposed closed
foreground segmentation technique attains all the aforementioned goals. The incremental
improvements applied to the state of the art carried object detection
algorithm revealed the full potential of the scheme. The experiments demonstrate
the ability of the proposed carried object detection algorithm to surpass the
state of the art method.
Human identification from video using advanced gait recognition techniques
The solutions proposed in this thesis contribute to improved gait recognition performance in practical scenarios, further enabling the adoption of gait recognition into real-world security and forensic applications that require identifying humans at a distance. Pioneering work has been conducted on frontal gait recognition using depth images to allow gait to be integrated with biometric walkthrough portals. The effects of gait challenging conditions, including clothing, carrying goods, and viewpoint, have been explored. Enhanced approaches are proposed for the segmentation, feature extraction, feature optimisation and classification elements, and state-of-the-art recognition performance has been achieved. A frontal depth gait database has been developed and made available to the research community for further investigation. Solutions are explored in the 2D and 3D domains using multiple image sources, and both domain-specific and modality-independent gait features are proposed.
Person recognition based on deep gait: a survey.
Gait recognition, also known as walking pattern recognition, has attracted deep interest from the computer vision and biometrics community due to its potential to identify individuals from a distance and its non-invasive nature. Since 2014, deep learning approaches have shown promising results in gait recognition by automatically extracting features. However, recognizing gait accurately is challenging due to covariate factors, the complexity and variability of environments, and human body representations. This paper provides a comprehensive overview of the advancements made in this field along with the challenges and limitations associated with deep learning methods. To that end, it first examines the various gait datasets used in the literature and analyzes the performance of state-of-the-art techniques. A taxonomy of deep learning methods is then presented to characterize and organize the research landscape in this field. Furthermore, the taxonomy highlights the basic limitations of deep learning methods in the context of gait recognition. The paper concludes by focusing on the present challenges and suggesting several research directions to improve the performance of gait recognition in the future.
Learning Ground Traversability from Simulations
Mobile ground robots operating on unstructured terrain must predict which
areas of the environment they are able to pass in order to plan feasible paths.
We address traversability estimation as a heightmap classification problem: we
build a convolutional neural network that, given an image representing the
heightmap of a terrain patch, predicts whether the robot will be able to
traverse such patch from left to right. The classifier is trained for a
specific robot model (wheeled, tracked, legged, snake-like) using simulation
data on procedurally generated training terrains; the trained classifier can be
applied to unseen large heightmaps to yield oriented traversability maps, and
then plan traversable paths. We extensively evaluate the approach in simulation
on six real-world elevation datasets, and run a real-robot validation in one
indoor and one outdoor environment. Webpage: http://romarcg.xyz/traversability_estimation
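The paper derives traversability labels from physics simulation of each robot model. As an illustrative stand-in for that labelling step, a toy rule based on the maximum step height along the left-to-right traversal direction (the thresholds and cell size below are hypothetical, not the authors'):

```python
import numpy as np

def max_step_left_to_right(patch, cell_size=0.1):
    """Largest absolute height change between horizontally adjacent cells,
    i.e. along the left-to-right traversal direction, per metre."""
    dh = np.abs(np.diff(patch, axis=1))
    return dh.max() / cell_size

def label_traversable(patch, max_climb=0.3, cell_size=0.1):
    """Toy labelling rule standing in for the paper's simulation:
    traversable if no single step exceeds the robot's climbing ability."""
    return max_step_left_to_right(patch, cell_size) <= max_climb

# procedurally generated toy heightmap patches (8x8 cells)
flat = np.zeros((8, 8))
ramp = np.tile(np.linspace(0.0, 0.2, 8), (8, 1))  # gentle rise, traversable
wall = flat.copy()
wall[:, 4] = 1.0                                   # abrupt step, blocked
```

Pairs of such patches and labels are exactly the kind of training data the heightmap classifier consumes; the simulation-based labels in the paper additionally capture robot dynamics that this geometric rule ignores.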