202 research outputs found
Towards automated visual surveillance using gait for identity recognition and tracking across multiple non-intersecting cameras
Although personal privacy has become a major concern, surveillance technology is now ubiquitous in modern society, mainly due to the increasing number of crimes and the need to provide secure and safer environments. Recent research has confirmed the possibility of recognizing people by the way they walk, i.e. their gait. The aim of this research is to investigate the use of gait for detecting people as well as identifying them across different cameras. We present a new approach for tracking and identifying people between different non-intersecting, un-calibrated stationary cameras based on gait analysis. A vision-based markerless extraction method is deployed to derive gait kinematics as well as anthropometric measurements in order to produce a gait signature. The novelty of our approach is motivated by recent research in biometrics and forensic analysis using gait. The experimental results affirmed the robustness of our approach in detecting walking people, as well as its ability to extract gait features from different camera viewpoints, achieving an identity recognition rate of 73.6% over 2270 processed video sequences. Furthermore, the experimental results confirmed the potential of the proposed method for identity tracking in real surveillance systems, recognizing walking individuals across different views with an average recognition rate of 92.5% for cross-camera matching between two different non-overlapping views.
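The cross-camera identification step described above can be illustrated with a minimal sketch: a gait signature is treated as a feature vector of kinematic and anthropometric measurements, and identity is assigned by nearest-neighbour comparison against a gallery enrolled from another camera. The similarity measure, feature values, and identity names below are illustrative assumptions, not the thesis's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two gait-signature feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def match_identity(probe, gallery):
    # Assign the gallery identity whose signature is most similar to the probe.
    return max(gallery, key=lambda ident: cosine_similarity(probe, gallery[ident]))

# Toy example: signatures enrolled from camera A (gallery), probe from camera B.
gallery = {"subject_1": [0.9, 1.2, 0.4], "subject_2": [0.2, 0.8, 1.5]}
probe = [0.85, 1.15, 0.45]  # observed in a different, non-overlapping view
print(match_identity(probe, gallery))  # subject_1
```

In practice the signature must be made view-invariant before such a comparison is meaningful, which is precisely the difficulty the thesis addresses.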
Human shape modelling for carried object detection and segmentation
Detecting carried objects is one of the requirements for developing systems that reason about activities involving people and objects. This thesis presents novel methods to detect and segment carried objects in surveillance videos. The contributions are divided into three main chapters. In the first, we introduce our carried object detector, which can detect a generic class of objects. We formulate carried object detection as a contour classification problem: moving object contours are classified into two classes, carried object and person. A probability mask for a person's contours is generated from an ensemble of contour exemplars (ECE) of walking/standing humans in different viewing directions. Contours that do not fall in the generated hypothesis mask are considered candidates for carried object contours. Then, a region is assigned to each candidate contour using Biased Normalized Cut (BNC), with a probability obtained by a weighted function of its overlap with the person's contour hypothesis mask and the segmented foreground. Finally, carried objects are detected by applying a Non-Maximum Suppression (NMS) method, which eliminates low-scoring carried object candidates. The second contribution presents an approach to detect carried objects with an innovative method for extracting features from foreground regions based on their local contours and superpixel information. Initially, a moving object in a video frame is segmented into multi-scale superpixels. Then, human-like regions in the foreground area are identified by matching a set of features extracted from superpixels against a codebook of local shapes. Here, the definition of human-like regions is equivalent to the person's probability map in our first proposed method (ECE).
Our second carried object detector benefits from the novel feature descriptor to produce a more accurate probability map. The complement of the superpixels' matching probabilities to human-like regions in the foreground is taken as a carried object probability map. Each group of neighboring superpixels with a high carried object probability and strong edge support is then merged to form a carried object. Finally, in the third contribution we present a method to detect and segment carried objects. The proposed method adopts the new superpixel-based descriptor to identify carried-object-like candidate regions using human shape modeling. Using spatio-temporal information of the candidate regions, the consistency of recurring carried object candidates viewed over time is obtained and serves to detect carried objects. Last, the detected carried object regions are refined by integrating information about their appearance and location over time with a spatio-temporal extension of GrabCut. This final stage accurately segments carried objects in frames. Our methods are fully automatic and make minimal assumptions about the person, the carried objects, and the videos. We evaluate the aforementioned methods using two available datasets, PETS 2006 and i-Lids AVSS, and compare our detector and segmentation methods against a state-of-the-art detector. Experimental evaluation on the two datasets demonstrates that both our carried object detection and segmentation methods significantly outperform competing algorithms.
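The final NMS step mentioned in the first contribution is a generic technique that can be sketched independently of the thesis's scoring function: low-score candidates are discarded, and among overlapping detections only the highest-scoring one is kept. The box representation, thresholds, and toy candidates below are illustrative assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(candidates, score_thresh=0.5, iou_thresh=0.3):
    # candidates: list of (box, score). Drop low scores, then greedily keep
    # the best-scoring box and suppress candidates that overlap it heavily.
    kept = []
    pool = sorted((c for c in candidates if c[1] >= score_thresh),
                  key=lambda c: -c[1])
    for box, score in pool:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

cands = [((10, 10, 50, 50), 0.9),    # strong candidate, kept
         ((12, 12, 52, 52), 0.8),    # near-duplicate, suppressed by overlap
         ((80, 80, 120, 120), 0.4)]  # below the score threshold, dropped
print([s for _, s in nms(cands)])  # [0.9]
```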
Covariate-invariant gait analysis for human identification
Shinshu University, Doctor of Engineering thesis. YEOH TZE WEI. Covariate-invariant gait analysis for human identification. Shinshu University, 2018, doctoral thesis. Doctor of Engineering, No. 692, awarded March 20, 2018.
AFFECT-PRESERVING VISUAL PRIVACY PROTECTION
The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under the privacy laws of many countries. Visual privacy protection techniques, such as blurring or object removal, can mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this dissertation, we propose to balance privacy protection and the utility of the data by preserving privacy-insensitive information, such as pose and expression, which is useful in many applications involving visual understanding.
The Intellectual Merits of the dissertation include a novel framework for visual privacy protection by manipulating the facial image and body shape of individuals, which: (1) conceals the identity of individuals; (2) provides a way to preserve the utility of the data, such as expression and pose information; and (3) balances the utility of the data against the capacity of the privacy protection.
The Broader Impacts of the dissertation concern the significance of privacy protection for visual data and the inadequacy of current privacy-enhancing technologies in preserving the affective and behavioral attributes of visual content, which are highly useful for behavior observation in educational and medical settings. The work in this dissertation represents one of the first attempts to achieve both goals simultaneously.
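The conventional anonymisation baseline that this dissertation argues against can be sketched in a few lines: block pixelation averages away identity, but it averages away expression with it. The tiny grayscale "image" below is a made-up illustration, not data from the dissertation.

```python
def pixelate(image, block=2):
    # Replace each block x block region with its mean intensity -- the classic
    # anonymisation baseline, which destroys affect cues along with identity.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            mean = sum(image[y][x] for y in ys for x in xs) / (len(ys) * len(xs))
            for y in ys:
                for x in xs:
                    out[y][x] = mean
    return out

face = [[0, 10], [20, 30]]          # 2x2 toy grayscale patch
print(pixelate(face, block=2))      # [[15.0, 15.0], [15.0, 15.0]]
```

The dissertation's approach differs precisely in that it replaces this kind of blanket destruction with a synthesis step that keeps pose and expression while removing identity.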
Carried baggage detection and recognition in video surveillance with foreground segmentation
Security cameras installed in public spaces or in private organizations continuously record video data with the aim of detecting and preventing crime. For that reason, video content analysis applications, for either real-time (i.e. analytic) or post-event (i.e. forensic) analysis, have gained high interest in recent years. In this thesis, the primary focus is on two key aspects of video analysis: reliable moving object segmentation, and carried object detection and identification.
A novel moving object segmentation scheme based on background subtraction is presented in this thesis. The scheme relies on background modelling based on multi-directional gradients and phase congruency. As a post-processing step, the detected foreground contours are refined by classifying edge segments as belonging to either the foreground or the background. A further contour completion technique based on anisotropic diffusion is introduced in this area for the first time. The proposed method targets cast shadow removal, invariance to gradual illumination change, and closed contour extraction.
A state-of-the-art carried object detection method is employed as a benchmark algorithm. This method includes silhouette analysis that compares human temporal templates with unencumbered human models. The implementation of the algorithm is improved by automatically estimating the viewing direction of the pedestrian, and is extended by a carried luggage identification module. As the temporal template is a frequency template and the information it provides is not sufficient, a colour temporal template is introduced. The standard steps followed by the state-of-the-art algorithm are approached from a different perspective, extended by colour information, resulting in more accurate carried object segmentation.
The experiments conducted in this research show that the proposed closed foreground segmentation technique attains all the aforementioned goals. The incremental improvements applied to the state-of-the-art carried object detection algorithm revealed the full potential of the scheme, and the experiments demonstrate the ability of the proposed carried object detection algorithm to surpass the state-of-the-art method.
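The background subtraction idea underlying the segmentation scheme above can be shown with the simplest per-pixel formulation: an exponentially updated background model and a deviation threshold. This sketch deliberately omits the thesis's multi-directional gradient and phase congruency modelling; the update rate and threshold are illustrative assumptions.

```python
def update_background(bg, frame, alpha=0.05):
    # Exponential running average: bg <- (1 - alpha) * bg + alpha * frame,
    # so the model slowly absorbs gradual illumination change.
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    # A pixel is foreground when it deviates strongly from the background model.
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0, 100.0]     # background model (4 pixels, flattened)
frame = [102.0, 99.0, 180.0, 30.0]    # two pixels changed by a moving object
print(foreground_mask(bg, frame))     # [False, False, True, True]
bg = update_background(bg, frame)     # model drifts slightly toward the frame
```

A per-pixel intensity model like this fails on cast shadows and sudden lighting changes, which is exactly why the thesis moves to gradient- and phase-congruency-based modelling.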
Biometric walk recognizer. Research and results on wearable sensor-based gait recognition
Gait is a biometric trait that can enable user authentication, though it is classified as a "soft" trait due to a certain lack of permanence and a sensitivity to specific conditions. The earliest research relied on computer vision-based approaches, especially applied in video surveillance. More recently, the spread of wearable sensors, especially those embedded in mobile devices, which are able to capture the dynamics of the walking pattern through simpler 1D signals, has spurred a different research line. This capture modality avoids some problems of computer vision-based techniques, but suffers from specific limitations of its own. Related research is still in a less advanced phase than for other biometric traits. However, the promising results achieved so far, the increasing accuracy of sensors, the ubiquitous presence of mobile devices, and the low cost of the related techniques make this biometric trait attractive and suggest continuing the investigations in this field. The first chapters of this thesis give an introduction to biometrics, and more specifically to the gait trait. A comprehensive review of the technologies, approaches, and strategies exploited by state-of-the-art gait recognition proposals is also provided. After this introduction, the contributions of this work are presented in detail. In summary, the work improves on earlier results achieved during the Biometrics course of my Master's Degree in Computer Science and extended in my subsequent Master's thesis. The research deals with different strategies, including preprocessing and recognition techniques, applied to gait biometrics in order to allow both automatic recognition and an improvement in system accuracy.
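A first step common to wearable sensor-based gait pipelines like the one described above is estimating the stride period from the 1D accelerometer signal, for which autocorrelation is a standard tool. The synthetic sine signal and lag bounds below are illustrative assumptions, not the thesis's preprocessing.

```python
import math

def autocorr(signal, lag):
    # Normalized autocorrelation of a 1D signal at a given lag.
    n = len(signal) - lag
    mean = sum(signal) / len(signal)
    num = sum((signal[i] - mean) * (signal[i + lag] - mean) for i in range(n))
    den = sum((s - mean) ** 2 for s in signal)
    return num / den

def gait_period(signal, min_lag=2, max_lag=None):
    # The dominant stride period is the lag with the strongest
    # autocorrelation (ignoring the trivial lag-0 peak).
    max_lag = max_lag or len(signal) // 2
    return max(range(min_lag, max_lag + 1), key=lambda k: autocorr(signal, k))

# Synthetic accelerometer magnitude with a period of 8 samples.
sig = [math.sin(2 * math.pi * i / 8) for i in range(64)]
print(gait_period(sig))  # 8
```

Once the period is known, the signal can be cut into cycles and averaged into a template for matching, which is where recognition strategies begin to differ.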
An Investigation into the Relationship between Static and Dynamic Gait Features: A Biometrics Perspective
A biometric is a unique physical or behavioral characteristic of a person. Such a unique attribute, for example fingerprints or gait, can be used for identification or verification purposes. Gait is an emerging biometric with great potential. Gait recognition is based on recognizing a person by the manner in which they walk. Its potential lies in the fact that it can be captured at a distance and does not require the cooperation of the subject. This advantage makes it a very attractive tool for forensic cases and applications, where it can assist in identifying a suspect when other evidence, such as DNA, fingerprints, or a face, is not attainable. Gait can be used for recognition in a direct manner when the two samples are captured with similar camera resolution, position, and conditions. Yet in some cases the only sample available is of an incomplete gait cycle, low resolution, low frame rate, a partially visible subject, or a single static image. Most of these conditions have one thing in common: static measurements. A gait signature is usually formed from a number of dynamic and static features. Static features are physical measurements of height, length, or build, while dynamic features are representations of joint rotations or trajectories.
The aim of this thesis is to study the potential of predicting dynamic features from static features. In this thesis, we created a database that utilizes a 3D laser scanner to capture the accurate shape and volume of a person, and a motion capture system to accurately record motion data. The first analysis focused on the correlation between twenty-one 2D static features and eight dynamic features. Eleven pairs of features were regarded as significant under the criterion of a P-value less than 0.05. Other features also showed a strong correlation, indicating potential predictive power. The second analysis focused on 3D static and dynamic features. Through the correlation analysis, 1196 pairs of features were found to be significantly correlated. Based on these results, linear regression analysis was used to predict a dynamic gait signature. The predictors were chosen using two adaptive methods developed in this thesis: the "top-x" method and the "mixed" method. The predictions were assessed both for their accuracy and for their classification potential in gait recognition. The top results produced a 59.21% mean matching percentile. This result will act as a baseline for future research in predicting a dynamic gait signature from static features. The results of this thesis bear potential for applications in biomechanics, biometrics, forensics, and 3D animation.
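The core statistical machinery of the analysis above, Pearson correlation followed by a linear fit predicting a dynamic feature from a static one, can be sketched in a few lines. The toy data (leg length versus stride length) and the perfect linear relationship are illustrative assumptions; the thesis's "top-x" and "mixed" predictor-selection methods are not reproduced here.

```python
def pearson(x, y):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def fit_line(x, y):
    # Least-squares slope and intercept for predicting y (dynamic feature)
    # from x (static feature).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Toy data: static feature (leg length, m) vs dynamic feature (stride length, m).
static = [0.80, 0.85, 0.90, 0.95, 1.00]
dynamic = [1.20, 1.275, 1.35, 1.425, 1.50]
r = pearson(static, dynamic)
slope, intercept = fit_line(static, dynamic)
pred = slope * 0.88 + intercept  # predicted stride for an unseen leg length
print(round(r, 3), round(pred, 2))  # 1.0 1.32
```

In the thesis the same idea is applied per feature pair, with the P-value threshold deciding which static features enter the regression.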
Artificial Intelligence for Data Analysis and Signal Processing
Artificial intelligence, or AI, currently encompasses a huge variety of fields, from areas such as logical reasoning and perception, to specific tasks such as game playing, language processing, theorem proving, and diagnosing diseases.
It is clear that systems with human-level (or even greater) intelligence would have a huge impact on our everyday lives and on the future course of evolution, as is already happening in many ways.
In this research AI techniques have been introduced and applied in several clinical and real world scenarios, with particular focus on deep learning methods.
A human gait identification system based on the analysis of inertial signals has been developed, leading to misclassification rates smaller than 0.15%.
Advanced deep learning architectures have been also investigated to tackle the problem of atrial fibrillation detection from short length and noisy electrocardiographic signals. The results show a clear improvement provided by representation learning over a knowledge-based approach.
Another important clinical challenge, both for the patient and for on-board automatic alarm systems, is to detect with reasonable advance notice the patterns leading to risky situations, allowing the patient to take therapeutic decisions on the basis of future rather than current information. This problem has been specifically addressed for the prediction of critical hypo-/hyperglycemic episodes from continuous glucose monitoring devices, through a comparative analysis of the most successful methods for glucose event prediction.
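The simplest baseline in this family of glucose event predictors is first-order extrapolation of the recent CGM trend against a hypoglycemia threshold. The sketch below is that common baseline under assumed sampling and threshold values, not one of the methods compared in the dissertation.

```python
def predict_glucose(history, horizon):
    # Linear extrapolation over the last two CGM samples: a common baseline.
    rate = history[-1] - history[-2]       # mg/dL per sampling step
    return history[-1] + rate * horizon

def hypo_alarm(history, horizon=6, threshold=70):
    # Raise an alarm when the extrapolated value crosses the hypo threshold
    # within the prediction horizon (here 6 steps, e.g. 30 min at 5-min sampling).
    return predict_glucose(history, horizon) < threshold

cgm = [120, 115, 110, 104, 98]  # mg/dL, falling trend, one sample per 5 minutes
print(hypo_alarm(cgm))  # True: 98 + (-6) * 6 = 62 < 70
```

Learning-based predictors aim to beat exactly this kind of extrapolation, especially on noisy signals and nonlinear trends.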
This dissertation also shows evidence of the benefits of learning algorithms for vehicular traffic anomaly detection, through the use of a statistical Bayesian framework, and for the optimization of video streaming user experience, implementing an intelligent adaptation engine for video streaming clients.
The proposed solution explores the promising field of deep learning methods integrated with a reinforcement learning schema, showing its benefits against other state-of-the-art approaches.
The great knowledge-transfer capability of artificial intelligence methods and the benefits of representation learning systems stand out from this research, representing the common thread among all the presented research fields.