
    From clothing to identity; manual and automatic soft biometrics

    Soft biometrics have increasingly attracted research interest and are often considered major cues for identity, especially in the absence of valid traditional biometrics, as in surveillance. In everyday life, several incidents and forensic scenarios highlight the usefulness of identity information that can be deduced from clothing. Semantic clothing attributes have recently been introduced as a new form of soft biometrics. Although clothing traits can be naturally described and compared by humans for operable and successful use, it is desirable to exploit computer vision to enrich clothing descriptions with more objective and discriminative information. This allows automatic extraction, semantic description, and comparison of visually detectable clothing traits in a manner similar to recognition from eyewitness statements. This study proposes a novel set of soft clothing attributes, described using small groups of high-level semantic labels and automatically extracted using computer-vision techniques. In this way we can explore the capability of human-derived attributes vis-à-vis those inferred automatically by computer vision. Categorical and comparative soft clothing traits are derived and used for identification/re-identification, either to supplement soft body traits or alone. The automatically and manually derived soft clothing biometrics are employed in challenging invariant person retrieval. The experimental results highlight promising potential for use in various applications.

    Soft Biometric Analysis: Multi-Person and Real-Time Pedestrian Attribute Recognition in Crowded Urban Environments

    Traditionally, recognition systems were based only on hard human biometrics. However, ubiquitous CCTV cameras have raised the desire to analyze human biometrics from far distances, without people's participation in the acquisition process. High-resolution face close-shots are rarely available at far distances, so face-based systems cannot provide reliable results in surveillance applications. Human soft biometrics, such as body and clothing attributes, are believed to be more effective for analyzing human data collected by security cameras. This thesis contributes to human soft-biometric analysis in uncontrolled environments and mainly focuses on two tasks: Pedestrian Attribute Recognition (PAR) and person re-identification (re-id). We first review the literature of both tasks and highlight the history of advancements, recent developments, and the existing benchmarks. PAR and person re-id difficulties are due to significant distances between intra-class samples, which originate from variations in several factors such as body pose, illumination, background, occlusion, and data resolution. Recent state-of-the-art approaches present end-to-end models that can extract discriminative and comprehensive feature representations from people. The correlation between different regions of the body, and dealing with limited learning data, are also the objective of many recent works. Moreover, class imbalance and correlation between human attributes are specific challenges associated with the PAR problem. We collect a large surveillance dataset to train a novel gender-recognition model suitable for uncontrolled environments. We propose a deep residual network that extracts several pose-wise patches from samples and obtains a comprehensive feature representation. In the next step, we develop a model for recognizing multiple attributes at once. Considering the correlation between human semantic attributes and class imbalance, we use, respectively, a multi-task model and a weighted loss function. We also propose a multiplication layer on top of the backbone feature-extraction layers to exclude background features from the final representation of samples and draw the model's attention to the foreground area. We address the problem of person re-id by implicitly defining the receptive fields of deep-learning classification frameworks. The receptive fields of deep-learning models determine the most significant regions of the input data for providing correct decisions. Therefore, we synthesize a set of learning data in which the destructive regions (e.g., background) in each pair of instances are interchanged. A segmentation module determines destructive and useful regions in each sample, and the label of each synthesized instance is inherited from the sample that contributed the useful regions to the synthesized image. The synthesized learning data are then used in the learning phase and help the model rapidly learn that identity and background regions are not correlated. Meanwhile, the proposed solution can be seen as a data-augmentation approach that fully preserves the label information and is compatible with other data-augmentation techniques. When re-id methods are learned in scenarios where the target person appears with identical garments in the gallery, the visual appearance of clothes is given the most importance in the final feature representation. Cloth-based representations are not reliable in long-term re-id settings, as people may change their clothes. Therefore, solutions that ignore clothing cues and focus on identity-relevant features are in demand. We transform the original data such that the identity-relevant information of people (e.g., face and body shape) is removed, while the identity-unrelated cues (i.e., color and texture of clothes) remain unchanged. A model learned on the synthesized dataset predicts the identity-unrelated (short-term) cues. We therefore train a second model, coupled with the first, that learns embeddings of the original data such that the similarity between the embeddings of the original and synthesized data is minimized. This way, the second model predicts based on the identity-related (long-term) representation of people. To evaluate the performance of the proposed models, we use PAR and person re-id datasets, namely BIODI, PETA, RAP, Market-1501, MSMT-V2, PRCC, LTCC, and MIT, and compare our experimental results with state-of-the-art methods in the field. In conclusion, the data collected from surveillance cameras have low resolution, such that the extraction of hard biometric features is not possible and face-based approaches produce poor results. In contrast, soft biometrics are robust to variations in data quality. We therefore propose approaches for both PAR and person re-id to learn discriminative features from each instance and evaluate our proposed solutions on several publicly available benchmarks. This thesis was prepared at the University of Beira Interior, IT - Instituto de Telecomunicações, Soft Computing and Image Analysis Laboratory (SOCIA Lab), Covilhã Delegation, and was submitted to the University of Beira Interior for defense in a public examination session.
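
    The weighted loss used against attribute class imbalance can be sketched in a few lines. The following is a minimal numpy illustration of a per-attribute weighted binary cross-entropy, not the thesis's actual implementation; the positive-rate weighting scheme and all values are assumptions for demonstration.

```python
import numpy as np

def weighted_bce(y_true, y_pred, pos_ratio, eps=1e-7):
    """Weighted binary cross-entropy for imbalanced attributes.

    Rare positives are up-weighted by 1/pos_ratio (and rare negatives by
    1/(1 - pos_ratio)), a common remedy for class imbalance in pedestrian
    attribute recognition. `pos_ratio` holds the fraction of positive
    training samples per attribute.
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    w_pos = 1.0 / pos_ratio
    w_neg = 1.0 / (1.0 - pos_ratio)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

# Two attributes: the first is rare (10% positive), the second balanced.
pos_ratio = np.array([0.1, 0.5])
y_true = np.array([[1.0, 0.0]])
y_pred = np.array([[0.6, 0.4]])
print(weighted_bce(y_true, y_pred, pos_ratio))
```

    With these weights, an error on the rare attribute costs five times more than the same error on the balanced one, which counteracts the model's tendency to ignore rare attributes.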

    Machine Learning Approaches to Human Body Shape Analysis

    Soft biometrics, the biomedical sciences, and many other fields of study pay particular attention to the geometric description of the human body and its variations. Despite numerous contributions, interest remains particularly high given the non-rigid nature of the human body, which can assume different poses and numerous shapes due to variable body composition. Unfortunately, a well-known costly requirement in data-driven machine learning, and particularly in human-based analysis, is the availability of data in the form of geometric information (body measurements) with related vision information (natural images, 3D meshes, etc.). We introduce a computer-graphics framework able to generate thousands of synthetic human body meshes, representing a population of individuals with stratified information: gender, Body Fat Percentage (BFP), anthropometric measurements, and pose. This contribution permits an extensive analysis of different bodies in different poses, avoiding the demanding and expensive acquisition process. We design a virtual environment that takes advantage of the generated bodies to infer the body surface area (BSA) from a single view. The framework permits simulating the acquisition process of newly introduced RGB-D devices, disentangling different noise components (sensor noise, optical distortion, body-part occlusions). Common geometric descriptors in soft biometrics, as well as in the biomedical sciences, are based on body measurements. Unfortunately, as we prove, these descriptors are not pose invariant, constraining their usability to controlled scenarios. We introduce a differential-geometry approach that treats body pose variations as isometric transformations of the body surface, and body-composition changes as covariant with the body surface area. This setting permits the use of the Laplace-Beltrami operator on the 2D body manifold, describing the body with a compact, efficient, and pose-invariant representation. We design a neural network architecture able to infer important body semantics from spectral descriptors, closing the gap between abstract spectral features and traditional measurement-based indices. Studying the manifold of body shapes, we propose an innovative generative adversarial model able to learn body shapes. The method can generate new bodies with unseen geometries as a walk in the latent space, a significant advantage over traditional generative methods.
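
    The isometry-invariance argument behind the spectral representation can be illustrated in a discrete setting. The sketch below uses the eigenvalues of a plain graph Laplacian on a toy "mesh" as a stand-in for the Laplace-Beltrami spectrum of the 2D body manifold; the actual work's cotangent-style discretization, meshes, and data are not reproduced here.

```python
import numpy as np

def laplacian_spectrum(adj, k=4):
    """Smallest-k eigenvalues of the graph Laplacian L = D - A.

    A discrete stand-in for the Laplace-Beltrami spectrum: an isometric
    deformation leaves the intrinsic geometry (here, the graph itself)
    unchanged, so the spectrum is a pose-invariant shape signature.
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return np.linalg.eigvalsh(lap)[:k]  # ascending order

# Toy "mesh": a 4-cycle. Relabeling vertices (a permutation, the graph
# analogue of an isometry) leaves the spectrum unchanged.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
P = np.eye(4)[[2, 0, 3, 1]]      # vertex permutation matrix
A_perm = P @ A @ P.T
print(laplacian_spectrum(A))     # identical spectra for both graphs
print(laplacian_spectrum(A_perm))
```

    Body measurements, by contrast, are read off vertex coordinates and change as soon as the pose does, which is exactly the pose sensitivity the abstract criticizes.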

    Retrieving relative soft biometrics for semantic identification

    Automatically describing pedestrians in surveillance footage is crucial to facilitating human-accessible solutions for suspect identification. We aim to identify pedestrians based solely on a human description, by automatically retrieving semantic attributes from surveillance images and alleviating exhaustive label annotation. This work unites a deep-learning solution with relative soft biometric labels to accurately retrieve more discriminative image attributes. We propose a Semantic Retrieval Convolutional Neural Network to investigate the automatic retrieval of three soft biometric modalities across a number of 'closed-world' and 'open-world' re-identification scenarios. Findings suggest that relative-continuous labels are more accurately predicted than absolute-binary and relative-binary labels, improving semantic identification in every scenario. Furthermore, we demonstrate top rank-1 improvements of 23.2% and 26.3% over a traditional baseline retrieval approach in one-shot and multi-shot re-identification scenarios, respectively.
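
    Why relative-continuous labels help can be seen with a toy retrieval example; the gallery, traits, and threshold below are invented for illustration and are not from the study.

```python
import numpy as np

# Hypothetical gallery of three pedestrians described by two soft traits
# (say, relative height and build, rescaled to 0..1). Binarising the
# labels collapses distinct people onto the same code, while
# relative-continuous labels keep them separable.
gallery_cont = np.array([[0.90, 0.20],   # very tall, slim
                         [0.60, 0.30],   # fairly tall, slim
                         [0.20, 0.80]])  # short, heavier build
gallery_bin = (gallery_cont > 0.5).astype(float)

query_cont = np.array([0.85, 0.25])      # witness: "very tall, slim"
query_bin = (query_cont > 0.5).astype(float)

def distances(gallery, query):
    """Euclidean distance of the query description to each subject."""
    return np.linalg.norm(gallery - query, axis=1)

d_cont = distances(gallery_cont, query_cont)  # strictly ordered ranking
d_bin = distances(gallery_bin, query_bin)     # subjects 0 and 1 tie
print(d_cont.argmin(), d_bin[0] == d_bin[1])
```

    The binary encoding cannot rank the two tall, slim subjects, whereas the continuous labels give an unambiguous rank-1 match, mirroring the abstract's finding that relative-continuous labels improve semantic identification.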

    Towards automated eyewitness descriptions: describing the face, body and clothing for recognition

    A fusion approach to person recognition is presented here, outlining the automated recognition of targets from human descriptions of face, body and clothing. Three novel results are highlighted. First, the present work stresses the value of comparative descriptions (he is taller than …) over categorical descriptions (he is tall). Second, it stresses the primacy of the face over body and clothing cues for recognition. Third, the present work unequivocally demonstrates the benefit gained through the combination of cues: recognition from face, body and clothing taken together far outstrips recognition from any of the cues in isolation. Moreover, recognition from body and clothing taken together nearly equals the recognition possible from the face alone. These results are discussed with reference to the intelligent fusion of information within police investigations. However, they also signal a potential new era in which automated descriptions could be provided without the need for human witnesses at all.
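
    The benefit of combining cues can be illustrated with a minimal score-level fusion sketch; the scores and weights below are invented (the study's actual fusion scheme may differ), with the face given the largest weight to reflect its reported primacy.

```python
import numpy as np

# Hypothetical similarity scores (0..1) of four gallery subjects against
# one description, per cue. Each cue alone is ambiguous or narrowly
# decided; the weighted sum combines the evidence.
scores = {
    "face":     np.array([0.70, 0.65, 0.30, 0.20]),
    "body":     np.array([0.40, 0.60, 0.55, 0.30]),
    "clothing": np.array([0.50, 0.45, 0.60, 0.35]),
}
weights = {"face": 0.5, "body": 0.25, "clothing": 0.25}

# Weighted-sum (score-level) fusion across the three cues.
fused = sum(w * scores[cue] for cue, w in weights.items())
print(fused.argmax())
```

    Here the face cue alone narrowly favours subject 0, but the body and clothing evidence tips the fused decision to subject 1, showing how combined cues can override a marginal single-cue match.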

    Gait Recognition

    Gait recognition has received increasing attention as a remote biometric identification technology: it can achieve identification at long distances, where few other identification technologies work. It shows enormous potential for application in criminal investigation, medical treatment, identity recognition, human-computer interaction and so on. In the first part of this chapter, we introduce state-of-the-art gait recognition techniques, including both 3D-based and 2D-based methods. Considering the advantages of 3D-based methods, the second part introduces their related datasets, as well as our own gait database with both 2D silhouette images and 3D joint information. Given our gait dataset, the third part presents a human walking model and the corresponding static and dynamic feature extraction, which are verified to be view-invariant. Finally, some gait-based applications are introduced.

    Gait Recognition: Databases, Representations, and Applications

    There has been considerable progress in the automatic recognition of people by the way they walk since its inception almost 20 years ago: there is now a plethora of techniques and data which continue to show that a person's walk is indeed unique. Gait recognition is a behavioural biometric which is available even at a distance from a camera, when other biometrics may be occluded, obscured or suffering from insufficient image resolution (e.g. a blurred face image or a face occluded by a mask). Since gait recognition does not require subject cooperation, owing to its non-invasive capturing process, it is expected to be applied to criminal investigation from CCTV footage in public and private spaces. This article introduces current progress, the research background, and basic approaches to gait recognition in the first three sections; two important aspects of gait recognition, gait databases and gait feature representations, are described in the following sections. Publicly available gait databases are essential for benchmarking individual approaches, and such databases should contain a sufficient number of subjects as well as covariate factors to enable statistically reliable performance evaluation and robust gait recognition. Gait recognition researchers have therefore built useful gait databases which incorporate subject diversity and/or rich covariate factors. Gait feature representation is another important aspect of effective and efficient gait recognition. We describe the two main approaches to representation: model-free (appearance-based) approaches and model-based approaches. In particular, silhouette-based model-free approaches predominate in recent studies; many have been proposed and are described in detail. Performance evaluation results of such recent gait feature representations on two of the publicly available gait databases are reported: USF Human ID, with rich covariate factors such as views, surface, bag, shoes and time elapse; and OU-ISIR LP, with more than 4,000 subjects. Since gait recognition is suitable for criminal investigation, applications of gait recognition to forensics are addressed with real criminal cases in the application section. Finally, several open problems of gait recognition are discussed to show future research avenues.
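
    One widely used silhouette-based model-free representation is the Gait Energy Image (GEI): the pixel-wise mean of aligned binary silhouettes over a gait cycle. The toy silhouettes below stand in for aligned frames of a real cycle; this is a minimal sketch, not any specific system's implementation.

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait Energy Image: pixel-wise mean of aligned binary silhouettes
    over one gait cycle. Static body parts show up near 1.0, moving
    parts (arms, legs) as intermediate values.
    """
    stack = np.asarray(silhouettes, dtype=float)
    return stack.mean(axis=0)

# Two toy 3x3 "silhouettes" standing in for frames of a gait cycle.
frames = [np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]]),
          np.array([[0, 1, 0],
                    [0, 1, 0],
                    [1, 1, 1]])]
gei = gait_energy_image(frames)
print(gei)  # 1.0 where the body is in every frame, 0.5 where it moves
```

    Because averaging discards frame order while keeping the spatial distribution of motion, the GEI is compact and robust to silhouette noise, which is a large part of why appearance-based methods predominate.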

    Histogram of Oriented Principal Components for Cross-View Action Recognition

    Existing techniques for 3D action recognition are sensitive to viewpoint variations because they extract features from depth images, which are viewpoint dependent. In contrast, we directly process pointclouds for cross-view action recognition from unknown and unseen views. We propose the Histogram of Oriented Principal Components (HOPC) descriptor, which is robust to noise, viewpoint, scale and action-speed variations. At a 3D point, HOPC is computed by projecting the three scaled eigenvectors of the pointcloud within its local spatio-temporal support volume onto the vertices of a regular dodecahedron. HOPC is also used for the detection of Spatio-Temporal Keypoints (STKs) in 3D pointcloud sequences, so that view-invariant STK descriptors (or Local HOPC descriptors) at these key locations only are used for action recognition. We also propose a global descriptor, computed from the normalized spatio-temporal distribution of STKs in 4D, which we refer to as STK-D. We have evaluated the performance of our proposed descriptors against nine existing techniques on two cross-view and three single-view human action recognition datasets. Experimental results show that our techniques provide significant improvement over state-of-the-art methods.
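
    The core of the HOPC computation can be sketched as follows. This simplified version projects eigenvalue-scaled eigenvectors of a local pointcloud's covariance onto six axis-aligned directions instead of the paper's 20 dodecahedron vertices, and it omits the paper's eigenvector sign disambiguation and spatio-temporal support handling.

```python
import numpy as np

def hopc_like_descriptor(points, directions):
    """Simplified HOPC-style descriptor for one local pointcloud.

    Eigenvectors of the covariance matrix, scaled by their eigenvalues,
    are projected onto a fixed set of directions; negative projections
    are clipped to zero, as in a histogram of orientations.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending order
    desc = []
    for val, vec in zip(eigvals[::-1], eigvecs[:, ::-1].T):
        proj = directions @ (val * vec)           # scaled projection
        desc.append(np.maximum(proj, 0.0))        # clip negative bins
    return np.concatenate(desc)

# Six octahedron vertices as projection directions (illustrative choice).
dirs = np.array([[1, 0, 0], [-1, 0, 0],
                 [0, 1, 0], [0, -1, 0],
                 [0, 0, 1], [0, 0, -1]], dtype=float)
rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 3)) * [3.0, 1.0, 0.2]  # elongated along x
d = hopc_like_descriptor(cloud, dirs)
print(d.shape)  # 3 eigenvectors x 6 directions = 18 bins
```

    Because the descriptor is built from the pointcloud's own principal axes rather than a camera-dependent depth image, it is largely insensitive to the viewpoint from which the points were captured.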

    Re-identification and semantic retrieval of pedestrians in video surveillance scenarios

    Person re-identification consists of recognizing individuals across the different sensors of a camera network. Whereas clothing-appearance cues are widely used, other modalities could be exploited as additional information sources, like anthropometric measures and gait. In this work we investigate whether the re-identification accuracy of clothing-appearance descriptors can be improved by fusing them with anthropometric measures extracted from depth data, using RGB-D sensors, in unconstrained settings. We also propose a dissimilarity-based framework for building and fusing multi-modal descriptors of pedestrian images for re-identification tasks, as an alternative to the widely used score-level fusion. The experimental evaluation is carried out on two data sets including RGB-D data, one of which is a novel, publicly available data set that we acquired using Kinect sensors. In this dissertation we also consider a related task, named semantic retrieval of pedestrians in video surveillance scenarios, which consists of searching for images of individuals using a textual description of clothing appearance as a query, given by a Boolean combination of predefined attributes. This can be useful in applications like forensic video analysis, where the query can be obtained from an eyewitness report. We propose a general method for implementing semantic retrieval as an extension of a given re-identification system that uses any multiple-part, multiple-component appearance descriptor. Additionally, we investigate deep-learning techniques to improve both the accuracy of attribute detectors and their generalization capabilities. Finally, we experimentally evaluate our methods on several benchmark datasets originally built for re-identification tasks.
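
    Semantic retrieval by a Boolean combination of predefined attributes can be sketched directly; the attribute names, gallery, and query below are hypothetical and only illustrate the query mechanism, not the dissertation's actual system.

```python
import numpy as np

# Hypothetical gallery: each row is a pedestrian, each column a detected
# binary clothing attribute. A textual eyewitness description becomes a
# Boolean combination of these predefined attributes.
attrs = ["red_shirt", "jeans", "backpack"]
gallery = np.array([[1, 0, 1],
                    [1, 1, 0],
                    [0, 1, 1],
                    [1, 1, 1]], dtype=bool)

red_shirt, jeans, backpack = gallery.T

# Query: "red shirt AND jeans AND NOT backpack"
matches = red_shirt & jeans & ~backpack
print(np.flatnonzero(matches))
```

    In a real system the columns would be the (noisy) outputs of attribute detectors, so the Boolean mask would typically be replaced or ranked by detector confidence scores rather than hard bits.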