Face recognition technologies for evidential evaluation of video traces
Human recognition from video traces is an important task in forensic investigations and evidence evaluation. Compared with other biometric traits, the face is one of the most widely used modalities for human recognition because its collection is non-intrusive and requires little cooperation from the subject. Moreover, face images taken at a long distance can still provide reasonable resolution, whereas most other biometric modalities, such as iris and fingerprint, do not have this merit. In this chapter, we discuss automatic face recognition technologies for evidential evaluation of video traces. We first introduce the general concepts in both forensic and automatic face recognition, then analyse the difficulties of face recognition from videos. We summarise and categorise the approaches for handling different uncontrollable factors in difficult recognition conditions. Finally, we discuss some challenges and trends in face recognition research in both forensics and biometrics. Given its merits, proven in many deployed systems, and its great potential in other emerging applications, considerable research and development effort is expected to be devoted to face recognition in the near future.
Algorithms for people re-identification from RGB-D videos exploiting skeletal information
In this thesis a novel methodology for the people re-identification problem is proposed. Re-identification is a complex research topic and a fundamental issue, especially for intelligent video surveillance applications. Its goal is to determine the occurrences of the same person in different video sequences or images, usually by choosing from a high number of candidates within a dataset.
Learning Human Poses from Monocular Images
In this research, we mainly focus on the problem of estimating the 2D human pose from a monocular image and reconstructing the 3D human pose based on the 2D human pose. Here a 3D pose is the locations of the human joints in the 3D space and a 2D pose is the projection of a 3D pose on an image. Unlike many previous works that explicitly use hand-crafted physiological models, both our 2D pose estimation and 3D pose reconstruction approaches implicitly learn the structure of human body from human pose data.
This 3D pose reconstruction is an ill-posed problem if no prior knowledge is considered. In this research, we propose a new approach, namely Pose Locality Constrained Representation (PLCR), to constrain the search space for the underlying 3D human pose and use it to improve 3D human pose reconstruction. In this approach, an over-complete pose dictionary is constructed by hierarchically clustering the 3D pose space into many subspaces. PLCR then exploits the structure of this over-complete dictionary to constrain the 3D pose solution to a set of highly related subspaces. Finally, PLCR is combined with a matching-pursuit-based algorithm for 3D human-pose reconstruction.
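The subspace-constrained matching pursuit described above can be sketched in a few lines. This is a toy illustration only: the 3-D vectors, the two named subspaces, and the single-subspace selection rule are hypothetical stand-ins for the paper's hierarchically clustered dictionary and its actual locality constraint.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

# Over-complete dictionary of pose "atoms", grouped into subspaces
# (stand-ins for clusters from a hypothetical hierarchical clustering).
subspaces = {
    "walk": [[1.0, 0.0, 0.0], [0.8, 0.2, 0.0]],
    "sit":  [[0.0, 1.0, 0.0], [0.0, 0.9, 0.3]],
}

def plcr_matching_pursuit(signal, subspaces, n_iters=2):
    # 1) Locality constraint: keep only the subspace whose atoms
    #    correlate best with the observation (here just one subspace;
    #    the paper keeps a set of highly related subspaces).
    best = max(subspaces,
               key=lambda k: max(abs(dot(signal, a)) / norm(a)
                                 for a in subspaces[k]))
    atoms = subspaces[best]
    # 2) Plain matching pursuit restricted to that subspace: repeatedly
    #    pick the best-correlated atom and subtract its contribution.
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iters):
        i = max(range(len(atoms)),
                key=lambda j: abs(dot(residual, atoms[j])) / norm(atoms[j]))
        c = dot(residual, atoms[i]) / dot(atoms[i], atoms[i])
        coeffs[i] += c
        residual = [r - c * a for r, a in zip(residual, atoms[i])]
    return best, coeffs, residual
```

Restricting the pursuit to one subspace is what shrinks the ill-posed search space: atoms from unrelated pose clusters can never enter the reconstruction.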
The 2D human pose used in 3D pose reconstruction can be manually annotated or automatically estimated from a single image. In this research, we develop a new learning-based 2D human pose estimation approach based on a Dual-Source Deep Convolutional Neural Network (DS-CNN). The proposed DS-CNN model learns the appearance of each local body part and the relations between parts simultaneously, whereas most existing approaches treat them as two separate steps. In our experiments, the proposed DS-CNN model produces performance superior or comparable to state-of-the-art 2D human-pose estimation approaches based on pose priors learned from hand-crafted models or holistic perspectives.
Finally, we use our 2D human pose estimation approach to recognize human attributes by utilizing the strong correspondence between human attributes and human body parts. We then probe if and when the CNN can find such correspondence by itself on human attribute recognition and bird species recognition. We find that there is a direct correlation between the recognition accuracy and the correctness of the correspondence that the CNN finds.
Activity-conditioned continuous human pose estimation for performance analysis of athletes using the example of swimming
In this paper we consider the problem of human pose estimation in real-world videos of swimmers. Swimming channels allow filming swimmers simultaneously above and below the water surface with a single stationary camera. These recordings can be used to quantitatively assess the athletes' performance. The quantitative evaluation, so far, requires manual annotations of body parts in each video frame. We therefore apply the concept of CNNs in order to automatically infer the required pose information. Starting with an off-the-shelf architecture, we develop extensions to leverage activity information - in our case the swimming style of an athlete - and the continuous nature of the video recordings. Our main contributions are threefold: (a) we apply and evaluate a fine-tuned Convolutional Pose Machine architecture as a baseline in our very challenging aquatic environment and discuss its error modes, (b) we propose an extension to input swimming style information into the fully convolutional architecture, and (c) we modify the architecture for continuous pose estimation in videos. With these additions we achieve reliable pose estimates with up to +16% more correct body joint detections compared to the baseline architecture.
Comment: 10 pages, 9 figures, accepted at WACV 201
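One simple way to feed a discrete activity label, such as the swimming style, into a fully convolutional architecture is to broadcast a one-hot encoding of the label into constant-valued maps and stack them onto the image channels, so every spatial location sees the style. The sketch below illustrates that idea only; the style list and function names are assumptions, not the paper's actual injection mechanism.

```python
# Hypothetical style vocabulary; the real set depends on the dataset.
STYLES = ["freestyle", "backstroke", "breaststroke", "butterfly"]

def style_channels(style, height, width):
    """One constant-valued H x W map per style class (one-hot broadcast)."""
    one_hot = [1.0 if s == style else 0.0 for s in STYLES]
    return [[[v] * width for _ in range(height)] for v in one_hot]

def stack_input(image_channels, style, height, width):
    """Append the style maps to the image channels (e.g. RGB).

    The network then receives C_img + len(STYLES) input channels, and
    its convolutions can condition every prediction on the style.
    """
    return image_channels + style_channels(style, height, width)
```

Because the added maps are spatially constant, the conditioning survives any sequence of convolutions and poolings without alignment issues, which is what makes this trick convenient for fully convolutional pose networks.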