
    On using gait to enhance frontal face extraction

    Visual surveillance finds increasing deployment for monitoring urban environments. Operators need to be able to determine identity from surveillance images and often use face recognition for this purpose. In surveillance environments, it is necessary to handle pose variation of the human head, low frame rates, and low-resolution input images. We describe the first use of gait to enable face acquisition and recognition, by analysis of 3-D head motion and gait trajectory combined with super-resolution analysis. We use region- and distance-based refinement of head pose estimation, and we develop a direct mapping relating the 2-D image to a 3-D model. In gait trajectory analysis, we model the looming effect so as to obtain the correct face region. Based on head position and the gait trajectory, we can reconstruct high-quality frontal face images which are demonstrated to be suitable for face recognition. The contributions of this research include the construction of a 3-D model for pose estimation from planar imagery and the first use of gait information to enhance the face extraction process, allowing for deployment in surveillance scenarios.
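
    As a hedged illustration of the kind of "direct mapping" between a 3-D head model and the 2-D image that the abstract mentions, the sketch below projects 3-D model points into pixel coordinates under an estimated head pose using a standard pinhole camera model. The intrinsics, pose angles, and model points are illustrative assumptions, not values from the paper.

        # Minimal pinhole-projection sketch (assumed values, not the authors' code).
        import numpy as np

        def euler_to_rotation(yaw, pitch, roll):
            """Rotation matrix from Euler angles in radians."""
            cy, sy = np.cos(yaw), np.sin(yaw)
            cp, sp = np.cos(pitch), np.sin(pitch)
            cr, sr = np.cos(roll), np.sin(roll)
            Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
            Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
            Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
            return Rz @ Rx @ Ry

        def project(points_3d, R, t, K):
            """Map 3-D head-model points to 2-D pixels (perspective divide)."""
            cam = (R @ points_3d.T).T + t          # world -> camera frame
            uvw = (K @ cam.T).T
            return uvw[:, :2] / uvw[:, 2:3]

        # Hypothetical intrinsics, a 20-degree yaw, and a few model points (mm).
        K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
        R = euler_to_rotation(np.deg2rad(20), 0.0, 0.0)
        points = np.array([[0.0, 0, 0], [0, -30, -10], [-20, 35, -10]])
        print(project(points, R, np.array([0, 0, 1000.0]), K))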

    Fine-Grained Head Pose Estimation Without Keypoints

    Estimating the head pose of a person is a crucial problem with a large number of applications, such as aiding gaze estimation, modeling attention, fitting 3D models to video, and performing face alignment. Traditionally, head pose is computed by estimating keypoints from the target face and solving the 2D-to-3D correspondence problem with a mean human head model. We argue that this is a fragile method because it relies entirely on landmark detection performance, the extraneous head model, and an ad-hoc fitting step. We present an elegant and robust way to determine pose by training a multi-loss convolutional neural network on 300W-LP, a large synthetically expanded dataset, to predict intrinsic Euler angles (yaw, pitch, and roll) directly from image intensities through joint binned pose classification and regression. We present empirical tests on common in-the-wild pose benchmark datasets which show state-of-the-art results. Additionally, we test our method on a dataset usually used for pose estimation using depth and start to close the gap with state-of-the-art depth pose methods. We open-source our training and testing code as well as release our pre-trained models. Comment: Accepted to Computer Vision and Pattern Recognition Workshops (CVPRW), 2018 IEEE Conference on. IEEE, 2018.
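
    The joint binned classification and regression the abstract describes lends itself to a compact sketch: each Euler angle is classified into coarse bins with a cross-entropy loss, and a fine-grained angle is recovered as the expectation over bin centers and penalized with a regression loss. The bin width, angle range, and loss weight below are illustrative assumptions, not the paper's exact hyperparameters.

        # Minimal PyTorch sketch of a joint binned-classification + regression loss.
        import torch
        import torch.nn.functional as F

        NUM_BINS = 66  # assumed: 66 bins of 3 degrees spanning roughly +/-99 degrees
        bin_centers = torch.arange(NUM_BINS, dtype=torch.float32) * 3 - 99

        def angle_loss(logits, angle_gt, alpha=0.5):
            """Combined loss for one Euler angle (yaw, pitch, or roll)."""
            # Coarse term: cross-entropy against the ground-truth bin index.
            bin_gt = ((angle_gt + 99) / 3).long().clamp(0, NUM_BINS - 1)
            cls_loss = F.cross_entropy(logits, bin_gt)
            # Fine term: expected angle under the softmax over bins.
            angle_pred = (F.softmax(logits, dim=1) * bin_centers).sum(dim=1)
            reg_loss = F.mse_loss(angle_pred, angle_gt)
            return cls_loss + alpha * reg_loss

        # Hypothetical batch: logits from a CNN head, ground-truth yaw in degrees.
        loss = angle_loss(torch.randn(8, NUM_BINS), torch.rand(8) * 120 - 60)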

    Web-Shaped Model for Head Pose Estimation: an Approach for Best Exemplar Selection

    Head pose estimation is a sensitive topic in video surveillance/smart ambient scenarios, since head rotations can hide or distort discriminative features of the face. Face recognition must often deal with video frames in which subjects appear in poses that make recognition practically impossible. In this respect, selecting the frames with the best face orientation, and triggering recognition only on these, can decrease the possibility of errors. This paper proposes a novel approach to head pose estimation for smart cities and video surveillance scenarios aimed at this goal. The method relies on a cascade of two models: the first predicts the positions of 68 well-known face landmarks; the second applies a web-shaped model over the detected landmarks to associate each of them with a specific face sector. The method works on faces detected at a reasonable distance and at a resolution supported by many current devices. Results of experiments executed over classical pose estimation benchmarks, namely the Pointing '04, Biwi, and AFLW datasets, show good performance in terms of both pose estimation and computing time. Further results refer to noisy images that are typical of the addressed settings. Finally, examples demonstrate the selection of the best frames from videos captured in video surveillance conditions.
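
    A hedged sketch of the web-shaped idea follows: a polar "web" of angular sectors is centered on a reference landmark (the nose tip here), and each of the 68 landmarks is assigned to a sector, yielding a histogram that can serve as a simple pose signature. The sector and ring counts, and the choice of center, are assumptions for illustration, not the authors' configuration.

        # Minimal sketch: sector/ring histogram of face landmarks around a center.
        import numpy as np

        def web_signature(landmarks, center, n_sectors=16, n_rings=4):
            """Normalized histogram of landmarks over angular sectors and rings."""
            offsets = landmarks - center
            angles = np.arctan2(offsets[:, 1], offsets[:, 0])   # in [-pi, pi]
            radii = np.linalg.norm(offsets, axis=1)
            sector = ((angles + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
            ring_edges = np.linspace(0.0, radii.max() + 1e-6, n_rings + 1)
            ring = np.clip(np.digitize(radii, ring_edges) - 1, 0, n_rings - 1)
            hist = np.zeros((n_rings, n_sectors))
            np.add.at(hist, (ring, sector), 1)
            return hist / len(landmarks)

        # Hypothetical use: `pts` is a (68, 2) array from any landmark detector,
        # with pts[30] taken as the nose tip (the usual 68-point convention).
        pts = np.random.rand(68, 2) * 100
        signature = web_signature(pts, center=pts[30])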

    Computer Vision in Human Analysis: From Face and Body to Clothes

    For decades, researchers in different areas, ranging from artificial intelligence to computer vision, have intensively investigated human-centered data, i.e., data in which the human plays a significant role, acquired through a non-invasive approach such as cameras. This interest has been largely supported by the highly informative nature of this kind of data, which provides a variety of information from which it is possible to understand many aspects including, for instance, the human body or outward appearance. Some of the main tasks related to human analysis are focused on the body (e.g., human pose estimation and anthropometric measurement estimation), the hands (e.g., gesture detection and recognition), the head (e.g., head pose estimation), or the face (e.g., emotion and expression recognition). Additional tasks are based on non-corporal elements, such as motion (e.g., action recognition and human behavior understanding) and clothes (e.g., garment-based virtual try-on and attribute recognition). Unfortunately, privacy issues severely limit the usage and diffusion of this kind of data, making the exploitation of learning approaches challenging. In particular, the privacy issues behind the acquisition and use of human-centered data must be addressed by public and private institutions and companies. Thirteen high-quality papers have been published in this Special Issue and are summarized as follows: four of them focus on the human face (facial geometry, facial landmark detection, and emotion recognition), two on eye image analysis (eye status classification and 3D gaze estimation), five on the body (pose estimation, conversational gesture analysis, and action recognition), and two on outward appearance (transferring clothing styles and fashion-oriented image captioning). These numbers confirm the high interest in human-centered data and, in particular, the variety of real-world applications that can be developed.

    Head Pose Estimation Based on Nonlinear Interpolative Mapping

    The performance of face recognition systems depends on conditions being consistent, including lighting, pose, and facial expression. To address the problem produced by pose variation, it is suggested that the pose orientation of a given head image be estimated before the image is recognized. In this paper, we propose a head pose estimation method that improves on the one proposed by Hu et al. The proposed method trains, in a supervised manner, a nonlinear interpolative mapping function that maps input images to predicted pose angles. This mapping function is a linear combination of Radial Basis Functions (RBFs). The experimental results show that our proposed method outperforms the method of Hu et al. in terms of both time efficiency and estimation accuracy.
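
    A hedged sketch of such an interpolative mapping follows: the predicted pose angle is a linear combination of Gaussian radial basis functions of the input features, with the combination weights fit in a supervised manner by least squares. The feature vectors, centers, and kernel width below are placeholders, not the paper's setup.

        # Minimal RBF interpolative-mapping sketch (placeholder data).
        import numpy as np

        def rbf_features(X, centers, width):
            """Gaussian RBF activations of each sample w.r.t. each center."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / (2.0 * width ** 2))

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(200, 64))        # stand-in image features
        y_train = rng.uniform(-90, 90, size=200)    # stand-in pose angles (deg)

        # Supervised training: pick centers, then solve for weights by least squares.
        centers = X_train[rng.choice(200, size=20, replace=False)]
        Phi = rbf_features(X_train, centers, width=5.0)
        weights, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

        # Prediction: map new feature vectors to estimated pose angles.
        y_pred = rbf_features(X_train[:5], centers, width=5.0) @ weights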

    Efficient Human Facial Pose Estimation

    Pose estimation has become an increasingly important area in computer vision, and more specifically in human facial recognition and activity recognition for surveillance applications. Pose estimation is the process by which the roll, pitch, or yaw of a human head is determined. Numerous methods already exist that can determine the angular change of a face; however, these methods vary in accuracy, and their computational requirements tend to be too high for real-time applications. The objective of this thesis is to develop a method for pose estimation that is computationally efficient while still maintaining a reasonable degree of accuracy. In this thesis, a feature-based method is presented to determine the yaw angle of a human facial pose using a combination of artificial neural networks and template matching. The artificial neural networks are used for the feature detection portion of the algorithm, along with skin detection and other image enhancement algorithms. The first head model, referred to as the Frontal Position Model, determines the pose of the face using two eyes and the mouth. The second model, referred to as the Side Position Model, is used when only one eye can be viewed, and determines pose based on a single eye, the nose tip, and the mouth. The two models are presented to demonstrate the position change of facial features due to pose and to provide the means to determine the pose as these features change from the frontal position. The effectiveness of this pose estimation method is examined using both manual and automatic feature detection, and analysis is further performed on how errors in feature detection affect the resulting pose determination. With correct feature detection, the method detects facial pose from +30 to -30 degrees with an average error of 4.28 degrees for the Frontal Position Model and 5.79 degrees for the Side Position Model. The Intel(R) Streaming SIMD Extensions (SSE) technology was employed to enhance the performance of floating-point operations; the neural networks used in feature detection require a large number of floating-point calculations, due to the computation of the image data with weights and biases. With SSE optimization the algorithm becomes suitable for processing images in a real-time environment: it is capable of detecting features and estimating the pose at a rate of seven frames per second on a 1.8 GHz Pentium 4 computer.
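
    One hedged geometric illustration of the feature-position idea: because the head is curved, yaw makes the projected half-faces asymmetric, so the horizontal distances from a midline feature (the mouth here) to each eye give a rough yaw cue. This is not the thesis' actual model; the linear gain is a placeholder that would be calibrated on labeled data.

        # Crude yaw cue from eye/mouth geometry (illustrative only).
        def rough_yaw_degrees(left_eye_x, right_eye_x, mouth_x, gain=120.0):
            d_left = abs(mouth_x - left_eye_x)    # half-width toward the left eye
            d_right = abs(right_eye_x - mouth_x)  # half-width toward the right eye
            asymmetry = (d_right - d_left) / (d_left + d_right)  # 0 when frontal
            return gain * asymmetry               # `gain` is a hypothetical constant

        # Hypothetical pixel coordinates.
        print(rough_yaw_degrees(left_eye_x=100, right_eye_x=160, mouth_x=125))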

    Phenomenal competition for poses of the human head

    We show a cylindrical projection of the human head. This projection is ambiguous with respect to head pose, and viewing it produces perceptual competition among a few discrete views. A number of studies suggest that the brain may represent head pose in terms of a discrete set of preferred views. Exactly what these views are, and how their representations enable visual face recognition and pose estimation, is not entirely clear. On the one hand, it is easier to find neurons in the primate inferotemporal cortex that are selective for head-on, profile, or back views than for other angles (Perrett et al 1991). On the other hand, psychophysical studies have shown that human face recognition generalizes better from a learned view near 45° about the vertical axis than from other views (Bruce and Valentine 1987; Troje and Bülthoff, in press). This latter observation is consistent with theoretical predictions based on virtual views for symmetric objects (Vetter et al 1993). In either case, one might expect that if an image of a human head is presented in such a way as to make pose assignment ambiguous, we might visually experience a competition among preferred poses.