
    3D Human Body Model Acquisition from Multiple Views

    We present a novel motion-based approach to determining the parts of a human body and estimating their shape. The novelty of the technique is that no prior model of the human body is employed and no prior body-part segmentation is assumed. We present a Human Body Part Identification Strategy (HBPIS) that recovers all the body parts of a moving human based on the spatiotemporal analysis of its deforming silhouette. We formalize the process of simultaneous part determination and 2D shape estimation by employing the Supervisory Control Theory of Discrete Event Systems. In addition, in order to acquire the 3D shape of the body parts, we present a new algorithm that selectively integrates the apparent contours (segmented by the HBPIS) from three mutually orthogonal views. The effectiveness of the approach is demonstrated through a series of experiments in which a subject performs a set of movements according to a protocol that reveals the structure of the human body.
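    The contour-integration step can be illustrated by its simplest ancestor, shape-from-silhouette volume intersection: a voxel survives only if it projects inside the silhouette in every view. The sketch below assumes three already-segmented binary silhouettes taken along mutually orthogonal axes and a particular axis convention; the paper's algorithm additionally integrates apparent contours selectively per body part, which this sketch does not attempt.

```python
import numpy as np

def carve_voxels(front, side, top):
    """Intersect three orthogonal binary silhouettes into a voxel occupancy grid.

    front: (Y, X) silhouette seen along the z axis
    side:  (Y, Z) silhouette seen along the x axis
    top:   (Z, X) silhouette seen along the y axis
    Returns a boolean occupancy grid of shape (Y, X, Z).
    """
    # A voxel is kept only if it falls inside all three silhouettes;
    # broadcasting aligns each silhouette with its two shared axes.
    return (front[:, :, None].astype(bool)
            & side[:, None, :].astype(bool)
            & top.T[None, :, :].astype(bool))
```

    Intersecting more (and non-orthogonal) views tightens the visual hull toward the true shape; three orthogonal views are the minimal arrangement the abstract describes.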

    Recurrent Attention Models for Depth-Based Person Identification

    We present an attention-based model that reasons about human body shape and motion dynamics to identify individuals in the absence of RGB information, hence in the dark. Our approach leverages unique 4D spatio-temporal signatures to address the identification problem across days. Formulated as a reinforcement learning task, our model is based on a combination of convolutional and recurrent neural networks with the goal of identifying small, discriminative regions indicative of human identity. We demonstrate that our model produces state-of-the-art results on several published datasets given only depth images. We further study the robustness of our model towards viewpoint, appearance, and volumetric changes. Finally, we share insights gleaned from interpretable 2D, 3D, and 4D visualizations of our model's spatio-temporal attention.
    Comment: Computer Vision and Pattern Recognition (CVPR) 201
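    Recurrent attention models of this family repeatedly crop a small "glimpse" around an attended location. As a rough illustration only (the function name and zero-padding policy are assumptions, not the paper's implementation), a glimpse crop from a depth map might look like:

```python
import numpy as np

def extract_glimpse(depth, center, size):
    """Crop a size-by-size patch around center=(row, col) from a 2-D depth map,
    zero-padding wherever the window leaves the image."""
    h, w = depth.shape
    r, c = center
    half = size // 2
    patch = np.zeros((size, size), dtype=depth.dtype)
    # Clip the window to the image, then place the valid region
    # at the matching offset inside the padded patch.
    r0, r1 = max(0, r - half), min(h, r - half + size)
    c0, c1 = max(0, c - half), min(w, c - half + size)
    pr0, pc0 = r0 - (r - half), c0 - (c - half)
    patch[pr0:pr0 + (r1 - r0), pc0:pc0 + (c1 - c0)] = depth[r0:r1, c0:c1]
    return patch
```

    The recurrent network would consume a sequence of such glimpses and, trained with reinforcement learning, choose where to look next.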

    An investigation of matching symmetry in the human pinnae with possible implications for 3D ear recognition and sound localization

    The human external ears, or pinnae, have an intriguing shape and, like most parts of the human external body, bilateral symmetry is observed between left and right. The pinna is a well-known part of our auditory sensory system and mediates the spatial localization of incoming sounds in 3D, from monaural cues due to its shape-specific filtering as well as binaural cues due to the paired bilateral locations of the left and right ears. Another less broadly appreciated aspect of the human pinna shape is its uniqueness from one individual to another, on the level of what is seen in fingerprints and facial features. This makes pinnae very useful in human identification, which is of great interest in biometrics and forensics. Anatomically, the type of symmetry observed is known as matching symmetry, with structures present as separate mirror copies on both sides of the body, and in this work we report the first such investigation of the human pinna in 3D. Within the framework of geometric morphometrics, we started by partitioning ear shape, represented in a spatially dense way, into patterns of symmetry and asymmetry, following a two-factor ANOVA design. Matching symmetry was measured in all substructures of the pinna anatomy. However, substructures that 'stick out', such as the helix, tragus, and lobule, also contained a fair degree of asymmetry. In contrast, substructures such as the conchae, antitragus, and antihelix expressed relatively stronger degrees of symmetric variation in relation to their levels of asymmetry. Insights gained from this study were injected into an accompanying identification setup exploiting matching symmetry, where improved performance is demonstrated. Finally, possible implications of the results in the context of ear recognition as well as sound localization are discussed.
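    In geometric morphometrics, the symmetric and asymmetric components of matching symmetry can be separated by mirroring one side into the other's frame and taking averages and half-differences. The minimal sketch below assumes landmark configurations that are already Procrustes-aligned and mirrors across the x axis; the study's spatially dense, two-factor ANOVA decomposition is considerably richer than this.

```python
import numpy as np

def symmetry_decompose(left, right):
    """Split paired 3-D landmark configurations into symmetric and asymmetric parts.

    left, right: (n_landmarks, 3) arrays. `right` is reflected (x -> -x) so
    both sides live in the same coordinate frame before comparison.
    """
    right_mirrored = right * np.array([-1.0, 1.0, 1.0])
    symmetric = 0.5 * (left + right_mirrored)    # shared (matching) shape
    asymmetric = 0.5 * (left - right_mirrored)   # side-specific deviation
    return symmetric, asymmetric
```

    Variance in the asymmetric component, relative to symmetric variation, is what distinguishes protruding substructures like the helix from more symmetric ones like the conchae in the study's findings.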

    Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences

    Cloth-Changing Person Re-Identification (CC-ReID) is a common and realistic problem since fashion constantly changes over time and people's aesthetic preferences are not set in stone. While most existing cloth-changing ReID methods focus on learning cloth-agnostic identity representations from coarse semantic cues (e.g. silhouettes and part segmentation maps), they neglect the continuous shape distributions at the pixel level. In this paper, we propose Continuous Surface Correspondence Learning (CSCL), a new shape embedding paradigm for cloth-changing ReID. CSCL establishes continuous correspondences between a 2D image plane and a canonical 3D body surface via pixel-to-vertex classification, which naturally aligns a person image to the surface of a 3D human model and simultaneously obtains pixel-wise surface embeddings. We further extract fine-grained shape features from the learned surface embeddings and then integrate them with global RGB features via a carefully designed cross-modality fusion module. The shape embedding paradigm based on 2D-3D correspondences remarkably enhances the model's global understanding of human body shape. To promote the study of ReID under clothing change, we construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset that provides densely annotated 2D-3D correspondences and a precise 3D mesh for each person image, while containing diverse cloth-changing cases over all four seasons. Experiments on both cloth-changing and cloth-consistent ReID benchmarks validate the effectiveness of our method.
    Comment: Accepted by ACM MM 202
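    The pixel-to-vertex idea can be sketched as a per-pixel softmax over canonical mesh vertices, whose probabilities induce a continuous surface location for each pixel. The rendering below is a hypothetical numpy illustration of that step, not CSCL's actual network head:

```python
import numpy as np

def pixel_surface_embeddings(logits, vertex_coords):
    """Turn per-pixel vertex-classification logits into continuous surface embeddings.

    logits: (H, W, V) scores over V canonical mesh vertices for each pixel.
    vertex_coords: (V, 3) canonical 3-D coordinates of those vertices.
    Returns (H, W, 3): the probability-weighted surface location per pixel.
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    return p @ vertex_coords                          # (H, W, V) @ (V, 3)
```

    Because the output varies continuously with the logits, nearby pixels map to nearby surface points, which is the "continuous correspondence" property the abstract emphasizes over coarse part segmentation maps.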

    Main characteristics and anthropometrics of people with down syndrome – Impact in garment design

    Among human chromosome abnormalities, Down Syndrome is the most prominent. Social perception challenges include prejudice, myth, and exclusion, and social inclusion has been the subject of several studies. From this perspective, the main objective of this study is to contribute to greater social inclusion of people with Down Syndrome. This is addressed by an anthropometric characterization study of individuals with Down Syndrome, performed with body-scanning technology (a 3D Body Scanner). The presented study can support the development of inclusive clothing, adapted to people with special needs, promoting the anthropometric and ergonomic aspects of shape, comfort, and aesthetics, which would lead to increased quality of life, self-esteem, and security, contributing to greater inclusion in our society. The results from the data obtained through the measuring tables provided by the 3D Body Scanner System allow the identification of the main body shapes of the analyzed sample, as well as the main variables of their measurements. The impact of this specific population's characteristics on the garment design process is also discussed. (UID/CTM/000264)
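    Identifying "main body shapes" from scanner measurements typically reduces to simple rules over girth differences. The categories and thresholds below are illustrative assumptions for the sketch only, not the measuring tables or classification rules used in the study:

```python
def classify_body_shape(bust, waist, hip):
    """Assign a coarse body-shape category from three girth measurements (cm).

    The categories and cutoffs here are hypothetical, chosen only to show
    the rule-based structure such classifications usually take.
    """
    if hip - bust > 5:
        return "triangle"            # hips noticeably larger than bust
    if bust - hip > 5:
        return "inverted triangle"   # bust noticeably larger than hips
    if min(bust, hip) - waist > 22:
        return "hourglass"           # pronounced waist definition
    return "rectangle"               # little variation between girths
```

    Garment grading for a target population then proceeds per identified shape class rather than from a single standard figure.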

    Sitzen und Gehen. Zur Hermeneutik des Leibes in den fernöstlichen Künsten [Sitting and Walking: On the Hermeneutics of the Body in the Far Eastern Arts]

    This article criticizes the classical paradigm of philosophical anthropology, which not only attempted a scientifically based determination of human identity but also contained practical postulates and an implicit teleology. In my paper I argue that anthropology in our liquid post-modernity should not idealize its status as “first philosophy”. Rather, it should remain descriptive and critical. A transition is therefore urged from a comparative theory of human identity, based on comparisons with animals or computers, to a non-idealistic theory of identity. In this respect, the paper offers a contribution to an anthropology of the Far Eastern arts. It focuses mainly on two phenomena: the seated body in Zen Buddhism and the moving body in Taoism. These arts, I argue, have an educational value because they bring about the discovery of a bodily and mental self-reference, which also means a reference to the other, a self-withdrawal. Through such an identification with the body, an awareness-based identity takes shape; an identity which is broader than Descartes’ concept of consciousness.

    Point2PartVolume: Human body volume estimation from a single depth image

    Human body volume is a useful biometric feature for human identification and an important medical indicator for monitoring body health. Traditional body volume estimation techniques such as underwater weighing and air displacement demand substantial equipment and are difficult to perform in some circumstances, e.g. in clinical environments when dealing with bedridden patients. In this contribution, a novel vision-based method dubbed Point2PartVolume, based on deep learning, is proposed to rapidly and accurately predict part-aware body volumes from a single depth image of the dressed body. Firstly, a novel multi-task neural network is proposed for jointly completing the partial body point clouds, predicting the body shape under clothing, and semantically segmenting the reconstructed body into parts. Next, the estimated body segments are fed into the proposed volume regression network to estimate the partial volumes. A simple yet efficient two-step training strategy is proposed for improving the accuracy of volume prediction regressed from point clouds. Compared to existing methods, the proposed method addresses several major challenges in vision-based human body volume estimation, including shape completion, pose estimation, body shape estimation under clothing, body segmentation, and volume regression from point clouds. Experimental results on both synthetic data and public real-world data show that our method achieves an average volume prediction accuracy of 90% and outperforms the relevant state of the art.
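    For context on what the volume regressor must learn: once a watertight part mesh is available, its volume follows analytically from the divergence theorem as a sum of signed tetrahedra. This is the classical formula, not the paper's learned regressor:

```python
import numpy as np

def mesh_volume(vertices, faces):
    """Volume of a closed triangle mesh via the divergence theorem:
    the sum of signed tetrahedra spanned by the origin and each face.

    vertices: (N, 3) float coordinates.
    faces: (M, 3) integer indices with consistent outward-facing winding.
    """
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    # Signed volume of tetrahedron (origin, a, b, c) is a . (b x c) / 6;
    # signs cancel so the total equals the enclosed volume.
    return np.einsum('ij,ij->i', a, np.cross(b, c)).sum() / 6.0
```

    The appeal of regressing volumes directly from point clouds, as the abstract describes, is precisely that it avoids reconstructing a watertight mesh for each body part first.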

    Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification

    Long-Term Person Re-Identification (LT-ReID) has become increasingly crucial in computer vision and biometrics. In this work, we aim to extend LT-ReID beyond pedestrian recognition to include a wider range of real-world human activities while still accounting for cloth-changing scenarios over large time gaps. This setting poses additional challenges due to the geometric misalignment and appearance ambiguity caused by the diversity of human pose and clothing. To address these challenges, we propose a new approach, 3DInvarReID, for (i) disentangling identity from non-identity components (pose, clothing shape, and texture) of 3D clothed humans, and (ii) reconstructing accurate 3D clothed body shapes and learning discriminative features of naked body shapes for person ReID in a joint manner. To better evaluate our study of LT-ReID, we collect a real-world dataset called CCDA, which contains a wide variety of human activities and clothing changes. Experimentally, we show the superior performance of our approach for person ReID.
    Comment: 10 pages, 7 figures, accepted by ICCV 202
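    The disentanglement idea implies that matching should use only the identity portion of the latent code, ignoring pose and clothing by construction. A toy sketch, assuming a code laid out as [identity | pose | clothing] (a layout chosen here purely for illustration):

```python
import numpy as np

def reid_distance(code_a, code_b, id_dim):
    """Compare two disentangled latent codes using only their identity slice.

    Each code is assumed to be a concatenation [identity | pose | clothing];
    id_dim is the length of the identity part. Cosine distance on that slice
    is invariant to whatever the remaining dimensions encode.
    """
    ida, idb = code_a[:id_dim], code_b[:id_dim]
    cos = ida @ idb / (np.linalg.norm(ida) * np.linalg.norm(idb))
    return 1.0 - cos
```

    Two captures of the same person in different clothes and poses would then differ only outside the identity slice and score a near-zero distance.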

    Effective Face Feature For Human Identification

    The face is one of the most important parts of the human body and is readily used for identification: people naturally identify one another through face images. Due to the increasing rate of insecurity in our society, accurate machine-based face recognition systems are needed to detect impersonators. Face recognition systems comprise a face detector module, a preprocessing unit, a feature extraction subsystem, and a classification stage. A robust feature extraction algorithm plays a major role in determining the accuracy of intelligent systems that involve image processing analysis. In this paper, a pose-invariant feature is extracted from human faces. The proposed feature extraction method decomposes the captured face image into four sub-bands using the Haar wavelet transform; shape and texture features are then extracted from the approximation and detail bands, respectively. The pose-invariant feature vector is computed by fusing the extracted features. The effectiveness of the feature vector in terms of intra-person and inter-person variation was assessed from feature plots.
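    A one-level 2-D Haar-style decomposition, followed by a toy fusion of shape-like statistics (from the approximation band) with texture-like energies (from the detail bands), can be sketched as follows. The averaging normalization and the specific statistics are assumptions for illustration, not the paper's exact feature definitions:

```python
import numpy as np

def haar_features(img):
    """One-level Haar-style decomposition of a grayscale image (even H and W),
    then a fused feature vector: mean/std of the approximation band and the
    energy of each detail band."""
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    LL = (a + b + c + d) / 4.0         # approximation (local average)
    LH = (a - b + c - d) / 4.0         # horizontal detail
    HL = (a + b - c - d) / 4.0         # vertical detail
    HH = (a - b - c + d) / 4.0         # diagonal detail
    # Fuse shape-like statistics with texture-like band energies.
    return np.array([LL.mean(), LL.std(),
                     (LH**2).mean(), (HL**2).mean(), (HH**2).mean()])
```

    A constant image yields zero detail energy, while edges and texture raise the corresponding band energies, which is what makes the fused vector discriminative across faces.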