Automatic landmark annotation and dense correspondence registration for 3D human facial images
Dense surface registration of three-dimensional (3D) human facial images
holds great potential for studies of human trait diversity, disease genetics,
and forensics. Non-rigid registration is particularly useful for establishing
dense anatomical correspondences between faces. Here we describe a novel
non-rigid registration method for fully automatic 3D facial image mapping. This
method comprises two steps: first, seventeen facial landmarks are automatically
annotated, mainly via PCA-based feature recognition following 3D-to-2D data
transformation. Second, an efficient thin-plate spline (TPS) protocol is used
to establish the dense anatomical correspondence between facial images, under
the guidance of the predefined landmarks. We demonstrate that this method is
robust and highly accurate, even for different ethnicities. The average face is
calculated for individuals of Han Chinese and Uyghur origins. Fully
automatic and computationally efficient, this method enables high-throughput
analysis of human facial feature variation.

Comment: 33 pages, 6 figures, 1 table
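The abstract names thin-plate splines (TPS) as the warping protocol but gives no implementation details. As a rough illustration only, a minimal 2D TPS interpolant in NumPy, using the standard U(r) = r² log r² radial kernel (the function names and the 2D simplification are assumptions; the paper's method operates on 3D facial surfaces guided by seventeen landmarks), might look like:

```python
import numpy as np

def tps_kernel(r2):
    # Standard TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0.
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Solve for TPS coefficients mapping src landmarks (n,2) onto dst (n,2)."""
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])   # affine part: [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    coef = np.linalg.solve(L, rhs)
    return coef[:n], coef[n:]               # kernel weights, affine coefficients

def warp_tps(pts, src, w, a):
    """Apply the fitted TPS to arbitrary query points (m,2)."""
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    U = tps_kernel(d2)
    return U @ w + np.hstack([np.ones((pts.shape[0], 1)), pts]) @ a
```

Because TPS interpolates exactly, `warp_tps(src, src, w, a)` reproduces the target landmarks; points in between are deformed smoothly, which is what lets the predefined landmarks guide a dense correspondence over the rest of the surface.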
Orientation Information in Encoding Facial Expressions for People With Central Vision Loss.
Purpose: Patients with central vision loss face daily challenges in performing various visual tasks. Categorizing facial expressions is one of the essential daily activities. Knowing what visual information is crucial for facial expression categorization is important for understanding the functional performance of these patients. Here we asked how performance in categorizing facial expressions depends on spatial information along different orientations for patients with central vision loss.

Methods: Eight observers with central vision loss and five age-matched normally sighted observers categorized face images into four expressions: angry, fearful, happy, and sad. An orientation filter (bandwidth = 23°) was applied to restrict the spatial information within the face images, with the center of the filter ranging from horizontal (0°) to 150° in steps of 30°. Unfiltered face images were also tested.

Results: When stimulus visibility was matched, observers with central vision loss categorized facial expressions just as well as their normally sighted counterparts and showed similar confusion and bias patterns. For all four expressions, performance (normalized d'), which was uncorrelated with any of the observers' visual characteristics, peaked between filter orientations of -30° and 30° and declined systematically as the filter orientation approached vertical (90°). Like normally sighted observers, observers with central vision loss relied mainly on the mouth and eye regions to categorize facial expressions.

Conclusions: Similar to people with normal vision, people with central vision loss rely primarily on spatial information around the horizontal orientation, in particular the regions around the mouth and eyes, for recognizing facial expressions.
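The abstract does not specify the filter's profile or convention. As a rough sketch only, an idealized hard-edged orientation filter applied in the Fourier domain (the function name is hypothetical, and whether "0°" denotes the angle of the passed frequency components or of the image structure they represent is an assumed convention) could be:

```python
import numpy as np

def orientation_filter(img, center_deg, bandwidth_deg=23.0):
    """Keep only spatial-frequency components whose orientation falls
    within the given band (ideal wedge mask in the Fourier domain)."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    # Orientation of each frequency component, folded into [0, 180).
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    diff = np.abs(theta - (center_deg % 180.0))
    diff = np.minimum(diff, 180.0 - diff)    # circular orientation distance
    mask = diff <= bandwidth_deg / 2.0
    mask[0, 0] = True                        # always keep DC (mean luminance)
    F = np.fft.fft2(img)
    return np.real(np.fft.ifft2(F * mask))
```

For example, a grating of horizontal stripes has all its energy in frequency components along the vertical frequency axis (90° in this convention), so it survives a 90°-centered filter and is reduced to its mean by a 0°-centered one. A published study would more likely use a smooth (e.g. Gaussian) orientation profile to avoid ringing; the hard-edged wedge here is only for clarity.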