
    Let’s Face It: The effect of orthognathic surgery on facial recognition algorithm analysis

    Aim: To evaluate the ability of a publicly available facial recognition application programming interface (API) to calculate similarity scores for pre- and post-surgical photographs of patients undergoing orthognathic surgery. Our primary objective was to identify which surgical procedure(s) had the greatest effect on similarity score. Methods: Standard treatment progress photographs for 25 retrospectively identified orthodontic-orthognathic patients were analyzed using the API to calculate similarity scores between the pre- and post-surgical photographs. Photographs from two pre-surgical timepoints were compared as controls. Both relaxed and smiling photographs were included in the study to assess the added impact of facial pose on similarity score. The surgical procedure(s) performed on each patient, gender, age at time of surgery, and ethnicity were recorded for statistical analysis. Nonparametric Kruskal-Wallis rank sum tests were performed to univariately analyze the relationship between each categorical patient characteristic and each recognition score. Pairwise Wilcoxon rank sum tests were then performed for the statistically significant characteristics, with p-values adjusted using the Bonferroni correction. Results: Patients who had surgery on both jaws had a lower median similarity score, when comparing relaxed expressions before and after surgery, than those who had surgery only on the mandible (p = 0.014). Patients receiving combined LeFort and bilateral sagittal split osteotomy (BSSO) surgeries also had a lower median similarity score than those who received only BSSO (p = 0.009). For the score comparing relaxed expressions before surgery with smiling expressions after surgery, patients receiving two-jaw surgeries had lower scores than those who had surgery on only the mandible (p = 0.028), and patients who received LeFort and BSSO surgeries had lower similarity scores than patients who received only BSSO (p = 0.036). Conclusions: Two-jaw surgeries were associated with a statistically significant decrease in similarity score compared with one-jaw procedures. Pose also influenced similarity scores, especially when comparing pre-surgical relaxed photographs to post-surgical smiling photographs.
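
    The statistical workflow described above (an omnibus Kruskal-Wallis test followed by pairwise Wilcoxon rank-sum tests with Bonferroni correction) can be sketched as follows. The column names, input file, and significance threshold are illustrative assumptions, not the study's actual variables.

```python
# Sketch of the univariate analysis: Kruskal-Wallis across surgery groups,
# then pairwise Wilcoxon rank-sum tests with Bonferroni correction.
# Column names ("surgery_type", "similarity_relaxed") are hypothetical.
from itertools import combinations

import pandas as pd
from scipy.stats import kruskal, ranksums

df = pd.read_csv("similarity_scores.csv")  # hypothetical input file

groups = {name: g["similarity_relaxed"].values
          for name, g in df.groupby("surgery_type")}

# Omnibus test: does similarity score differ across surgical procedures?
h_stat, p_omnibus = kruskal(*groups.values())
print(f"Kruskal-Wallis H={h_stat:.3f}, p={p_omnibus:.4f}")

if p_omnibus < 0.05:
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        stat, p = ranksums(groups[a], groups[b])
        p_adj = min(p * len(pairs), 1.0)  # Bonferroni adjustment
        print(f"{a} vs {b}: adjusted p={p_adj:.4f}")
```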

    EV-SIFT - An Extended Scale Invariant Face Recognition for Plastic Surgery Face Recognition

    Automatic face recognition has received much attention in recent years owing to its many applications in different fields, yet it still faces challenging problems: no single technique handles all variations such as pose, expression, illumination changes, and ageing. Facial alteration due to plastic surgery is an additional challenge that has arisen recently. This paper presents a new technique for accurate face recognition after plastic surgery. The technique uses entropy- and volume-based SIFT (EV-SIFT) features for recognition. The feature extractor takes the key points and the volume of the scale-space structure for which the information rate is determined. Because entropy is a higher-order statistical measure, these features are only weakly affected by uncertain variations in the face. The EV-SIFT features are then fed to a support vector machine for classification. Standard SIFT extracts key points based on image contrast, and V-SIFT extracts key points based on the volume of the structure, whereas EV-SIFT provides both the contrast and the volume information. The technique performs better than PCA-, standard SIFT-, and V-SIFT-based feature extraction.
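
    The general pipeline, keypoint descriptors pooled into a fixed-length vector and classified with an SVM, can be sketched as below. Standard OpenCV SIFT is used as a stand-in for the paper's EV-SIFT descriptor, and the file names and pooling scheme are illustrative assumptions.

```python
# Sketch of the recognition pipeline: keypoint descriptors pooled into a
# fixed-length vector, then classified with an SVM. Standard SIFT stands in
# for EV-SIFT; file paths and average pooling are illustrative assumptions.
import cv2
import numpy as np
from sklearn.svm import SVC

def face_descriptor(path, n_keypoints=64):
    """Pool the strongest SIFT descriptors of a face image into one vector."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    _, desc = sift.detectAndCompute(img, None)
    if desc is None:
        desc = np.zeros((1, 128), dtype=np.float32)
    return desc.mean(axis=0)  # average pooling of the 128-D descriptors

# Hypothetical training data: pre-surgery images labelled by subject identity.
X = np.stack([face_descriptor(p) for p in ["s1_pre.jpg", "s2_pre.jpg"]])
y = np.array([0, 1])

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([face_descriptor("s1_post.jpg")]))  # post-surgery query
```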

    Pattern Recognition of Surgically Altered Face Images Using Multi-Objective Evolutionary Algorithm

    Plastic surgery has recently emerged as a new and important challenge for face recognition, alongside pose, expression, illumination, aging, and disguise. Plastic surgery procedures change the texture, appearance, and shape of different facial regions, so it is difficult for conventional face recognition algorithms to match a post-surgery face image with a pre-surgery face image. The non-linear variations produced by plastic surgery are hard to address with current face recognition algorithms. The multi-objective evolutionary algorithm is a novel approach to recognizing surgically altered face images. The algorithm starts by generating non-disjoint face granules, and two feature extractors, EUCLBP (Extended Uniform Circular Local Binary Pattern) and SIFT (Scale Invariant Feature Transform), are used to extract discriminating facial information from the granules. DOI: 10.17762/ijritcc2321-8169.150316
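
    The granule idea, overlapping (non-disjoint) face regions at several granularities, each described by a local texture histogram, might look roughly like the sketch below. Standard uniform LBP from scikit-image stands in for EUCLBP, and the granule layout is an illustrative assumption.

```python
# Sketch of granule-based feature extraction: overlapping face regions are
# cut from the image and a uniform LBP histogram is computed for each.
# Uniform LBP stands in for EUCLBP; the granule layout is assumed.
import numpy as np
from skimage.feature import local_binary_pattern

def granules(face, levels=(1, 2, 3)):
    """Yield overlapping sub-regions of the face at several granularities."""
    h, w = face.shape
    for n in levels:
        step_y, step_x = h // (n + 1), w // (n + 1)
        for i in range(n):
            for j in range(n):
                yield face[i * step_y:(i + 2) * step_y,
                           j * step_x:(j + 2) * step_x]

def lbp_histogram(patch, p=8, r=1):
    codes = local_binary_pattern(patch, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

face = (np.random.rand(128, 128) * 255).astype(np.uint8)  # placeholder face
feature = np.concatenate([lbp_histogram(g) for g in granules(face)])
print(feature.shape)  # one concatenated descriptor over all granules
```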

    Granular Approach for Recognizing Surgically Altered Face Images Using Keypoint Descriptors and Artificial Neural Network

    This chapter presents a new technique, the entropy volume-based scale-invariant feature transform (EV-SIFT), for accurate face recognition after cosmetic surgery. The features extracted are the key points and the volume of the Difference of Gaussian (DoG) structure, for which the information rate is determined. Because entropy is a higher-order statistical measure, the extracted information is minimally affected by uncertain changes in the face. The extracted EV-SIFT features are then provided to a support vector machine for classification. The standard scale-invariant feature transform extracts key points based on dissimilarity, also known as image contrast, and the volume-based scale-invariant feature transform (V-SIFT) extracts key points based on the volume of the structure, whereas EV-SIFT provides both the contrast and the volume information. EV-SIFT therefore performs better than principal component analysis (PCA), standard SIFT, and V-SIFT-based feature extraction. Since the artificial neural network (ANN) with Levenberg-Marquardt (LM) training is a powerful computational tool for accurate classification, it is further used in this technique for better classification results.
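
    The entropy idea can be illustrated generically: score each DoG keypoint by the Shannon entropy of the grey-level distribution around it and keep the most informative points. This is only a sketch of the concept; the window size, keep ratio, and input file are assumptions, not values from the chapter.

```python
# Sketch: rank SIFT (DoG) keypoints by the Shannon entropy of a local window
# and retain the most informative half. Window size and keep ratio assumed.
import cv2
import numpy as np

def patch_entropy(img, x, y, win=8):
    patch = img[max(0, y - win):y + win, max(0, x - win):x + win]
    hist, _ = np.histogram(patch, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))  # Shannon entropy in bits

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
kps = cv2.SIFT_create().detect(img, None)           # DoG keypoints
scored = sorted(kps,
                key=lambda k: patch_entropy(img, int(k.pt[0]), int(k.pt[1])),
                reverse=True)
keep = scored[:len(scored) // 2]  # retain the higher-entropy half
print(f"kept {len(keep)} of {len(kps)} keypoints")
```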

    Recognizing Surgically Altered Face Images and 3D Facial Expression Recognition

    Altering facial appearance with surgical procedures is common nowadays, but it raises challenges for face recognition algorithms. Plastic surgery introduces non-linear variations that are difficult for existing face recognition systems to model. This work presents a multi-objective evolutionary granular algorithm that operates on several granules extracted from a face image at multiple levels of granularity; the granular information is unified in an evolutionary manner using a multi-objective genetic approach. Facial expressions are then identified from the face images using 3D facial shapes. A novel automatic feature selection method is proposed based on maximizing the average relative entropy of marginalized class-conditional feature distributions, applied to a complete pool of candidate features composed of normalized Euclidean distances between 83 facial feature points in 3D space. A regularized multi-class AdaBoost classification algorithm is used to achieve the highest average recognition rate.
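
    The 3D expression features described above, pairwise Euclidean distances between facial landmarks fed to a multi-class AdaBoost classifier, can be sketched as follows. The landmark count comes from the abstract; the normalization, random placeholder data, and use of scikit-learn's standard (non-regularized) AdaBoost are illustrative assumptions.

```python
# Sketch: normalized pairwise distances between 83 3-D facial landmarks as
# candidate features, classified with multi-class AdaBoost. Data is random
# placeholder material, not the paper's dataset.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def pairwise_distance_features(landmarks):
    """landmarks: (83, 3) array of 3-D facial feature points."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)   # unique pairs only
    feats = dists[iu]
    return feats / feats.max()                  # simple scale normalization

# Placeholder data: 40 faces x 83 landmarks, labelled with 6 basic expressions.
rng = np.random.default_rng(0)
X = np.stack([pairwise_distance_features(rng.normal(size=(83, 3)))
              for _ in range(40)])
y = rng.integers(0, 6, size=40)

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))
```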

    Facial Asymmetry Analysis Based on 3-D Dynamic Scans

    Facial dysfunction is a fundamental symptom that often relates to neurological illnesses such as stroke, Bell’s palsy, and Parkinson’s disease. Current methods for detecting and assessing facial dysfunction rely mainly on trained practitioners and have significant limitations, as they are often subjective. This paper presents a computer-based methodology for facial asymmetry analysis that aims to detect facial dysfunction automatically. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation on facial sequences from the Hi4D-ADSIP database suggests that the proposed method can assist in the quantification and diagnosis of facial dysfunction in neurological patients.
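
    One common way to quantify asymmetry from a 3-D face scan is to mirror the point cloud about the mid-sagittal plane and measure the residual distance to the original surface; a per-frame score over a dynamic sequence then tracks asymmetry through an expression. The sketch below is a generic illustration of that idea under those assumptions, not the paper's specific pipeline.

```python
# Sketch: facial asymmetry as the mean distance between a face point cloud
# and its left/right mirrored copy. Assumes the scan is roughly aligned so
# that x = 0 is the facial midline.
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_score(points):
    """points: (N, 3) face scan with the midline near the x = 0 plane."""
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect left/right
    tree = cKDTree(mirrored)
    dists, _ = tree.query(points)                    # nearest mirrored point
    return dists.mean()                              # mean asymmetry in scan units

# For a dynamic (3-D + time) sequence: scores = [asymmetry_score(f) for f in frames]
frame = np.random.rand(5000, 3) - 0.5                # placeholder scan
print(asymmetry_score(frame))
```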

    Techniques for Ocular Biometric Recognition Under Non-ideal Conditions

    The use of the ocular region as a biometric cue has gained considerable traction due to recent advances in automated iris recognition. However, a multitude of factors can negatively impact ocular recognition performance under unconstrained conditions (e.g., non-uniform illumination, occlusions, motion blur, and low image resolution). This dissertation develops techniques to perform iris and ocular recognition under such challenging conditions. The first contribution is an image-level fusion scheme to improve iris recognition performance in low-resolution videos. Information fusion is facilitated by the Principal Components Transform (PCT), thereby requiring modest computational effort. The proposed approach improves recognition accuracy when low-resolution iris images are compared against high-resolution iris images. The second contribution is a study demonstrating the effectiveness of the ocular region in improving face recognition after plastic surgery. A score-level fusion approach that combines information from the face and ocular regions is proposed. Unlike previous methods for this application, the approach is not learning-based and has modest computational requirements while delivering better recognition performance. The third contribution is a study on matching ocular regions extracted from RGB face images against near-infrared iris images; face and iris images are typically acquired using sensors operating in visible and near-infrared wavelengths of light, respectively. To this end, a sparse representation approach that generates a joint dictionary from corresponding pairs of face and iris images is designed. The proposed joint dictionary approach is observed to outperform classical ocular recognition techniques. In summary, the techniques presented in this dissertation can be used to improve iris and ocular recognition in practical, unconstrained environments.
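
    Score-level fusion of the kind described in the second contribution is often implemented as a weighted sum of normalized matcher scores. The sketch below shows that generic scheme; the weights and score values are illustrative assumptions, not figures from the dissertation.

```python
# Sketch of score-level fusion: min-max normalize the face and ocular match
# scores, then combine them with a weighted sum. Weights and scores assumed.
import numpy as np

def minmax(scores):
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(face_scores, ocular_scores, w_face=0.5):
    """Weighted-sum fusion of two normalized similarity score sets."""
    return w_face * minmax(face_scores) + (1 - w_face) * minmax(ocular_scores)

# Hypothetical similarity scores for one probe against a small gallery.
face = [0.62, 0.35, 0.48]
ocular = [0.71, 0.40, 0.52]
fused = fuse(face, ocular, w_face=0.4)
print("best match index:", int(np.argmax(fused)))
```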