
    Blind Image Quality Assessment for Face Pose Problem

    No-reference image quality assessment for face images is of high interest because biometric systems, such as biometric passport applications, can use it to increase performance by controlling the quality of biometric sample images during enrollment. This paper proposes a novel no-reference image quality assessment method that extracts several image features and uses data mining techniques to detect the pose-variation problem in facial images. In experiments on subsets of three public 2D face databases (PUT, ENSIB, and AR), the method achieved a promising accuracy of 97.06% with the Random Forest classifier, which outperformed the other classifiers evaluated.
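    The abstract does not specify the exact features or data-mining setup, but the overall pipeline shape (image feature extraction followed by a Random Forest classifier) can be sketched as below; the features, data, and labels are hypothetical placeholders, not the paper's actual design.

    ```python
    # Minimal sketch of the pipeline shape described above: extract image
    # features, then classify pose variation with a Random Forest.
    # The feature set and data here are placeholders for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def extract_features(image: np.ndarray) -> np.ndarray:
        """Toy features: intensity statistics plus left/right asymmetry,
        a crude proxy for head-pose cues (hypothetical)."""
        left, right = np.hsplit(image.astype(float), 2)
        asymmetry = np.abs(left - right[:, ::-1]).mean()
        return np.array([image.mean(), image.std(), asymmetry])

    # Stand-in data: random "face crops" and labels (1 = pose problem).
    rng = np.random.default_rng(0)
    images = rng.integers(0, 256, size=(200, 64, 64))
    y = rng.integers(0, 2, size=200)
    X = np.stack([extract_features(im) for im in images])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    ```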

    Facial Image Verification and Quality Assessment System -FaceIVQA

    Although several techniques have been proposed for predicting biometric system performance from quality values, much of this research was based on a no-reference assessment technique using a single quality attribute measured directly from the data. Such techniques have proved inappropriate for facial verification scenarios and inefficient, because no single quality attribute can sufficiently measure the quality of a facial image. In this research work, a facial image verification and quality assessment framework (FaceIVQA) was developed. Different algorithms and methods were implemented in FaceIVQA to extract the faceness, pose, illumination, contrast, and similarity quality attributes using an objective full-reference image quality assessment approach. Structured image verification experiments were conducted on the surveillance camera (SCface) database to collect individual quality scores and algorithm matching scores from FaceIVQA using three recognition algorithms: principal component analysis (PCA), linear discriminant analysis (LDA), and a commercial recognition SDK. FaceIVQA produced accurate and consistent facial image assessment data, and the results show that it accurately assigns quality scores to probe image samples. The resulting quality score can be assigned to images captured for enrolment or recognition and can be used as an input to quality-driven biometric fusion systems.
    DOI: http://dx.doi.org/10.11591/ijece.v3i6.503
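    As a rough illustration of how the five extracted attributes and the matcher scores could feed a quality-driven fusion stage, here is a hedged sketch; the attribute weights, normalization, and fusion rule are assumptions for illustration, not the paper's actual formulation.

    ```python
    # Hedged sketch of quality-driven score fusion in the spirit of FaceIVQA:
    # per-attribute quality values are combined into one sample quality score,
    # which then weights the matcher scores. Weights are assumed, not from
    # the paper; attributes are assumed normalized to [0, 1].
    from dataclasses import dataclass

    @dataclass
    class QualityAttributes:
        faceness: float
        pose: float
        illumination: float
        contrast: float
        similarity: float

    def overall_quality(q: QualityAttributes,
                        weights=(0.25, 0.25, 0.2, 0.15, 0.15)) -> float:
        """Weighted combination of the five attributes (weights assumed)."""
        attrs = (q.faceness, q.pose, q.illumination, q.contrast, q.similarity)
        return sum(w * a for w, a in zip(weights, attrs))

    def quality_weighted_fusion(matcher_scores: dict[str, float],
                                quality: float) -> float:
        """Score-level fusion: average the matcher scores (e.g. PCA, LDA,
        commercial SDK) and scale by sample quality, so low-quality probes
        contribute less to the final decision."""
        mean_score = sum(matcher_scores.values()) / len(matcher_scores)
        return quality * mean_score

    q = QualityAttributes(0.9, 0.7, 0.8, 0.85, 0.75)
    scores = {"PCA": 0.62, "LDA": 0.71, "SDK": 0.80}
    print(quality_weighted_fusion(scores, overall_quality(q)))
    ```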

    ICface: Interpretable and Controllable Face Reenactment Using GANs

    This paper presents a generic face animator that is able to control the pose and expressions of a given face image. The animation is driven by human-interpretable control signals consisting of head pose angles and Action Unit (AU) values. The control information can be obtained from multiple sources, including external driving videos and manual controls. Due to the interpretable nature of the driving signal, one can easily mix information from multiple sources (e.g., pose from one image and expression from another) and apply selective post-production editing. The proposed face animator is implemented as a two-stage neural network model that is learned in a self-supervised manner from a large video collection. The proposed Interpretable and Controllable face reenactment network (ICface) is compared to state-of-the-art neural network-based face animation techniques on multiple tasks. The results indicate that ICface produces better visual quality while being more versatile than most of the comparison methods. The introduced model could provide a lightweight and easy-to-use tool for a multitude of advanced image and video editing tasks.
    Comment: Accepted in WACV-2020
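    The key idea, an interpretable driving vector of head-pose angles plus AU values that can be mixed across sources, can be sketched as follows; the vector layout, AU count, and generator call are assumptions, since the network itself is not reproduced here.

    ```python
    # Illustrative sketch of the interpretable driving signal described
    # above: head-pose angles plus Action Unit activations, mixable across
    # sources (pose from one video, expression from another). The generator
    # call is a placeholder; ICface's actual network is not reproduced.
    import numpy as np

    POSE_DIMS = 3      # yaw, pitch, roll (assumed ordering)
    NUM_AUS = 17       # a common AU count; the paper's exact set may differ

    def make_control(pose: np.ndarray, aus: np.ndarray) -> np.ndarray:
        """Concatenate pose angles (radians) and AU intensities in [0, 1]."""
        assert pose.shape == (POSE_DIMS,) and aus.shape == (NUM_AUS,)
        return np.concatenate([pose, np.clip(aus, 0.0, 1.0)])

    def mix_controls(pose_src: np.ndarray, expr_src: np.ndarray) -> np.ndarray:
        """Selective mixing: pose from one control vector, AUs from another."""
        return np.concatenate([pose_src[:POSE_DIMS], expr_src[POSE_DIMS:]])

    ctrl_a = make_control(np.array([0.1, -0.05, 0.0]), np.random.rand(NUM_AUS))
    ctrl_b = make_control(np.array([0.0, 0.2, 0.1]), np.random.rand(NUM_AUS))
    mixed = mix_controls(ctrl_a, ctrl_b)   # pose of A, expression of B
    # reenacted = generator(source_face, mixed)   # placeholder for the model
    ```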

    The residual STL volume as a metric to evaluate accuracy and reproducibility of anatomic models for 3D printing: application in the validation of 3D-printable models of maxillofacial bone from reduced radiation dose CT images.

    Background: The effects of reduced radiation dose CT on the generation of maxillofacial bone STL models for 3D printing are currently unknown. Images of two full-face transplantation patients scanned with non-contrast 320-detector row CT were reconstructed at fractions of the acquisition radiation dose using noise simulation software and both filtered back-projection (FBP) and Adaptive Iterative Dose Reduction 3D (AIDR3D). The maxillofacial bone STL model segmented by thresholding from AIDR3D images at 100% dose was considered the reference. For all other dose/reconstruction-method combinations, a "residual STL volume" was calculated as the topologic subtraction of the STL model derived from that dataset from the reference, and was correlated with radiation dose.
    Results: The residual volume decreased with increasing radiation dose and was lower for AIDR3D than for FBP reconstructions at all doses. As a fraction of the reference STL volume, the residual volume decreased from 2.9% (20% dose) to 1.4% (50% dose) in patient 1, and from 4.1% to 1.9%, respectively, in patient 2 for AIDR3D reconstructions. For FBP reconstructions it decreased from 3.3% (20% dose) to 1.0% (100% dose) in patient 1, and from 5.5% to 1.6%, respectively, in patient 2. Its morphology resembled a thin shell on the osseous surface with average thickness <0.1 mm.
    Conclusion: The residual volume, a topological difference metric for STL models of tissue depicted in DICOM images, supports the conclusion that reducing CT dose by up to 80% of the clinical acquisition, in conjunction with iterative reconstruction, yields maxillofacial bone models accurate enough for 3D printing.
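    A minimal sketch of the residual-volume idea on voxelized segmentations (rather than actual STL meshes, which would require a mesh-boolean library) might look like this; the geometry and threshold values are illustrative only.

    ```python
    # Minimal sketch of the residual-volume metric on voxel occupancy grids:
    # the residual is the part of the reference model not covered by the test
    # model (topologic subtraction), reported as a fraction of the reference
    # volume. The spheres below are toy stand-ins for thresholded CT bone.
    import numpy as np

    def residual_volume_fraction(reference: np.ndarray, test: np.ndarray) -> float:
        """reference, test: boolean 3D occupancy grids from thresholded CT."""
        residual = reference & ~test      # voxels present in the reference only
        return residual.sum() / reference.sum()

    # Toy example: a sphere as reference, a slightly eroded sphere as test,
    # simulating surface loss in a reduced-dose reconstruction.
    z, y, x = np.ogrid[-32:32, -32:32, -32:32]
    r = np.sqrt(x**2 + y**2 + z**2)
    reference = r <= 24
    test = r <= 23.5
    print(f"residual fraction: {residual_volume_fraction(reference, test):.3%}")
    ```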