38 research outputs found

    Adaptive face modelling for reconstructing 3D face shapes from single 2D images

    Example-based statistical face models using principal component analysis (PCA) have been widely deployed for three-dimensional (3D) face reconstruction and face recognition. Two factors of general concern with such models are the size of the training dataset and the selection of examples in the training set. The representational power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. The RP of the model can be increased by increasing the number of training samples. In this contribution, a novel approach is proposed to increase the RP of the 3D face reconstruction model by deforming a set of examples in the training dataset. A PCA-based 3D face model is adapted for each new near-frontal input face image to reconstruct the 3D face shape. Further, an extended Tikhonov regularisation method has been
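The abstract does not give the reconstruction equations, but a standard way to fit a PCA shape model to sparse observations with Tikhonov regularisation is sketched below. All names, sizes, and the toy data are illustrative assumptions, not the paper's actual model or database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 10 training faces, each a flattened 3D shape
# of 50 vertices (150 coordinates). Stand-in for a real 3D face database.
n_faces, n_dims = 10, 150
shapes = rng.normal(size=(n_faces, n_dims))

# Build the PCA model: mean shape plus a basis of principal components.
mean_shape = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
components = Vt                      # (n_faces, n_dims) shape-space basis
eigvals = (S ** 2) / n_faces

# Suppose only a few coordinates are observed (e.g. from 2D feature points).
obs_idx = rng.choice(n_dims, size=30, replace=False)
true_coeffs = rng.normal(size=n_faces) * np.sqrt(eigvals)
observed = mean_shape[obs_idx] + components[:, obs_idx].T @ true_coeffs

# Tikhonov-regularised least squares for the model coefficients:
# minimise ||A c - b||^2 + lam ||c||^2  =>  c = (A^T A + lam I)^{-1} A^T b
A = components[:, obs_idx].T
b = observed - mean_shape[obs_idx]
lam = 1e-2                           # regularisation weight (assumed)
coeffs = np.linalg.solve(A.T @ A + lam * np.eye(n_faces), A.T @ b)

# Reconstruct the full 3D shape from the recovered coefficients.
reconstructed = mean_shape + components.T @ coeffs
print(reconstructed.shape)  # (150,)
```

The regulariser keeps the solution well-posed when far fewer coordinates are observed than the model has dimensions.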

    Reconstructing 3D face shapes from single 2D images using an adaptive deformation model

    The Representational Power (RP) of an example-based model is its capability to depict a new 3D face for a given 2D face image. In this contribution, a novel approach is proposed to increase the RP of the PCA-based 3D reconstruction model by deforming a set of examples in the training dataset. By adding these deformed samples to the original training samples we gain more RP. A PCA-based 3D model is adapted for each new input face image by deforming 3D faces in the training dataset. This adapted model is used to reconstruct the 3D face shape for the given near-frontal input 2D face image. Our experimental results confirm that the proposed adaptive model considerably improves the RP of the conventional PCA-based model.

    Enhancing Polyp Segmentation Generalizability by Minimizing Images' Total Variation

    Polyps are considered a precursor of colon cancer, and early detection of polyps may help decrease the mortality rate. Several deep learning models have been proposed to address the problem, however with limited generalizability due to the scarcity of the current public datasets. To tackle the issue, researchers typically use data augmentation techniques or generative models to inflate the training images, independent of a downstream learning task. In this paper, we propose a deep learning framework to jointly train an image transformation model with a segmentation model, where the output of the former is the input of the latter. During training, the image transformation model generates variations of the input image at every epoch, implicitly increasing the training data size for the segmentation model. In addition, we design a total variation denoising cost for the image transformation model, which effectively ensures that a transformation applied to an input image works towards the segmentation and not towards random effects that may hurt the segmentation goal. The experimental results with different settings demonstrate that the proposed framework consistently improves polyp IoU by approximately 1% to 10% on unseen test images.
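The abstract does not specify the exact form of the denoising cost, but the standard discrete (anisotropic) total variation it most likely builds on can be sketched as follows; the function name and the toy images are illustrative assumptions.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation of a 2D image: the sum of absolute
    differences between neighbouring pixels along rows and columns.
    Penalising this term discourages noisy, high-frequency transformations."""
    dh = np.abs(np.diff(img, axis=0)).sum()   # vertical differences
    dw = np.abs(np.diff(img, axis=1)).sum()   # horizontal differences
    return dh + dw

# A constant image has zero total variation; adding noise raises it.
flat = np.ones((8, 8))
noisy = flat + np.random.default_rng(1).normal(scale=0.1, size=(8, 8))
print(total_variation(flat))       # 0.0
print(total_variation(noisy) > 0)  # True
```

In the joint framework described above, such a term would be added to the segmentation loss so that the transformation model is rewarded for smooth, structure-preserving changes rather than arbitrary pixel perturbations.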

    Effect of facial feature points selection on 3D face shape reconstruction using regularization

    This paper aims to test the regularized 3D face shape reconstruction algorithm to find out how feature point selection affects the accuracy of 3D face reconstruction based on the PCA model. A case study on the USF Human ID 3D database has been used to study this effect. We found that, if the test face is from the training set, then any set of feature points whose number is greater than or equal to the number of training faces can reconstruct the exact 3D face. If the test face does not belong to the training set, the exact 3D face can hardly be reconstructed using 3D PCA-based models. However, an approximate face shape can be reconstructed, depending on the number of feature points and the weighting factor. Furthermore, the accuracy of reconstruction with a large number of feature points (> 150) is nearly the same in all cases, even with different locations of points on the face. The regularized algorithm has also been tested to

    Quantitative analysis on PCA-based statistical 3D face shape modeling.

    Principal Component Analysis (PCA)-based statistical 3D face modeling using example faces is a popular technique for modeling 3D faces and has been widely used for 3D face reconstruction and face recognition. The capability of the model to depict a new 3D face depends on the exemplar faces in the training set. Although a few 3D face databases are available to the research community and have been used for 3D face modeling, little work has been done on rigorous statistical analysis of the models built from these databases. The factors of general concern are the size of the training set and the choice of examples in the training set. In this paper, a case study on the USF Human ID 3D database, one of the most popular databases in the field, has been used to study the effect of these factors on the representational power. We found that: 1) as the size of the training set increases, the
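One common way to quantify representational power, consistent with the description above, is the residual a new face leaves after projection onto the training shape space. The sketch below uses random toy "faces" rather than any real database, and the function name is an assumption.

```python
import numpy as np

def rp_error(train, test):
    """Reconstruction error of `test` under a PCA model built from `train`:
    project the centred test face onto the training shape basis and
    measure the norm of the residual."""
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    centred = test - mean
    coeffs = Vt @ centred
    return np.linalg.norm(centred - Vt.T @ coeffs)

rng = np.random.default_rng(2)
faces = rng.normal(size=(20, 90))   # 20 toy "faces", 30 vertices each

# A face drawn from the training set is reproduced (almost) exactly ...
assert rp_error(faces, faces[0]) < 1e-6
# ... while an unseen face generally leaves a nonzero residual.
errs = [rp_error(faces[:k], faces[-1]) for k in (5, 10, 19)]
print(errs)  # the residual tends to shrink as the training set grows
```

This mirrors the finding quoted in the abstract: training faces are depicted exactly, while unseen faces are only approximated, with quality governed by the training set.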

    Employing GRU to combine feature maps in DeeplabV3 for a better segmentation model

    In this paper, we aim to enhance the segmentation capabilities of DeeplabV3 by employing a Gated Recurrent Neural Network (GRU). A 1-by-1 convolution in DeeplabV3 was replaced by a GRU after the Atrous Spatial Pyramid Pooling (ASPP) layer to combine the input feature maps. The convolution and the GRU have shareable parameters; however, the latter has gates that enable or disable the contribution of each input feature map. The experiments on unseen test sets demonstrate that employing a GRU instead of a convolution produces better segmentation results. The datasets used are public datasets provided by the MedAI competition.
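The abstract leaves the layer details unspecified, but the core idea of gating each feature map's contribution can be illustrated with a minimal per-pixel GRU that treats each input feature map as one "timestep". The scalar gate weights and toy sizes below are assumptions for illustration, not the paper's actual layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_combine(feature_maps, Wz, Wr, Wh, Uz, Ur, Uh):
    """Fold a sequence of feature maps (each of shape (H, W)) into one map
    with a per-pixel GRU: every map is one timestep, and the update/reset
    gates decide how much of each map enters the running combination."""
    h = np.zeros_like(feature_maps[0])        # initial hidden state
    for x in feature_maps:
        z = sigmoid(Wz * x + Uz * h)          # update gate
        r = sigmoid(Wr * x + Ur * h)          # reset gate
        h_tilde = np.tanh(Wh * x + Uh * (r * h))
        h = (1 - z) * h + z * h_tilde         # gated combination
    return h

rng = np.random.default_rng(3)
maps = [rng.normal(size=(4, 4)) for _ in range(5)]  # e.g. ASPP branch outputs
combined = gru_combine(maps, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5)
print(combined.shape)  # (4, 4)
```

A plain 1-by-1 convolution would mix the five maps with fixed weights; the gates here make the mixing input-dependent, which is the advantage the abstract points to.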

    A functional pipeline framework for landmark identification on 3D surface extracted from volumetric data.

    Landmarks, also known as feature points, are among the important geometry primitives that describe the predominant characteristics of a surface. In this study we propose a self-contained framework to generate landmarks on surfaces extracted from volumetric data. The framework is designed as a three-fold pipeline comprising three phases: surface construction, crest line extraction and landmark identification. With volumetric data as input and landmarks as output, the pipeline takes in 3D raw data and produces 0D geometry features. In each phase we investigate existing methods, then extend and tailor them to fit the pipeline design. The pipeline is functional in that it is modularised to have a dedicated function in each phase. We extended the implicit surface polygonizer for surface construction in the first phase, developed an alternative way to compute the gradient of maximal curvature for crest line extraction in the second phase, and finally combined curvature information with the K-means clustering method to identify the landmarks in the third phase. The implementations were first carried out in a controlled environment, i.e. on synthetic data, as a proof of concept. The method was then tested on a small-scale dataset and subsequently on a large dataset. Issues and justifications are addressed accordingly for each phase.
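The third phase combines curvature information with K-means clustering; one plausible reading is sketched below: keep high-curvature crest-line samples and take cluster centroids as landmarks. The curvature stand-in, threshold, and cluster count are all assumptions for illustration.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-means over 3D points: returns k cluster centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(
            ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

# Hypothetical crest-line samples: 3D points with a maximal-curvature value.
rng = np.random.default_rng(4)
points = rng.normal(size=(200, 3))
curvature = np.abs(points[:, 0])      # stand-in for the real curvature measure

# Keep only high-curvature points, then cluster them; the cluster
# centroids serve as the landmark candidates.
crest = points[curvature > 1.0]
landmarks = kmeans(crest, k=4)
print(landmarks.shape)  # (4, 3)
```

Clustering reduces a dense 1D crest-line sampling to a handful of 0D points, matching the pipeline's "3D raw data in, 0D geometry features out" description.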