Facial 3D Model Registration Under Occlusions With SensiblePoints-based Reinforced Hypothesis Refinement
Registering a 3D facial model to a 2D image under occlusion is difficult.
First, not all of the detected facial landmarks are accurate under occlusions.
Second, the number of reliable landmarks may not be enough to constrain the
problem. We propose a method to synthesize additional points (SensiblePoints)
to create pose hypotheses. The visual clues extracted from the fiducial points,
non-fiducial points, and facial contour are jointly employed to verify the
hypotheses. We define a reward function to measure whether the projected dense
3D model is well-aligned with the confidence maps generated by two fully
convolutional networks, and use the function to train recurrent policy networks
to move the SensiblePoints. The same reward function is employed in testing to
select the best hypothesis from a candidate pool. Experiments demonstrate that
the proposed approach is promising for solving the facial model registration
problem under occlusion.
Comment: Accepted in International Joint Conference on Biometrics (IJCB) 201
Convolutional Point-set Representation: A Convolutional Bridge Between a Densely Annotated Image and 3D Face Alignment
We present a robust method for estimating the facial pose and shape
information from a densely annotated facial image. The method relies on
Convolutional Point-set Representation (CPR), a carefully designed matrix
representation to summarize different layers of information encoded in the set
of detected points in the annotated image. The CPR disentangles the
dependencies of shape and different pose parameters and enables updating
different parameters in a sequential manner via convolutional neural networks
and recurrent layers. When updating the pose parameters, we sample reprojection
errors along with a predicted direction and update the parameters based on the
pattern of reprojection errors. This technique improves the model's ability to
find a local minimum under challenging scenarios. We also demonstrate that
annotation from different sources can be merged under the framework of CPR and
contributes to outperforming the current state-of-the-art solutions for 3D face
alignment. Experiments indicate that the proposed CPRFA (CPR-based Face
Alignment) significantly improves 3D alignment accuracy when the densely
annotated image contains noise and missing values, which is common under
"in-the-wild" acquisition scenarios.
Comment: Preprint Submitte
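The pose-update step described above, sampling reprojection errors along a predicted direction, could be sketched as below. This is a hedged illustration: the function names, the fixed step grid, and picking the lowest-error step are assumptions for clarity; the paper instead feeds the pattern of sampled errors into recurrent layers to predict the update.

```python
import numpy as np

def sample_and_step(theta, direction, reproj_error,
                    steps=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """Probe the reprojection error along a predicted direction.

    theta: current pose-parameter vector. direction: predicted
    update direction (same shape as theta). reproj_error: callable
    returning a scalar error for a parameter vector. Returns the
    updated parameters and the sampled error pattern.
    """
    theta = np.asarray(theta, dtype=float)
    direction = np.asarray(direction, dtype=float)
    errs = np.array([reproj_error(theta + s * direction) for s in steps])
    # Simplification: take the step with the lowest sampled error.
    best = steps[int(np.argmin(errs))]
    return theta + best * direction, errs
```

The error pattern `errs` is what a learned model would consume to decide the actual update magnitude.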