3 research outputs found
Towards Automation and Human Assessment of Objective Skin Quantification
The goal of this study is to provide an objective criterion for computerised
skin quality assessment. Human judgements of faces are influenced by a
variety of facial features. Using eye-tracking technology to gain a better
understanding of human visual behaviour, this research examined the influence
of facial characteristics on skin evaluation and age estimation. The results
revealed that individuals perform well at age estimation when facial features
are visible. This research also examines the performance of machine learning
algorithms on various skin attributes, comparing a traditional machine
learning technique with deep learning approaches: Support Vector Machines
(SVMs) and Convolutional Neural Networks (CNNs) were evaluated as
classifiers, with CNNs outperforming SVMs. The primary difficulty in training
deep learning algorithms is the need for large-scale datasets. This thesis
therefore proposes two high-resolution face datasets to address the research
community's need for face images with which to study face and skin quality.
Additionally, a study of machine-generated skin patches produced with
Generative Adversarial Networks (GANs) is conducted. Dermatologists assessed
the machine-generated images by attempting to distinguish fake images from
real ones; only 38% correctly distinguished the real images from the fake.
Lastly, human perception and machine prediction are compared using heat-maps
from the eye-tracking experiment and machine learning predictions on age
estimation. The findings indicate that humans and machines predict in a
similar manner.
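The SVM baseline mentioned above can be sketched in a few lines. This is a
minimal, illustrative example using scikit-learn's digits data as a stand-in
for the skin-attribute features, since the thesis data and labels are not
reproduced here; the kernel and parameter choices are common defaults, not
the settings used in the study.

```python
# Hedged sketch: an SVM classifier of the kind compared against CNNs in
# the study, trained on stand-in data (scikit-learn's digits dataset).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")  # common default choices
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # held-out classification accuracy
```

In the study's comparison, a CNN trained on the same task would replace
`clf` with a learned convolutional feature extractor plus classifier head.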
Context-aware Facial Inpainting with GANs
Facial inpainting is a difficult problem due to the complex structural patterns of a face image. Using irregular hole masks to generate contextualised features in a face image is becoming increasingly important in image inpainting. Existing methods generate images using deep learning models, but aberrations persist. This is because key operations required for feature information dissemination, such as feature extraction mechanisms, feature propagation, and feature regularizers, are frequently overlooked or ignored during the design stage. A comprehensive review is conducted to examine existing methods and identify the research gaps that serve as the foundation for this thesis.
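The irregular hole masks mentioned above can be illustrated with a toy
preprocessing step. The random mask below is a hypothetical stand-in for the
free-form masks used in the inpainting literature; it only shows how a
binary mask zeroes out the regions a model must reconstruct.

```python
import numpy as np

# Toy image and irregular binary mask (1 = visible pixel, 0 = hole).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
mask = (rng.random((64, 64)) > 0.3).astype(np.uint8)

# Masked input for an inpainting model: holes become zero-valued pixels,
# visible regions are passed through unchanged.
masked = image * mask[..., None]
```

An inpainting network is then trained to predict the original `image` from
`masked` (and usually the mask itself), with losses encouraging consistency
between generated holes and the visible context.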
The aim of this thesis is to develop novel facial inpainting algorithms with the capability of extracting contextualised features. First, Symmetric Skip Connection Wasserstein GAN (SWGAN) is proposed to inpaint high-resolution face images that are perceptually consistent with the rest of the image. Second, a Perceptual Adversarial Network (RMNet) is proposed to include feature extraction and feature propagation mechanisms that target missing regions while preserving visible ones. Third, a foreground-guided facial inpainting method is proposed with occlusion reasoning capability, which guides the model toward learning contextualised feature extraction and propagation while maintaining fidelity. Fourth, V-LinkNet is proposed, which takes into account the critical operations for information dissemination. Additionally, a standard protocol is introduced to prevent potential biases in the performance evaluation of facial inpainting algorithms.
The experimental results show that V-LinkNet achieved the best results, with an SSIM of 0.96 under the standard protocol. In conclusion, generating facial images with contextualised features is important to achieve realistic results in inpainted regions. Additionally, it is critical to follow the standard protocol when comparing different approaches. Finally, this thesis outlines new insights and future directions for image inpainting.
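The SSIM metric used in this evaluation can be illustrated with a
simplified implementation. The sketch below uses whole-image (global)
statistics rather than the local Gaussian windows of the reference
definition, and the name `ssim_global` is introduced here for illustration;
it is not the evaluation code from the thesis.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Simplified SSIM from whole-image statistics. The reference
    definition computes SSIM over local windows and averages the map;
    this global variant only illustrates the formula's structure."""
    c1 = (0.01 * data_range) ** 2  # stabilising constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
identical = ssim_global(img, img)       # 1.0 for identical images
degraded = ssim_global(img, 255 - img)  # much lower for a distorted image
```

An SSIM of 0.96, as reported for V-LinkNet, therefore indicates inpainted
images that are structurally very close to the ground truth.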