
    Prominent Attribute Modification using Attribute Dependent Generative Adversarial Network

    Modifying facial images with desired attributes is an important yet challenging task in computer vision, where the aim is to modify single or multiple attributes of a face image. Existing methods are either attribute-independent approaches, where the modification is done in the latent representation, or attribute-dependent approaches. The attribute-independent methods are limited in performance as they require the desired paired data for changing the desired attributes. Moreover, the attribute-independent constraint may result in a loss of information and hence fail to generate the required attributes in the face image. In contrast, attribute-dependent approaches are effective, as they are capable of modifying the required features while preserving the information in the given image. However, attribute-dependent approaches are sensitive and require careful model design to generate high-quality results. To address this problem, we propose an attribute-dependent face modification approach. The proposed approach is based on two generators and two discriminators that utilize the binary as well as the real-valued representation of the attributes and, in return, generate high-quality attribute modification results. Experiments on the CelebA dataset show that our method effectively performs multiple-attribute editing while preserving other facial details intact.
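    The two-generator/two-discriminator idea described above can be pictured with a minimal PyTorch sketch: each generator edits an image conditioned on an attribute code (binary for one branch, real-valued for the other), and each discriminator scores realism while also predicting attributes. Layer sizes, the 13-attribute code, and all names here are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of an attribute-conditioned editing setup with two
# generator/discriminator pairs, one fed binary attribute codes and one fed
# real-valued codes. All sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encode an image, concatenate the target attribute code, decode an edited image."""
    def __init__(self, n_attrs: int):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128 + n_attrs, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x, attrs):
        h = self.enc(x)                                            # (B, 128, H/4, W/4)
        a = attrs[:, :, None, None].expand(-1, -1, h.size(2), h.size(3))
        return self.dec(torch.cat([h, a], dim=1))

class Discriminator(nn.Module):
    """Real/fake score plus an attribute-prediction head."""
    def __init__(self, n_attrs: int, img_size: int = 128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat = 128 * (img_size // 4) ** 2
        self.adv = nn.Linear(feat, 1)        # real vs. fake
        self.att = nn.Linear(feat, n_attrs)  # predicted attributes

    def forward(self, x):
        h = self.body(x)
        return self.adv(h), self.att(h)

n_attrs, img = 13, 128
g_bin, g_real = Generator(n_attrs), Generator(n_attrs)            # binary / real-valued branches
d_bin, d_real = Discriminator(n_attrs, img), Discriminator(n_attrs, img)

x = torch.randn(4, 3, img, img)
binary_code = torch.randint(0, 2, (4, n_attrs)).float()           # e.g. "smiling" on/off
real_code = torch.rand(4, n_attrs)                                 # attribute intensities
fake_bin, fake_real = g_bin(x, binary_code), g_real(x, real_code)
adv_b, attr_b = d_bin(fake_bin)
adv_r, attr_r = d_real(fake_real)
print(fake_bin.shape, adv_b.shape, attr_b.shape, adv_r.shape, attr_r.shape)
```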

    SCUT-FBP5500: A Diverse Benchmark Dataset for Multi-Paradigm Facial Beauty Prediction

    Facial beauty prediction (FBP) is a significant visual recognition problem that aims to assess facial attractiveness in a way consistent with human perception. To tackle this problem, various data-driven models, especially state-of-the-art deep learning techniques, have been introduced, and benchmark datasets have become one of the essential elements for achieving FBP. Previous works have formulated the recognition of facial beauty as a specific supervised learning problem of classification, regression or ranking, which indicates that FBP is intrinsically a computational problem with multiple paradigms. However, most FBP benchmark datasets were built under specific computational constraints, which limits the performance and flexibility of the computational models trained on them. In this paper, we argue that FBP is a multi-paradigm computation problem, and propose a new diverse benchmark dataset, called SCUT-FBP5500, to achieve multi-paradigm facial beauty prediction. The SCUT-FBP5500 dataset contains 5500 frontal faces in total, with diverse properties (male/female, Asian/Caucasian, ages) and diverse labels (face landmarks, beauty scores within [1, 5], beauty score distribution), which allows different computational models with different FBP paradigms, such as appearance-based or shape-based facial beauty classification/regression models for male/female Asian/Caucasian subjects. We evaluated the SCUT-FBP5500 dataset for FBP using different combinations of features and predictors, as well as various deep learning methods. The results indicate the improvement of FBP and the potential applications based on SCUT-FBP5500.
    Comment: 6 pages, 14 figures, conference paper
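    A small NumPy sketch can make the multi-paradigm point concrete: one table of per-face ratings yields a regression target (mean score), a classification target (binned score), ranking pairs, and a score distribution, with Pearson correlation as the usual regression metric. The rater count, shapes, and variable names below are assumptions for illustration, not details of the SCUT-FBP5500 release.

```python
# Toy illustration (NumPy only) of deriving targets for the three FBP paradigms
# named in the abstract -- classification, regression, and ranking -- from the
# same per-face rating distribution. The 60-rater setup is assumed.
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_raters = 8, 60
ratings = rng.integers(1, 6, size=(n_faces, n_raters))    # scores in [1, 5]

# Regression target: mean beauty score per face.
mean_score = ratings.mean(axis=1)

# Classification target: discretize the mean score into 5 beauty classes.
class_label = np.clip(np.round(mean_score), 1, 5).astype(int)

# Ranking target: pairwise preferences (face i preferred over face j).
pairs = [(i, j) for i in range(n_faces) for j in range(n_faces)
         if mean_score[i] > mean_score[j]]

# Label distribution (for distribution-learning models): histogram over 1..5.
score_dist = np.stack([np.bincount(r, minlength=6)[1:] / n_raters for r in ratings])

# Pearson correlation, the metric typically reported for FBP regression.
pred = mean_score + rng.normal(0, 0.3, size=n_faces)       # stand-in model output
pearson = np.corrcoef(pred, mean_score)[0, 1]
print(class_label, len(pairs), score_dist.shape, round(pearson, 3))
```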

    Some like it hot - visual guidance for preference prediction

    For people, first impressions of someone are of decisive importance, and they are hard to alter through further information. This raises the question of whether a computer can reach the same judgement. Earlier research has already shown that age, gender, and average attractiveness can be estimated with reasonable precision. We improve on the state of the art, but also predict - based on someone's known preferences - how much that particular person is attracted to a novel face. Our computational pipeline comprises a face detector, convolutional neural networks for the extraction of deep features, standard support vector regression for gender, age and facial beauty, and - as the main novelties - visually regularized collaborative filtering to infer inter-person preferences as well as a novel regression technique for handling visual queries without rating history. We validate the method using a very large dataset from a dating site as well as images of celebrities. Our experiments yield convincing results: we predict 76% of the ratings correctly based solely on an image, and reveal some sociologically relevant conclusions. We also validate our collaborative filtering solution on the standard MovieLens rating dataset, augmented with movie posters, to predict an individual's movie rating. We demonstrate our algorithms on howhot.io, which went viral around the Internet with more than 50 million pictures evaluated in the first month.
    Comment: accepted for publication at CVPR 201
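    The visually regularized collaborative filtering component can be sketched as matrix factorization in which item (face) factors are pulled toward a linear projection of deep visual features, so a face with no rating history can still be scored from its features alone. The following NumPy sketch uses assumed dimensions, hyperparameters, and an ordinary gradient loop; it illustrates the general idea, not the paper's exact algorithm.

```python
# Sketch of visually regularized collaborative filtering: item factors V are
# regularized toward a projection F @ W of each item's visual features, so a
# new face without ratings can be scored via W.T @ f. All sizes and
# hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, k, d = 50, 40, 8, 32
R = (rng.random((n_users, n_items)) * 4 + 1) * (rng.random((n_users, n_items)) < 0.3)  # sparse 1-5 ratings
F = rng.normal(size=(n_items, d))           # deep visual features per face (e.g. CNN activations)

U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))
W = rng.normal(scale=0.1, size=(d, k))      # maps visual features into the latent item space
lr, lam, gamma = 0.01, 0.05, 0.5
mask = R > 0

for _ in range(200):
    E = mask * (R - U @ V.T)                # error on observed ratings only
    U += lr * (E @ V - lam * U)
    V += lr * (E.T @ U - lam * V - gamma * (V - F @ W))
    W += lr * (gamma * F.T @ (V - F @ W) / n_items - lam * W)

# Cold-start query: a new face with visual features but no rating history.
f_new = rng.normal(size=d)
scores_for_new_face = U @ (W.T @ f_new)     # predicted preference of every user
print(scores_for_new_face.shape)
```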

    Revealing the Shopper Experience of Using a "Magic Mirror" Augmented Reality Make-Up Application

    Virtual try-ons have recently emerged as a new form of Augmented Reality application. Using motion capture techniques, such apps show virtual elements like make-up or accessories superimposed over the real image of a person, as if they were actually wearing them. However, there is as yet little understanding of their value in providing a viable experience. We report on an in-situ study observing how shoppers approach and respond to such a "Magic Mirror" in a store. Our findings show that, after the initial surprise, the virtual try-on resulted in much exploration as shoppers looked at themselves on a display integrated into the make-up counter. Behavior tracking data from interactions with the mirror supported this. Moreover, survey data measuring perceptions of augmentation as well as the hedonic and utilitarian value of the app suggested that the augmented experience was perceived as playful and credible, while also acting as a strong driver of future behavior. We discuss the opportunities and challenges that such technology brings for shopping and other domains.

    Asian female facial beauty prediction using deep neural networks via transfer learning and multi-channel feature fusion

    Facial beauty plays an important role in many fields today, such as digital entertainment and facial beautification surgery. However, the facial beauty prediction task faces the challenges of insufficient training data and the low performance of traditional methods, and it rarely takes advantage of the feature-learning ability of Convolutional Neural Networks. In this paper, a transfer-learning-based CNN method that integrates multi-channel features is utilized for the Asian female facial beauty prediction task. Firstly, a Large-Scale Asian Female Beauty Dataset (LSAFBD) with a more reasonable distribution has been established. Secondly, in order to improve the CNN's self-learning ability for the facial beauty prediction task, an effective CNN using a novel Softmax-MSE loss function and a double activation layer has been proposed. Then, a data augmentation method and a transfer learning strategy were utilized to mitigate the impact of insufficient data on the proposed CNN's performance. Finally, a multi-channel feature fusion method was explored to further optimize the proposed CNN model. Experimental results show that the proposed method is superior to traditional learning methods on the Asian female FBP task. Compared with other state-of-the-art CNN models, the proposed CNN model improves the rank-1 recognition rate from 60.40% to 64.85% and the Pearson correlation coefficient from 0.8594 to 0.8829 on the LSAFBD, and obtains a regression prediction result of 0.9200 on the SCUT dataset.
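    The abstract does not define the Softmax-MSE loss or the multi-channel fusion in detail, so the sketch below shows one plausible reading in PyTorch: MSE between the softmax output and a one-hot beauty class, and a fusion module that concatenates features from one small CNN per input channel. Both should be read as assumptions for illustration rather than the paper's actual design.

```python
# One plausible reading of a "Softmax-MSE" loss and multi-channel feature
# fusion, written as PyTorch modules. These are illustrative assumptions,
# not the published definitions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftmaxMSELoss(nn.Module):
    def forward(self, logits, target_class):
        probs = F.softmax(logits, dim=1)                    # class probabilities
        one_hot = F.one_hot(target_class, num_classes=logits.size(1)).float()
        return F.mse_loss(probs, one_hot)                   # MSE against one-hot labels

class MultiChannelFusion(nn.Module):
    """Run one small CNN per input channel and concatenate the resulting features."""
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_channels)
        ])
        self.head = nn.Linear(8 * n_channels, n_classes)

    def forward(self, x):                                   # x: (B, n_channels, H, W)
        feats = [b(x[:, i:i + 1]) for i, b in enumerate(self.branches)]
        return self.head(torch.cat(feats, dim=1))

model, criterion = MultiChannelFusion(n_channels=3, n_classes=5), SoftmaxMSELoss()
imgs = torch.randn(4, 3, 64, 64)                            # e.g. RGB treated as three channels
labels = torch.randint(0, 5, (4,))                          # beauty classes 0..4
loss = criterion(model(imgs), labels)
loss.backward()
print(float(loss))
```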

    Wow you are so beautiful today

    DOI: 10.1145/2502081.2502258. MM 2013 - Proceedings of the 2013 ACM Multimedia Conference, pp. 437-43