
    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. There are many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, where low-cost interactive face modeling is a popular approach, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions that let users further manipulate the initial face models. Both user studies and numerical results indicate that our sketching system helps users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques.
    Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017
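
    To make the described architecture concrete, the following is a minimal PyTorch sketch in the spirit of this network: CNN features of the rasterized sketch are fused with hand-crafted shape features, and two independent fully connected branches produce identity and expression coefficients that a bilinear core tensor turns into 3D vertices. All layer sizes, coefficient dimensions, and the core tensor itself are illustrative assumptions, not the paper's actual configuration.

        # Minimal sketch (PyTorch); names, layer sizes, and dimensions are assumptions.
        import torch
        import torch.nn as nn

        class SketchToFaceRegressor(nn.Module):
            def __init__(self, shape_feat_dim=64, n_id=50, n_expr=25, n_vertices=5000):
                super().__init__()
                self.n_vertices = n_vertices
                # CNN features of the rasterized 2D sketch
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                )
                fused_dim = 64 * 4 * 4 + shape_feat_dim
                # Two independent fully connected branches for the two coefficient subsets
                self.id_branch = nn.Sequential(
                    nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_id))
                self.expr_branch = nn.Sequential(
                    nn.Linear(fused_dim, 256), nn.ReLU(), nn.Linear(256, n_expr))
                # Bilinear face representation: a core tensor contracts identity and
                # expression coefficients into 3D vertex positions
                self.core = nn.Parameter(0.01 * torch.randn(n_vertices * 3, n_id, n_expr))

            def forward(self, sketch_img, shape_feats):
                # Fuse CNN features with hand-crafted shape features of the sketch
                fused = torch.cat([self.cnn(sketch_img), shape_feats], dim=1)
                w_id = self.id_branch(fused)      # identity coefficients
                w_expr = self.expr_branch(fused)  # expression coefficients
                verts = torch.einsum('vie,bi,be->bv', self.core, w_id, w_expr)
                return verts.view(-1, self.n_vertices, 3), w_id, w_expr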

    Improving Landmark Localization with Semi-Supervised Learning

    We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are provided only for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization in which training with class labels acts as an auxiliary signal that guides landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report a new state of the art on two datasets in the wild; for example, with only 5% of labeled images we outperform the previous state of the art trained on the AFLW dataset.
    Comment: Published as a conference paper at CVPR 2018
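
    The second, equivariance-based technique can be illustrated with a short unsupervised loss: landmarks predicted on a warped image, mapped back through the known transform, should agree with landmarks predicted on the original image. The PyTorch sketch below assumes a hypothetical landmark_net that returns normalized (x, y) coordinates and uses affine warps; it is an illustration under those assumptions, not the paper's exact formulation.

        # Equivariance-style consistency loss on unlabeled images (sketch; see assumptions above).
        import torch
        import torch.nn.functional as F

        def equivariance_loss(landmark_net, images, thetas):
            """images: (B, C, H, W); thetas: (B, 2, 3) affine matrices in normalized coords."""
            pts = landmark_net(images)            # (B, K, 2) landmarks on the originals
            # Warp the images with the same affine transforms
            grid = F.affine_grid(thetas, images.size(), align_corners=False)
            warped = F.grid_sample(images, grid, align_corners=False)
            pts_warped = landmark_net(warped)     # (B, K, 2) landmarks on the warped images
            # grid_sample convention: a point p in the warped image samples the original
            # at theta @ [p, 1], so mapping warped-image landmarks through theta should
            # reproduce the original-image landmarks
            ones = torch.ones(pts_warped.shape[0], pts_warped.shape[1], 1, device=pts_warped.device)
            mapped = torch.bmm(torch.cat([pts_warped, ones], dim=2), thetas.transpose(1, 2))
            return F.mse_loss(mapped, pts)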

    DESIGNING EYE TRACKING ALGORITHM FOR PARTNER-ASSISTED EYE SCANNING KEYBOARD FOR PHYSICALLY CHALLENGED PEOPLE

    The proposed research work focuses on building a keyboard by designing an algorithm for eye movement detection using the partner-assisted scanning technique. The study covers all stages of gesture recognition, from data acquisition to eye detection and tracking, and finally classification. Among the many techniques available for implementing the gesture recognition stages, the main objective of this work is to use a simple, inexpensive technique that produces the best possible results with a high level of accuracy. Finally, the results are compared with similar recent works to demonstrate the efficiency of the proposed algorithm. The system starts with a calibration phase, in which a face detection algorithm detects the user's face using a trained support vector machine. Features are then extracted, after which the eyes are tracked by skin-colour segmentation; several supporting operations were also performed. The overall system is a keyboard operated by eye movement through the partner-assisted scanning technique. A good level of accuracy was achieved, and several alternative methods were implemented and compared. This keyboard contributes to the research field a new combination of techniques for eye detection and tracking, and it helps bridge the gap between physical paralysis and leading a normal life. The system can serve as a point of comparison for other proposed eye detection algorithms and as evidence of the efficiency of combining several different techniques into one algorithm; it also strongly supports the effectiveness of machine learning and appearance-based algorithms.
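
    As an illustration of the skin-colour segmentation step, the OpenCV sketch below masks skin pixels inside a detected face region in YCrCb space and keeps the remaining upper-face blobs as eye candidates. The threshold values and the helper name locate_eye_regions are hypothetical choices for illustration, not the parameters used in this work.

        # Skin-colour segmentation inside a face ROI (sketch; thresholds are assumptions).
        import cv2
        import numpy as np

        def locate_eye_regions(face_roi_bgr):
            ycrcb = cv2.cvtColor(face_roi_bgr, cv2.COLOR_BGR2YCrCb)
            # Commonly used Cr/Cb skin range; tune for the camera and lighting
            lower = np.array([0, 133, 77], dtype=np.uint8)
            upper = np.array([255, 173, 127], dtype=np.uint8)
            non_skin = cv2.bitwise_not(cv2.inRange(ycrcb, lower, upper))
            # Keep only the upper half of the face, where the eyes lie
            non_skin[face_roi_bgr.shape[0] // 2:, :] = 0
            non_skin = cv2.morphologyEx(non_skin, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
            contours, _ = cv2.findContours(non_skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            # Bounding boxes of the two largest non-skin blobs: left and right eye candidates
            boxes = sorted((cv2.boundingRect(c) for c in contours),
                           key=lambda b: b[2] * b[3], reverse=True)[:2]
            return boxes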