
    DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling

    Face modeling has received much attention in the field of visual computing. Many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, call for low-cost interactive face modeling, especially among amateur users. In this paper, we propose a deep learning based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. Our network fuses both CNN and shape-based features of the input sketch, and has two independent branches of fully connected layers generating independent subsets of coefficients for a bilinear face representation. Our system also supports gesture-based interactions that let users further manipulate initial face models. Both user studies and numerical results indicate that our sketching system helps users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration is constructed to promote further research and evaluation of face modeling techniques.
    Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
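    The "two branches of coefficients for a bilinear face representation" can be sketched as follows. This is an illustrative NumPy reconstruction step only; the core tensor, its dimensions, and the coefficient vectors are all hypothetical stand-ins for the network's actual outputs.

    ```python
    import numpy as np

    # Assumed shapes: a bilinear face model stores a core tensor of shape
    # (n_vertices * 3, n_id, n_exp). A face mesh is recovered by contracting
    # the core tensor with an identity coefficient vector and an expression
    # coefficient vector (the two branch outputs in the paper's setup).
    rng = np.random.default_rng(0)
    n_verts, n_id, n_exp = 100, 50, 25
    core = rng.standard_normal((n_verts * 3, n_id, n_exp))

    id_coeffs = rng.standard_normal(n_id)    # branch 1 output (assumed)
    exp_coeffs = rng.standard_normal(n_exp)  # branch 2 output (assumed)

    # Contract over the identity and expression modes of the core tensor
    mesh = np.einsum('vij,i,j->v', core, id_coeffs, exp_coeffs)
    mesh = mesh.reshape(n_verts, 3)  # per-vertex 3D positions
    print(mesh.shape)  # (100, 3)
    ```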

    Neural networks based recognition of 3D freeform surface from 2D sketch

    In this paper, a Back Propagation (BP) network and a Radial Basis Function (RBF) neural network are employed to recognize and reconstruct 3D freeform surfaces from 2D freehand sketches. Tests and comparison experiments were conducted to evaluate both networks' reconstruction of freeform surfaces using simulation data. The experimental results show that both the BP- and RBF-based freeform surface reconstruction methods are feasible, and that the RBF network performs better: its average point error between the reconstructed and desired 3D surface data is less than 0.05 across all 75 test samples.
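    To illustrate RBF-based surface fitting, here is a minimal Gaussian-RBF interpolation of a height field from scattered samples. This is a generic sketch, not the paper's network: the kernel width, regularization, and test surface are all assumptions.

    ```python
    import numpy as np

    def rbf_fit(centers, values, sigma=1.0):
        # Solve Phi @ w = values, where Phi[i, j] = exp(-||c_i - c_j||^2 / (2 sigma^2))
        d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        phi = np.exp(-d2 / (2 * sigma ** 2))
        # Small ridge term keeps the system well-conditioned
        return np.linalg.solve(phi + 1e-8 * np.eye(len(centers)), values)

    def rbf_eval(centers, weights, pts, sigma=1.0):
        d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2)) @ weights

    # Fit the paraboloid z = x^2 + y^2 from 75 scattered (x, y) samples
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1, 1, size=(75, 2))
    z = (xy ** 2).sum(-1)
    w = rbf_fit(xy, z)
    err = np.abs(rbf_eval(xy, w, xy) - z).mean()
    print(err < 0.05)  # True: interpolation is near-exact at the sample points
    ```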

    Interpretation of overtracing freehand sketching for geometric shapes

    This paper presents a novel method for interpreting overtraced freehand sketches. The overtracing strokes are interpreted as sketch content and are used to generate 2D geometric primitives. The approach consists of four stages: stroke classification; stroke grouping and fitting; 2D tidy-up with endpoint clustering and parallelism correction; and in-context interpretation. Strokes are first classified into lines and curves by a linearity test. This is followed by an innovative stroke grouping process that handles lines and curves separately. The grouped strokes are fitted with 2D geometry and further tidied up with endpoint clustering and parallelism correction. Finally, in-context interpretation is applied to detect incorrect stroke interpretations based on geometric constraints and to suggest the most plausible correction based on the overall sketch context. This interpretation ensures that sketched strokes are turned into meaningful output. The interface overcomes the limitation of most existing sketching programs, in which only a single line drawing can be sketched, while being more intuitive to the user.
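    A linearity test of the kind used in the first stage can be sketched as follows. The specific criterion and threshold here (ratio of the stroke's PCA singular values) are assumptions for illustration; the paper's exact test may differ.

    ```python
    import numpy as np

    def is_line(points, tol=0.02):
        """Classify a stroke (N x 2 array) as a line when its off-axis
        extent is small relative to its extent along the principal axis."""
        pts = points - points.mean(axis=0)
        # Singular values measure spread along / perpendicular to the fit line
        s = np.linalg.svd(pts, compute_uv=False)
        return s[1] / s[0] < tol

    # A straight stroke and a semicircular arc stroke
    straight = np.column_stack([np.linspace(0, 1, 50), np.linspace(0, 2, 50)])
    t = np.linspace(0, np.pi, 50)
    arc = np.column_stack([np.cos(t), np.sin(t)])
    print(is_line(straight), is_line(arc))  # True False
    ```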

    Direct and gestural interaction with relief: A 2.5D shape display

    Actuated shape output provides novel opportunities for experiencing, creating, and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach uses an array of linear actuators to form 2.5D surfaces. By identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of freehand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.

    Freehand-Steering Locomotion Techniques for Immersive Virtual Environments: A Comparative Evaluation

    Virtual reality has achieved significant popularity in recent years, and allowing users to move freely within an immersive virtual world has become a critical requirement. The user's interactions are generally designed to increase perceived realism, but locomotion techniques, and how they affect the user's task performance, remain an open issue that is much discussed in the literature. In this article, we evaluate the efficiency, effectiveness, and user preferences of freehand locomotion techniques designed for an immersive virtual environment, performed through hand gestures tracked by a sensor placed in an egocentric position and experienced through a head-mounted display. Three freehand locomotion techniques were implemented and compared with each other, and with a baseline controller-based technique, through qualitative and quantitative measures. An extensive user study conducted with 60 subjects shows that the proposed methods perform comparably to the controller, and further reveals the users' preference for decoupling locomotion into sub-tasks, even at the cost of precision and of adapting the interaction to the capabilities of the tracking sensor.

    Understanding 3D mid-air hand gestures with interactive surfaces and displays: a systematic literature review

    3D gesture-based systems are becoming ubiquitous, and many mid-air hand gestures exist for interacting with digital surfaces and displays. There is no well-defined gesture set for 3D mid-air hand gestures, which makes it difficult to develop applications with consistent gestures. To understand what gestures exist, we conducted the first comprehensive systematic literature review on mid-air hand gestures, following existing research methods. The review identified 65 papers in which mid-air hand gestures supported selection, navigation, and manipulation tasks. We also classified the gestures according to a gesture classification scheme and identified how they have been empirically evaluated. The results provide a richer understanding of which mid-air hand gestures have been designed, implemented, and evaluated in the literature, which can help developers design better user experiences for digital interactive surfaces and displays.

    Usability Analysis of an off-the-shelf Hand Posture Estimation Sensor for Freehand Physical Interaction in Egocentric Mixed Reality

    This paper explores freehand physical interaction in egocentric Mixed Reality through a usability study of hand posture estimation sensors. We report precision, interactivity, and usability metrics from a task-based user study, exploring the importance of additional visual cues during interaction. A total of 750 interactions were recorded from 30 participants performing 5 interaction tasks (Move; Rotate: pitch (Y axis) and yaw (Z axis); Uniform scale: enlarge and shrink). Additional visual cues resulted in a shorter average time to interact; however, no consistent statistically significant differences were found between groups in performance and precision. The group with additional visual cues gave the system an average System Usability Scale (SUS) score of 72.33 (SD = 16.24), while the other group scored 68.0 (SD = 18.68). Overall, additional visual cues led to the system being perceived as more usable, even though the two conditions had limited effect on precision and interactivity metrics.
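    For reference, the SUS scores quoted above are computed with the standard System Usability Scale formula: ten 1-5 Likert responses, where odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 range. The example responses below are hypothetical.

    ```python
    def sus_score(responses):
        """Standard SUS scoring for ten 1-5 Likert responses."""
        assert len(responses) == 10
        # Index 0, 2, 4, ... are the odd-numbered items (1-indexed)
        total = sum(r - 1 if i % 2 == 0 else 5 - r
                    for i, r in enumerate(responses))
        return total * 2.5  # scale 0-40 raw sum to 0-100

    # Example: a fairly positive (hypothetical) respondent
    print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
    ```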