
    Sketching-out virtual humans: A smart interface for human modelling and animation

    In this paper, we present a fast and intuitive interface for sketching out 3D virtual humans and animation. The user draws stick-figure key frames first and chooses one for “fleshing-out” with freehand body contours. The system automatically constructs a plausible 3D skin surface from the rendered figure, and maps it onto the posed stick figures to produce the 3D character animation. A “creative model-based method” is developed, which performs a human perception process to generate 3D human bodies of various body sizes, shapes, and fat distributions. In this approach, an anatomical 3D generic model has been created with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface fitting to match the original 2D sketch. An auto-beautification function is also offered to regularise 3D asymmetrical bodies resulting from users’ imperfect figure sketches. Our current system delivers character animation in various forms, including articulated figure animation, 3D mesh model animation, 2D contour figure animation, and even 2D NPR animation with personalised drawing styles. The system has been formally tested by various users on a Tablet PC. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
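
    To make the three-stage template deformation above concrete, the following Python sketch illustrates one plausible shape for the pipeline of rigid morphing, fatness morphing, and surface fitting. All function names, array layouts, and the brute-force fitting step are illustrative assumptions, not the authors' implementation.

import numpy as np

def rigid_morph(template_joints, sketch_joints):
    # Align the template skeleton to the sketched stick figure with a global
    # similarity transform (uniform scale + translation), used here as a
    # stand-in for per-segment rigid morphing.
    t_c, s_c = template_joints.mean(axis=0), sketch_joints.mean(axis=0)
    scale = np.linalg.norm(sketch_joints - s_c) / np.linalg.norm(template_joints - t_c)
    return scale * (template_joints - t_c) + s_c

def fatness_morph(skin_vertices, fat_offsets, fatness):
    # Inflate or deflate the skin along per-vertex fat-layer offsets:
    # fatness ~ 1.0 keeps the generic model, > 1.0 adds girth, < 1.0 removes it.
    return skin_vertices + fatness * fat_offsets

def surface_fit(skin_vertices, contour_points, weight=0.5):
    # Pull each skin vertex part-way toward its nearest sketched contour point
    # (brute-force nearest neighbour, for clarity only). Both arrays are assumed
    # to live in the same coordinate frame and dimensionality.
    d = np.linalg.norm(skin_vertices[:, None, :] - contour_points[None, :, :], axis=-1)
    nearest = contour_points[d.argmin(axis=1)]
    return (1.0 - weight) * skin_vertices + weight * nearest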

    Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation

    Virtual beings are playing a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis through almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of various sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. This system has been formally tested by various users on a Tablet PC. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
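
    The motion path/timing control mentioned above can be pictured with a small, hypothetical Python helper that resamples a hand-drawn 2D path by arc length, so a character advances along it at constant speed for a chosen duration and frame rate. The function name and the uniform-speed assumption are ours, not the paper's.

import numpy as np

def resample_motion_path(stroke, duration_s, fps=24.0):
    # stroke: (N, 2) raw pen samples of the sketched path.
    # Returns (frames, 2): one position per output frame, evenly spaced by arc length.
    seg = np.diff(stroke, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    n_frames = max(int(round(duration_s * fps)), 2)
    targets = np.linspace(0.0, arclen[-1], n_frames)
    x = np.interp(targets, arclen, stroke[:, 0])
    y = np.interp(targets, arclen, stroke[:, 1])
    return np.column_stack([x, y])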

    Sketch-based virtual human modelling and animation

    Animated virtual humans created by skilled artists play a remarkable role in today’s public entertainment. However, ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. We developed a new method and a novel sketching interface, which enable anyone who can draw to “sketch out” 3D virtual humans and animation. We devised a “Stick Figure → Fleshing-out → Skin Mapping” graphical pipeline, which decomposes the complexity of figure drawing and considerably boosts modelling and animation efficiency. We developed a gesture-based method for 3D pose reconstruction from 2D stick figure drawings. We investigated a “Creative Model-based Method”, which performs a human perception process to transform users’ 2D freehand sketches into 3D human bodies of various body sizes, shapes, and fat distributions. Our current system supports character animation in various forms, including articulated figure animation, 3D mesh model animation, and 2D contour/NPR animation with personalised drawing styles. Moreover, this interface also supports sketch-based crowd animation and 2D storyboarding of 3D multi-character interactions. A preliminary user study was conducted to support the overall system design. Our system has been formally tested by various users on a Tablet PC. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
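
    As one illustration of lifting a 2D stick figure to a 3D pose (not necessarily the authors' gesture-based method), the sketch below applies the classic scaled-orthographic bone-length argument: the unknown depth change along each bone follows from its known length and its foreshortened 2D projection, up to a per-bone front/back ambiguity.

import numpy as np

def lift_stick_figure(joints_2d, bones, bone_lengths, scale=1.0, flips=None):
    # joints_2d    : (J, 2) sketched joint positions, root joint at index 0.
    # bones        : (parent, child) pairs ordered so each parent appears before its child.
    # bone_lengths : known 3D length of each bone, in the same order as `bones`.
    # flips        : optional +1/-1 per bone, resolving the front/back depth ambiguity.
    depths = np.zeros(len(joints_2d))
    flips = flips if flips is not None else [1] * len(bones)
    for (parent, child), length, sign in zip(bones, bone_lengths, flips):
        dx, dy = (joints_2d[child] - joints_2d[parent]) / scale
        dz_sq = max(length**2 - dx**2 - dy**2, 0.0)   # clamp noisy foreshortening
        depths[child] = depths[parent] + sign * np.sqrt(dz_sq)
    return np.column_stack([joints_2d / scale, depths])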

    A Survey of Sketch Based Modeling Systems


    Integrating Multiple Sketch Recognition Methods to Improve Accuracy and Speed

    Sketch recognition is the computer understanding of hand-drawn diagrams. Recognizing sketches instantaneously is necessary to build beautiful interfaces with real-time feedback. There are various techniques to quickly recognize sketches drawn from ten or twenty classes. However, for much larger datasets of sketches from a large number of classes, these existing techniques can take an extended period of time to accurately classify an incoming sketch and require significant computational overhead. Thus, to make classification of large datasets feasible, we propose using multiple stages of recognition. In the initial stage, gesture-based feature values are calculated and a trained model is used to classify the incoming sketch. Sketches recognized with a confidence below a threshold value go through a second stage of geometric recognition. In this second, geometric stage, the sketch is segmented and sent to shape-specific recognizers. The sketches are matched against predefined shape descriptions, and confidence values are calculated. The system outputs a list of classes that the sketch could be classified as, along with the accuracy and precision for each sketch. This process significantly reduces the time taken to classify such large datasets of sketches and increases both the accuracy and precision of the recognition.
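
    The staged recognition described above can be summarised by a small, hypothetical Python cascade: a fast gesture-feature classifier answers when it is confident, and shape-specific geometric recognizers are consulted only otherwise. The classifier, segmenter, and recognizer callables are placeholders supplied by the caller, not the paper's actual interfaces.

from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence, Tuple

@dataclass
class Result:
    label: str
    confidence: float

def recognize(sketch,
              feature_classifier: Callable[[object], Tuple[str, float]],
              segmenter: Callable[[object], Sequence],
              geometric_recognizers: Dict[str, Callable[[Sequence], float]],
              threshold: float = 0.8) -> List[Result]:
    # Stage 1: cheap gesture-based features plus a trained classifier.
    label, confidence = feature_classifier(sketch)
    if confidence >= threshold:
        return [Result(label, confidence)]
    # Stage 2: segment the strokes and match them against predefined shape
    # descriptions with shape-specific geometric recognizers.
    segments = segmenter(sketch)
    results = [Result(name, rec(segments))
               for name, rec in geometric_recognizers.items()]
    return sorted(results, key=lambda r: r.confidence, reverse=True)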
