7 research outputs found

    Identifying Perceptually Salient Features on 2D Shapes

    Maintaining the local style and scale of 2D shape features during deformation, such as when elongating, compressing, or bending a shape, is essential for interactive shape editing. A necessary first step toward this goal is a robust classification method able to detect salient shape features, ideally in a hierarchical manner. Our aim is to overcome the limitations of existing techniques, which do not always detect what a user immediately identifies as a shape feature. We therefore first conduct a user study to learn how shape features are perceived. We then propose and compare several algorithms, all based on the medial axis transform or similar skeletal representations, to identify relevant shape features from this perceptual viewpoint. We discuss the results of each algorithm and compare them with those of the user study, leading to a practical solution for computing hierarchies of salient features on 2D shapes.
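The idea of ranking shape parts by their skeletal radius can be illustrated with a small sketch. This is not the paper's algorithm, only an assumed, simplified stand-in: on a rasterized binary shape, local maxima of the distance-to-boundary field approximate medial-axis points, and sorting them by radius gives a crude saliency hierarchy (thicker parts first).

```python
import numpy as np

def distance_to_boundary(mask):
    """Brute-force Euclidean distance from each interior pixel of a
    binary mask to the nearest background pixel."""
    inside = np.argwhere(mask == 1)
    outside = np.argwhere(mask == 0)
    d = np.zeros(mask.shape)
    for y, x in inside:
        d[y, x] = np.sqrt(((outside - (y, x)) ** 2).sum(axis=1)).min()
    return d

def skeletal_maxima(dist):
    """Interior pixels whose distance value dominates their 8-neighbour
    patch: coarse medial-axis candidates, ranked by radius."""
    maxima = []
    h, w = dist.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if dist[y, x] > 0 and dist[y, x] >= dist[y-1:y+2, x-1:x+2].max():
                maxima.append((y, x, float(dist[y, x])))
    # Larger radius = thicker, more dominant part of the shape.
    return sorted(maxima, key=lambda m: -m[2])

# A 'T'-shaped mask: one wide bar plus a thinner stem.
mask = np.zeros((9, 9), dtype=int)
mask[1:4, 1:8] = 1   # horizontal bar (radius ~2)
mask[4:8, 4:6] = 1   # vertical stem (radius ~1)
ranked = skeletal_maxima(distance_to_boundary(mask))
print(ranked[0])     # the top-ranked skeletal point lies in the wide bar
```

The brute-force distance transform is quadratic and only suitable for tiny examples; a real implementation would use an exact medial axis of the boundary polygon, as the abstract suggests.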

    Figurines, a multimodal framework for tangible storytelling

    This paper presents Figurines, an offline framework for narrative creation with tangible objects, designed to record storytelling sessions with children, teenagers, or adults. The framework uses tangible diegetic objects to record a free narrative from up to two storytellers and construct a fully annotated representation of the story. This representation comprises the 3D position and orientation of the figurines, the position of decor elements, and an interpretation of the storytellers' actions (facial expressions, gestures, and voice). While maintaining the playful dimension of the storytelling session, the system must tackle the challenge of recovering the free-form motion of the figurines and the storytellers in uncontrolled environments. To do so, we record the storytelling session with a hybrid setup combining two RGB-D sensors and figurines augmented with IMU sensors. The first RGB-D sensor complements the IMU information to identify and track the figurines as well as the decor elements; it also tracks the storytellers jointly with the second RGB-D sensor. The framework has been used to record preliminary experiments validating the interest of our approach. These experiments evaluate figurine tracking and the combination of motion with the storytellers' voice, gestures, and facial expressions. In a make-believe game, this story representation was retargeted onto virtual characters to produce an animated version of the story. The final goal of the Figurines framework is to enhance our understanding of the creative processes at work during immersive storytelling.
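The complementarity between IMU and camera data mentioned above can be sketched with a toy complementary filter. All numbers and the filter itself are illustrative assumptions, not the paper's pipeline: the gyro gives smooth but drifting orientation, the RGB-D sensor gives absolute but intermittent fixes, and blending the two bounds the drift.

```python
# Toy complementary filter (assumed fusion scheme, not the paper's):
# fuse a figurine's gyro yaw rate with occasional absolute camera fixes.

def complementary_filter(gyro_rates, camera_yaws, dt=0.01, alpha=0.98):
    """gyro_rates: yaw rate (rad/s) per step; camera_yaws: absolute yaw
    (rad) per step, or None when the figurine is occluded."""
    yaw, trace = 0.0, []
    for rate, cam in zip(gyro_rates, camera_yaws):
        yaw += rate * dt                 # dead-reckon from the gyro
        if cam is not None:              # pull toward the camera fix
            yaw = alpha * yaw + (1 - alpha) * cam
        trace.append(yaw)
    return trace

# True rotation of 1 rad/s; gyro biased by +0.2 rad/s; the camera sees
# the true yaw only on every 5th frame (hypothetical occlusion pattern).
n, dt = 1000, 0.01
true = [1.0 * dt * (i + 1) for i in range(n)]
gyro = [1.2] * n
cam = [true[i] if i % 5 == 0 else None for i in range(n)]
fused = complementary_filter(gyro, cam, dt)
drift_raw = abs(1.2 * dt * n - true[-1])    # pure-gyro error: ~2 rad
drift_fused = abs(fused[-1] - true[-1])
print(drift_raw, drift_fused)               # fusion shrinks the drift
```

Even sparse camera fixes keep the fused estimate near the truth, which is the qualitative reason a single RGB-D sensor can "complement" per-figurine IMUs.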

    A system for creating virtual reality content from make-believe games

    Pretend play is a storytelling technique, naturally used from a very young age, which relies on object substitution to represent the characters of an imagined story. We propose a system that assists the storyteller by generating a virtualized story from a recorded dialogue performed with 3D-printed figurines. We capture the storyteller's gestures and facial expressions using Kinect cameras and IMU sensors and transfer them to their virtual counterparts in the story-world. As a proof of concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into a 3D animation.

    Speaking with smile or disgust: data and models

    This paper presents a preliminary analysis and modelling of facial motion-capture data recorded on a speaker uttering nonsense syllables and sentences with various acted facial expressions. We analyze the impact of facial expressions on articulation and determine the prediction errors of simple models trained to map neutral articulation to the targeted facial expressions. We show that the movements of some speech organs, such as the jaw and lower lip, are relatively unaffected by the facial expressions considered here (smile, disgust), while others, such as the movement of the upper lip or the jaw translation, are quite perturbed. We also show that these perturbations are not simply additive and that they depend on articulation.
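The claim that expression-induced perturbations "depend on articulation" can be made concrete on synthetic data (the data and the 1-D "lip aperture" variable below are invented for illustration, not the paper's corpus): when the perturbation scales with articulation, a purely additive offset model fits poorly, while an articulation-dependent affine map fits well.

```python
import numpy as np

rng = np.random.default_rng(0)
neutral = rng.uniform(-1, 1, size=(200, 1))   # synthetic "lip aperture"
# Ground truth: the expression rescales and shifts articulation,
# so the perturbation is NOT a constant additive offset.
smiled = 0.6 * neutral + 0.3 + rng.normal(0, 0.01, neutral.shape)

# Model A: additive offset only (expression = constant shift).
offset = (smiled - neutral).mean()
err_additive = np.abs(neutral + offset - smiled).mean()

# Model B: affine map fitted by least squares (perturbation may
# depend on articulation).
X = np.hstack([neutral, np.ones_like(neutral)])
coef, *_ = np.linalg.lstsq(X, smiled, rcond=None)
err_affine = np.abs(X @ coef - smiled).mean()

print(err_additive, err_affine)   # affine model fits much better
```

The residual gap between the two models is exactly the kind of evidence one would use to reject a "simply additive" account of expression effects.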

    Deformation Grammars: Hierarchical Constraint Preservation Under Deformation

    Deformation grammars are a novel procedural framework for sculpting hierarchical 3D models in an object-dependent manner. They process object deformations as symbols, interpreted through user-defined rules. We use them to define hierarchical deformation behaviors tailored to each model, enabling any sculpting gesture to be interpreted as an adapted, constraint-preserving deformation. A variety of object-specific constraints can be enforced within this framework, such as maintaining distributions of sub-parts, avoiding self-penetration, or meeting semantic, user-defined rules. The operations used to maintain constraints remain transparent to users, enabling them to focus on their design. We demonstrate the feasibility and versatility of this approach on a variety of examples, implemented within an interactive sculpting system.
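The "deformations as symbols, interpreted through user-defined rules" idea can be sketched as a toy rewriting system. The node structure, the `stretch` symbol, and the window rule below are all hypothetical, purely to show the shape of the mechanism: each node type may rewrite the incoming deformation symbol before applying it and passing it down the hierarchy.

```python
# Toy "deformation grammar" (assumed structure, not the paper's system):
# user-defined rules rewrite a deformation symbol per node type.

def apply(node, deformation, rules):
    """Interpret `deformation` at `node` via the rule for its type,
    then recurse into children with the (possibly rewritten) symbol."""
    kind, amount = deformation
    rule = rules.get(node["type"], lambda k, a: (k, a))  # default: pass through
    kind, amount = rule(kind, amount)
    node["scale"] *= amount if kind == "stretch" else 1.0
    for child in node.get("children", []):
        apply(child, (kind, amount), rules)

# Constraint-preserving rule: windows keep their size when the wall
# stretches, so the sub-part survives the deformation unchanged.
rules = {"window": lambda k, a: (k, 1.0)}
wall = {"type": "wall", "scale": 1.0,
        "children": [{"type": "window", "scale": 1.0}]}
apply(wall, ("stretch", 2.0), rules)
print(wall["scale"], wall["children"][0]["scale"])  # 2.0 1.0
```

The constraint handling lives entirely in the rule table, which mirrors the abstract's point that constraint maintenance stays transparent to the sculpting user.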

    Sketching Folds: Developable Surfaces from Non-Planar Silhouettes

    We present the first sketch-based modeling method for developable surfaces with pre-designed folds, such as garments or leather products. The main challenge in building folded surfaces from sketches is that silhouette strokes on the sketch correspond to discontinuous sets of non-planar curves on the 3D model. We introduce a new zippering algorithm that progressively identifies silhouette edges on the model and ties them to silhouette strokes. Our solution ensures that the strokes are fully covered and optimally sampled by the model. This method, interleaved with developability optimization steps, is implemented in a multi-view sketching system where the user can sketch the contours of internal folds in addition to the usual silhouettes, borders, and seam lines. All strokes are interpreted as hard constraints, while developability is treated as a soft objective. The developability error map we provide then enables users to add local seams or darts where needed and progressively improve their design. This makes our method robust even to coarse input, for which no fully developable solution exists.
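A standard way to measure developability error on a triangle mesh, which may or may not match the paper's exact error map, is the angle deficit: a surface is locally developable at an interior vertex when the incident triangle angles sum to exactly 2π, so the deficit 2π minus that sum serves as a per-vertex error.

```python
import math

def angle_at(v, a, b):
    """Angle at vertex v in triangle (v, a, b), in radians."""
    ax, ay, az = (a[i] - v[i] for i in range(3))
    bx, by, bz = (b[i] - v[i] for i in range(3))
    dot = ax * bx + ay * by + az * bz
    na = math.sqrt(ax * ax + ay * ay + az * az)
    nb = math.sqrt(bx * bx + by * by + bz * bz)
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def deficit(center, ring):
    """Angle deficit at `center`, surrounded by a closed triangle fan
    over the vertices of `ring`."""
    total = sum(angle_at(center, ring[i], ring[(i + 1) % len(ring)])
                for i in range(len(ring)))
    return 2 * math.pi - total

# A flat hexagonal fan (developable) vs. the same ring with a lifted
# apex (doubly curved, hence non-developable).
ring = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3), 0.0)
        for k in range(6)]
print(deficit((0.0, 0.0, 0.0), ring))   # ~0: locally developable
print(deficit((0.0, 0.0, 0.5), ring))   # positive: needs a seam or dart
```

Summing such deficits over interior vertices gives an error map of the kind the abstract describes, highlighting where local seams or darts would relieve the non-developable regions.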

    Making Movies from Make-Believe Games

    Pretend play is a storytelling technique, naturally used from a very young age, which relies on object substitution to represent the characters of an imagined story. We propose "Make-believe", a system for making movies from pretend play using 3D-printed figurines as props. We capture the rigid motions of the figurines and the gestures and facial expressions of the storyteller using Kinect cameras and IMU sensors, and transfer them to the virtual story-world. As a proof of concept, we demonstrate our system with an improvised story involving a prince and a witch, which was successfully recorded and transferred into a 3D animation.
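Transferring a prop's captured rigid motion into a virtual scene ultimately reduces to composing poses. The planar poses and calibration numbers below are hypothetical, a minimal sketch rather than the system's actual (fully 3D) transfer: the captured pose is composed with a fixed calibration offset mapping capture space into the story-world.

```python
import math

def compose(a, b):
    """Compose 2D rigid poses (x, y, theta): apply b, then a."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# Hypothetical calibration: the capture-space origin sits at (10, 0)
# in the story-world, with no extra rotation.
offset = (10.0, 0.0, 0.0)
captured = (1.0, 2.0, math.pi / 2)   # figurine pose from IMU + RGB-D
virtual = compose(offset, captured)  # pose applied to the character
print(virtual)                       # (11.0, 2.0, 1.5707...)
```

A real pipeline would use full 3D rotations (quaternions or matrices), but the structure is the same: one fixed calibration transform composed with each captured frame.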