
    Planning Framework for Robotic Pizza Dough Stretching with a Rolling Pin

    Stretching pizza dough with a rolling pin is a nonprehensile manipulation: because the dough is deformable, force closure cannot be established, so the dough must be shaped without grasping it. The framework for this pizza dough stretching application, explained in this chapter, consists of four sub-procedures: (i) recognition of the pizza dough on a plate, (ii) planning the steps needed to shape the dough into the desired form, (iii) path generation for a rolling pin to execute the output of the dough planner, and (iv) inverse kinematics for the bi-manual robot to grasp and control the rolling pin properly. Using the deformable object model described in Chap. 3, each sub-procedure of the proposed framework is explained in turn.
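    For orientation, here is a minimal Python sketch of how those four sub-procedures might be chained; every function and type name below is an illustrative placeholder, not the chapter's actual interface.

```python
# Minimal sketch of the four sub-procedures as a sequential pipeline.
# All names here are illustrative placeholders, not the chapter's API.

from dataclasses import dataclass

@dataclass
class DoughState:
    contour: list          # 2D outline of the dough on the plate
    thickness_map: list    # per-cell thickness estimates

def recognize_dough(rgbd_image) -> DoughState:
    """(i) Segment the dough on the plate and estimate its current shape."""
    raise NotImplementedError

def plan_stretching_steps(state: DoughState, target_contour) -> list:
    """(ii) Compute a sequence of rolling actions driving the dough toward the desired form."""
    raise NotImplementedError

def generate_rolling_pin_path(action) -> list:
    """(iii) Turn one planned rolling action into a Cartesian path (poses + pressing depth) for the pin."""
    raise NotImplementedError

def solve_bimanual_ik(pin_pose):
    """(iv) Joint angles for both arms so they grasp and guide the rolling pin."""
    raise NotImplementedError

def stretch_dough(rgbd_image, target_contour):
    state = recognize_dough(rgbd_image)
    for action in plan_stretching_steps(state, target_contour):
        for pin_pose in generate_rolling_pin_path(action):
            left_q, right_q = solve_bimanual_ik(pin_pose)
            # send left_q / right_q to the robot controller here
```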

    Tools for fluid simulation control in computer graphics

    Physics-based animation can generate dynamic systems with very complex and realistic behaviors. Unfortunately, controlling them is a daunting task, and fluid simulation in particular poses difficult problems for the control process. Although many methods and tools have been developed to convincingly simulate and render fluids, too few provide efficient and intuitive control over a simulation. Since control often adds computation on top of the simulation cost, art-directing a high-resolution simulation leads to long iterations of the creative process. To shorten this process, editing can be performed on a faster, low-resolution model. The process of generating an art-directed fluid can therefore be split into two stages: a control stage, during which an artist modifies the behavior of a low-resolution simulation, and an upresolution stage, during which a final high-resolution version of this simulation is driven. This thesis presents two projects, each improving on the state of the art for one of these two stages.
First, we introduce a new particle-based liquid control system. Using this system, an artist selects patches of precomputed liquid animations from a database and places them in a simulation to modify its behavior. At each simulation time step, our system uses these entities to control the simulation so as to reproduce the artist's vision. An intuitive graphical user interface inspired by video editing tools has been developed, allowing a non-technical user to easily edit a liquid animation. Second, a tracking solution for smoke upresolution is described. We propose to add an extra tracking step after the projection step of a classical Eulerian smoke simulation. During this step, we solve for a divergence-free velocity perturbation field that yields a better match of the low-frequency density distribution between the low-resolution guide and the high-resolution simulation. The resulting smoke animation faithfully reproduces the coarse aspect of the low-resolution input while being enhanced with simulated small-scale details.
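    For reference on the second project, the standard way to make a velocity perturbation field divergence-free is a pressure projection. The numpy sketch below shows that generic step on a simple collocated 2D grid; it is an illustrative assumption of the usual technique, not the thesis' actual tracking solver.

```python
import numpy as np

def project_divergence_free(u, v, dx=1.0, iters=400):
    """Remove the divergent part of a 2D velocity perturbation field
    (collocated grid, central differences, Jacobi pressure solve)."""
    div = np.zeros_like(u)
    div[1:-1, 1:-1] = ((u[1:-1, 2:] - u[1:-1, :-2]) +
                       (v[2:, 1:-1] - v[:-2, 1:-1])) / (2.0 * dx)
    p = np.zeros_like(u)
    for _ in range(iters):                      # Jacobi iterations for the Poisson solve
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] -
                                dx * dx * div[1:-1, 1:-1])
    u_df, v_df = u.copy(), v.copy()
    u_df[1:-1, 1:-1] -= (p[1:-1, 2:] - p[1:-1, :-2]) / (2.0 * dx)   # subtract pressure gradient
    v_df[1:-1, 1:-1] -= (p[2:, 1:-1] - p[:-2, 1:-1]) / (2.0 * dx)
    return u_df, v_df
```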

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating, and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface, or a point, giving the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize a B-spline surface are independent of the positions of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis in commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices, which are used to generate the points for graphical rendering, collision detection, and merging of B-spline patches, as well as the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to solve complex equations for the B-spline blending functions and to invert large matrices. This alternative approach to mechanical concept design also reduces the need to build physical prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time, cost, and resource waste of the concept design phase.
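    To illustrate the central observation (the blending matrix depends only on the knots and sample parameters, not on the control points), here is a small numpy sketch for a B-spline curve; the surface case follows by the tensor product, and the function names are illustrative only.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion for the basis function N_{i,k} at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + k] - knots[i]
    if denom > 0.0:
        left = (t - knots[i]) / denom * bspline_basis(i, k - 1, t, knots)
    right = 0.0
    denom = knots[i + k + 1] - knots[i + 1]
    if denom > 0.0:
        right = (knots[i + k + 1] - t) / denom * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def blending_matrix(n_ctrl, degree, knots, params):
    """Precompute the blending matrix B: one row per sample parameter,
    one column per control point. Depends only on knots and parameters."""
    B = np.zeros((len(params), n_ctrl))
    for r, t in enumerate(params):
        for i in range(n_ctrl):
            B[r, i] = bspline_basis(i, degree, t, knots)
    return B

# Cubic curve with 6 control points and a clamped uniform knot vector.
degree, n_ctrl = 3, 6
knots = np.concatenate(([0] * degree, np.linspace(0, 1, n_ctrl - degree + 1), [1] * degree))
params = np.linspace(0, 1 - 1e-9, 50)   # stay inside the last knot span
B = blending_matrix(n_ctrl, degree, knots, params)

ctrl = np.random.rand(n_ctrl, 3)        # control points may move every frame...
curve = B @ ctrl                        # ...but B is reused as-is: one matrix product
```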

    Towards an Inclusive Virtual Dressing Room for Wheelchair-Bound Customers


    Non-Rigid Body Mechanical Property Recovery from Images and Videos

    Material properties are of great importance in surgical simulation and virtual reality. The mechanical properties of human soft tissue are critical for characterizing the tissue deformation of each patient, and studies have shown that the tissue stiffness described by these properties may indicate an abnormal pathological process. Recovered elasticity parameters can assist surgeons in better pre-operative surgical planning and enable medical robots to carry out personalized surgical procedures. Traditional methods for estimating elasticity parameters rely largely on known external forces measured by special devices and strain fields estimated from landmarks on the deformable bodies, or they are limited to mechanical property estimation for quasi-static deformation. For virtual reality applications such as virtual try-on, capturing the garment material is as significant as reconstructing the geometry.
    In this thesis, I present novel approaches for automatically estimating the material properties of soft bodies from images or from a video capturing the motion of the deformable body. I use a coupled simulation-optimization-identification framework that deforms one soft body from its original, non-deformed state to match the deformed geometry of the same object in its deformed state; the optimal set of material parameters is determined by minimizing an error metric function. This method can simultaneously recover the elasticity parameters of multiple regions of soft bodies using Finite Element Method-based simulation (of either linear or nonlinear materials undergoing large deformation) and particle-swarm optimization. I demonstrate the effectiveness of this approach on real-time interaction with virtual organs in patient-specific surgical simulation, using parameters acquired from low-resolution medical images. Using the recovered elasticity parameters and the age of prostate cancer patients as features, I build a cancer grading and staging classifier that achieves up to 91% for predicting cancer T-stage and 88% for predicting Gleason score. To recover the mechanical properties of soft bodies from a video, I propose a method that couples a statistical graphical model with FEM simulation; using this method, I can recover the material properties of a soft ball from a high-speed camera video that captures the motion of the ball.
    Furthermore, I extend the material recovery framework to fabric material identification. I propose a novel method for garment material extraction from a single-view image and a learning-based cloth material recovery method from a video recording the motion of the cloth. Most recent garment capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially for pre-existing photographs from the web. As an alternative, I propose a method that can compute a 3D model of a human body and its outfit from a single photograph with little human interaction. The proposed learning-based cloth material type recovery method exploits a simulated data set and a deep neural network. I demonstrate the effectiveness of these algorithms by re-purposing the reconstructed garments for virtual try-on, garment transfer, and cloth animation on digital characters. With the recovered mechanical properties, one can construct a virtual world with soft objects exhibiting real-world behaviors.
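    A rough sketch of such a coupled simulation-optimization-identification loop is shown below, with a plain particle-swarm search and a placeholder FEM forward model; all names and parameter choices are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def simulate_deformation(params, rest_mesh, boundary_conditions):
    """Placeholder for the FEM forward model: deform the rest mesh under the
    given (per-region) elasticity parameters and return node positions."""
    raise NotImplementedError

def error_metric(simulated_nodes, observed_nodes):
    """Geometric mismatch between simulated and observed deformed shapes."""
    return np.mean(np.linalg.norm(simulated_nodes - observed_nodes, axis=1))

def recover_parameters(rest_mesh, bc, observed_nodes, lo, hi,
                       n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Plain particle-swarm search over the elasticity parameters."""
    dim = len(lo)
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.full(n_particles, np.inf)
    gbest, gbest_f = x[0].copy(), np.inf
    for _ in range(iters):
        for i in range(n_particles):
            f = error_metric(simulate_deformation(x[i], rest_mesh, bc),
                             observed_nodes)
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, x[i].copy()
            if f < gbest_f:
                gbest_f, gbest = f, x[i].copy()
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                     # keep parameters in bounds
    return gbest, gbest_f
```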

    State of the Art in Skinning Techniques for Articulated Deformable Characters

    Skinning is an indispensable component of the content creation pipeline for character animation in feature films, video games, and the special effects industry. Skinning techniques define the deformation of the character's skin for every animation frame according to the current state of the skeletal joints. In this state-of-the-art report, we focus on existing research in the areas of skeleton-based deformation, volume-preserving techniques, and physically based skinning methods. We also summarize recent research on deformable and soft-body simulation for articulated characters, and discuss various geometric and example-based approaches.
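    As a concrete baseline for the skeleton-based deformation the report covers, the numpy sketch below shows plain linear blend skinning, where each vertex is transformed by a weighted combination of joint transforms; it is an illustrative example rather than any specific method from the report.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, weights, joint_transforms):
    """rest_vertices: (V, 3) skin positions in the rest pose.
    weights: (V, J) per-vertex joint weights, rows summing to 1.
    joint_transforms: (J, 4, 4) matrices mapping rest pose to current pose
    (already composed with the inverse bind matrices).
    Returns the (V, 3) deformed positions: v' = sum_j w_vj * T_j * v."""
    V = rest_vertices.shape[0]
    homo = np.hstack([rest_vertices, np.ones((V, 1))])            # (V, 4) homogeneous coords
    per_joint = np.einsum('jab,vb->vja', joint_transforms, homo)  # each vertex under each joint
    skinned = np.einsum('vj,vja->va', weights, per_joint)         # weighted blend per vertex
    return skinned[:, :3]
```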

    Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Human pose estimation refers to estimating the locations of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but each focuses on a particular category, for example model-based approaches or human motion analysis. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms to this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, covering milestone works and recent advancements. Based on a standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are categorized in two ways in this survey: one categorization distinguishes top-down from bottom-up methods, and another distinguishes generative from discriminative methods. Considering that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections on motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper collects 26 publicly available data sets for validation and describes the error measurement methods that are frequently used.