
    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entirely new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly optimizes rigid and non-rigid alignment while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing diverse physical behaviors while preserving geometric and visual consistency. Comment: 11 pages
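    As a rough illustration of the edit-transfer step described above, the sketch below (not from the paper; the function names, pinhole projection, and naive forward splatting are assumptions) moves image pixels by the 2D displacement of their corresponding proxy-mesh vertices after the physics simulation; the actual system instead re-renders the result photo-realistically.

```python
import numpy as np

def project(points_3d, K):
    """Pinhole projection of Nx3 camera-space points with intrinsics K (assumed camera model)."""
    uv = (K @ points_3d.T).T
    return uv[:, :2] / uv[:, 2:3]

def transfer_edit(image, verts_before, verts_after, pixel_to_vertex, K):
    """Warp each pixel by the image-space motion of its corresponding proxy vertex.

    image              : HxWx3 array (the target 2D content)
    verts_before/after : Nx3 proxy-mesh vertices before/after the simulation step
    pixel_to_vertex    : HxW array of vertex indices (shape-to-image correspondences)
    """
    h, w, _ = image.shape
    disp = project(verts_after, K) - project(verts_before, K)       # per-vertex 2D motion
    ys, xs = np.mgrid[0:h, 0:w]
    tx = np.clip(np.round(xs + disp[pixel_to_vertex, 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + disp[pixel_to_vertex, 1]).astype(int), 0, h - 1)
    out = np.zeros_like(image)
    out[ty, tx] = image[ys, xs]   # naive splatting of edited pixels into the output frame
    return out
```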

    Learning to Navigate Cloth using Haptics

    We present a controller that allows an arm-like manipulator to navigate deformable cloth garments in simulation through the use of haptic information. The main challenge for such a controller is to avoid getting tangled in, tearing, or punching through the deforming cloth. Our controller aggregates force information from a number of haptic-sensing spheres placed along the manipulator for guidance. Based on the haptic forces, each individual sphere updates its target location, and the conflicts that arise among this set of desired positions are resolved by solving an inverse kinematics problem with constraints. Reinforcement learning is used to train the controller for a single haptic-sensing sphere, where a training run is terminated (and thus penalized) when large forces are detected due to contact between the sphere and a simplified model of the cloth. In simulation, we demonstrate successful navigation of a robotic arm through a variety of garments, including an isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two baseline controllers: one without haptics and another trained on large forces between the sphere and cloth but without early termination. Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A. Related publications: http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
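    A minimal sketch of the per-sphere update and early-termination logic described above (the names, gains, thresholds, and the hand-coded retreat rule standing in for the learned policy are all assumptions):

```python
import numpy as np

FORCE_LIMIT = 5.0     # assumed termination threshold on contact force (N)
RETREAT_GAIN = 0.01   # assumed step size (m per N); a placeholder for the learned policy

def update_sphere_target(target, haptic_force):
    """Nudge one haptic-sensing sphere's desired position away from the sensed contact force."""
    return target - RETREAT_GAIN * haptic_force

def control_step(targets, haptic_forces):
    """Update all sphere targets and apply the early-termination penalty used in training.

    The conflicting per-sphere targets would then be reconciled by a constrained
    inverse kinematics solve over the whole manipulator (not shown here).
    """
    new_targets = [update_sphere_target(t, f) for t, f in zip(targets, haptic_forces)]
    max_force = max(np.linalg.norm(f) for f in haptic_forces)
    done = max_force > FORCE_LIMIT                 # terminate the training run on large forces
    reward = -100.0 if done else -0.1 * max_force  # termination acts as a penalty
    return new_targets, reward, done
```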

    Sim2Real Neural Controllers for Physics-based Robotic Deployment of Deformable Linear Objects

    Deformable linear objects (DLOs), such as rods, cables, and ropes, play important roles in daily life. However, manipulation of DLOs is challenging because large, geometrically nonlinear deformations may occur during manipulation. The problem is made even more difficult because the different deformation modes (e.g., stretching, bending, and twisting) can give rise to elastic instabilities during manipulation. In this paper, we formulate a physics-guided, data-driven method to solve a challenging manipulation task -- accurately deploying a DLO (an elastic rod) onto a rigid substrate along various prescribed patterns. Our framework combines machine learning, scaling analysis, and physical simulation to develop a physics-based neural controller for deployment. We explore the complex interplay between the gravitational and elastic energies of the manipulated DLO and obtain a control method for DLO deployment that is robust against friction and material properties. Out of the numerous geometrical and material properties of the rod and substrate, we show through physical analysis that only three non-dimensional parameters are needed to describe the deployment process. The essence of the control law for the manipulation task can therefore be constructed with a low-dimensional model, drastically increasing the computation speed. The effectiveness of our optimal control scheme is shown through a comprehensive robotic case study comparing it against a heuristic control method for deploying rods in a wide variety of patterns. In addition, we showcase the practicality of our control scheme by having a robot accomplish challenging high-level tasks such as mimicking human handwriting, cable placement, and tying knots. Comment: YouTube video: https://youtu.be/OSD6dhOgyMA?feature=share
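    The three non-dimensional parameters are not listed in this abstract; as one plausible example of the kind of scaling analysis described, the sketch below normalizes lengths by the gravito-bending length (EI / (rho A g))^(1/3), the scale at which bending and gravitational effects balance for a rod deforming under its own weight. The actual parameters and controller in the paper may differ.

```python
import numpy as np

def gravito_bending_length(E, I, rho, A, g=9.81):
    """Length scale at which elastic bending and gravity balance for a slender rod."""
    return (E * I / (rho * A * g)) ** (1.0 / 3.0)

def nondimensional_features(pattern_size, deploy_height, feed_speed, substrate_speed,
                            E, I, rho, A):
    """Collapse dimensional task inputs into a few dimensionless groups (illustrative choice)."""
    L_gb = gravito_bending_length(E, I, rho, A)
    return np.array([
        pattern_size / L_gb,           # prescribed pattern scale relative to the rod's natural length scale
        deploy_height / L_gb,          # deployment height, normalized the same way
        feed_speed / substrate_speed,  # ratio of rod injection speed to substrate speed
    ])
```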

    Vector offset operators for deformable organic objects.

    Many natural materials and most living tissues exhibit complex deformable behaviours that may be characterised as organic. In computer animation, deformable organic material behaviour is needed for the development of characters and scenes based on living creatures and natural phenomena. This study addresses the problem of deformable organic material behaviour in computer-animated objects. The focus of this study is on problems inherent in geometry-based deformation techniques, such as non-intuitive interaction and difficulty in achieving realism, as well as problems inherent in physically based deformation techniques, such as inefficiency and difficulty in enforcing spatial and temporal constraints. The main objective of this study is to find a general and efficient solution to the interaction and animation of deformable 3D objects with natural organic material properties and constrainable behaviour. The solution must provide an interaction and animation framework suitable for the creation of animated deformable characters. An implementation of physical organic material properties such as plasticity, elasticity and viscoelasticity can provide the basis for an organic deformation model. An efficient approach to stress and strain control is introduced with a deformation tool named the Vector Offset Operator. Stress/strain graphs control the elastoplastic behaviour of the model. Strain creep, stress relaxation and hysteresis graphs control the viscoelastic behaviour of the model. External forces may be applied using motion paths equipped with momentum/time graphs. Finally, spatial and temporal constraints are applied directly on the vector operators. The suggested generic deformation tool introduces an intermediate layer between user interaction, deformation, elastoplastic and viscoelastic material behaviour, and spatial and temporal constraints. This results in an efficient approach to deformation, frees the object representation from the deformation, facilitates the application of constraints, and enables further development.
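    As a toy illustration of the operator idea (the class, its fields, the spatial falloff, and the stress/strain curve are assumptions; the thesis defines its own formulation), a vector offset operator can displace affected vertices along a direction, with the magnitude modulated by a stress-to-strain response curve:

```python
import numpy as np

class VectorOffsetOperator:
    """Toy vector offset operator: offsets nearby vertices along a direction, with the
    offset magnitude set by a stress-to-strain response curve and a spatial falloff."""

    def __init__(self, origin, direction, radius, stress_strain_curve):
        self.origin = np.asarray(origin, float)
        self.direction = np.asarray(direction, float) / np.linalg.norm(direction)
        self.radius = radius
        self.curve = stress_strain_curve            # callable: applied stress -> strain

    def apply(self, vertices, stress):
        verts = np.asarray(vertices, float)
        dist = np.linalg.norm(verts - self.origin, axis=1)
        falloff = np.clip(1.0 - dist / self.radius, 0.0, 1.0)   # linear spatial falloff
        strain = self.curve(stress)                              # response-curve lookup
        return verts + (strain * falloff)[:, None] * self.direction

# Example: strain grows linearly with stress and saturates beyond a stress of 2.0 (toy curve).
op = VectorOffsetOperator(origin=[0, 0, 0], direction=[0, 1, 0], radius=1.0,
                          stress_strain_curve=lambda s: 0.05 * min(s, 2.0))
deformed = op.apply(np.random.rand(100, 3) - 0.5, stress=1.5)
```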

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modeling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface or a point, to give the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize the B-spline surface are independent of the positions of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the B-spline surface patch merging algorithm presented in this thesis. Finally, the physics-based modeling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis using commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices, which are used to generate the points for graphical rendering, collision detection, merging of B-spline patches, and the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to solve complex equations for the B-spline blending functions and to invert large matrices. This alternative approach to mechanical concept design also removes the need to build physical prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time, cost, and resource waste of the concept design phase.
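    The pre-calculation relied on above follows from the fact that B-spline basis (blending) functions depend only on the knot vector and the sampled parameter values, not on the control points. A minimal sketch (the function names, uniform sampling, and clamped knot vectors are assumptions): the blending matrices are evaluated once, and the deforming surface is then re-evaluated each frame with matrix products alone.

```python
import numpy as np

def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value of the basis function N_{i,k} at parameter t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def blending_matrix(params, n_ctrl, degree, knots):
    """Rows: sampled parameter values; columns: basis weights of each control point."""
    return np.array([[bspline_basis(i, degree, t, knots) for i in range(n_ctrl)]
                     for t in params])

# Pre-calculate the blending matrices once (they do not depend on control-point positions) ...
degree, n_u, n_v = 3, 6, 6
knots = np.concatenate([[0.0] * degree, np.linspace(0, 1, n_u - degree + 1), [1.0] * degree])
us = np.linspace(0.0, 0.999, 32)     # stay inside the half-open support of the last basis span
Bu = blending_matrix(us, n_u, degree, knots)
Bv = blending_matrix(us, n_v, degree, knots)

# ... then, every frame, evaluate the surface samples from the updated control net directly.
ctrl = np.random.rand(n_u, n_v, 3)   # control points, e.g. driven by the mass-spring system
surface = np.einsum('ui,ijc,vj->uvc', Bu, ctrl, Bv)   # 32x32x3 sample points
```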

    A constraint-based methodology for product design with virtual reality

    This paper presents a constraint-based methodology for product design with advanced virtual reality technologies. A hierarchically structured, constraint-based data model is developed to support product design from features to parts and further to assemblies in a VR environment. Product design in the VR environment is performed in an intuitive manner through precise constraint-based manipulations. Constraint-based manipulations are accompanied by automatic constraint recognition and precise constraint satisfaction to establish constraints between objects, and are further realized through allowable motions for precise 3D interactions in the VR environment. The allowable motions are represented as mathematical matrices and derived from the constraints between objects by constraint solving. A procedure-based degrees-of-freedom combination approach is presented for 3D constraint solving. A rule-based constraint recognition engine is developed both for constraint-based manipulations and for implicitly incorporating constraints into the VR environment. An intuitive method is presented for recognizing pairs of mating features between assembly components. Examples are presented to demonstrate the efficacy of the proposed methodology.
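    A small sketch of one way to realize the degrees-of-freedom combination described above (the subspace representation and the SVD-based intersection are assumptions, not necessarily the paper's procedure): each constraint leaves a subspace of allowable translational motions, and combining two constraints amounts to intersecting those subspaces.

```python
import numpy as np

def intersect_motion_subspaces(A, B, tol=1e-9):
    """Intersect allowable-motion subspaces spanned by the columns of A and B.

    A motion x is allowed by both constraints iff x = A @ u = B @ w,
    i.e. [A, -B] @ [u; w] = 0, so the intersection is A applied to the
    u-part of the null space of [A, -B].
    """
    M = np.hstack([A, -B])
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    null_vecs = vt[rank:].T                    # columns span the null space of M
    if null_vecs.shape[1] == 0:
        return np.zeros((A.shape[0], 0))       # fully constrained: no allowable motion left
    inter = A @ null_vecs[:A.shape[1]]         # map the u-part back into motion space
    q, r = np.linalg.qr(inter)
    return q[:, np.abs(np.diag(r)) > tol]      # orthonormal basis for the intersection

# Example: a planar contact allows x/y translation, a slot allows x translation only;
# combining the two constraints leaves motion along the x axis alone.
plane = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
slot = np.array([[1.0], [0.0], [0.0]])
print(intersect_motion_subspaces(plane, slot))
```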

    Semantic Scene Understanding for Prediction of Action Effects in Humanoid Robot Manipulation Tasks
