3,987 research outputs found

    Connecting Look and Feel: Associating the visual and tactile properties of physical materials

    For machines to interact with the physical world, they must understand the physical properties of the objects and materials they encounter. We use fabrics as an example of a deformable material with a rich set of mechanical properties. A thin, flexible fabric, when draped, tends to look different from a heavy, stiff fabric; it also feels different when touched. Using a collection of 118 fabric samples, we captured color and depth images of draped fabrics along with tactile data from a high-resolution touch sensor. We then sought to associate the information from vision and touch by jointly training CNNs across the three modalities. Through the CNNs, each input, regardless of modality, generates an embedding vector that encodes the fabric's physical properties. By comparing the embeddings, our system is able to look at a fabric image and predict how it will feel, and vice versa. We also show that a system jointly trained on vision and touch data can outperform a similar system trained only on visual data when tested purely with visual inputs.
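    To make the joint-embedding idea concrete, here is a minimal sketch in PyTorch: one small CNN encoder per modality maps its input into a shared embedding space, and a contrastive loss pulls embeddings of the same fabric together while pushing different fabrics apart. The encoder sizes, the margin, and the loss form are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of joint cross-modal embedding training (not the paper's
# exact model): one encoder per modality, shared embedding space,
# contrastive loss on matched/unmatched fabric pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_encoder(in_channels: int, embed_dim: int = 128) -> nn.Module:
    """Small CNN mapping an image-like input to an embedding vector."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )

color_enc = make_encoder(3)   # RGB image of the draped fabric
depth_enc = make_encoder(1)   # depth image of the draped fabric
touch_enc = make_encoder(1)   # touch-sensor reading treated as an image

def pair_loss(za, zb, same_fabric: torch.Tensor, margin: float = 1.0):
    """Contrastive loss: pull matching fabrics together, push others apart."""
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    d = (za - zb).norm(dim=1)
    return torch.where(same_fabric.bool(), d**2,
                       F.relu(margin - d)**2).mean()

# Toy forward/backward pass on random tensors standing in for real data.
color = torch.randn(8, 3, 64, 64)
touch = torch.randn(8, 1, 64, 64)
same = torch.tensor([1, 1, 0, 1, 0, 0, 1, 0])
loss = pair_loss(color_enc(color), touch_enc(touch), same)
loss.backward()
```

    With encoders trained this way, predicting feel from appearance reduces to a nearest-neighbor search: embed a fabric photo and retrieve the closest touch embedding in the shared space.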

    Non-Rigid Body Mechanical Property Recovery from Images and Videos

    Material properties are of great importance in surgical simulation and virtual reality. The mechanical properties of human soft tissue are critical for characterizing the tissue deformation of each patient, and studies have shown that the tissue stiffness described by these properties may indicate an abnormal pathological process. The recovered elasticity parameters can assist surgeons in performing better pre-op surgical planning and enable medical robots to carry out personalized surgical procedures. Traditional elasticity parameter estimation methods rely largely on known external forces measured by special devices and on strain fields estimated from landmarks on the deformable bodies, or they are limited to mechanical property estimation for quasi-static deformation. For virtual reality applications such as virtual try-on, garment material capture is of equal significance to geometry reconstruction. In this thesis, I present novel approaches for automatically estimating the material properties of soft bodies from images or from a video capturing the motion of the deformable body. I use a coupled simulation-optimization-identification framework to deform one soft body from its original, non-deformed state to match the deformed geometry of the same object in its deformed state; the optimal set of material parameters is thereby determined by minimizing an error metric function. This method can simultaneously recover the elasticity parameters of multiple regions of soft bodies using Finite Element Method-based simulation (of either linear or nonlinear materials undergoing large deformation) and particle-swarm optimization. I demonstrate the effectiveness of this approach on real-time interaction with virtual organs in patient-specific surgical simulation, using parameters acquired from low-resolution medical images. With the recovered elasticity parameters and the age of prostate cancer patients as features, I build a cancer grading and staging classifier that achieves up to 91% accuracy for predicting cancer T-stage and 88% for predicting Gleason score. To recover the mechanical properties of soft bodies from a video, I propose a method that couples a statistical graphical model with FEM simulation; using this method, I can recover the material properties of a soft ball from a high-speed camera video that captures the motion of the ball. Furthermore, I extend the material recovery framework to fabric material identification. I propose a novel method for garment material extraction from a single-view image and a learning-based cloth material recovery method from a video recording the motion of the cloth. Most recent garment capture techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially in the case of pre-existing photographs from the web. As an alternative, I propose a method that can compute a 3D model of a human body and its outfit from a single photograph with little human interaction. My proposed learning-based cloth material type recovery method exploits a simulated dataset and a deep neural network. I demonstrate the effectiveness of my algorithms by re-purposing the reconstructed garments for virtual try-on, garment transfer, and cloth animation on digital characters. With the recovered mechanical properties, one can construct a virtual world with soft objects exhibiting real-world behaviors. Doctor of Philosophy
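    The simulation-optimization-identification loop described above can be sketched as follows: particle-swarm optimization searches the elasticity parameter space, scoring each candidate by how closely the simulated deformed geometry matches the observed one. In this sketch, `simulate_deformation` is a toy analytic stand-in for the FEM solver, and the error metric and PSO constants are illustrative assumptions.

```python
# Sketch of parameter identification by simulation + particle-swarm search.
# The FEM solver is replaced by a toy analytic surrogate for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_deformation(params: np.ndarray) -> np.ndarray:
    """Stand-in for an FEM simulation: returns deformed vertex positions
    for the given elasticity parameters (toy surrogate, not real physics)."""
    young, poisson = params
    base = np.linspace(0.0, 1.0, 50)
    sag = 0.5 * base**2 / max(young, 1e-6)   # stiffer material -> less sag
    bulge = poisson * base * (1.0 - base)    # lateral spread
    return np.stack([base + bulge, sag], axis=1)

observed = simulate_deformation(np.array([2.0, 0.3]))  # "captured" geometry

def error(params: np.ndarray) -> float:
    """Error metric: mean squared vertex distance to the observed geometry."""
    return float(np.mean((simulate_deformation(params) - observed) ** 2))

# Plain PSO over the 2-parameter space (Young's modulus, Poisson ratio).
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
lo, hi = np.array([0.1, 0.0]), np.array([10.0, 0.49])
x = rng.uniform(lo, hi, (n, dim))
v = np.zeros_like(x)
pbest, pbest_err = x.copy(), np.array([error(p) for p in x])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)
    errs = np.array([error(p) for p in x])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = x[improved], errs[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("recovered parameters:", gbest)
```

    In the real pipeline each `error` evaluation runs a full FEM solve, so the swarm size and iteration count trade accuracy against simulation cost.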

    How Will It Drape Like? Capturing Fabric Mechanics from Depth Images

    We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera. Our approach makes it possible to create mechanically correct digital representations of real-world textile materials, which is a fundamental step for many interactive design and engineering applications. As opposed to existing capture methods, which typically require expensive setups, video sequences, or manual intervention, our solution can capture at scale, is agnostic to the optical appearance of the textile, and facilitates fabric arrangement by non-expert operators. To this end, we propose a sim-to-real strategy to train a learning-based framework that takes as input one or multiple images and outputs a full set of mechanical parameters. Thanks to carefully designed data augmentation and transfer learning protocols, our solution generalizes to real images despite being trained only on synthetic data, hence successfully closing the sim-to-real loop. Key in our work is to demonstrate that evaluating regression accuracy based on similarity in parameter space leads to inaccurate distances that do not match human perception. To overcome this, we propose a novel metric for fabric drape similarity that operates in the image domain instead of the parameter space, allowing us to evaluate our estimation within the context of a similarity ranking. We show that our metric correlates with human judgments about the perception of drape similarity, and that our model predictions produce perceptually accurate results compared to the ground-truth parameters. Comment: 12 pages, 12 figures. Accepted to EUROGRAPHICS 2023. Project website: https://carlosrodriguezpardo.es/projects/MechFromDepth
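    The image-domain similarity idea can be sketched as follows: rather than measuring distance between two parameter vectors directly, simulate or render the drape each one produces and compare the resulting depth images, then rank candidate parameter sets by that image distance. The renderer stub and the plain L1 image distance below are illustrative assumptions, not the paper's metric.

```python
# Sketch of an image-domain drape similarity metric: compare rendered
# depth images of the drapes, not the raw mechanical parameters.
import numpy as np

def render_drape_depth(params: np.ndarray, size: int = 64) -> np.ndarray:
    """Stand-in for 'simulate fabric with these mechanics, render a depth map'."""
    bend, stretch = params
    y, x = np.mgrid[0:size, 0:size] / (size - 1)
    # Toy depth field: stiffer bending -> shallower folds,
    # more stretch -> higher fold frequency.
    return np.cos(2 * np.pi * x * (1.0 + stretch)) * np.exp(-bend * y)

def drape_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Image-domain distance between two parameter sets (mean absolute
    difference of the rendered depth maps)."""
    return float(np.abs(render_drape_depth(p) - render_drape_depth(q)).mean())

# Rank candidate estimates by how similar their drape looks to the target's,
# regardless of how far apart the raw parameter values are.
truth = np.array([2.0, 0.5])
candidates = [np.array([2.1, 0.5]), np.array([4.0, 0.4]), np.array([2.0, 1.5])]
ranked = sorted(candidates, key=lambda p: drape_distance(p, truth))
print("perceptually closest candidate:", ranked[0])
```

    The point of the ranking is that two parameter sets that are far apart numerically can still produce nearly identical drapes, so distances in image space track perception better than distances in parameter space.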

    Calipso: Physics-based Image and Video Editing through CAD Model Proxies

    We present Calipso, an interactive method for editing images and videos in a physically coherent manner. Our main idea is to realize physics-based manipulations by running a full physics simulation on proxy geometries given by non-rigidly aligned CAD models. Running these simulations allows us to apply new, unseen forces to move or deform selected objects, change physical parameters such as mass or elasticity, or even add entire new objects that interact with the rest of the underlying scene. In Calipso, the user makes edits directly in 3D; these edits are processed by the simulation and then transferred to the target 2D content using shape-to-image correspondences in a photo-realistic rendering process. To align the CAD models, we introduce an efficient CAD-to-image alignment procedure that jointly minimizes rigid and non-rigid alignment error while preserving the high-level structure of the input shape. Moreover, the user can choose to exploit image flow to estimate scene motion, producing coherent physical behavior with ambient dynamics. We demonstrate Calipso's physics-based editing on a wide range of examples, producing myriad physical behaviors while preserving geometric and visual consistency. Comment: 11 pages
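    The joint rigid plus non-rigid alignment objective can be sketched as a small least-squares problem: fit a global rotation and translation together with per-point offsets to the target points, with a regularizer that keeps the offsets small so the proxy preserves its overall structure. The 2D point setup, SciPy solver, and weights below are assumptions standing in for the paper's CAD-to-image formulation.

```python
# Sketch of joint rigid + non-rigid alignment with a structure-preserving
# regularizer, on 2D point sets as a stand-in for CAD-to-image alignment.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
source = rng.uniform(-1, 1, (30, 2))                    # proxy model points

# Synthetic target: rotate, translate, and locally bend the source.
theta_true = 0.4
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
target = source @ R_true.T + np.array([0.3, -0.2])
target += 0.05 * np.sin(3 * source)                     # non-rigid warp

LAMBDA = 5.0  # structure-preservation weight: higher -> stiffer proxy

def residuals(params: np.ndarray) -> np.ndarray:
    theta, tx, ty = params[:3]
    offsets = params[3:].reshape(-1, 2)                  # non-rigid part
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    aligned = (source + offsets) @ R.T + np.array([tx, ty])
    data_term = (aligned - target).ravel()               # fit the target
    reg_term = np.sqrt(LAMBDA) * offsets.ravel()         # keep offsets small
    return np.concatenate([data_term, reg_term])

x0 = np.zeros(3 + source.size)                           # identity start
fit = least_squares(residuals, x0)
print("recovered rotation:", fit.x[0], "translation:", fit.x[1:3])
```

    Raising LAMBDA pushes the solution toward a purely rigid fit; lowering it lets the proxy deform more freely at the cost of its original structure, which is the trade-off the abstract's "preserving the high-level structure" refers to.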