
    Deep Detail Enhancement for Any Garment

    Creating fine garment details requires significant effort and large computational resources. In contrast, a coarse shape is often easy to acquire (e.g., via low-resolution physically based simulation, linear blend skinning driven by skeletal motion, or portable scanners). In this paper, we show how to add rich yet plausible details to a coarse garment geometry in a data-driven manner. Once the parameterization of the garment is given, we formulate the task as a style-transfer problem over the space of associated normal maps. To facilitate generalization across garment types and character motions, we introduce a patch-based formulation that produces high-resolution details by matching a Gram-matrix-based style loss, hallucinating geometric details (i.e., wrinkle density and shape). We extensively evaluate our method on a variety of production scenarios and show that it is simple, lightweight, efficient, and generalizes across underlying garment types, sewing patterns, and body motions.
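    The Gram-matrix style loss mentioned in the abstract follows the standard style-transfer formulation; a minimal sketch with assumed feature-map shapes (not the authors' implementation) could look like:

    ```python
    import numpy as np

    def gram_matrix(features):
        """Gram matrix of a (channels, height*width) feature map."""
        c, n = features.shape
        return features @ features.T / (c * n)

    def style_loss(feat_pred, feat_target):
        """Squared distance between Gram matrices, in the style of
        Gatys et al. style transfer (assumed formulation)."""
        g1 = gram_matrix(feat_pred)
        g2 = gram_matrix(feat_target)
        return float(np.mean((g1 - g2) ** 2))

    # toy example: feature maps of two normal-map patches
    rng = np.random.default_rng(0)
    fa = rng.standard_normal((8, 64))   # 8 channels, 8x8 patch flattened
    fb = rng.standard_normal((8, 64))
    loss = style_loss(fa, fb)
    ```

    Matching Gram matrices constrains feature statistics rather than exact pixel locations, which is why such a loss can transfer wrinkle density and shape without requiring aligned detail maps.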

    Generalized Linear Models for Geometrical Current predictors. An application to predict garment fit

    The aim of this paper is to model an ordinal response variable in terms of vector-valued functional data contained in a vector-valued RKHS. In particular, we focus on the vector-valued RKHS obtained when a geometrical object (body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by decomposition of the covariance operator. But our data present very important differences with respect to the usual functional data setting. On the one hand, they are vector-valued functions; on the other, they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from decomposition of the integral operator defined using the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal has been compared with alternative methods that explore the performance of other classifiers (Support Vector Machine and k-NN), and with the proposed classification method applied to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents); all of these alternatives obtain worse results.
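    In the discretized case, the basis obtained from decomposing the covariance operator reduces to an eigendecomposition of the sample covariance matrix (functional-PCA style); a minimal sketch with made-up data, not the paper's currents representation:

    ```python
    import numpy as np

    # toy sample: n "functions" observed on a grid of p points
    rng = np.random.default_rng(1)
    X = rng.standard_normal((50, 10))       # n = 50 curves, p = 10 grid points
    Xc = X - X.mean(axis=0)                 # center the sample

    # discretized covariance operator and its eigendecomposition
    cov = Xc.T @ Xc / (len(X) - 1)          # p x p covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]       # reorder to descending variance
    basis = eigvecs[:, order]               # orthonormal basis functions

    # scores: coordinates of each curve in the new basis
    scores = Xc @ basis
    ```

    The scores can then serve as finite-dimensional predictors in an ordinal regression or other classifier, which is the role the paper's three bases play for the currents data.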

    Real-Time Cloth Simulation on Virtual Human Character Using Enhanced Position Based Dynamic Framework Technique

    Cloth simulation and animation has been a research topic in computer graphics since the mid-1980s. Enforcing incompressibility is very important in real-time simulation. Although there have been great achievements in this regard, it still suffers from unnecessary time consumption in certain steps common to real-time applications. This research develops a real-time cloth simulator for a virtual human character (VHC) with wearable clothing. It achieves cloth simulation on the VHC by enhancing the position-based dynamics (PBD) framework with a series of positional constraints that enforce constant densities. Self-collision and collision with moving capsules are also implemented to achieve realistic cloth behavior on animated characters, enabling incompressibility and convergence comparable to raised cosine deformation (RCD) function solvers. On implementation, this research achieves optimized collision between clothes, synchronization of the animation with the cloth simulation, and tuning of cloth properties to obtain the best results possible. Therefore, a real-time cloth simulation with believable output on an animated VHC is achieved. We believe the proposed method can serve as a complement to the game-assets clothing pipeline.
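    The PBD core referenced above predicts positions explicitly and then projects positional constraints; a minimal distance-constraint sketch (the paper's density constraints and capsule collisions are omitted, and equal particle masses are assumed):

    ```python
    import numpy as np

    def pbd_step(x, v, edges, rest, dt=1.0 / 60, iters=10):
        """One PBD step: predict positions under gravity, then project
        distance constraints (Gauss-Seidel), then update velocities."""
        g = np.array([0.0, -9.8, 0.0])
        p = x + dt * v + dt * dt * g          # explicit position prediction
        for _ in range(iters):
            for (i, j), r in zip(edges, rest):
                d = p[i] - p[j]
                dist = np.linalg.norm(d)
                if dist > 1e-9:
                    corr = 0.5 * (dist - r) * d / dist
                    p[i] -= corr              # split correction equally
                    p[j] += corr              # (equal-mass particles)
        v_new = (p - x) / dt                  # velocity from position change
        return p, v_new

    # toy example: two particles joined by a unit-length constraint
    x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
    v = np.zeros_like(x)
    edges, rest = [(0, 1)], [1.0]
    p, v = pbd_step(x, v, edges, rest)
    ```

    Because constraints act directly on positions, the solver stays stable at large time steps, which is why PBD is the common choice for real-time cloth on game characters.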

    Learning an Intrinsic Garment Space for Interactive Authoring of Garment Animation

    Authoring dynamic garment shapes for character animation driven by body motion is one of the fundamental steps in the CG industry. Established workflows are either time- and labor-consuming (i.e., manual editing on dense frames with controllers) or lack keyframe-level control (i.e., physically based simulation). Not surprisingly, garment authoring remains a bottleneck in many production pipelines. Instead, we present a deep-learning-based approach for semi-automatic authoring of garment animation, wherein the user provides the desired garment shape in a selection of keyframes, while our system infers a latent representation for its motion-independent intrinsic parameters (e.g., gravity, cloth materials, etc.). Given new character motions, the latent representation allows us to automatically generate a plausible garment animation at interactive rates. Having factored out character motion, the learned intrinsic garment space enables smooth transitions between keyframes on a new motion sequence. Technically, we learn an intrinsic garment space with a motion-driven autoencoder network, where the encoder maps garment shapes to the intrinsic space conditioned on body motion, while the decoder acts as a differentiable simulator to generate garment shapes according to changes in character body motion and intrinsic parameters. We evaluate our approach qualitatively and quantitatively on common garment types. Experiments demonstrate that our system can significantly improve current garment authoring workflows via an interactive user interface. Compared with the standard CG pipeline, our system significantly reduces the ratio of required keyframes from 20% to 1--2%.
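    The smooth transition between keyframes in the learned intrinsic space amounts to interpolating latent codes and decoding them under the new motion; a schematic sketch in which the decoder is a stand-in linear map (the real encoder and decoder are trained networks, and all values here are toy assumptions):

    ```python
    import numpy as np

    def interpolate_latents(z_a, z_b, t):
        """Linear interpolation in the intrinsic garment space (schematic)."""
        return (1.0 - t) * z_a + t * z_b

    def decode(z, motion_feat):
        """Stand-in for the motion-conditioned decoder: a fixed linear map.
        The actual decoder is a learned differentiable simulator."""
        W = np.ones((4, z.size + motion_feat.size))   # hypothetical weights
        return W @ np.concatenate([z, motion_feat])

    # latent codes inferred at two user-edited keyframes (toy values)
    z_key0 = np.array([0.0, 1.0])
    z_key1 = np.array([1.0, 0.0])
    motion = np.array([0.5])                          # per-frame motion feature

    # in-between frame a quarter of the way between the keyframes
    z_mid = interpolate_latents(z_key0, z_key1, 0.25)
    shape = decode(z_mid, motion)
    ```

    Because motion is factored out of the latent code, interpolating only the intrinsic parameters keeps the in-between frames consistent with whatever new body motion is supplied at decode time.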

    Motion Guided Deep Dynamic 3D Garments

    Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry is still a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply using the motion of the underlying character. In this work, we focus on motion guided dynamic 3D garments, especially for loose garments. In a data-driven setup, we first learn a generative space of plausible garment geometries. Then, we learn a mapping to this space to capture the motion dependent dynamic deformations, conditioned on the previous state of the garment as well as its relative position with respect to the underlying body. Technically, we model garment dynamics, driven using the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment to the global space. We resolve any remaining per-frame collisions by predicting residual local displacements. The resultant garment geometry is used as history to enable iterative rollout prediction. We demonstrate plausible generalization to unseen body shapes and motion inputs, and show improvements over multiple state-of-the-art alternatives.
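    The canonical-to-global mapping described above follows linear blend skinning with per-frame weights; a minimal sketch where the bone transforms and weights are toy values, not the paper's learned quantities:

    ```python
    import numpy as np

    def blend_skin(verts, weights, transforms):
        """Linear blend skinning: v' = sum_b w[v, b] * (T_b @ [v, 1]).
        verts: (n, 3), weights: (n, b), transforms: (b, 4, 4)."""
        n = len(verts)
        homo = np.hstack([verts, np.ones((n, 1))])        # homogeneous coords
        # per-bone transformed vertices: shape (b, n, 4)
        per_bone = np.einsum('bij,nj->bni', transforms, homo)
        # weighted blend over bones, then drop the homogeneous coordinate
        return np.einsum('nb,bni->ni', weights, per_bone)[:, :3]

    # toy setup: two bones, identity and a +1 translation in x
    T0 = np.eye(4)
    T1 = np.eye(4)
    T1[0, 3] = 1.0
    verts = np.array([[0.0, 0.0, 0.0]])                   # one canonical vertex
    weights = np.array([[0.5, 0.5]])                      # equal bone influence
    skinned = blend_skin(verts, weights, np.stack([T0, T1]))
    ```

    In the paper's setup the displacements are predicted in the canonical state and the skinning weights vary per frame, so this blend is what carries the predicted detail into the posed, global space.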