
    Digital 3D Smocking Design

    We develop an optimization-based method to model smocking, a surface embroidery technique that provides decorative geometric texturing while maintaining the stretch properties of the fabric. During smocking, multiple pairs of points on the fabric are stitched together, creating non-manifold geometric features and visually pleasing textures. Designing smocking patterns is challenging because the outcome of stitching is unpredictable: the final texture is often revealed only when the whole smocking process is completed, necessitating painstaking physical fabrication and time-consuming trial-and-error experimentation. This motivates us to seek a digital smocking design method. Straightforward attempts to compute smocked fabric geometry using surface deformation or cloth simulation methods fail to produce realistic results, likely due to the intricate structure of the designs, the large number of contacts, and the high-curvature folds. We instead formulate smocking as a graph embedding and shape deformation problem. We extract a coarse graph representing the fabric and the stitching constraints, and then derive the graph structure of the smocked result. We solve for the 3D embedding of this graph, which in turn reliably guides the deformation of the high-resolution fabric mesh. Our optimization-based method is simple, efficient, and flexible, which allows us to build an interactive system for smocking pattern exploration. To demonstrate the accuracy of our method, we compare our results to real fabrications on a large set of smocking patterns.
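
    As a rough illustration of the graph-embedding idea described above (not the paper's actual solver), the sketch below builds a small coarse grid graph, contracts a couple of hypothetical stitched vertex pairs, and then optimizes a 3D embedding that tries to preserve the remaining edge lengths with a simple spring energy; the stitch positions, unit rest length, and L-BFGS solver are all assumptions for this toy example.

```python
# Minimal sketch (not the paper's implementation): contract stitched vertex
# pairs in a coarse grid graph, then solve for a 3D embedding that tries to
# preserve the remaining edge lengths with a simple spring energy.
import numpy as np
from scipy.optimize import minimize

n = 5                                   # toy 5x5 coarse grid of the fabric
nodes = [(i, j) for i in range(n) for j in range(n)]
index = {v: k for k, v in enumerate(nodes)}
edges = [(index[(i, j)], index[(i + 1, j)]) for i in range(n - 1) for j in range(n)] + \
        [(index[(i, j)], index[(i, j + 1)]) for i in range(n) for j in range(n - 1)]

# Hypothetical stitching pattern: two pairs sharing a common point.
stitches = [(index[(1, 1)], index[(2, 2)]), (index[(3, 1)], index[(2, 2)])]

# Contract each stitched pair into a single graph node (union-find).
parent = list(range(len(nodes)))
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a
for a, b in stitches:
    parent[find(a)] = find(b)
reps = sorted({find(k) for k in range(len(nodes))})
remap = {r: k for k, r in enumerate(reps)}
cedges = {(min(remap[find(a)], remap[find(b)]), max(remap[find(a)], remap[find(b)]))
          for a, b in edges if find(a) != find(b)}

# Initial 3D positions: the flat grid with a little noise to break symmetry.
rng = np.random.default_rng(0)
x0 = np.array([nodes[r] + (0.0,) for r in reps], dtype=float)
x0 += 0.01 * rng.standard_normal(x0.shape)

def energy(flat):
    x = flat.reshape(-1, 3)
    e = 0.0
    for a, b in cedges:
        d = np.linalg.norm(x[a] - x[b])
        e += (d - 1.0) ** 2          # unit rest length on the coarse grid
    return e

res = minimize(energy, x0.ravel(), method="L-BFGS-B")
embedding = res.x.reshape(-1, 3)     # coarse smocked shape; would guide the fine mesh
print("residual edge-length error:", res.fun)
```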

    Motion Guided Deep Dynamic 3D Garments

    Realistic dynamic garments on animated characters have many AR/VR applications. While authoring such dynamic garment geometry is still a challenging task, data-driven simulation provides an attractive alternative, especially if it can be controlled simply using the motion of the underlying character. In this work, we focus on motion-guided dynamic 3D garments, especially loose garments. In a data-driven setup, we first learn a generative space of plausible garment geometries. We then learn a mapping into this space that captures motion-dependent dynamic deformations, conditioned on the previous state of the garment as well as its relative position with respect to the underlying body. Technically, we model garment dynamics, driven by the input character motion, by predicting per-frame local displacements in a canonical state of the garment that is enriched with frame-dependent skinning weights to bring the garment into global space. We resolve any remaining per-frame collisions by predicting residual local displacements. The resulting garment geometry is used as history to enable iterative rollout prediction. We demonstrate plausible generalization to unseen body shapes and motion inputs, and show improvements over multiple state-of-the-art alternatives.
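
    The following sketch shows only the rollout structure suggested by the abstract: an untrained network stands in for the learned dynamics model, predicting canonical-space displacements plus a collision-fixing residual from the body pose and the previous garment state. All dimensions, the placeholder skinning function, and the architecture are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of the iterative rollout idea: an untrained MLP stands in for
# the learned dynamics network; shapes and names are illustrative only.
import torch
import torch.nn as nn

NUM_VERTS, POSE_DIM, HIDDEN = 512, 72, 256   # assumed sizes

class GarmentDynamics(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = POSE_DIM + NUM_VERTS * 3            # pose + previous garment state
        self.dynamics = nn.Sequential(
            nn.Linear(in_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, NUM_VERTS * 3))         # canonical-space displacements
        self.residual = nn.Sequential(
            nn.Linear(in_dim, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, NUM_VERTS * 3))         # post-skinning collision residuals

    def forward(self, pose, prev_verts):
        feat = torch.cat([pose, prev_verts.reshape(-1)])
        disp = self.dynamics(feat).reshape(NUM_VERTS, 3)
        fix = self.residual(feat).reshape(NUM_VERTS, 3)
        return disp, fix

def skin(canonical_verts, pose):
    # Placeholder for skinning with frame-dependent weights; identity here.
    return canonical_verts

model = GarmentDynamics()
template = torch.zeros(NUM_VERTS, 3)                 # canonical garment template
garment = template.clone()
motion = [torch.zeros(POSE_DIM) for _ in range(4)]   # dummy pose sequence

with torch.no_grad():
    for pose in motion:                               # rollout: output feeds back in
        disp, fix = model(pose, garment)
        garment = skin(template + disp, pose) + fix
print(garment.shape)
```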

    φ-SfT


    Wire mesh design

    We present a computational approach for designing wire meshes, i.e., freeform surfaces composed of woven wires arranged in a regular grid. To facilitate shape exploration, we map the material properties of wire meshes to the geometric model of Chebyshev nets. This abstraction is exploited to build an efficient optimization scheme. While the theory of Chebyshev nets suggests a highly constrained design space, we show that allowing controlled deviations from the underlying surface provides a rich shape space for design exploration. Our algorithm balances globally coupled material constraints with aesthetic and geometric design objectives that can be specified by the user in an interactive design session. In addition to sculptural art, wire meshes represent an innovative medium for industrial applications including composite materials and architectural façades. We demonstrate the effectiveness of our approach using a variety of digital and physical prototypes with a level of shape complexity unobtainable using previous methods.
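
    A minimal sketch of the Chebyshev-net abstraction mentioned above (not the paper's optimization scheme): every warp and weft edge of a quad grid is constrained to a fixed wire spacing, while a soft term pulls the vertices toward a target surface, here a unit sphere, so controlled deviations from the surface remain possible. The grid size, spacing, and weights are assumed values.

```python
# Chebyshev-net sketch: fixed edge lengths on a quad grid plus a soft
# closeness term to a target surface (unit sphere). NumPy + SciPy only.
import numpy as np
from scipy.optimize import minimize

n, L, w_surf = 8, 0.25, 5.0           # grid size, wire spacing, surface weight (assumed)

def unpack(flat):
    return flat.reshape(n, n, 3)

def energy(flat):
    x = unpack(flat)
    e = 0.0
    # Chebyshev condition: all grid edges keep the same length L.
    e += np.sum((np.linalg.norm(x[1:, :] - x[:-1, :], axis=-1) - L) ** 2)
    e += np.sum((np.linalg.norm(x[:, 1:] - x[:, :-1], axis=-1) - L) ** 2)
    # Soft closeness to the target surface (unit sphere centred at the origin).
    e += w_surf * np.sum((np.linalg.norm(x, axis=-1) - 1.0) ** 2)
    return e

# Initialise on a flat patch tangent to the sphere's north pole.
u = np.linspace(-0.5 * L * (n - 1), 0.5 * L * (n - 1), n)
X, Y = np.meshgrid(u, u, indexing="ij")
x0 = np.stack([X, Y, np.ones_like(X)], axis=-1)

res = minimize(energy, x0.ravel(), method="L-BFGS-B")
net = unpack(res.x)                   # wire mesh draped near the sphere
print("residual:", res.fun)
```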

    Beyond developable: computational design and fabrication with auxetic materials

    We present a computational method for interactive 3D design and rationalization of surfaces via auxetic materials, i.e., flat, flexible material that can stretch uniformly up to a certain extent. A key motivation for studying such material is that one can approximate doubly-curved surfaces (such as the sphere) using only flat pieces, making it attractive for fabrication. We physically realize surfaces by introducing cuts into approximately inextensible material such as sheet metal, plastic, or leather. The cutting pattern is modeled as a regular triangular linkage that yields hexagonal openings of spatially-varying radius when stretched. In the same way that isometry is fundamental to modeling developable surfaces, we leverage conformal geometry to understand auxetic design. In particular, we compute a global conformal map with bounded scale factor to initialize an otherwise intractable non-linear optimization. We demonstrate that this global approach can handle non-trivial topology and non-local dependencies inherent in auxetic material. Design studies and physical prototypes are used to illustrate a wide range of possible applications.
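
    The check below illustrates the role of the bounded scale factor in auxetic rationalization, under assumptions of our own (a per-face deformation-gradient test and an assumed maximum opening factor sigma_max); it is not the paper's algorithm, which computes a global conformal map to initialize a non-linear optimization.

```python
# Illustrative feasibility check: a map from the flat cut pattern to the target
# surface should stretch each face (approximately) conformally, with a scale
# factor within what the linkage can open up to. sigma_max is an assumed bound.
import numpy as np

sigma_max = 2.0   # assumed maximum opening factor of the triangular linkage

def face_singular_values(flat_tri, spatial_tri):
    """Singular values of the 2D->3D deformation gradient of one triangle."""
    u = np.asarray(flat_tri, float)       # 3x2 rest (flat) vertices
    x = np.asarray(spatial_tri, float)    # 3x3 deformed (surface) vertices
    Du = np.column_stack([u[1] - u[0], u[2] - u[0]])      # 2x2
    Dx = np.column_stack([x[1] - x[0], x[2] - x[0]])      # 3x2
    F = Dx @ np.linalg.inv(Du)                            # deformation gradient
    return np.linalg.svd(F, compute_uv=False)             # s1 >= s2

def auxetic_feasible(flat_tri, spatial_tri, conformal_tol=0.1):
    s1, s2 = face_singular_values(flat_tri, spatial_tri)
    conformal = (s1 - s2) / s1 < conformal_tol            # nearly isotropic stretch
    in_range = 1.0 <= s2 and s1 <= sigma_max              # can only open, up to sigma_max
    return conformal and in_range

# Toy example: a flat triangle uniformly scaled by 1.5, still in a plane.
flat = [(0, 0), (1, 0), (0, 1)]
spatial = [(0, 0, 0), (1.5, 0, 0), (0, 1.5, 0)]
print(auxetic_feasible(flat, spatial))    # True: conformal, within the assumed bound
```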

    Asynchronous Variational Contact Mechanics

    An asynchronous, variational method for simulating elastica in complex contact and impact scenarios is developed. Asynchronous Variational Integrators (AVIs) are extended to handle contact forces by associating different time steps to forces instead of to spatial elements. By discretizing a barrier potential into an infinite sum of nested quadratic potentials, these extended AVIs are used to resolve contact while obeying momentum- and energy-conservation laws. A series of two- and three-dimensional examples illustrate the robustness and good energy behavior of the method.
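
    A small sketch of the nested-quadratic-penalty idea only: a contact barrier is approximated by layers of quadratic penalties with shrinking activation gaps and growing stiffnesses, and in the asynchronous setting each layer would be integrated with its own, smaller time step. The thresholds, stiffness growth, and step rule below are illustrative assumptions, not the constants used in the paper.

```python
# Nested quadratic penalties standing in for a contact barrier; all constants
# are illustrative, not taken from the paper.
import numpy as np

def penalty_layers(gap, eta=0.1, k0=1.0, num_layers=6):
    """Force magnitude from nested quadratic penalties at a given contact gap.

    Layer l activates when the gap drops below eta / (l + 1) and has stiffness
    k0 * (l + 1)**3, so the summed response stiffens as the gap closes.
    """
    force = 0.0
    for l in range(num_layers):
        threshold = eta / (l + 1)
        stiffness = k0 * (l + 1) ** 3
        if gap < threshold:
            force += stiffness * (threshold - gap)   # d/d(gap) of 1/2 k (threshold - gap)^2
    return force

def layer_time_step(l, dt0=1e-3):
    # Stiffer layers demand smaller stable steps; in an AVI each keeps its own clock.
    return dt0 / np.sqrt((l + 1) ** 3)

for gap in (0.2, 0.05, 0.01):
    print(f"gap={gap:5.2f}  force={penalty_layers(gap):8.3f}")
print("layer 0 and layer 5 steps:", layer_time_step(0), layer_time_step(5))
```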

    Physically Interacting With Four Dimensions

    Thesis (Ph.D.) - Indiana University, Computer Sciences, 2009

    People have long been fascinated with understanding the fourth dimension. While making pictures of 4D objects by projecting them to 3D can help reveal basic geometric features, 3D graphics images by themselves are of limited value. For example, just as 2D shadows of 3D curves may have lines crossing one another in the shadow, 3D graphics projections of smooth 4D topological surfaces can be interrupted where one surface intersects another. The research presented here creates physically realistic models for simple interactions with objects and materials in a virtual 4D world. We provide methods for the construction, multimodal exploration, and interactive manipulation of a wide variety of 4D objects. One basic achievement of this research is to exploit the free motion of a computer-based haptic probe to support a continuous motion that follows the local continuity of a 4D surface, allowing collision-free exploration in the 3D projection. In 3D, this interactive probe follows the full local continuity of the surface as though we were in fact physically touching the actual static 4D object. Our next contribution is to support dynamic 4D objects that can move, deform, and collide with other objects as well as with themselves. By combining graphics, haptics, and collision-sensing physical modeling, we can thus enhance our 4D visualization experience. Since we cannot actually place interaction devices in 4D, we develop fluid methods for interacting with a 4D object in its 3D shadow image using adapted reduced-dimension 3D tools for manipulating objects embedded in 4D. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D interactive or haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the real-world experience accessible to human beings.
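
    As a tiny, self-contained illustration of the projection step this work builds on, the sketch below perspective-projects 4D points into 3D; the eye position and projection convention are assumptions, and the thesis itself goes much further (haptic probing, 4D dynamics, and collisions).

```python
# Assumed perspective projection from 4D onto the w = 0 hyperplane.
import numpy as np

def project_4d_to_3d(points4, eye_w=3.0):
    """Project 4D points onto w = 0 as seen from an eye at (0, 0, 0, eye_w)."""
    p = np.asarray(points4, float)
    scale = eye_w / (eye_w - p[:, 3:4])     # points farther along +w appear larger
    return p[:, :3] * scale

# Vertices of a tesseract (4-cube), projected into 3D for display.
corners = np.array([[x, y, z, w] for x in (-1, 1) for y in (-1, 1)
                    for z in (-1, 1) for w in (-1, 1)], float)
print(project_4d_to_3d(corners)[:4])
```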

    A deformation transformer for real-time cloth animation

    Achieving interactive performance in cloth animation has significant implications for computer games and other interactive graphics applications. Although much progress has been made, real-time, high-quality results that preserve dynamic folds and wrinkles remain much desired. In this paper, we introduce a hybrid method for real-time cloth animation. It relies on data-driven models to capture the relationship between cloth deformations at two resolutions. These data-driven models transform low-quality simulated deformations at the low resolution into high-resolution cloth deformations with dynamically introduced fine details. Our data-driven transformation is trained using rotation-invariant quantities extracted from the cloth models, and is independent of the simulation technique chosen for the lower-resolution model. We have also developed a fast collision detection and handling scheme based on dynamically transformed bounding volumes. All the components of our algorithm can be efficiently implemented on programmable graphics hardware to achieve overall real-time performance on high-resolution cloth models. © 2010 ACM.
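
    Schematically (and only schematically), the detail-transfer step described above can be pictured as a learned map from rotation-invariant quantities of the coarse simulated cloth to fine-scale displacements added onto an upsampled version of the coarse result. In the sketch below the features are plain edge lengths, the "trained" weights are random stand-ins, and the upsampling is a trivial repeat, all of which are assumptions rather than the paper's construction.

```python
# Coarse-to-fine detail transfer, schematic version: rotation-invariant
# features (edge lengths) -> learned linear map -> fine-scale displacements.
import numpy as np

rng = np.random.default_rng(0)
NUM_COARSE, NUM_FINE = 100, 400
coarse_edges = [(i, i + 1) for i in range(NUM_COARSE - 1)]          # toy connectivity
W = 0.01 * rng.standard_normal((NUM_FINE * 3, len(coarse_edges)))   # stand-in for trained weights

def upsample(coarse_verts):
    # Placeholder for subdivision/interpolation of the coarse cloth to fine resolution.
    return np.repeat(coarse_verts, NUM_FINE // NUM_COARSE, axis=0)

def add_detail(coarse_verts):
    # Rotation-invariant features: current edge lengths of the coarse mesh.
    feats = np.array([np.linalg.norm(coarse_verts[a] - coarse_verts[b])
                      for a, b in coarse_edges])
    detail = (W @ feats).reshape(NUM_FINE, 3)       # learned fine-scale wrinkles
    return upsample(coarse_verts) + detail

coarse = rng.standard_normal((NUM_COARSE, 3))       # pretend low-res simulation output
fine = add_detail(coarse)
print(fine.shape)
```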