    DupRobo: An Interactive Robotic Platform for Physical Block-Based Autocompletion

    In this paper, we present DupRobo, an interactive robotic platform for tangible block-based design and construction. DupRobo supports user-customisable exemplars, repetition control, and tangible autocompletion through computer vision and robotic techniques. With DupRobo, we aim to reduce users’ workload in repetitive block-based construction while preserving the direct manipulability and intuitiveness of tangible model design, as found in product and architectural design.
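
    The abstract gives no implementation details, but the core autocompletion idea can be sketched as computing target placements for a repeated exemplar. Everything below (the Block type, the grid layout, the autocomplete function) is a hypothetical illustration, not DupRobo's actual code.

        # Minimal sketch of exemplar repetition for block autocompletion.
        # In the real system, the exemplar would be detected by computer
        # vision and the targets would be placed by a robot arm.
        from dataclasses import dataclass

        @dataclass
        class Block:
            x: int      # grid column
            y: int      # grid row
            color: str

        def autocomplete(exemplar: list[Block], repetitions: int,
                         dx: int, dy: int) -> list[Block]:
            """Repeat a user-built exemplar along a grid offset (dx, dy)."""
            targets = []
            for r in range(1, repetitions + 1):
                for b in exemplar:
                    targets.append(Block(b.x + r * dx, b.y + r * dy, b.color))
            return targets

        # Example: repeat a two-block column three times to the right.
        tower = [Block(0, 0, "red"), Block(0, 1, "blue")]
        plan = autocomplete(tower, repetitions=3, dx=2, dy=0)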

    HairBrush for Immersive Data-Driven Hair Modeling

    While hair is an essential component of virtual humans, it is also one of the most challenging digital assets to create. Existing automatic techniques lack the generality and flexibility to create rich hair variations, while manual authoring interfaces often require considerable artistic skills and effort, especially for intricate 3D hair structures that can be difficult to navigate. We propose an interactive hair modeling system that can help create complex hairstyles in minutes or hours that would otherwise take much longer with existing tools. Modelers, including novice users, can focus on the overall hairstyles and local hair deformations, as our system intelligently suggests the desired hair parts. Our method combines the flexibility of manual authoring with the convenience of data-driven automation. Since hair contains intricate 3D structures such as buns, knots, and strands, it is inherently challenging to create using traditional 2D interfaces. Our system provides a new 3D hair authoring interface for immersive interaction in virtual reality (VR). Users can draw high-level guide strips, from which our system predicts the most plausible hairstyles via a deep neural network trained on a professionally curated dataset. Each hairstyle in our dataset is composed of multiple variations, serving as blend-shapes to fit the user drawings via global blending and local deformation. The fitted hair models are visualized as interactive suggestions that the user can select, modify, or ignore. We conducted a user study to confirm that our system can significantly reduce manual labor while improving the output quality for modeling a variety of head and facial hairstyles that are challenging to create via existing techniques.
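
    The abstract describes fitting blend-shape variations to user-drawn guide strips via global blending and local deformation. The fragment below is a hedged sketch of just the global-blending step, posed as a least-squares fit; the function names, array layout, and the least-squares formulation are assumptions for illustration, since the actual system predicts hairstyles with a trained deep network.

        # Toy "global blending": find blend-shape weights so a weighted sum
        # of hairstyle strip variations approximates a user-drawn guide strip.
        import numpy as np

        def fit_blend_weights(variations: np.ndarray, guide: np.ndarray) -> np.ndarray:
            """variations: (k, n, 3) strip vertices for k blend-shapes;
            guide: (n, 3) vertices of the drawn guide strip."""
            k = variations.shape[0]
            A = variations.reshape(k, -1).T            # (3n, k) design matrix
            b = guide.reshape(-1)                      # (3n,) target vertices
            w, *_ = np.linalg.lstsq(A, b, rcond=None)  # unconstrained fit
            w = np.clip(w, 0.0, None)                  # keep weights non-negative
            return w / max(w.sum(), 1e-8)              # normalise to a convex blend

        def blended_strip(variations: np.ndarray, w: np.ndarray) -> np.ndarray:
            return np.tensordot(w, variations, axes=1)  # (n, 3) blended strip

    A local deformation pass would then warp the blended strip toward the exact drawn curve; that step is omitted here.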

    Free-form Shape Modeling in XR: A Systematic Review

    Shape modeling research in Computer Graphics has been an active area for decades. The ability to create and edit complex 3D shapes has been of key importance in Computer-Aided Design, Animation, Architecture, and Entertainment. With the growing popularity of Virtual and Augmented Reality, new applications and tools have been developed for artistic content creation; real-time interactive shape modeling has become increasingly important across the continuum of virtual and augmented reality environments (eXtended Reality (XR)). Shape modeling in XR opens new possibilities for intuitive and accessible design. Artificial Intelligence (AI) approaches that generate shape information from text prompts are set to change how artists create and edit 3D models. There has been a substantial body of research on interactive 3D shape modeling; however, there is no recent extensive review of the existing techniques and of what AI shape generation means for shape modeling in interactive XR environments. In this state-of-the-art paper, we fill this research gap in the literature by surveying free-form shape modeling work in XR, with a focus on sculpting and 3D sketching, the most intuitive forms of free-form shape modeling. We classify and discuss these works across five dimensions: contribution of the articles, domain setting, interaction tool, auto-completion, and collaborative designing. The paper concludes by discussing the disconnect between interactive 3D sculpting and sketching, and how this will likely evolve with the prevalence of AI shape-generation tools in the future.

    Sketch2CAD: Sequential CAD Modeling by Sketching in Context

    We present a sketch-based CAD modeling system, where users create objects incrementally by sketching the desired shape edits, which our system automatically translates to CAD operations. Our approach is motivated by the close similarities between the steps industrial designers follow to draw 3D shapes and the operations CAD modeling systems offer to create similar shapes. To overcome the strong ambiguity of parsing 2D sketches, we observe that in a sketching sequence, each step makes sense and can be interpreted in the context of what has been drawn before. In our system, this context corresponds to a partial CAD model, inferred in the previous steps, which we feed along with the input sketch to a deep neural network in charge of interpreting how the model should be modified by that sketch. Our deep network architecture recognizes the intended CAD operation and segments the sketch accordingly, such that a subsequent optimization estimates the parameters of the operation that best fit the segmented sketch strokes. Since no dataset of paired sketching and CAD modeling sequences exists, we train our system by generating synthetic sequences of CAD operations that we render as line drawings. We present a proof-of-concept realization of our algorithm supporting four frequently used CAD operations. Using our system, participants are able to quickly model a large and diverse set of objects, demonstrating Sketch2CAD to be an alternative way of interacting with current CAD modeling systems.
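
    Concretely, the loop the abstract describes alternates between a recognition/segmentation network and a continuous parameter fit. The sketch below mimics that control flow with a stub in place of the network and a toy extrusion-height fit; all names, the operation set, and the data layout are hypothetical stand-ins, not the paper's implementation.

        # Schematic interpret-then-fit step: context + strokes -> CAD operation.
        import numpy as np

        def stub_network(context_image, strokes):
            """Stand-in for the CNN that recognizes the operation type and
            labels which strokes belong to it (here: everything is an extrude)."""
            return "extrude", np.ones(len(strokes), dtype=int)

        def fit_extrusion_height(strokes, labels):
            """Toy parameter fit: height = vertical extent of the op's strokes."""
            pts = np.concatenate([s for s, l in zip(strokes, labels) if l == 1])
            return pts[:, 1].max() - pts[:, 1].min()

        def interpret_step(partial_model, strokes, context_image):
            op, labels = stub_network(context_image, strokes)
            if op == "extrude":
                height = fit_extrusion_height(strokes, labels)
                partial_model = partial_model + [("extrude", height)]
            return partial_model   # context for interpreting the next sketch

        # One modeling step: two vertical strokes drawn over the current model.
        strokes = [np.array([[0.0, 0.0], [0.0, 2.0]]),
                   np.array([[1.0, 0.0], [1.0, 2.0]])]
        model = interpret_step([], strokes, context_image=None)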

    Autocomplete element fields and interactive synthesis system development for aggregate applications.

    Aggregate elements are ubiquitous in natural and man-made objects and play an important role in graphics, design, and visualization applications. However, efficiently arranging aggregate elements with varying anisotropy and deformability remains challenging, particularly in 3D environments. To overcome this issue, we introduce autocomplete element fields: an element distribution formulation that can effectively cope with diverse output compositions, offering controllable element distributions with high quality and efficiency, and an element field formulation that can smoothly orient all synthesized elements following given inputs, such as scalar or direction fields. The proposed formulations not only properly synthesize distinct types of aggregate elements across various domain spaces without any extra process, but also directly compute complete element fields from partial specifications, without requiring fully specified inputs at any algorithmic step. Furthermore, to reduce input workload and enhance output quality for better usability and interactivity, we develop an interactive synthesis system, centered on the idea of autocomplete element fields, that facilitates the creation of element aggregations within different output domains. Analogous to conventional painting workflows, through a palette-based brushing interface, users can interactively mix and place a few aggregate elements over a brushing canvas and let our system automatically populate more aggregate elements with the intended orientations and scales for the rest of the outcome. The system empowers users to iteratively design a variety of novel mixtures with reduced workload and enhanced quality through an intuitive, user-friendly brushing workflow, without requiring a great deal of manual labor or technical expertise. We validate our prototype system with a pilot user study and exhibit its application in 2D graphic design, 3D surface collage, and 3D aggregate modeling.
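
    As a minimal illustration of computing a complete field from partial specifications, the sketch below diffuses a few user-painted 2D directions into a smooth, fully populated direction field by iterative neighbour averaging. The grid representation, relaxation scheme, and parameter choices are assumptions for illustration; the formulation in the work above is more general.

        # Complete a 2D direction field from sparse user constraints by
        # Laplace-style relaxation (neighbour averaging with re-pinning).
        import numpy as np

        def complete_direction_field(shape, constraints, iters=500):
            """constraints: {(row, col): (dx, dy)} sparse painted directions."""
            field = np.zeros(shape + (2,))
            for (r, c), d in constraints.items():
                field[r, c] = d
            for _ in range(iters):
                # Average the four neighbours (wrap-around borders for brevity)...
                field = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                         np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
                for (r, c), d in constraints.items():
                    field[r, c] = d             # ...then re-pin the constraints
            norms = np.linalg.norm(field, axis=-1, keepdims=True)
            return field / np.maximum(norms, 1e-8)  # unit directions everywhere

        # Two painted directions in opposite corners; the rest is autocompleted.
        field = complete_direction_field((32, 32),
                                         {(0, 0): (1.0, 0.0), (31, 31): (0.0, 1.0)})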

    Making Up the 3D Body: Designing for Artistic and Serendipitous Interaction in Modelling Digital Human Figures

    Making Up the 3D Body: Designing for Artistic and Serendipitous Interaction in Modelling Digital Human Figures details the process of developing a new software tool for digital artistic exploration. Previously available software for modelling mesh-based 3D human figures restricts user output based on normative assumptions about the form a body might take, particularly in terms of gender, race, and disability status, assumptions reinforced by the ubiquitous use of range-limited sliders mapped to singular high-level design parameters. CreatorCustom, the software tool created during this research, is designed to foreground an exploratory and open-ended approach to modelling 3D human bodies, treating the digital body as a sculptural landscape rather than a pre-supposed form for rote technical representation. Building on prior research into serendipity in Human-Computer Interaction, creativity support tools, 3D modelling systems for users at various levels of proficiency, and usability, among other areas, this research takes the form of two qualitative studies and an autoethnography of the author’s artistic practice. The first study explores the practices of six queer artists working with the body, and the language, materials, and actions they use in their practice as described in interviews and structured material practice sessions, which were then applied toward the design of the software tool. The second study deals with the usability, creativity support, and bodily implications and outcomes of the software tool when used by thirteen artist participants in a workshop setting. Reflecting on the relationship between affect and usability, and on surprises and the unexpected in creative technology and artistic practice, these strands are brought together in an analysis and discussion of the author’s experience of using the software tool to create her own artistic work dealing with gender and sexuality.