3,388 research outputs found

    Intelligent computational sketching support for conceptual design

    Sketches, with their flexibility and suggestiveness, are in many ways ideal for expressing emerging design concepts. This can be seen from the fact that free-hand drawings were used to represent early designs as far back as the early 15th century [1]. On the other hand, CAD systems have become widely accepted as an essential design tool in recent years, not least because they provide a base on which design analysis can be carried out. Efficient transfer of sketches into a CAD representation is therefore a powerful addition to the designers' armoury. It has been pointed out by many that a pen-on-paper system is the best tool for sketching. One of the crucial requirements of a computer-aided sketching system is its ability to recognise and interpret the elements of sketches. 'Sketch recognition', as it has come to be known, has been widely studied in fields ranging from artificial intelligence to human-computer interaction and robotic vision. Despite continuing efforts to solve the problem of appropriate conceptual design modelling, completely accurate recognition of sketches is difficult to achieve, because sketches usually carry vague information, and idiosyncratic styles of expression and interpretation differ from designer to designer.

    Deep3DSketch+: Obtaining Customized 3D Model by Single Free-Hand Sketch through Deep Learning

    As 3D models become critical in today's manufacturing and product design, conventional 3D modeling approaches based on Computer-Aided Design (CAD) are labor-intensive, time-consuming, and place high demands on creators. This work introduces an alternative approach to 3D modeling that uses free-hand sketches to obtain the desired 3D models. We introduce Deep3DSketch+, a deep-learning algorithm that takes a single free-hand sketch as input and produces a complete, high-fidelity model that matches the sketch. The neural network has view- and structural-awareness, enabled by a Shape Discriminator (SD) and a Stroke Enhancement Module (SEM), which overcome the sparsity and ambiguity of sketches. The network design also brings high robustness to partial sketch input in industrial applications. Our approach has undergone extensive experiments, demonstrating state-of-the-art (SOTA) performance on both synthetic and real-world datasets. These results validate the effectiveness and superiority of our method compared to existing techniques. We have demonstrated the conversion of free-hand sketches into physical 3D objects using additive manufacturing. We believe that our approach has the potential to accelerate product design and democratize customized manufacturing.
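A common supervision signal in single-sketch 3D reconstruction is agreement between the model's rendered silhouette and the input sketch mask. As a purely illustrative toy (this is not the paper's actual loss; the binary-mask representation and function name are assumptions), a minimal silhouette intersection-over-union might look like:

```python
def silhouette_iou(pred, target):
    # Intersection-over-union of two binary masks (nested lists of 0/1),
    # a simple way to score how well a rendered silhouette covers
    # the region drawn in the input sketch.
    inter = sum(p & t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    union = sum(p | t for row_p, row_t in zip(pred, target)
                for p, t in zip(row_p, row_t))
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```

An IoU of 1.0 means the rendered silhouette exactly matches the sketch region; in practice such a score would be one term among several in a training objective.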

    Deep3DSketch++: High-Fidelity 3D Modeling from Single Free-hand Sketches

    The rise of AR/VR has led to an increased demand for 3D content. However, the traditional method of creating 3D content using Computer-Aided Design (CAD) is a labor-intensive and skill-demanding process, making it difficult for novice users. Sketch-based 3D modeling provides a promising solution by leveraging the intuitive nature of human-computer interaction. However, generating high-quality content that accurately reflects the creator's ideas can be challenging due to the sparsity and ambiguity of sketches. Furthermore, novice users often find it difficult to create accurate drawings from multiple perspectives or to follow the step-by-step instructions required by existing methods. To address this, we introduce an end-to-end approach, Deep3DSketch++, enabling 3D modeling from a single free-hand sketch. The sparsity and ambiguity of a single sketch are resolved in our approach by leveraging a symmetry prior and a structural-aware shape discriminator. We conducted comprehensive experiments on diverse datasets, including both synthetic and real data, to validate the efficacy of our approach and demonstrate its state-of-the-art (SOTA) performance. According to our user study, users are also more satisfied with results generated by our approach. We believe our approach has the potential to revolutionize the process of 3D modeling by offering an intuitive and easy-to-use solution for novice users. Comment: Accepted at IEEE SMC 202

    User-adaptive sketch-based 3D CAD model retrieval

    3D CAD models are an important digital resource in the manufacturing industry. 3D CAD model retrieval has become a key technology in product lifecycle management, enabling the reuse of existing design data. In this paper, we propose a new method to retrieve 3D CAD models from 2D pen-based sketch inputs. Sketching is a common and convenient way of communicating design intent during the early stages of product design, e.g., conceptual design. However, converting sketched information into precise 3D engineering models is cumbersome, and much of this effort can be avoided by reusing existing data. To this end, we present a user-adaptive sketch-based retrieval method. The contributions of this work are twofold. Firstly, we propose a statistical measure for CAD model retrieval: the measure is based on sketch similarity and accounts for users' drawing habits. Secondly, for the 3D CAD models in the database, we propose a sketch generation pipeline that represents each model by a small yet sufficient set of sketches that are perceptually similar to human drawings. User studies and experiments demonstrating the effectiveness of the proposed method in the design process are presented.
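The abstract describes the user-adaptive measure only at a high level. As a purely illustrative sketch (the function names, feature-vector representation, and habit-weighting scheme below are assumptions for exposition, not the paper's actual formulation), one way to combine sketch similarity with a per-user viewpoint prior is:

```python
import math

def cosine(a, b):
    # Plain cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def user_adaptive_score(sketch_feat, model_views, habit_weights):
    # Score a CAD model by its best-matching generated view sketch,
    # with each view weighted by how often this user tends to draw
    # from that viewpoint (a simple stand-in for "drawing habits").
    return max(w * cosine(sketch_feat, v)
               for v, w in zip(model_views, habit_weights))

def retrieve(sketch_feat, database, habit_weights, k=3):
    # Rank models in the database by their user-adaptive score.
    ranked = sorted(database.items(),
                    key=lambda kv: user_adaptive_score(
                        sketch_feat, kv[1], habit_weights),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

The design choice here is that adaptation lives entirely in `habit_weights`: the same similarity function serves all users, while the weights can be re-estimated per user as more of their sketches are observed.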

    A Survey of 2D and 3D Shape Descriptors


    Sketch-based interaction and modeling: where do we stand?

    Sketching is a natural and intuitive communication tool used for expressing concepts or ideas which are difficult to communicate through text or speech alone. Sketching is therefore used for a variety of purposes, from the expression of ideas on two-dimensional (2D) physical media to object creation, manipulation, or deformation in three-dimensional (3D) immersive environments. This variety in sketching activities brings about a range of technologies which, while having a similar scope, namely recording and interpreting the sketch gesture to effect some interaction, adopt different interpretation approaches according to the environment in which the sketch is drawn. In fields such as product design, sketches are drawn at various stages of the design process, and designers would therefore benefit from sketch interpretation technologies which support these differing interactions. However, research typically focuses on one aspect of sketch interpretation and modeling, so the literature on available technologies is fragmented and dispersed. In this paper, we bring together the relevant literature describing technologies which can support the product design industry, namely technologies which support the interpretation of sketches drawn on 2D media, sketch-based search interactions, as well as sketch gestures drawn in 3D media. This paper therefore gives a holistic view of the algorithmic support that can be provided in the design process. In so doing, we highlight the research gaps and future research directions required to provide full sketch-based interaction support.

    Revisiting the design intent concept in the context of mechanical CAD education

    [EN] Design intent is generally understood simply as a CAD model's anticipated behavior when altered. However, this representation provides a simplified view of the model's construction and purpose, which may hinder its general understanding and future reusability. Our vision is that design intent communication may be improved by recognizing the multifaceted nature of design intent, and by instructing users to convey each facet of design intent through the better-fitted CAD resource. This paper reviews the current understanding of design intent and its relationship to design rationale, and builds on the idea that communication of design intent conveyed via CAD models can be satisfied at three levels, provided that specialized instruction is used to guide users in selecting the most suitable level for each intent.
    Otey, J.; Company, P.; Contero, M.; Camba, J. (2018). Revisiting the design intent concept in the context of mechanical CAD education. Computer-Aided Design and Applications, 15(1), 47-60. https://doi.org/10.1080/16864360.2017.1353733

    Semantizing Complex 3D Scenes using Constrained Attribute Grammars

    We propose a new approach to automatically semantize complex objects in a 3D scene. For this, we define an expressive formalism combining the power of both attribute grammars and constraints. It offers a practical conceptual interface, which is crucial for writing large, maintainable specifications. As recursion is inadequate for expressing large collections of items, we introduce maximal operators, which are essential for reducing the parsing search space. Given a grammar in this formalism and a 3D scene, we show how to automatically compute a shared parse forest of all interpretations -- in practice, only a few, thanks to relevant constraints. We evaluate this technique for building model semantization using CAD model examples as well as photogrammetric and simulated LiDAR data.
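As a loose, purely illustrative analogue of the ideas above (the rule, attributes, and constraint below are invented for exposition and are not taken from the paper), a constrained rule can first filter primitives by attribute constraints, and a maximal-style operator can then absorb all matching primitives into one group in a single step, rather than recursively enumerating subsets:

```python
from itertools import groupby

# Toy scene primitives: axis-aligned boxes as (x, y, w, h) tuples.

def is_window(box):
    # Terminal rule with attribute constraints: a "window" must be
    # roughly square and small (thresholds chosen arbitrarily here).
    x, y, w, h = box
    return 0.5 <= w / h <= 2.0 and w < 3

def parse_columns(boxes):
    # Maximal-style grouping: collect ALL windows sharing an x
    # coordinate into one "column" non-terminal in a single pass,
    # instead of recursing over every possible subset of windows.
    windows = sorted(filter(is_window, boxes), key=lambda b: (b[0], b[1]))
    return {x: list(col) for x, col in groupby(windows, key=lambda b: b[0])}
```

Even in this toy form, the payoff is visible: grouping is linear in the number of windows, whereas a recursive rule over subsets would explore exponentially many candidate collections.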