
    Deep Learning for Free-Hand Sketch: A Survey

    Free-hand sketches are highly illustrative and have been widely used by humans to depict objects or stories from ancient times to the present. The recent prevalence of touchscreen devices has made sketch creation much easier than ever and consequently made sketch-oriented applications increasingly popular. The progress of deep learning has immensely benefited free-hand sketch research and applications. This paper presents a comprehensive survey of the deep learning techniques oriented at free-hand sketch data, and the applications that they enable. The main contents of this survey include: (i) a discussion of the intrinsic traits and unique challenges of free-hand sketch, to highlight the essential differences between sketch data and other data modalities, e.g., natural photos; (ii) a review of the developments of free-hand sketch research in the deep learning era, surveying existing datasets, research topics, and state-of-the-art methods through a detailed taxonomy and experimental evaluation; (iii) promotion of future work via a discussion of bottlenecks, open problems, and potential research directions for the community. Comment: This paper is accepted by IEEE TPAMI.

    Freeform User Interfaces for Graphical Computing

    Report number: 甲15222; Date of degree conferral: 2000-03-29; Degree category: doctorate by coursework (課程博士); Degree: Doctor of Engineering (博士(工学)); Diploma number: 博工第4717号; Graduate school and department: Graduate School of Engineering, Department of Information Engineering

    A new paradigm based on agents applied to free-hand sketch recognition

    Important advances in natural calligraphic interfaces for CAD (Computer Aided Design) applications are being achieved, enabling the development of CAS (Computer Aided Sketching) devices that support the conceptual design phase of a product. Recognizers play an important role in this field, allowing the interpretation of the user's intention, but they still present some important shortcomings. This paper proposes a new recognition paradigm using an agent-based architecture that does not depend on the drawing sequence and takes context information into account to support decisions. Another improvement is the absence of operation modes, that is, no button is needed to distinguish geometry from symbols or gestures, and both "interspersing" and "overtracing" are accommodated. The Spanish Ministry of Science and Education and the FEDER Funds, through the CUESKETCH project (Ref. DPI2007-66755-C02-01), partially supported this work. Fernández Pacheco, D.; Albert Gil, F. E.; Aleixos Borrás, M. N.; Conesa Pastor, J. (2012). A new paradigm based on agents applied to free-hand sketch recognition. Expert Systems with Applications, 39(8), 7181-7195. https://doi.org/10.1016/j.eswa.2012.01.063
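    To make the order-independent, mode-free idea above concrete, here is a minimal illustrative sketch in Python of an agent-based stroke recognizer. The agent classes, the heuristic scores, and the context dictionary are assumptions for illustration only, not the architecture from the paper:

```python
# Hypothetical agent-based stroke classification; all heuristics are
# illustrative placeholders, not the paper's recognizers.
from dataclasses import dataclass

@dataclass
class Stroke:
    points: list  # digitized (x, y) samples

class Agent:
    """Each agent votes for one interpretation of a stroke."""
    label = "abstract"
    def score(self, stroke, context):
        raise NotImplementedError

class GeometryAgent(Agent):
    label = "geometry"
    def score(self, stroke, context):
        # Placeholder heuristic: favour long strokes as geometry.
        return 0.8 if len(stroke.points) > 20 else 0.2

class SymbolAgent(Agent):
    label = "symbol"
    def score(self, stroke, context):
        # Context-aware: favour strokes drawn near existing geometry.
        return 0.7 if context.get("near_geometry") else 0.1

class GestureAgent(Agent):
    label = "gesture"
    def score(self, stroke, context):
        # Placeholder heuristic: short strokes are likelier gestures.
        return 0.6 if len(stroke.points) <= 20 else 0.1

def classify(stroke, agents, context):
    # Mode-free: all agents always compete, so no button is needed to
    # switch between geometry, symbols, and gestures. Order-independent:
    # only the stroke and its context matter, not its position in the
    # drawing sequence.
    return max(agents, key=lambda a: a.score(stroke, context)).label

stroke = Stroke(points=[(i * 0.1, 0.0) for i in range(30)])
agents = [GeometryAgent(), SymbolAgent(), GestureAgent()]
print(classify(stroke, agents, {"near_geometry": False}))  # -> "geometry"
```

    Because every agent always competes over the same stroke and context, no mode button is needed, and because the decision ignores where a stroke falls in the drawing sequence, interspersed and overtraced input can be handled uniformly.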

    Deep3DSketch+: Obtaining Customized 3D Model by Single Free-Hand Sketch through Deep Learning

    As 3D models become critical in today's manufacturing and product design, conventional 3D modeling approaches based on Computer-Aided Design (CAD) are labor-intensive, time-consuming, and place high demands on creators. This work introduces an alternative approach to 3D modeling that uses free-hand sketches to obtain the desired 3D models. We introduce Deep3DSketch+, a deep learning algorithm that takes a single free-hand sketch as input and produces a complete, high-fidelity model that matches the sketch. The neural network has view- and structural-awareness enabled by a Shape Discriminator (SD) and a Stroke Enhancement Module (SEM), which overcome the sparsity and ambiguity of sketches. The network design also brings high robustness to partial sketch input in industrial applications. Our approach has undergone extensive experiments, demonstrating state-of-the-art (SOTA) performance on both synthetic and real-world datasets. These results validate the effectiveness and superiority of our method compared to existing techniques. We have also demonstrated the conversion of free-hand sketches into physical 3D objects using additive manufacturing. We believe that our approach has the potential to accelerate product design and democratize customized manufacturing.
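    As a rough structural illustration of a single-sketch-to-3D pipeline of this kind, the PyTorch skeleton below encodes a rasterized sketch and regresses per-vertex offsets on a template mesh, with a discriminator providing an adversarial plausibility signal. All module names, layer sizes, and the simplified vertex-based discriminator are assumptions for illustration; the actual SD and SEM in Deep3DSketch+ are more involved:

```python
# Simplified, assumption-labeled sketch-to-mesh skeleton; not the
# Deep3DSketch+ architecture itself.
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Maps a binary sketch raster to a latent code."""
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent),
        )
    def forward(self, sketch):  # sketch: (B, 1, H, W)
        return self.net(sketch)

class MeshDecoder(nn.Module):
    """Deforms a fixed template mesh by predicted per-vertex offsets."""
    def __init__(self, template_vertices, latent=256):
        super().__init__()
        self.register_buffer("template", template_vertices)  # (V, 3)
        v = template_vertices.shape[0]
        self.mlp = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(),
                                 nn.Linear(512, v * 3))
    def forward(self, z):
        offsets = self.mlp(z).view(-1, self.template.shape[0], 3)
        return self.template + offsets  # (B, V, 3)

class ShapeDiscriminator(nn.Module):
    """Stand-in for the SD: scores whether a vertex set looks plausible."""
    def __init__(self, n_vertices):
        super().__init__()
        self.mlp = nn.Sequential(nn.Flatten(),
                                 nn.Linear(n_vertices * 3, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, vertices):
        return self.mlp(vertices)

# Smoke test with a random sketch and a small random template mesh.
template = torch.randn(42, 3)
enc, dec, disc = SketchEncoder(), MeshDecoder(template), ShapeDiscriminator(42)
verts = dec(enc(torch.rand(2, 1, 256, 256)))
realism = disc(verts)  # adversarial signal toward plausible shapes
print(verts.shape, realism.shape)  # torch.Size([2, 42, 3]) torch.Size([2, 1])
```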

    To Draw or Not to Draw: Recognizing Stroke-Hover Intent in Gesture-Free Bare-Hand Mid-Air Drawing Tasks

    Over the past several decades, technological advancements have introduced new modes of communication with computers, shifting away from traditional mouse-and-keyboard interfaces. While touch-based interactions are in abundant use today, recent developments in computer vision, body-tracking stereo cameras, and augmented and virtual reality now enable communicating with computers through spatial input in physical 3D space. These techniques are being integrated into several design-critical tasks, such as sketching and modeling, through sophisticated methodologies and specialized instrumented devices. One of the prime challenges in design research is to make this spatial interaction with the computer as intuitive as possible for users.

    Drawing curves in mid-air with the fingers is a fundamental task with applications to 3D sketching, geometric modeling, handwriting recognition, and authentication, and sketching in general is a crucial mode of idea communication between designers. Mid-air curve input is typically accomplished through instrumented controllers, specific hand postures, or pre-defined hand gestures, in the presence of depth- and motion-sensing cameras; the user employs one of these modalities to express the intention to start or stop sketching. However, beyond suffering from a lack of robustness, such gestures, postures, and instrumented controllers place an additional cognitive load on the user in design-specific tasks.

    To address the problems associated with these mid-air curve input modalities, the presented research covers the design, development, and evaluation of data-driven models for intent recognition in non-instrumented, gesture-free, bare-hand mid-air drawing tasks. The research is motivated by a behavioral study demonstrating the need for such an approach, given the lack of robustness and intuitiveness of hand postures and instrumented devices. The main objective is to study how users move during mid-air sketching, develop qualitative insights regarding such movements, and consequently implement a computational approach to determine when the user intends to draw in mid-air without an explicit mechanism (such as an instrumented controller or a specified hand posture). By recording the user's hand trajectory, each recorded point is simply classified as either hover or stroke, yielding a model that labels every point on the user's spatial trajectory.

    Drawing inspiration from the way users sketch in mid-air, this research first establishes the need for an alternate approach that processes bare-hand mid-air curves in a continuous fashion. It then presents a novel drawing-intent recognition workflow applied to every recorded drawing point, using three different approaches. The first records mid-air drawing data and develops a classification model based on geometric properties extracted from the recorded data, with the goal of identifying drawing intent from critical geometric and temporal features. The second explores variations in the model's prediction quality as the dimensionality of the mid-air curve input is increased. The third seeks to understand drawing intention from mid-air curves using dimensionality-reduction neural networks such as autoencoders.

    Finally, the broad-level implications of this research are discussed, along with potential development areas in the design and research of mid-air interactions.
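    As a minimal illustration of the first approach described above (classifying each recorded point from geometric and temporal features), the sketch below derives per-point speed and straightness from a 3D fingertip trajectory and trains an off-the-shelf classifier to label points as hover or stroke. The feature set, the window size, and the RandomForest model are assumptions for illustration only; the thesis also explores richer inputs and autoencoder-based representations:

```python
# Illustrative hover-vs-stroke point classification; features and model
# are assumptions, not the thesis's exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def point_features(traj, t, w=5):
    """Per-point features from a (T, 3) fingertip trajectory sampled at
    uniform timestamps: local speed and straightness over a window."""
    lo, hi = max(0, t - w), min(len(traj) - 1, t + w)
    seg = traj[lo:hi + 1]
    steps = np.diff(seg, axis=0)
    path = np.linalg.norm(steps, axis=1).sum()           # arc length
    chord = np.linalg.norm(seg[-1] - seg[0])             # endpoint distance
    speed = path / max(hi - lo, 1)
    straightness = chord / path if path > 1e-9 else 1.0  # 1.0 = straight
    return [speed, straightness]

def featurize(traj):
    return np.array([point_features(traj, t) for t in range(len(traj))])

# Train on labeled recordings: y[t] = 1 if the user intended to draw at
# sample t (stroke), 0 otherwise (hover). Data here is synthetic.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0, 0.01, size=(500, 3)), axis=0)
labels = (np.arange(500) % 100) < 50  # toy alternating hover/stroke spans
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(featurize(traj), labels)
pred = clf.predict(featurize(traj))   # per-point hover/stroke intent
```

    Because the classifier runs on every sampled point, the trajectory can be segmented into strokes in a continuous fashion, with no button, posture, or controller needed to mark where drawing starts and stops.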