DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Face modeling has received much attention in the field of visual computing.
There exist many scenarios, including cartoon characters, avatars for social
media, 3D face caricatures as well as face-related art and design, where
low-cost interactive face modeling is a popular approach especially among
amateur users. In this paper, we propose a deep learning-based sketching system
for 3D face and caricature modeling. This system has a labor-efficient
sketching interface that allows the user to draw freehand, imprecise yet
expressive 2D lines representing the contours of facial features. A novel
CNN-based deep regression network is designed for inferring 3D face models from
2D sketches. Our network fuses both CNN-based and shape-based features of the input
sketch, and has two independent branches of fully connected layers generating
independent subsets of coefficients for a bilinear face representation. Our
system also supports gesture-based interactions for users to further manipulate
initial face models. Both user studies and numerical results indicate that our
sketching system can help users create face models quickly and effectively. A
significantly expanded face database with diverse identities, expressions and
levels of exaggeration is constructed to promote further research and
evaluation of face modeling techniques.
Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
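The bilinear face representation the abstract refers to combines a core tensor with an identity-weight vector and an expression-weight vector to produce mesh vertices. The sketch below illustrates that contraction only; the function name, flattened tensor layout, and dimensions are assumptions for illustration, not the paper's actual implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative bilinear face model: a core tensor C of shape
// (nVerts, nId, nExp), stored flat in row-major order, is contracted with
// identity weights wId and expression weights wExp to give per-vertex values.
std::vector<float> bilinearFace(const std::vector<float>& core,
                                const std::vector<float>& wId,
                                const std::vector<float>& wExp,
                                std::size_t nVerts) {
    const std::size_t nId = wId.size(), nExp = wExp.size();
    std::vector<float> verts(nVerts, 0.0f);
    for (std::size_t v = 0; v < nVerts; ++v)
        for (std::size_t i = 0; i < nId; ++i)
            for (std::size_t e = 0; e < nExp; ++e)
                verts[v] += core[(v * nId + i) * nExp + e] * wId[i] * wExp[e];
    return verts;
}
```

In this formulation the two branches of the regression network can each predict one coefficient subset (identity and expression) independently, since the two weight vectors enter the contraction separately.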
Stereoscopic Sketchpad: 3D Digital Ink
--Context--
This project looked at the development of a stereoscopic 3D environment in which a user is able to draw freely in all three dimensions. The main focus was on the storage and manipulation of the ‘digital ink’ with which the user draws. For a drawing and sketching package to be effective it must not only have an easy-to-use interface; it must also be able to handle all input data quickly and efficiently so that the user can focus fully on their drawing.
--Background--
When it comes to sketching in three dimensions, the majority of applications currently available rely on vector-based drawing methods. This is primarily because the applications are designed to take a user's two-dimensional input and transform it into a three-dimensional model. Having the sketch represented as vectors makes it simpler for the program to act upon its geometry and thus convert it to a model. There are a number of methods to achieve this aim, including Gesture-Based Modelling, Reconstruction, and Blobby Inflation. Other vector-based applications focus on the creation of curves, allowing the user to draw within or on existing 3D models; they also allow the user to create wireframe-type models. These stroke-based applications bring the user closer to traditional sketching than the more structured modelling methods detailed above.
While at present the field is inundated with vector-based applications, mainly focused upon sketch-based modelling, there are significantly fewer voxel-based applications. The majority of these focus on the deformation and sculpting of voxmaps (almost the opposite of drawing and sketching) and on the creation of three-dimensional voxmaps from standard two-dimensional pixmaps. How to actually sketch freely within a scene represented by a voxmap has rarely been explored. This comes as a surprise when so many of the standard 2D drawing programs in use today are pixel based.
--Method--
As part of this project a simple three-dimensional drawing program was designed and implemented using C and C++. This tool, known as Sketch3D, was created using a Model View Controller (MVC) architecture. Due to the modular nature of Sketch3D's system architecture it is possible to plug a range of different data structures into the program to represent the ink in a variety of ways. A series of data structures were implemented and tested for efficiency: a simple list, a 3D array, and an octree. They were tested for the time it takes to insert or remove points; how easy it is to manipulate points once they are stored; and how the number of points stored affects the draw and rendering times.
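To make the octree option concrete, the following is a minimal point-octree sketch in the project's own languages (C/C++). It stores ink points in small leaf buckets, subdivides on overflow, and supports the depth-plane query that the highlight tool described below would need. All names, the bucket size, and the API are illustrative assumptions, not Sketch3D's actual code.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

struct Point3 { float x, y, z; };

// Minimal bucketed point octree for digital-ink storage (illustrative only).
class InkOctree {
public:
    InkOctree(Point3 centre, float halfSize) : centre_(centre), half_(halfSize) {}

    void insert(const Point3& p) {
        if (children_[0] == nullptr && points_.size() < kBucket) {
            points_.push_back(p);          // room left in this leaf bucket
            return;
        }
        if (children_[0] == nullptr) subdivide();
        child(p).insert(p);                // recurse into the matching octant
    }

    // Collect every stored point whose z lies on the given depth plane
    // (within eps), pruning subtrees whose z-range cannot intersect it.
    void queryDepthPlane(float z, float eps, std::vector<Point3>& out) const {
        if (z < centre_.z - half_ - eps || z > centre_.z + half_ + eps) return;
        for (const Point3& p : points_)
            if (p.z >= z - eps && p.z <= z + eps) out.push_back(p);
        if (children_[0] != nullptr)
            for (const auto& c : children_) c->queryDepthPlane(z, eps, out);
    }

    std::size_t size() const {
        std::size_t n = points_.size();
        if (children_[0] != nullptr)
            for (const auto& c : children_) n += c->size();
        return n;
    }

private:
    static constexpr std::size_t kBucket = 8;  // leaf capacity before splitting

    void subdivide() {
        for (int i = 0; i < 8; ++i) {
            Point3 c = centre_;
            c.x += ((i & 1) ? 0.5f : -0.5f) * half_;
            c.y += ((i & 2) ? 0.5f : -0.5f) * half_;
            c.z += ((i & 4) ? 0.5f : -0.5f) * half_;
            children_[i] = std::make_unique<InkOctree>(c, half_ * 0.5f);
        }
        for (const Point3& p : points_) child(p).insert(p);  // redistribute
        points_.clear();
    }

    InkOctree& child(const Point3& p) {
        int i = (p.x >= centre_.x ? 1 : 0)
              | (p.y >= centre_.y ? 2 : 0)
              | (p.z >= centre_.z ? 4 : 0);
        return *children_[i];
    }

    Point3 centre_;
    float half_;
    std::vector<Point3> points_;
    std::array<std::unique_ptr<InkOctree>, 8> children_{};
};
```

A structure along these lines explains the reported trade-off: spatial subdivision keeps point lookup and erasure cheap (unlike the list) without the per-frame traversal cost of a dense 3D array.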
One of the key issues raised by this project was devising a means by which a user is able to draw in three dimensions while using only two-dimensional input devices. The method settled upon and implemented involves using the mouse or a digital pen to sketch as one would in a standard 2D drawing package, while linking the up and down keyboard keys to the current depth. This allows the user to move in and out of the scene as they draw. A couple of user-interface tools were also developed to assist the user: a 3D cursor, and a toggle which, when on, highlights all of the points intersecting the depth plane on which the cursor currently resides. These tools allow the user to see exactly where they are drawing in relation to previously drawn lines.
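The input mapping just described can be sketched in a few lines: the pointing device supplies x/y, and the up/down keys step a persistent depth value, clamped to the scene's depth range. The type and method names below are hypothetical, not taken from Sketch3D.

```cpp
#include <cassert>

// Illustrative 2D-input-to-3D mapping: mouse gives x/y, keyboard steps depth.
struct Cursor3D {
    float x = 0.0f, y = 0.0f, depth = 0.0f;

    void onMouseMove(float mx, float my) { x = mx; y = my; }

    // direction is +1 (up key, into the scene) or -1 (down key, out of it);
    // the result is clamped to the drawable depth range.
    void onDepthKey(int direction, float step, float minDepth, float maxDepth) {
        depth += direction * step;
        if (depth < minDepth) depth = minDepth;
        if (depth > maxDepth) depth = maxDepth;
    }
};
```

Each new ink point is then simply `{x, y, depth}`, and the depth-plane highlight toggle only needs to compare stored points against the cursor's current `depth`.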
--Results--
The tests conducted on the data structures clearly revealed that the octree was the most effective data structure. While not the most efficient in every area, it manages to avoid the major pitfalls of the other structures. The list was extremely quick to render and draw to the screen but suffered severely when it came to finding and manipulating points already stored. In contrast, the three-dimensional array was able to erase or manipulate points effectively, but its draw time rendered the structure effectively useless, taking huge amounts of time to draw each frame.
The focus of this research was on how a 3D sketching package should store and access the digital ink. This is just a basis for further research in this area, and many issues touched upon in this paper will require a more in-depth analysis. The primary area of this future research would be the creation of an effective user interface and the introduction of regular sketching-package features such as the saving and loading of images.
Building geometric models with hand-drawn sketches
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Architecture, 1998. Includes bibliographical references (p. 49-51).
Architects work on drawings and models, not buildings. Today, in many architectural practices, drawings and models are produced in digital format using Computer-aided Design (CAD) tools. Unquestionably, digital media have changed the way in which many architects perform their day-to-day activities. But these changes have been limited to the more prosaic aspects of practice. To be sure, CAD systems have made the daily operations of many design offices more efficient; nevertheless, they have been of little use - and indeed are often a hindrance - in situations where the task at hand is more conjectural and speculative in nature, as it is during the early stages of a project. Well-intentioned efforts to insinuate CAD into these aspects of practice have only served to reveal the incongruities between the demands of designers and the configuration of the available tools. One of the chief attributes of design practice is that it is action performed at a distance through the agency of representations. This fundamental trait implies that we have to understand how computers help architects describe buildings if we are to understand how they might help architects design buildings. As obvious as this claim might seem, CAD programs can be almost universally characterized by a tacit denigration of visual representation. In this thesis, I examine properties of design drawings that make them useful to architects. I go on to describe a computer program that I have written that allows a designer to build geometric models using freehand sketches. This program illustrates that it is possible to design a software tool in a way that profits from, rather than negates, the power of visual representations.
by Ewan E. Branda. M.S.
A cameraphone-based approach for the generation of 3D models from paper sketches
Parts of the research work disclosed in this paper are subject to a pending patent application number 2130.
Due to the advantages it offers, a sketch-based user interface (UI) has been utilised in various domains, such as 3D modelling, graphical-user-interface design, 3D animation of cartoon characters, etc. However, its benefits have not yet been adequately exploited in combination with those of a mobile phone, even though the latter is nowadays a widely used wireless handheld device for mobile communication. Given this scenario, this paper discloses a novel approach of using a paper sketch-based UI, which combines the benefits of paper sketching and those of a cameraphone (a mobile phone with an integrated camera), in the domain of early form design modelling. More specifically, the framework disclosed and evaluated in this paper enables users to remotely obtain visual representations of 3D geometric models from freehand sketches by combining the portability of paper with that of cameraphones. Based on this framework, a prototype tool has been implemented and evaluated. Despite the limitations of the current prototype tool, the evaluation results of the framework's underlying concepts and of the prototype tool collectively indicate that the idea disclosed in this paper contributes to providing users with a mobile sketch-based interface, which can also be used in other domains beyond early form design modelling. Peer-reviewed.
Integration of sketch-based ideation and 3D modeling with CAD systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis is concerned with how sketch-based systems can be improved to enhance the idea generation process in the conceptual design stage. It is also concerned with achieving integration between sketch-based systems and CAD systems to complete the digitization of the design process, as the sketching phase is still not integrated with the other phases, owing both to its different nature and to the incomplete digitization of the sketching phase itself. Previous studies identified three main related issues: the sketching process, sketch-based modeling, and the integration between the digitized design phases. The thesis is motivated by the desire to improve sketch-based modeling to support the idea generation process, but unlike previous studies that focused only on the technical or drawing part of sketching, it concentrates on the mental part of the sketching process, which plays a key role in developing ideas in design. Another motivation is to achieve integration between sketch-based systems and CAD systems so that 3D models produced by sketching can be edited in the detailed design stage. As such, two main contributions are addressed in this thesis. The first is a new approach to designing sketch-based systems that provides more support for idea generation by separating thinking and developing ideas from the 3D modeling process. This separation allows designers to think freely and concentrate on their ideas rather than on 3D modeling. The second is the integration of gesture-based systems with CAD systems by using an IGES file to exchange data between systems, together with a new method of organizing data within the file in an order that makes it more readily understood by the feature recognition embedded in commercial CAD systems. This study is funded by the Ministry of Higher Education of Egypt.
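For context on the exchange format named above: IGES is a fixed-column text format in which every record is 80 characters wide, with a section letter (S, G, D, P, or T) in column 73 and a right-justified sequence number in columns 74-80. The helper below only illustrates that generic record layout; it is not the thesis's data-ordering method, and the function name is hypothetical.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format one IGES record: 72 columns of data, the section letter in
// column 73, and a 7-character right-justified sequence number.
std::string igesRecord(const std::string& content, char section, int seq) {
    std::string line = content;
    line.resize(72, ' ');                          // pad data field to 72 cols
    char tail[10];
    std::snprintf(tail, sizeof(tail), "%c%7d", section, seq);
    return line + tail;                            // 72 + 1 + 7 = 80 chars
}
```

Because every entity is addressed by its sequence number within a section, reordering records (as the thesis proposes) changes how a downstream feature recognizer encounters the data without altering the entities themselves.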
Enhancing Freehand Sketching in Industrial Design: Description and Implementation of a Drawing Methodology for More Effective Representations
Freehand sketching is an important part of the design process that allows one to communicate in a quick and gestural way the first ideas about new concepts, and it is a medium for graphic thinking. It is important for architects and designers because it is a mechanism of representation, conceptualization, and abstraction for the communication between creators and their audience. All academic courses related to industrial design include subjects aimed at acquiring skills in the use of manual tools of graphic representation, recognizing their importance in the integral training of the designer. However, sometimes the methodologies implemented in some subjects fail to adequately develop the skills of the students, who finish their studies with shortcomings in the field of graphic representation. This paper describes exercises that are part of a methodology designed to help students of industrial design acquire the skills to make agile and effective use of freehand sketching. Through different uses of the elements of formal expression, the exercises address topics such as shape analysis, composition, light, color, and descriptive illustration. The methodology is applied experimentally in a subject of the bachelor's degree in industrial design and product development engineering at Universitat Jaume I, introducing the students to different instruments and techniques of sketching and proposing various enriching ways of direct observation of the objectual reality that surrounds them. The paper concludes by evaluating the positive impact of the implemented methodology.
Application of Machine Learning within Visual Content Production
We are living in an era where digital content is being produced at a dazzling pace. The heterogeneity of contents and contexts is so varied that numerous applications have been created to respond to people and market demands. The visual content production pipeline is the generalisation of the process that allows a content editor to create and evaluate their product, such as a video, an image, a 3D model, etc. Such data is then displayed on one or more devices such as TVs, PC monitors, virtual reality head-mounted displays, tablets, mobiles, or even smartwatches. Content creation can be as simple as clicking a button to film a video and then share it on a social network, or as complex as managing a dense user interface full of parameters by using keyboard and mouse to generate a realistic 3D model for a VR game. In this second example, such sophistication results in a steep learning curve for beginner-level users; in contrast, expert users regularly need to refine their skills via expensive lessons, time-consuming tutorials, or experience. Thus, user interaction plays an essential role in the diffusion of content creation software, primarily when it is targeted at untrained people. In particular, with the fast spread of virtual reality devices into the consumer market, new opportunities for designing reliable and intuitive interfaces have been created. Such new interactions need to take a step beyond the point-and-click interaction typical of the 2D desktop environment. The interactions need to be smart, intuitive, and reliable, to interpret 3D gestures, and therefore more accurate algorithms are needed to recognise patterns. In recent years, machine learning and in particular deep learning have achieved outstanding results in many branches of computer science, such as computer graphics and human-computer interaction, outperforming algorithms that were considered state of the art; however, there have been only fleeting efforts to translate this into virtual reality.
In this thesis, we seek to apply and take advantage of deep learning models in two areas of the content production pipeline, embracing the following subjects of interest: advanced methods for user interaction and visual quality assessment. First, we focus on 3D sketching to retrieve models from an extensive database of complex geometries and textures while the user is immersed in a virtual environment. We explore both 2D and 3D strokes as tools for model retrieval in VR, and we implement a novel system for improving accuracy in searching for a 3D model. We contribute an efficient method to describe models through 3D sketch via an iterative descriptor generation, focusing both on accuracy and user experience. To evaluate it, we design a user study to compare different interactions for sketch generation. Second, we explore the combination of sketch input and vocal description to correct and fine-tune the search for 3D models in a database containing fine-grained variation. We analyse sketch and speech queries, identifying a way to incorporate both of them into our system's interaction loop. Third, in the context of the visual content production pipeline, we present a detailed study of visual metrics. We propose a novel method for detecting rendering-based artefacts in images; it exploits deep learning algorithms analogous to those used when extracting features from sketches.