DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Face modeling has attracted much attention in the field of visual computing.
There exist many scenarios, including cartoon characters, avatars for social
media, 3D face caricatures as well as face-related art and design, where
low-cost interactive face modeling is a popular approach especially among
amateur users. In this paper, we propose a deep learning based sketching system
for 3D face and caricature modeling. This system has a labor-efficient
sketching interface that allows the user to draw freehand, imprecise yet
expressive 2D lines representing the contours of facial features. A novel
CNN-based deep regression network is designed for inferring 3D face models from
2D sketches. Our network fuses both CNN-based and shape-based features of the input
sketch, and has two independent branches of fully connected layers generating
independent subsets of coefficients for a bilinear face representation. Our
system also supports gesture-based interactions for users to further manipulate
initial face models. Both user studies and numerical results indicate that our
sketching system can help users create face models quickly and effectively. A
significantly expanded face database with diverse identities, expressions and
levels of exaggeration is constructed to promote further research and
evaluation of face modeling techniques.
Comment: 12 pages, 16 figures, to appear in SIGGRAPH 201
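The two-branch regression onto a bilinear face representation described above can be sketched numerically. The sizes below (feature dimensions, 50 identity and 16 expression coefficients, a 100-vertex mesh) are illustrative assumptions rather than the paper's actual configuration, and plain dense layers stand in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
CNN_DIM, SHAPE_DIM = 64, 16   # CNN-based and shape-based feature dims
N_ID, N_EXP = 50, 16          # identity / expression coefficient counts
N_VERTS = 100                 # vertices of the output face mesh

def dense(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(x @ w + b, 0.0)

# Fuse CNN-based and shape-based features of the input sketch.
cnn_feat = rng.standard_normal(CNN_DIM)
shape_feat = rng.standard_normal(SHAPE_DIM)
fused = np.concatenate([cnn_feat, shape_feat])

# Two independent fully connected branches produce the two
# independent coefficient subsets of the bilinear face model.
w_id = rng.standard_normal((CNN_DIM + SHAPE_DIM, N_ID)) * 0.1
w_exp = rng.standard_normal((CNN_DIM + SHAPE_DIM, N_EXP)) * 0.1
id_coeff = dense(fused, w_id, np.zeros(N_ID))    # identity branch
exp_coeff = dense(fused, w_exp, np.zeros(N_EXP)) # expression branch

# Bilinear representation: a core tensor contracted with the identity
# and expression coefficients yields the mesh vertex positions.
core = rng.standard_normal((N_ID, N_EXP, N_VERTS * 3)) * 0.01
verts = np.einsum("i,j,ijv->v", id_coeff, exp_coeff, core).reshape(N_VERTS, 3)
print(verts.shape)  # (100, 3)
```

In the actual system the branch weights would be learned end to end from sketch/model pairs; the random weights here only demonstrate the data flow.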
Sketching-out virtual humans: A smart interface for human modelling and animation
In this paper, we present a fast and intuitive interface for sketching out
3D virtual humans and animation. The user draws stick figure key frames first and
chooses one for "fleshing-out" with freehand body contours. The system
automatically constructs a plausible 3D skin surface from the rendered figure, and
maps it onto the posed stick figures to produce the 3D character animation. A
ācreative model-based methodā is developed, which performs a human perception
process to generate 3D human bodies of various body sizes, shapes and fat
distributions. In this approach, an anatomical 3D generic model has been created with
three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially
through rigid morphing, fatness morphing, and surface fitting to match the original
2D sketch. An auto-beautification function is also offered to regularise the 3D
asymmetrical bodies from users' imperfect figure sketches. Our current system
delivers character animation in various forms, including articulated figure animation,
3D mesh model animation, 2D contour figure animation, and even 2D NPR animation
with personalised drawing styles. The system has been formally tested by various
users on a Tablet PC. After minimal training, even a beginner can create vivid virtual
humans and animate them within minutes.
VR-Notes: A Perspective-Based, Multimedia Annotation System in Virtual Reality
Virtual reality (VR) has begun to emerge as a new technology in the commercial and research space, and many people have begun to utilize VR technologies in their workflows. To improve user productivity in these scenarios, annotation systems in VR allow users to capture insights and observations during VR sessions. In the digital, 3D world of VR, we can design annotation systems that take advantage of the medium's capabilities to provide a richer annotation viewing experience. I propose VR-Notes, a design for a new annotation system in VR that focuses on capturing the annotator's perspective for both doodle annotations and audio annotations, as well as various features that improve the viewing experience of these annotations at a later time. Early results from my experiment showed that the VR-Notes doodle method required 53%, 44%, and 51% less movement and 42%, 41%, and 45% less rotation (head, left controller, and right controller, respectively) when compared to a popular 3D freehand drawing method. Additionally, users preferred and scored the VR-Notes doodle method higher when compared to the freehand drawing method.
Image Retrieval within Augmented Reality
The present work investigates the potential of augmented reality for improving the image retrieval process. Design and usability challenges were identified for both fields of research and used to formulate design goals for the development of concepts. A taxonomy for image retrieval within augmented reality was elaborated based on the research work and used to structure related work and general ideas for interaction. Based on the taxonomy, application scenarios were formulated as further requirements for concepts. Using the general interaction ideas and the requirements, two comprehensive concepts for image retrieval within augmented reality were elaborated. One of the concepts was implemented on a Microsoft HoloLens and evaluated in a user study. The study showed that the concept was received generally positively and provided insights into different spatial behavior and search strategies when practicing image retrieval in augmented reality.
1 Introduction
1.1 Motivation and Problem Statement
1.1.1 Augmented Reality and Head-Mounted Displays
1.1.2 Image Retrieval
1.1.3 Image Retrieval within Augmented Reality
1.2 Thesis Structure
2 Foundations of Image Retrieval and Augmented Reality
2.1 Foundations of Image Retrieval
2.1.1 Definition of Image Retrieval
2.1.2 Classification of Image Retrieval Systems
2.1.3 Design and Usability in Image Retrieval
2.2 Foundations of Augmented Reality
2.2.1 Definition of Augmented Reality
2.2.2 Augmented Reality Design and Usability
2.3 Taxonomy for Image Retrieval within Augmented Reality
2.3.1 Session Parameters
2.3.2 Interaction Process
2.3.3 Summary of the Taxonomy
3 Concepts for Image Retrieval within Augmented Reality
3.1 Related Work
3.1.1 Natural Query Specification
3.1.2 Situated Result Visualization
3.1.3 3D Result Interaction
3.1.4 Summary of Related Work
3.2 Basic Interaction Concepts for Image Retrieval in Augmented Reality
3.2.1 Natural Query Specification
3.2.2 Situated Result Visualization
3.2.3 3D Result Interaction
3.3 Requirements for Comprehensive Concepts
3.3.1 Design Goals
3.3.2 Application Scenarios
3.4 Comprehensive Concepts
3.4.1 Tangible Query Workbench
3.4.2 Situated Photograph Queries
3.4.3 Conformance of Concept Requirements
4 Prototypic Implementation of Situated Photograph Queries
4.1 Implementation Design
4.1.1 Implementation Process
4.1.2 Structure of the Implementation
4.2 Developer and User Manual
4.2.1 Setup of the Prototype
4.2.2 Usage of the Prototype
4.3 Discussion of the Prototype
5 Evaluation of Prototype and Concept by User Study
5.1 Design of the User Study
5.1.1 Usability Testing
5.1.2 Questionnaire
5.2 Results
5.2.1 Logging of User Behavior
5.2.2 Rating through Likert Scales
5.2.3 Free Text Answers and Remarks during the Study
5.2.4 Observations during the Study
5.2.5 Discussion of Results
6 Conclusion
6.1 Summary of the Present Work
6.2 Outlook on Further Work
DifferSketching: How Differently Do People Sketch 3D Objects?
Multiple sketch datasets have been proposed to understand how people draw 3D
objects. However, such datasets are often of small scale and cover a small set
of objects or categories. In addition, these datasets contain freehand sketches
mostly from expert users, making it difficult to compare the drawings by expert
and novice users, while such comparisons are critical in informing more
effective sketch-based interfaces for either user groups. These observations
motivate us to analyze how differently people with and without adequate drawing
skills sketch 3D objects. We invited 70 novice users and 38 expert users to
sketch 136 3D objects, which were presented as 362 images rendered from
multiple views. This leads to a new dataset of 3,620 freehand multi-view
sketches, which are registered with their corresponding 3D objects under
certain views. Our dataset is an order of magnitude larger than the existing
datasets. We analyze the collected data at three levels, i.e., sketch-level,
stroke-level, and pixel-level, under both spatial and temporal characteristics,
and within and across groups of creators. We found that the drawings by
professionals and novices show significant differences at the stroke level, both
intrinsically and extrinsically. We demonstrate the usefulness of our dataset
in two applications: (i) freehand-style sketch synthesis, and (ii) posing it as
a potential benchmark for sketch-based 3D reconstruction. Our dataset and code
are available at https://chufengxiao.github.io/DifferSketching/.
Comment: SIGGRAPH Asia 2022 (Journal Track)
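A minimal sketch of what stroke-level analysis can look like, under an assumed stroke format (each stroke as an ordered array of 2D points; the dataset's actual schema may differ). Fewer, longer strokes versus many short ones is the kind of expert/novice contrast such statistics expose:

```python
import numpy as np

def stroke_length(stroke):
    """Polyline length of one stroke: sum of segment lengths."""
    return float(np.sum(np.linalg.norm(np.diff(stroke, axis=0), axis=1)))

def sketch_stats(strokes):
    """Simple stroke-level statistics for one sketch."""
    lengths = [stroke_length(s) for s in strokes]
    return {
        "n_strokes": len(lengths),
        "mean_len": sum(lengths) / len(lengths),
        "total_len": sum(lengths),
    }

# Toy "expert" vs "novice" sketches (hypothetical data): one long
# continuous stroke versus several short, fragmented strokes.
expert = [np.array([[0, 0], [10, 0], [10, 10]], float)]
novice = [np.array([[0, 0], [2, 0]], float),
          np.array([[2, 0], [4, 1]], float),
          np.array([[4, 1], [5, 3]], float)]

print(sketch_stats(expert)["n_strokes"])  # 1
print(sketch_stats(novice)["n_strokes"])  # 3
```

Pixel-level measures (e.g., ink coverage) and temporal measures (stroke order, drawing speed) would be computed analogously over the rasterized and timestamped data.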
SENS: Sketch-based Implicit Neural Shape Modeling
We present SENS, a novel method for generating and editing 3D models from
hand-drawn sketches, including those of an abstract nature. Our method allows
users to quickly and easily sketch a shape, and then maps the sketch into the
latent space of a part-aware neural implicit shape architecture. SENS analyzes
the sketch and encodes its parts into ViT patch encodings, then feeds them into
a transformer decoder that converts them to shape embeddings, suitable for
editing 3D neural implicit shapes. SENS not only provides intuitive
sketch-based generation and editing, but also excels in capturing the intent of
the user's sketch to generate a variety of novel and expressive 3D shapes, even
from abstract sketches. We demonstrate the effectiveness of our model compared
to the state-of-the-art using objective metric evaluation criteria and a
decisive user study, both indicating strong performance on sketches with a
medium level of abstraction. Furthermore, we showcase its intuitive
sketch-based shape editing capabilities.
Comment: 18 pages, 18 figures
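The ViT-style patch-encoding step can be illustrated in isolation. The image size, patch size, and embedding dimension below are common ViT defaults used here as assumptions, and the random projection stands in for the trained encoder:

```python
import numpy as np

def to_patches(img, p):
    """Split an (h, w) image into non-overlapping p*p patches,
    each flattened into a row vector (ViT-style tokenization)."""
    h, w = img.shape
    assert h % p == 0 and w % p == 0
    return (img.reshape(h // p, p, w // p, p)
               .transpose(0, 2, 1, 3)
               .reshape(-1, p * p))

sketch = np.zeros((224, 224))     # rasterized sketch (toy: blank canvas)
tokens = to_patches(sketch, 16)   # 14 * 14 = 196 patch tokens of dim 256

# Linear projection of each patch token into the embedding space;
# in the real model this projection is learned.
proj = np.random.default_rng(0).standard_normal((256, 64)) * 0.05
emb = tokens @ proj
print(tokens.shape, emb.shape)  # (196, 256) (196, 64)
```

In the described pipeline these patch embeddings would then pass through a transformer decoder that maps them to part-aware shape embeddings for the neural implicit representation.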