Towards Practicality of Sketch-Based Visual Understanding
Sketches have been used to conceptualise and depict visual objects from
pre-historic times. Sketch research has flourished in the past decade,
particularly with the proliferation of touchscreen devices. Much of the
utilisation of sketch has been anchored around the fact that it can be used to
delineate visual concepts universally irrespective of age, race, language, or
demography. The fine-grained interactive nature of sketches facilitates the
application of sketches to various visual understanding tasks, like image
retrieval, image-generation or editing, segmentation, 3D-shape modelling etc.
However, sketches are highly abstract and subjective based on the perception of
individuals. Although most agree that sketches provide fine-grained control to
the user to depict a visual object, many consider sketching a tedious process
due to their limited sketching skills compared to other query/support
modalities like text/tags. Furthermore, collecting fine-grained sketch-photo
association is a significant bottleneck to commercialising sketch applications.
Therefore, this thesis aims to progress sketch-based visual understanding
towards more practicality.
Comment: PhD thesis successfully defended by Ayan Kumar Bhunia. Supervisor: Prof. Yi-Zhe Song; Thesis Examiners: Prof. Stella Yu and Prof. Adrian Hilton.
Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation
Virtual beings play a remarkable role in today's public entertainment, yet ordinary users are still treated as audiences owing to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface that enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive "stick figure → fleshing-out → skin mapping" graphical animation pipeline, which realises the whole process of keyframing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and final animation synthesis by almost pure 2D sketching. A "creative model-based method" is developed, which emulates a human perception process, to generate 3D human bodies of varying sizes, shapes, and fat distributions. Our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. The system has been formally tested by various users on a Tablet PC; after minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
LiveSketch: Query Perturbations for Guided Sketch-based Visual Search
LiveSketch is a novel algorithm for searching large image collections using
hand-sketched queries. LiveSketch tackles the inherent ambiguity of sketch
search by creating visual suggestions that augment the query as it is drawn,
making query specification an iterative rather than one-shot process that helps
disambiguate users' search intent. Our technical contributions are: a triplet
convnet architecture that incorporates an RNN based variational autoencoder to
search for images using vector (stroke-based) queries; real-time clustering to
identify likely search intents (and so, targets within the search embedding);
and the use of backpropagation from those targets to perturb the input stroke
sequence, so suggesting alterations to the query in order to guide the search.
We show improvements in accuracy and time-to-task over contemporary baselines
using a 67M image corpus.
Comment: Accepted to CVPR 2019.
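The perturbation step described above — backpropagating from inferred target embeddings into the raw stroke sequence — can be sketched roughly as follows. This is a minimal toy, assuming a plain GRU stroke encoder in place of the paper's RNN-based variational autoencoder and triplet convnet; all names, shapes, and dimensions here are illustrative, not the paper's:

```python
import torch

torch.manual_seed(0)

# Toy stand-in for a stroke-sequence encoder (the real system embeds vector
# sketches with an RNN-VAE inside a triplet convnet architecture).
class StrokeEncoder(torch.nn.Module):
    def __init__(self, stroke_dim=3, embed_dim=16):
        super().__init__()
        self.rnn = torch.nn.GRU(stroke_dim, embed_dim, batch_first=True)

    def forward(self, strokes):
        _, h = self.rnn(strokes)      # final hidden state as the query embedding
        return h.squeeze(0)

def perturb_query(encoder, strokes, target_embedding, step=0.1):
    """One gradient step nudging the raw stroke sequence toward a search target."""
    strokes = strokes.clone().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(encoder(strokes), target_embedding)
    loss.backward()                   # gradients flow back into the input strokes
    return (strokes - step * strokes.grad).detach()

encoder = StrokeEncoder()
query = torch.randn(1, 20, 3)         # one sketch: 20 points of (dx, dy, pen-state)
target = torch.randn(1, 16)           # embedding of an inferred search intent
suggested = perturb_query(encoder, query, target)
```

In the full system the targets would come from real-time clustering of likely search intents, and the perturbed stroke sequence would be rendered back to the user as a visual suggestion.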
From on-line sketching to 2D and 3D geometry: A fuzzy knowledge based system
The paper describes the development of a fuzzy knowledge-based prototype system for conceptual design. This real-time system is designed to infer the user's sketching intentions, segment sketched input, and generate the corresponding geometric primitives: straight lines, circles, arcs, ellipses, elliptical arcs, and B-spline curves. Topology information (connectivity, unitary constraints, and pairwise constraints) is derived dynamically from the 2D sketched input and primitives. From this 2D topology information, a more accurate 2D geometry can be built up by applying a 2D geometric constraint solver. Subsequently, 3D geometry can be derived feature by feature incrementally; each feature is recognised by inference knowledge, by matching its 2D primitive configurations and connection relationships. The system accepts not only sketched input, working as an automatic design tool, but also the user's interactive input of both 2D primitives and special positional 3D primitives, which makes it easy and friendly to use. The system has been tested with a number of sketched inputs of 2D and 3D geometry.
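The core idea of segmenting a stroke into geometric primitives can be illustrated in miniature. The sketch below is not the paper's fuzzy inference system — it is a plain least-squares stand-in that classifies a stroke of 2D points as a line or a circle by comparing fit residuals; all function names and the input format are assumptions:

```python
import numpy as np

def fit_line_residual(pts):
    # Mean distance of points to the best-fit line, via PCA:
    # deviation along the minor principal axis.
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return np.abs(centered @ vt[1]).mean()

def fit_circle_residual(pts):
    # Algebraic (Kasa) circle fit: least-squares solve of
    # x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F).
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r2 = cx**2 + cy**2 - F
    if r2 <= 0:                       # degenerate fit (e.g. collinear points)
        return np.inf
    r = np.sqrt(r2)
    return np.abs(np.hypot(x - cx, y - cy) - r).mean()

def classify_stroke(pts):
    return "line" if fit_line_residual(pts) < fit_circle_residual(pts) else "circle"

# Two synthetic strokes as (N, 2) point arrays.
t = np.linspace(0, 2 * np.pi, 50)
circle = np.column_stack([np.cos(t), np.sin(t)])
line = np.column_stack([t, 2 * t + 1])
```

A real sketch-segmentation front end would first split a stroke at corners and speed minima, then run this kind of primitive fitting (with fuzzy membership rather than a hard comparison) on each segment.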
DeepSketch2Face: A Deep Learning Based Sketching System for 3D Face and Caricature Modeling
Face modeling has been paid much attention in the field of visual computing.
There exist many scenarios, including cartoon characters, avatars for social
media, 3D face caricatures as well as face-related art and design, where
low-cost interactive face modeling is a popular approach especially among
amateur users. In this paper, we propose a deep learning based sketching system
for 3D face and caricature modeling. This system has a labor-efficient
sketching interface, that allows the user to draw freehand imprecise yet
expressive 2D lines representing the contours of facial features. A novel CNN
based deep regression network is designed for inferring 3D face models from 2D
sketches. Our network fuses both CNN and shape based features of the input
sketch, and has two independent branches of fully connected layers generating
independent subsets of coefficients for a bilinear face representation. Our
system also supports gesture based interactions for users to further manipulate
initial face models. Both user studies and numerical results indicate that our
sketching system can help users create face models quickly and effectively. A
significantly expanded face database with diverse identities, expressions and
levels of exaggeration is constructed to promote further research and
evaluation of face modeling techniques.
Comment: 12 pages, 16 figures, to appear in SIGGRAPH 2017.
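The bilinear face representation mentioned above can be pictured as a core tensor contracted with two independent coefficient vectors — one per regression branch (identity and expression). A minimal toy with made-up dimensions, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_verts, n_id, n_exp = 100, 5, 4          # illustrative sizes only

# Core tensor of the bilinear model: flattened (x, y, z) vertex coordinates
# along the first axis, identity and expression modes along the other two.
core = rng.standard_normal((3 * n_verts, n_id, n_exp))

def reconstruct(core, w_id, w_exp):
    """Contract the core tensor with the two independent coefficient subsets."""
    verts = np.einsum('vij,i,j->v', core, w_id, w_exp)
    return verts.reshape(n_verts, 3)

w_id = rng.standard_normal(n_id)          # would come from one network branch
w_exp = rng.standard_normal(n_exp)        # would come from the other branch
mesh = reconstruct(core, w_id, w_exp)     # (n_verts, 3) face mesh vertices
```

The representation is linear in each coefficient subset separately, which is what lets two independent fully connected branches predict them from the fused sketch features.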