
    Sketching-out virtual humans: From 2d storyboarding to immediate 3d character animation

    Virtual beings are playing a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis by almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of varied sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. This system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
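
    A minimal sketch of how the staged pipeline described above could be organized in code, assuming hypothetical stage names and data shapes; the pose-lifting, fleshing-out, and skin-mapping functions are placeholders, not the paper's algorithms:

        # Hypothetical skeleton of the "stick figure -> fleshing-out -> skin mapping"
        # pipeline; every name and data shape here is illustrative.
        from dataclasses import dataclass
        from typing import Dict, List, Tuple

        @dataclass
        class StickFigure:                      # one 2D key-frame sketch
            joints: List[Tuple[float, float]]

        def reconstruct_pose(fig: StickFigure, depth: float = 0.0):
            """Lift 2D joints to 3D; a real system adds anatomical constraints."""
            return [(x, y, depth) for x, y in fig.joints]

        def flesh_out(pose, girth: float = 1.0) -> Dict:
            """Attach body-part volumes around the skeleton ('fleshing-out')."""
            return {"skeleton": pose, "volumes": [girth] * len(pose)}

        def skin(body: Dict) -> Dict:
            """Wrap a skin mesh over the fleshed-out body ('skin mapping')."""
            return {"mesh": "skinned", **body}

        # One key frame, from 2D sketch to an animatable character:
        frame = skin(flesh_out(reconstruct_pose(StickFigure([(0.1, 0.9), (0.1, 0.5)]))))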

    Imaging and Nulling with the Space Interferometry Mission

    We present numerical simulations for a possible synthesis imaging mode of the Space Interferometry Mission (SIM). We summarize the general techniques that SIM offers to perform imaging of high surface brightness sources, and discuss their strengths and weaknesses. We describe an interactive software package that is used to provide realistic, photometrically correct estimates of SIM performance for various classes of astronomical objects. In particular, we simulate the cases of gaseous disks around black holes in the nuclei of galaxies, and zodiacal dust disks around young stellar objects. Regarding the first, we show that a Keplerian velocity gradient of the line-emitting gaseous disk -- and thus the mass of the putative black hole -- can be determined with SIM to unprecedented accuracy in about 5 hours of integration time for objects with H_alpha surface brightness comparable to the prototype M 87. Detections and observations of exo-zodiacal dust disks depend critically on the disk properties and the nulling capabilities of SIM. Systems with similar disk size and at least one tenth of the dust content of beta Pic can be detected by SIM at distances between 100 pc and a few kpc, if a nulling efficiency of 1/10000 is achieved. Possible inner clear regions indicative of the presence of massive planets can also be detected and imaged. On the other hand, exo-zodiacal disks with properties more similar to the solar system will not be found in reasonable integration times with SIM.
    Comment: 28 pages, incl. 8 postscript figures, excl. 10 gif-figures. Submitted to Ap
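
    The black-hole mass measurement rests on simple Keplerian dynamics: gas on a circular orbit of radius r with speed v encloses mass M = v^2 r / G. A back-of-the-envelope check (the radius and velocity below are illustrative, not values from the paper):

        # Keplerian enclosed-mass estimate: v^2 = G M / r  =>  M = v^2 r / G.
        G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
        M_SUN = 1.989e30     # solar mass, kg
        PC = 3.086e16        # parsec, m

        v = 5.0e5            # rotation speed at radius r, m/s (500 km/s, illustrative)
        r = 20 * PC          # disk radius where v is measured (illustrative)

        M = v**2 * r / G
        print(f"Enclosed mass ~ {M / M_SUN:.1e} solar masses")   # ~1e9 M_sun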

    To “Sketch-a-Scratch”

    A surface can be harsh and raspy, or smooth and silky, and everything in between. We are used to sensing these features with our fingertips as well as with our eyes and ears: the exploration of a surface is a multisensory experience. Tools, too, are often employed in the interaction with surfaces, since they augment our manipulation capabilities. “Sketch-a-Scratch” is a tool for the multisensory exploration and sketching of surface textures. The user’s actions drive a physical sound model of real materials’ response to interactions such as scraping, rubbing or rolling. Moreover, different input signals can be converted into 2D visual surface profiles, thus enabling users to experience them visually, aurally and haptically.
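
    A minimal sketch of the underlying idea, assuming a hypothetical mapping from an image scanline to a 1D height profile that a virtual probe "scrapes" at audio rate; the crude derivative below stands in for the physically based contact-sound model the actual tool uses:

        import numpy as np

        def row_to_profile(row: np.ndarray) -> np.ndarray:
            """Map pixel intensities (0-255) to surface heights in [0, 1]."""
            return row.astype(float) / 255.0

        def scrape(profile: np.ndarray, speed: float, sr: int = 44100, dur: float = 0.5):
            """Drag a probe across the profile; differentiate height to excite audio."""
            n = int(sr * dur)
            pos = (np.arange(n) * speed / sr) % 1.0              # normalized position
            h = np.interp(pos * (len(profile) - 1), np.arange(len(profile)), profile)
            return np.diff(h, prepend=h[0])                      # contact-force proxy

        row = np.random.randint(0, 256, size=512)                # stand-in scanline
        excitation = scrape(row_to_profile(row), speed=2.0)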

    Deep Sketch-Photo Face Recognition Assisted by Facial Attributes

    In this paper, we present a deep coupled framework to address the problem of matching a sketch image against a gallery of mugshots. Face sketches have the essential information about the spatial topology and geometric details of faces while missing some important facial attributes such as ethnicity, hair, eye, and skin color. We propose a coupled deep neural network architecture which utilizes facial attributes in order to improve the sketch-photo recognition performance. The proposed Attribute-Assisted Deep Convolutional Neural Network (AADCNN) method exploits the facial attributes and leverages the loss functions from the facial attribute identification and face verification tasks in order to learn rich discriminative features in a common embedding subspace. The facial attribute identification task increases the inter-personal variations by pushing apart the embedded features extracted from individuals with different facial attributes, while the verification task reduces the intra-personal variations by pulling together all the features that are related to one person. The learned discriminative features generalize well to new identities not seen in the training data. Compared to conventional sketch-photo recognition methods, the proposed architecture is able to make full use of the sketch and complementary facial attribute information to train a deep model. Extensive experiments are performed on composite (E-PRIP) and semi-forensic (IIIT-D semi-forensic) datasets. The results show the superiority of our method compared to state-of-the-art sketch-photo recognition algorithms.
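
    A hedged PyTorch sketch of the general recipe: one shared embedding trained jointly with an attribute-classification loss (spreads apart identities with different attributes) and a verification loss (pulls same-identity pairs together). Layer sizes, the contrastive formulation, and the 0.5 weighting are assumptions, not the paper's AADCNN:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Embedder(nn.Module):
            def __init__(self, in_dim=512, emb_dim=128, n_attrs=10):
                super().__init__()
                self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                              nn.Linear(256, emb_dim))
                self.attr_head = nn.Linear(emb_dim, n_attrs)  # attribute identification

            def forward(self, x):
                z = F.normalize(self.backbone(x), dim=1)      # shared embedding space
                return z, self.attr_head(z)

        def verification_loss(z1, z2, same, margin=1.0):
            """Contrastive loss: pull same-person pairs together, push others apart."""
            d = F.pairwise_distance(z1, z2)
            return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

        model = Embedder()
        sketch_x, photo_x = torch.randn(8, 512), torch.randn(8, 512)  # toy features
        same = torch.randint(0, 2, (8,)).float()              # 1 if same identity
        attrs = torch.randint(0, 2, (8, 10)).float()          # binary attribute labels

        z_s, attr_logits = model(sketch_x)
        z_p, _ = model(photo_x)
        loss = (F.binary_cross_entropy_with_logits(attr_logits, attrs)
                + 0.5 * verification_loss(z_s, z_p, same))
        loss.backward()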

    Data-Driven Shape Analysis and Processing

    Data-driven methods play an increasingly important role in discovering geometric, structural, and semantic relationships between 3D shapes in collections, and applying this analysis to support intelligent modeling, editing, and visualization of geometric data. In contrast to traditional approaches, a key feature of data-driven approaches is that they aggregate information from a collection of shapes to improve the analysis and processing of individual shapes. In addition, they are able to learn models that reason about properties and relationships of shapes without relying on hard-coded rules or explicitly programmed instructions. We provide an overview of the main concepts and components of these techniques, and discuss their application to shape classification, segmentation, matching, reconstruction, modeling and exploration, as well as scene analysis and synthesis, through reviewing the literature and relating the existing works with both qualitative and numerical comparisons. We conclude our report with ideas that can inspire future research in data-driven shape analysis and processing.
    Comment: 10 pages, 19 figures

    Cross-Paced Representation Learning with Partial Curricula for Sketch-based Image Retrieval

    In this paper we address the problem of learning robust cross-domain representations for sketch-based image retrieval (SBIR). While most SBIR approaches focus on extracting low- and mid-level descriptors for direct feature matching, recent works have shown the benefit of learning coupled feature representations to describe data from two related sources. However, cross-domain representation learning methods are typically cast into non-convex minimization problems that are difficult to optimize, leading to unsatisfactory performance. Inspired by self-paced learning, a learning methodology designed to overcome convergence issues related to local optima by exploiting the samples in a meaningful order (i.e. easy to hard), we introduce the cross-paced partial curriculum learning (CPPCL) framework. Compared with existing self-paced learning methods which only consider a single modality and cannot deal with prior knowledge, CPPCL is specifically designed to assess the learning pace by jointly handling data from dual sources and modality-specific prior information provided in the form of partial curricula. Additionally, thanks to the learned dictionaries, we demonstrate that the proposed CPPCL embeds robust coupled representations for SBIR. Our approach is extensively evaluated on four publicly available datasets (i.e. CUFS, Flickr15K, Queen Mary SBIR and TU-Berlin Extension), showing superior performance over competing SBIR methods.
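
    A minimal sketch of the self-paced learning idea CPPCL builds on: alternate between selecting "easy" samples whose current loss falls under a pace threshold and refitting on the selected set, then relax the threshold so harder samples enter. This is plain single-modality SPL for a least-squares model, not the paper's cross-paced, dual-source formulation:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        w_true = rng.normal(size=5)
        y = X @ w_true + rng.normal(scale=0.1, size=200)
        y[:20] += rng.normal(scale=5.0, size=20)      # a few "hard" outlier samples

        w = np.zeros(5)
        pace = 0.5                                    # loss threshold (1/lambda)
        for _ in range(10):
            losses = (X @ w - y) ** 2
            easy = losses < pace                      # step 1: pick easy samples
            if easy.any():                            # step 2: refit on the easy set
                w, *_ = np.linalg.lstsq(X[easy], y[easy], rcond=None)
            pace *= 1.5                               # anneal: easy -> hard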

    Markov Weight Fields for face sketch synthesis

    Great progress has been made in face sketch synthesis in recent years. State-of-the-art methods commonly apply a Markov Random Fields (MRF) model to select local sketch patches from a set of training data. Such methods, however, have two major drawbacks. Firstly, the MRF model used cannot synthesize new sketch patches. Secondly, the optimization problem in solving the MRF is NP-hard. In this paper, we propose a novel Markov Weight Fields (MWF) model that is capable of synthesizing new sketch patches. We formulate our model as a convex quadratic programming (QP) problem for which the optimal solution is guaranteed. Based on the Markov property of our model, we further propose a cascade decomposition method (CDM) for solving such a large-scale QP problem efficiently. Experimental results on the CUHK face sketch database and celebrity photos show that our model outperforms the common MRF model used in other state-of-the-art methods. © 2012 IEEE.
    The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, 16-21 June 2012. In IEEE Conference on Computer Vision and Pattern Recognition Proceedings, 2012, p. 1091-109
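
    A toy illustration of the convex-QP flavor of MWF, assuming SciPy is available: rather than selecting a single best training patch as MRF-based methods do, a new patch is synthesized as a nonnegative linear combination of training patches. The real model additionally couples neighboring patches through Markov compatibility terms and solves the resulting large QP with the cascade decomposition method; this shows only the local step:

        # min_w || D w - t ||^2  subject to  w >= 0  -- a convex QP with a
        # guaranteed optimum, solved here by nonnegative least squares.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 30))     # 30 training photo patches, flattened
        t = rng.normal(size=64)           # target photo patch around a test pixel

        w, _ = nnls(D, t)                 # weights over the training patches
        S = rng.normal(size=(64, 30))     # the corresponding training sketch patches
        new_patch = S @ w                 # synthesized sketch patch (can be novel)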

    Image-to-Image Translation with Conditional Adversarial Networks

    We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
    Comment: Website: https://phillipi.github.io/pix2pix/, CVPR 2017
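
    The paper's combined generator objective is L = L_cGAN(G, D) + λ·L_L1(G) (λ = 100 in the paper), with a discriminator that sees the input image concatenated with either the real or the generated output. The tiny networks below are placeholders, not the paper's U-Net generator and PatchGAN discriminator:

        import torch
        import torch.nn as nn

        G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # placeholder generator
        D = nn.Sequential(nn.Conv2d(6, 1, 3, padding=1))   # placeholder discriminator
        bce, l1, lam = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0

        x = torch.randn(4, 3, 64, 64)    # input image (e.g. an edge map)
        y = torch.randn(4, 3, 64, 64)    # real target photo

        fake = G(x)
        d_fake = D(torch.cat([x, fake], dim=1))            # conditional discriminator
        g_loss = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, y)

        d_real = D(torch.cat([x, y], dim=1))
        d_loss = 0.5 * (bce(d_real, torch.ones_like(d_real))
                        + bce(D(torch.cat([x, fake.detach()], dim=1)),
                              torch.zeros_like(d_fake)))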
