74,665 research outputs found

    Learning a human-perceived softness measure of virtual 3D objects

    Get PDF
    We introduce the problem of computing a human-perceived softness measure for virtual 3D objects. As the virtual objects do not exist in the real world, we do not directly consider their physical properties but instead compute the human-perceived softness of the geometric shapes. We collect crowdsourced data in which humans rank their perception of the softness of vertex pairs on virtual 3D models. We then compute shape descriptors and use a learning-to-rank approach to learn a softness measure mapping any vertex to a softness value. Finally, we demonstrate our framework on a variety of 3D shapes.
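
    As a rough illustration of the learning-to-rank step, the sketch below fits a linear scoring function to pairwise softness judgements with a logistic pairwise loss. The descriptor dimensionality, the linear model, and the training loop are illustrative assumptions, not the authors' actual formulation.

        import numpy as np

        def train_softness_ranker(desc, pairs, lr=0.1, epochs=200):
            """Learn weights w so that score(a) > score(b) for every
            crowdsourced judgement "vertex a looks softer than vertex b".

            desc  : (n_vertices, d) array of precomputed shape descriptors
            pairs : list of (softer_idx, harder_idx) index pairs
            """
            w = np.zeros(desc.shape[1])
            for _ in range(epochs):
                for a, b in pairs:
                    diff = desc[a] - desc[b]
                    # gradient step on the logistic pairwise loss log(1 + exp(-w.diff))
                    w += lr * diff / (1.0 + np.exp(w @ diff))
            return w

        # toy usage: 4 vertices with 3-D descriptors and two ranked pairs
        rng = np.random.default_rng(0)
        desc = rng.normal(size=(4, 3))
        w = train_softness_ranker(desc, [(0, 1), (2, 3)])
        softness = desc @ w  # a softness value for every vertex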

    Sketching-out virtual humans: From 2D storyboarding to immediate 3D character animation

    Get PDF
    Virtual beings play a remarkable role in today’s public entertainment, while ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. In this paper, we present a fast and intuitive storyboarding interface, which enables users to sketch out 3D virtual humans, 2D/3D animations, and character intercommunication. We devised an intuitive “stick figure → fleshing-out → skin mapping” graphical animation pipeline, which realises the whole process of key framing, 3D pose reconstruction, virtual human modelling, motion path/timing control, and the final animation synthesis through almost pure 2D sketching. A “creative model-based method” is developed, which emulates a human perception process, to generate 3D human bodies of various sizes, shapes, and fat distributions. Meanwhile, our current system also supports sketch-based crowd animation and the storyboarding of 3D multi-character intercommunication. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.

    Quantitative Analysis of Saliency Models

    Full text link
    Previous saliency detection research required the reader to evaluate performance qualitatively, based on renderings of saliency maps on a few shapes. This qualitative approach meant it was unclear which saliency models were better, or how well they compared to human perception. This paper provides a quantitative evaluation framework that addresses this issue. In the first quantitative analysis of 3D computational saliency models, we evaluate four computational saliency models and two baseline models against ground-truth saliency collected in previous work.
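
    The kind of evaluation the abstract describes can be pictured with two standard agreement metrics between a predicted per-vertex saliency map and ground truth. Pearson correlation and a rank-based AUC are common choices for such comparisons; the paper's exact metrics are not stated in this abstract.

        import numpy as np

        def pearson(pred, gt):
            """Linear correlation between predicted and ground-truth saliency."""
            p, g = pred - pred.mean(), gt - gt.mean()
            return (p @ g) / (np.linalg.norm(p) * np.linalg.norm(g))

        def auc(pred, gt, thresh=0.5):
            """Treat vertices with ground-truth saliency above `thresh` as
            positives and score the prediction as a detector (rank-based AUC)."""
            pos = pred[gt > thresh]
            neg = pred[gt <= thresh]
            # probability that a random positive outranks a random negative
            return (pos[:, None] > neg[None, :]).mean()

        # toy usage: evaluate a noisy "model" against random ground truth
        rng = np.random.default_rng(1)
        gt = rng.random(1000)                    # per-vertex ground-truth saliency
        pred = gt + 0.3 * rng.normal(size=1000)  # a model's predicted saliency
        print(pearson(pred, gt), auc(pred, gt))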

    Sketching-out virtual humans: A smart interface for human modelling and animation

    Get PDF
    In this paper, we present a fast and intuitive interface for sketching out 3D virtual humans and animation. The user first draws stick figure key frames and chooses one for “fleshing-out” with freehand body contours. The system automatically constructs a plausible 3D skin surface from the rendered figure and maps it onto the posed stick figures to produce the 3D character animation. A “creative model-based method” is developed, which emulates a human perception process to generate 3D human bodies of various body sizes, shapes, and fat distributions. In this approach, an anatomical 3D generic model has been created with three distinct layers: skeleton, fat tissue, and skin. It can be transformed sequentially through rigid morphing, fatness morphing, and surface fitting to match the original 2D sketch. An auto-beautification function is also offered to regularise asymmetrical 3D bodies arising from users’ imperfect figure sketches. Our current system delivers character animation in various forms, including articulated figure animation, 3D mesh model animation, 2D contour figure animation, and even 2D NPR animation with personalised drawing styles. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
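
    The three-stage transformation (rigid morphing, fatness morphing, surface fitting) can be pictured as a sequential pipeline over the template's vertices. The function bodies below are deliberately simplified stand-ins (a rigid transform, a radial scale about the nearest skeleton point, and a blend toward sketch-derived targets), not the paper's actual morphing mathematics.

        import numpy as np

        def rigid_morph(verts, R, t):
            """Stage 1: pose the generic template (rotation R, translation t)."""
            return verts @ R.T + t

        def fatness_morph(verts, skeleton_pts, fatness):
            """Stage 2: scale vertices radially away from their nearest
            skeleton point to mimic fat-distribution control."""
            d = np.linalg.norm(verts[:, None] - skeleton_pts[None], axis=2)
            nearest = skeleton_pts[d.argmin(axis=1)]
            return nearest + (verts - nearest) * fatness

        def surface_fit(verts, targets, alpha=0.7):
            """Stage 3: pull the skin toward contour-derived target points."""
            return (1 - alpha) * verts + alpha * targets

        # toy usage: a 5-vertex "body", identity pose, 20% fattening
        verts = np.random.default_rng(2).normal(size=(5, 3))
        skel = np.zeros((2, 3))
        skel[1, 2] = 1.0                       # a two-point "skeleton"
        v = rigid_morph(verts, np.eye(3), np.zeros(3))
        v = fatness_morph(v, skel, fatness=1.2)
        v = surface_fit(v, targets=v * 1.05)   # stand-in sketch targets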

    Sketch-based virtual human modelling and animation

    Get PDF
    Animated virtual humans created by skilled artists play a remarkable role in today’s public entertainment. However, ordinary users are still treated as audiences due to the lack of appropriate expertise, equipment, and computer skills. We developed a new method and a novel sketching interface, which enable anyone who can draw to “sketch out” 3D virtual humans and animation. We devised a “Stick Figure → Fleshing-out → Skin Mapping” graphical pipeline, which decomposes the complexity of figure drawing and considerably boosts modelling and animation efficiency. We developed a gesture-based method for 3D pose reconstruction from 2D stick figure drawings. We investigated a “Creative Model-based Method”, which emulates a human perception process to translate users’ 2D freehand sketches into 3D human bodies of various body sizes, shapes, and fat distributions. Our current system supports character animation in various forms, including articulated figure animation, 3D mesh model animation, and 2D contour/NPR animation with personalised drawing styles. Moreover, the interface also supports sketch-based crowd animation and 2D storyboarding of 3D multiple-character interactions. A preliminary user study was conducted to support the overall system design. The system has been formally tested by various users on Tablet PCs. After minimal training, even a beginner can create vivid virtual humans and animate them within minutes.
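
    For the 3D pose reconstruction step, one classical way to lift a 2D stick figure to 3D (in the spirit of Taylor's foreshortening method, not necessarily the thesis's gesture-based algorithm) assumes orthographic projection and known bone lengths, recovering each bone's depth offset as sqrt(L^2 - dx^2 - dy^2). The joint names and bone lengths below are illustrative.

        import numpy as np

        def lift_stick_figure(joints2d, bones, bone_len, root_z=0.0):
            """Lift 2D joints to 3D under orthographic projection with known
            bone lengths. The sign of each depth offset is ambiguous; we take
            + here, whereas a sketching interface would resolve it from the
            user's gesture or drawing order."""
            z = {bones[0][0]: root_z}                  # depth of the root joint
            for parent, child in bones:
                dx, dy = joints2d[child] - joints2d[parent]
                L = bone_len[(parent, child)]
                dz2 = max(L**2 - dx**2 - dy**2, 0.0)   # clamp drawing noise
                z[child] = z[parent] + np.sqrt(dz2)
            return {j: (*joints2d[j], z[j]) for j in z}

        # toy usage: a 3-joint arm drawn in 2D, unit-length bones
        joints2d = {"shoulder": np.array([0.0, 0.0]),
                    "elbow":    np.array([0.8, 0.0]),
                    "wrist":    np.array([1.4, 0.3])}
        bones = [("shoulder", "elbow"), ("elbow", "wrist")]
        lens = {("shoulder", "elbow"): 1.0, ("elbow", "wrist"): 1.0}
        print(lift_stick_figure(joints2d, bones, lens))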

    A preliminary approach to study the behavior of human fingertip at contact via experimental test and numerical model

    Get PDF
    How the human fingertip deforms during interaction with the environment is a fundamental factor shaping our perception of the external world. In this work, we present the proof of concept of an experimental in vivo set-up that characterizes the mechanical behavior of the human fingertip, in terms of contact area, force, and a preliminary estimation of the pressure contour, while it is pressed against a flat rigid surface. Experimental outcomes are then compared with the output of a 3D Finite Element Model (FEM) of the human fingerpad, built upon existing validated models. The good agreement between numerical and experimental data supports the correctness of our procedure for measurement acquisition and finger modeling. Furthermore, we also discuss how our experimental data can be used to estimate strain-limiting deformation models for tactile rendering, while the 3D FE model reported here has also been employed to investigate hypotheses on human tactile perception.
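
    The paper's model is a full 3D FEM; as a far cruder analytical point of comparison, a Hertzian sphere-on-flat contact model also relates normal force to contact area and peak pressure. The fingertip radius and effective modulus below are illustrative values, not the paper's measurements.

        import numpy as np

        def hertz_contact(force, radius, e_star):
            """Hertzian sphere-on-flat contact: returns contact area (m^2)
            and peak pressure (Pa) for normal force `force` (N), sphere
            radius `radius` (m), and effective modulus `e_star` (Pa)."""
            a = (3 * force * radius / (4 * e_star)) ** (1 / 3)  # contact radius
            area = np.pi * a**2
            p0 = 3 * force / (2 * area)                         # peak pressure
            return area, p0

        # illustrative fingertip-like values: 8 mm radius, E* ~ 50 kPa
        for f in [0.5, 1.0, 2.0]:   # newtons
            area, p0 = hertz_contact(f, radius=8e-3, e_star=50e3)
            print(f"F={f} N  area={area*1e6:.1f} mm^2  p0={p0/1e3:.1f} kPa")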

    Computational Model for Human 3D Shape Perception From a Single Specular Image

    Get PDF
    In natural conditions the human visual system can estimate the 3D shape of specular objects even from a single image. Although previous studies suggested that the orientation field plays a key role in 3D shape perception from specular reflections, its computational plausibility and possible mechanisms have not been investigated. In this study, to complement the orientation field information, we first add the prior knowledge that objects are illuminated from above and utilize the vertical polarity of the intensity gradient. We then construct an algorithm that incorporates these two image cues to estimate 3D shapes from a single specular image. We evaluated the algorithm on glossy and mirrored surfaces and found that 3D shapes can be recovered with a high correlation coefficient of around 0.8 with the true surface shapes. Moreover, under a specific condition, the algorithm's errors resembled those made by human observers. These findings show that the combination of the orientation field and the vertical polarity of the intensity gradient is computationally sufficient and probably reproduces essential representations used in human shape perception from specular reflections.
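
    The two image cues the abstract names can be sketched directly: the orientation field as the dominant direction of the smoothed structure tensor of intensity gradients, and the vertical polarity as the sign of the vertical intensity derivative. The smoothing window and the synthetic test image are illustrative assumptions; the paper's full shape-recovery algorithm is not reproduced here.

        import numpy as np
        from scipy import ndimage

        def image_cues(img, window=5):
            """Return the local orientation field (radians, per pixel) from
            the smoothed structure tensor, and the vertical polarity (sign
            of the vertical intensity derivative, informative under a
            light-from-above prior)."""
            gy, gx = np.gradient(img.astype(float))
            smooth = lambda a: ndimage.uniform_filter(a, size=window)
            jxx, jyy, jxy = smooth(gx * gx), smooth(gy * gy), smooth(gx * gy)
            orientation = 0.5 * np.arctan2(2 * jxy, jxx - jyy)
            polarity = np.sign(gy)
            return orientation, polarity

        # toy usage on a synthetic bright blob
        y, x = np.mgrid[-1:1:64j, -1:1:64j]
        img = np.exp(-4 * (x**2 + (y + 0.2)**2))
        ori, pol = image_cues(img)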

    Learning to Generate 3D Training Data

    Full text link
    Human-level visual 3D perception ability has long been pursued by researchers in computer vision, computer graphics, and robotics. Recent years have seen an emerging line of work using synthetic images to train deep networks for single-image 3D perception. Synthetic images rendered by graphics engines are a promising source for training deep neural networks because they come with perfect 3D ground truth for free. However, the 3D shapes and scenes to be rendered are largely created manually. Besides, it is challenging to ensure that synthetic images collected this way can help train a deep network to perform well on real images. This is because graphics generation pipelines require numerous design decisions, such as the selection of 3D shapes and the placement of the camera. In this dissertation, we propose automatic generation pipelines for synthetic data that aim to improve the task performance of a trained network. We explore both supervised and unsupervised directions for the automatic optimization of 3D decisions. For supervised learning, we demonstrate how to optimize 3D parameters such that a trained network can generalize well to real images. We first show that we can construct a purely synthetic 3D shape to achieve state-of-the-art performance on a shape-from-shading benchmark. We further parameterize the decisions as a vector and propose a hybrid gradient approach to efficiently optimize the vector towards usefulness. Our hybrid gradient is able to outperform classic black-box approaches on a wide selection of 3D perception tasks. For unsupervised learning, we propose a novelty metric for 3D parameter evolution based on deep autoregressive models. We show that, without any extrinsic motivation, the novelty computed from autoregressive models alone is helpful. Our novelty metric can consistently encourage a random synthetic generator to produce more useful training data for downstream 3D perception tasks.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163240/1/ydawei_1.pd
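
    The hybrid gradient idea can be schematized as follows: the differentiable part of the objective contributes its exact gradient, while the non-differentiable remainder (standing in for the render-and-train loop) is estimated with central finite differences. The split objective, toy functions, and step sizes are illustrative assumptions, not the dissertation's actual formulation.

        import numpy as np

        def blackbox(theta):
            """Stand-in for the non-differentiable part of the pipeline
            (e.g. render synthetic data, train a network, return task error)."""
            return np.sum(np.sin(theta) ** 2)

        def smooth_part(theta):
            """Stand-in for the differentiable part, with a known gradient."""
            return 0.5 * np.sum(theta**2)

        def hybrid_gradient(theta, eps=1e-3):
            """Exact gradient of the smooth term plus central finite
            differences through the black-box term."""
            g = theta.copy()                       # d/dtheta of smooth_part
            for i in range(theta.size):
                e = np.zeros_like(theta)
                e[i] = eps
                g[i] += (blackbox(theta + e) - blackbox(theta - e)) / (2 * eps)
            return g

        theta = np.array([1.0, -0.8, 0.3])         # the "3D decision" vector
        for _ in range(100):
            theta -= 0.1 * hybrid_gradient(theta)  # simple gradient descent
        print(theta, smooth_part(theta) + blackbox(theta))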