
    High-quality tree structures modelling using local convolution surface approximation

    In this paper, we propose a local convolution surface approximation approach for quickly modelling tree structures with pleasing visual effects. Using the proposed local convolution surface approximation, we present a tree modelling scheme that creates the structure of a tree as a single high-quality quad-only mesh. By combining the strengths of convolution surfaces, subdivision surfaces and the GPU, our tree modelling approach achieves high efficiency and good mesh quality. With our method, we first extract the line skeletons of given tree models by contracting the meshes with the Laplace operator. Then we approximate the original tree mesh with a convolution surface based on the extracted skeletons. Next, we tessellate the tree trunks represented by convolution surfaces into quad-only subdivision surfaces with good edge flow along the skeletal directions. We implement the most time-consuming steps, subdivision and convolution approximation, on the GPU with CUDA, and demonstrate applications of the proposed approach in branch editing and tree composition.
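    The central construction above, a convolution surface, can be sketched numerically: integrate a finite-support kernel along a skeletal line segment and take an iso-contour of the summed field. The kernel choice (a Wyvill-style falloff) and the simple midpoint quadrature below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def kernel(d, radius):
    """A finite-support falloff kernel (Wyvill-style): smooth, zero beyond radius."""
    if d >= radius:
        return 0.0
    t = (d * d) / (radius * radius)
    return (1.0 - t) ** 3

def segment_field(p, a, b, radius, samples=64):
    """Numerically convolve the kernel along the skeletal segment a-b,
    evaluated at point p (midpoint rule over uniform samples)."""
    total = 0.0
    for i in range(samples):
        t = (i + 0.5) / samples
        s = tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
        total += kernel(math.dist(p, s), radius)
    return total * math.dist(a, b) / samples
```

    The tree surface would then be an iso-contour of the sum of such fields over all skeleton segments, with one field per branch segment.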

    Efficient sketch-based creation of detailed character models through data-driven mesh deformations

    Creating detailed character models is a very challenging task in animation production. Sketch-based character model creation from a 3D template provides a promising solution. However, how to quickly find correct correspondences between the user's drawn sketches and the 3D template model, how to efficiently deform the 3D template model to exactly match those sketches, and how to realize real-time interactive modeling remain open problems. In this paper, we propose a new approach and develop a user interface to tackle them effectively. Our approach uses the user's drawn sketches to retrieve the most similar 3D template model from our dataset, and combines human perception and interaction with the computer's highly efficient computing to extract occluding and silhouette contours of the 3D template model and find correct correspondences quickly. We then combine skeleton-based deformation and mesh editing to deform the 3D template model to fit the user's drawn sketches and create new, detailed 3D character models. The results presented in this paper demonstrate the effectiveness and advantages of the proposed approach and the usefulness of the developed user interface.
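    One step in the pipeline above, extracting silhouette contours of the 3D template, can be sketched with the standard construction: an edge lies on the silhouette when its two adjacent triangles disagree on whether they face the viewer. This is a generic sketch; the paper's exact contour extraction may differ.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def silhouette_edges(vertices, faces, view_dir):
    """Return edges shared by one front-facing and one back-facing triangle."""
    facing = []
    edge_faces = {}
    for fi, (i, j, k) in enumerate(faces):
        n = cross(sub(vertices[j], vertices[i]), sub(vertices[k], vertices[i]))
        facing.append(dot(n, view_dir) < 0.0)  # True if the triangle faces the viewer
        for e in ((i, j), (j, k), (k, i)):
            edge_faces.setdefault(tuple(sorted(e)), []).append(fi)
    return [e for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]
```

    For a closed convex mesh such as a tetrahedron, this yields exactly the ring of edges separating front from back faces.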

    Facial expression cloning optimization method based on the Laplace operator

    To improve the realism of facial expression cloning and the efficiency of expression reconstruction, a novel method based on motion capture data is proposed. After capturing the data of six fundamental expressions, the method normalizes the data to a common range. Then 41 points are chosen in critical areas of the face, and the cloned expression is generated using a Laplace deformation algorithm with convex weights, which preserves the details of facial expression while avoiding the low fidelity of uniform weights and the unstable calculation of cotangent weights. Experimental results show that this method generates realistic and natural expression animations and significantly improves the efficiency of facial expression cloning.
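    Laplace deformation rests on differential (Laplacian) coordinates: each point is encoded relative to the convex-weighted centroid of its neighbours, which is what lets deformation preserve local expression detail. A minimal 2D sketch follows; uniform weights are used only for brevity (the paper argues for non-uniform convex weights, which are not reproduced here).

```python
def laplacian_coords(points, neighbors):
    """Differential coordinates: each point minus the weighted centroid of its
    neighbours. Weights here are uniform (1/degree), a simple convex choice."""
    delta = []
    for i, p in enumerate(points):
        nbrs = neighbors[i]
        w = 1.0 / len(nbrs)
        cx = sum(points[j][0] for j in nbrs) * w
        cy = sum(points[j][1] for j in nbrs) * w
        delta.append((p[0] - cx, p[1] - cy))
    return delta
```

    Deformation then solves for new positions whose differential coordinates match these stored values subject to the moved control points.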

    Screwing assembly oriented interactive model segmentation in HMD VR environment

    © 2019 John Wiley & Sons, Ltd. Although different approaches to segmenting and assembling geometric models for 3D printing have been proposed, it is difficult to find research that investigates model segmentation and assembly in head-mounted display (HMD) virtual reality (VR) environments for 3D printing. In this work, we propose a novel interactive segmentation method for screwing assembly in such environments to tackle this problem. Our approach divides a large model into semantic parts with a screwing interface for repeated tight assembly. Specifically, after a user places the cutting interface, our algorithm automatically computes the bounding box of the current part for subsequent multicomponent semantic Boolean segmentations. Afterwards, the bolt is positioned with an improved K3M image thinning algorithm and is used for merging paired components with union and subtraction Boolean operations, respectively. Moreover, we introduce a swept-Boolean-based rotation collision detection and location method to guarantee a collision-free screwing assembly. Experiments show that our approach provides a new interactive multicomponent semantic segmentation tool that supports not only repeated installation and disassembly but also tight and aligned assembly.
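    The idea behind rotation collision detection can be sketched cheaply: sample the full screwing rotation of the bolt's points about its axis and test each sampled position against occupied space. The paper performs a swept Boolean on meshes; the voxel-set version below is a simplified stand-in, and all names and parameters are assumptions.

```python
import math

def sweep_collides(bolt_pts, occupied, steps=180, axis_origin=(0.0, 0.0)):
    """Sample a full rotation of bolt_pts about the z-axis through axis_origin;
    report a collision if any rotated point lands in an occupied voxel."""
    ox, oy = axis_origin
    for s in range(steps):
        ang = 2.0 * math.pi * s / steps
        c, si = math.cos(ang), math.sin(ang)
        for x, y, z in bolt_pts:
            rx = ox + (x - ox) * c - (y - oy) * si
            ry = oy + (x - ox) * si + (y - oy) * c
            if (math.floor(rx), math.floor(ry), math.floor(z)) in occupied:
                return True
    return False
```

    A collision-free screwing location could then be searched for by translating the axis and re-running this test.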

    Semantic portrait color transfer with internet images

    We present a novel color transfer method for portraits that exploits their high-level semantic information. First, a database is set up consisting of a collection of portrait images downloaded from the Internet, each manually segmented using image matting as a preprocessing step. Second, we search the database using Face++ to find images with poses similar to a given source portrait image, and choose one satisfactory image from the results as the target. Third, we extract the portrait foregrounds from both the source and target images. Then, the system extracts semantic regions, such as faces, eyes, eyebrows, lips and teeth, from the extracted foreground of the source using image matting algorithms. After that, we perform color transfer between corresponding parts with the same semantic label. We obtain the final result by seamlessly compositing the different parts together using alpha blending. Experimental results show that our semantics-driven approach generates better color transfer results for portraits than previous methods and provides users with a new means to retouch their portraits.
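    Color transfer between two matched semantic regions can be sketched with classic statistics matching: shift and scale each channel of the source region so its mean and standard deviation match the target region's. The abstract does not specify the transfer operator; this Reinhard-style per-channel version is an assumption.

```python
import math

def transfer_channel(src, tgt):
    """Remap source channel values so their mean and standard deviation
    match the target's (per corresponding semantic region)."""
    ms = sum(src) / len(src)
    mt = sum(tgt) / len(tgt)
    ss = math.sqrt(sum((v - ms) ** 2 for v in src) / len(src)) or 1.0  # guard flat regions
    st = math.sqrt(sum((v - mt) ** 2 for v in tgt) / len(tgt))
    return [mt + (v - ms) * st / ss for v in src]
```

    Running this independently per channel (often in a decorrelated space such as Lab) and per region, then alpha-blending the regions back together, mirrors the pipeline described above.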

    Robust quasi-uniform surface meshing of neuronal morphology using line skeleton-based progressive convolution approximation

    Creating high-quality polygonal meshes that represent the membrane surface of neurons, for both visualization and numerical simulation purposes, is an important yet nontrivial task due to their irregular and complicated structures. In this paper, we develop a novel approach for constructing a watertight 3D mesh from the abstract point-and-diameter representation of a given neuronal morphology. The membrane shape of the neuron is reconstructed by progressively deforming an initial sphere with the guidance of the neuronal skeleton, which can be regarded as a digital sculpting process. To deform the surface efficiently, a local mapping is adopted to simulate animation skinning, so only the vertices within the region of influence (ROI) of the current skeletal position need to be updated. The ROI is determined by a finite-support convolution kernel, which is convolved along the line skeleton of the neuron to generate a potential field that further smooths the overall surface at both unidirectional and bifurcating regions. Meanwhile, mesh quality throughout the evolution is guaranteed by a set of quasi-uniform rules, which split excessively long edges, collapse undersized ones, and adjust vertices within the tangent plane to produce regular triangles. Additionally, the local vertex density of the resulting mesh is determined by the radius and curvature of the neurites to achieve adaptiveness.
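    The quasi-uniform split/collapse rules can be sketched as a per-edge decision against a target length. The 4/3 and 4/5 thresholds below are the constants commonly used in isotropic remeshing and are an assumption here; the paper's exact constants may differ, and its target length varies with neurite radius and curvature.

```python
def edge_action(length, target):
    """Quasi-uniform rule: split overlong edges, collapse undersized ones,
    leave edges near the target length alone."""
    if length > 4.0 / 3.0 * target:
        return "split"
    if length < 4.0 / 5.0 * target:
        return "collapse"
    return "keep"
```

    Applying this to every edge after each deformation step, together with tangent-plane smoothing of vertices, keeps triangles close to regular as the surface evolves.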

    Efficient and Realistic Character Animation through Analytical Physics-based Skin Deformation

    Physics-based skin deformation methods can greatly improve the realism of character animation, but they require non-trivial training, intensive manual intervention, and heavy numerical calculation. Due to these limitations, they are generally time-consuming to implement, and a high runtime efficiency is difficult to achieve. To tackle these limitations of numerical physics-based skin deformation, we propose a simple and efficient analytical approach. Specifically, we (1) employ Fourier series to convert 3D mesh models into continuous parametric representations through a conversion algorithm, which greatly reduces data size and computing time while maintaining high realism, and (2) introduce a partial differential equation (PDE)-based skin deformation model and obtain the first analytical solution to physics-based skin deformation, which overcomes the limitations of numerical calculation. Our approach is easy to use, highly efficient, and capable of creating physically realistic skin deformations.
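    The Fourier-series conversion idea can be illustrated in one dimension: fit a truncated series to uniformly sampled curve values, then evaluate the continuous representation anywhere. The paper operates on 3D meshes; this single-channel sketch is a simplified illustration, not the paper's conversion algorithm.

```python
import math

def fourier_coeffs(samples, n_terms):
    """Truncated Fourier series coefficients (a_k, b_k) of a closed curve
    sampled uniformly over one period."""
    n = len(samples)
    coeffs = []
    for k in range(n_terms):
        a = sum(v * math.cos(2 * math.pi * k * i / n) for i, v in enumerate(samples)) * 2.0 / n
        b = sum(v * math.sin(2 * math.pi * k * i / n) for i, v in enumerate(samples)) * 2.0 / n
        coeffs.append((a, b))
    return coeffs

def evaluate(coeffs, t):
    """Evaluate the continuous representation at parameter t in [0, 1)."""
    a0 = coeffs[0][0] / 2.0
    return a0 + sum(a * math.cos(2 * math.pi * k * t) + b * math.sin(2 * math.pi * k * t)
                    for k, (a, b) in enumerate(coeffs) if k > 0)
```

    Storing a handful of coefficients in place of many samples is what yields the data-size and computing-time reduction claimed above.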

    ED2IF2-Net: Learning Disentangled Deformed Implicit Fields and Enhanced Displacement Fields from Single Images Using Pyramid Vision Transformer

    Substantial research has addressed single-view 3D reconstruction, and the majority of state-of-the-art implicit methods employ CNNs as the backbone network. On the other hand, transformers have shown remarkable performance in many vision tasks. However, it is still unknown whether transformers are suitable for single-view implicit 3D reconstruction. In this paper, we propose the first end-to-end single-view 3D reconstruction network based on the Pyramid Vision Transformer (PVT), called ED2IF2-Net, which disentangles the reconstruction of an implicit field into the reconstruction of topological structures and the recovery of surface details to achieve high-fidelity shape reconstruction. ED2IF2-Net uses a Pyramid Vision Transformer encoder to extract multi-scale hierarchical local features and a global vector from the input single image, which are fed into three separate decoders. A coarse shape decoder reconstructs a coarse implicit field from the global vector; a deformation decoder iteratively refines the coarse implicit field using the pixel-aligned local features to obtain a deformed implicit field through multiple implicit field deformation blocks (IFDBs); and a surface detail decoder predicts an enhanced displacement field using the local features with hybrid attention modules (HAMs). The final output is a fusion of the deformed implicit field and the enhanced displacement field, with four loss terms applied to reconstruct the coarse implicit field, structure details (through a novel deformation loss), the overall shape after fusion, and surface details (via a Laplacian loss). Quantitative results on the ShapeNet dataset validate the performance of ED2IF2-Net. Notably, ED2IF2-Net-L stands out as the top-performing variant, with the best mean IoU, CD, EMD, ECD-3D, and ECD-2D scores of 61.1, 7.26, 2.51, 6.08, and 1.84, respectively. Extensive experimental evaluations demonstrate the state-of-the-art capabilities of ED2IF2-Net in reconstructing topological structures and recovering surface details, all while maintaining competitive inference time.
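    The final fusion step can be illustrated with a toy implicit field: a base shape (here a sphere SDF standing in for the deformed implicit field) combined with a small displacement term supplying surface detail. Additive fusion and the specific fields below are assumptions for illustration only; the network predicts both fields rather than computing them analytically.

```python
import math

def sphere_sdf(p, r=1.0):
    """Signed distance to a sphere of radius r at the origin (toy base field)."""
    return math.dist(p, (0.0, 0.0, 0.0)) - r

def displacement(p, amp=0.05):
    """Toy 'enhanced displacement field': small high-frequency surface detail."""
    return amp * math.sin(8.0 * p[0]) * math.sin(8.0 * p[1])

def fused_field(p):
    """Additive fusion of the base implicit field and the displacement detail;
    the reconstructed surface is this field's zero level set."""
    return sphere_sdf(p) - displacement(p)
```

    Extracting the zero level set of `fused_field` (e.g. with marching cubes) would yield the detailed shape, mirroring the coarse-plus-detail decomposition described above.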

    Improving Realism of Facial Interpolation and Blendshapes with Analytical Partial Differential Equation-Represented Physics

    How to create realistic shapes by interpolating two known shapes for facial blendshapes has not been investigated in the existing literature. In this paper, we propose a physics-based mathematical model and its analytical solutions to obtain more realistic facial shape changes. To this end, we first introduce the internal force of elastic beam bending into the equation of motion and integrate it with the constraints of two known shapes to develop a physics-based mathematical model represented with dynamic partial differential equations (PDEs). Second, we propose a unified mathematical expression of the external force represented with linear and various nonlinear time-dependent Fourier series, introduce it into the mathematical model to create linear and nonlinear dynamic deformations of the curves defining a human face model, and derive analytical solutions of the model. Third, we evaluate the realism of the analytical solutions by comparing the shape changes they produce, and those produced by geometric linear interpolation, with ground-truth shape changes. Among the linear, quadratic, and cubic PDE-based interpolations, quadratic PDE-based interpolation creates the most realistic shape changes, which are also more realistic than those obtained with geometric linear interpolation. Finally, we use the quadratic PDE-based interpolation to develop a facial blendshape method and demonstrate that the proposed approach is more efficient than numerical physics-based facial blendshapes.
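    The contrast between geometric linear interpolation and a nonlinear profile can be sketched as follows. The quadratic ease below is only a stand-in for the paper's PDE-derived analytical profile, which is not reproduced here; both modes blend two point sets with a time-dependent weight.

```python
def interpolate(shape_a, shape_b, t, mode="linear"):
    """Blend two shapes (lists of point tuples) at parameter t in [0, 1].
    'linear' is plain geometric interpolation; 'quadratic' applies a smooth
    quadratic ease as a stand-in for a PDE-derived weight profile."""
    if mode == "quadratic":
        w = 2 * t * t if t < 0.5 else 1 - 2 * (1 - t) * (1 - t)
    else:
        w = t
    return [tuple(a + w * (b - a) for a, b in zip(pa, pb))
            for pa, pb in zip(shape_a, shape_b)]
```

    Both modes agree at the endpoints and midpoint but differ in between: the nonlinear profile starts and ends more gently, which is the kind of behaviour the paper attributes to physics-based interpolation.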