2,088 research outputs found

    Geometric modeling and optimization over regular domains for graphics and visual computing

    The effective construction of parametric representations of complicated geometric objects can facilitate many design, analysis, and simulation tasks in Computer-Aided Design (CAD), Computer-Aided Manufacturing (CAM), and Computer-Aided Engineering (CAE). Given a 3D shape, the procedure of finding such a parametric representation upon a canonical domain is called geometric parameterization. Regular geometric regions, such as polycubes and spheres, are desirable domains for parameterization. Parametric representations defined upon regular geometric domains have many desirable mathematical properties and can facilitate or simplify various surface/solid modeling and processing computations. This dissertation studies the construction of parameterizations on regular geometric domains and explores their applications in shape modeling and computer-aided design. Specifically, we study (1) surface parameterization on the spherical domain for closed genus-zero surfaces; (2) surface parameterization on the polycube domain for general closed surfaces; and (3) volumetric parameterization for 3-manifolds embedded in 3D Euclidean space. We propose novel computational models to solve these geometric problems. Our computational models reduce to nonlinear optimizations with various geometric constraints; hence, we also explore effective optimization algorithms. The main contributions of this dissertation are threefold. (1) We develop an effective progressive spherical parameterization algorithm with an efficient nonlinear optimization scheme subject to the spherical constraint. Compared with state-of-the-art spherical mapping algorithms, ours demonstrates greater efficiency, lower distortion, and guaranteed bijectiveness, and we show its applications in spherical harmonic decomposition and shape analysis. (2) We propose the first topology-preserving polycube domain optimization algorithm that simultaneously optimizes the polycube domain and the parameterization to balance mapping distortion against domain simplicity. We develop effective nonlinear geometric optimization algorithms that handle variables with and without derivatives. This polycube parameterization algorithm can benefit regular quadrilateral mesh generation and cross-surface parameterization. (3) We develop a novel quaternion-based optimization framework for 3D frame field construction and volumetric parameterization. We demonstrate that our constructed 3D frame field is smoother than those of state-of-the-art algorithms and is effective in guiding low-distortion volumetric parameterization and high-quality hexahedral mesh generation.
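    The spherical constraint in (1) can be handled, in the simplest setting, by alternating unconstrained descent with re-projection onto the unit sphere. The sketch below is only a generic illustration of that idea, assuming a uniform spring energy over the mesh edges; it is not the dissertation's progressive algorithm or its optimization scheme.

```python
# Minimal sketch of one projected-gradient step for spherical parameterization.
# Assumption: a uniform spring energy E = sum_{(i,j) in edges} ||v_i - v_j||^2,
# with the spherical constraint enforced by re-projection after each step.
import numpy as np

def spherical_step(V, edges, lr=0.05):
    """V: (n,3) vertex positions on the unit sphere; edges: (m,2) int array of mesh edges."""
    grad = np.zeros_like(V)
    d = V[edges[:, 0]] - V[edges[:, 1]]            # per-edge difference vectors
    np.add.at(grad, edges[:, 0],  2.0 * d)         # gradient of ||v_i - v_j||^2 w.r.t. v_i
    np.add.at(grad, edges[:, 1], -2.0 * d)         # ... and w.r.t. v_j
    V_new = V - lr * grad                          # unconstrained descent step
    return V_new / np.linalg.norm(V_new, axis=1, keepdims=True)  # project back onto S^2
```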

    Filling Holes in Triangular Meshes Using Digital Images by Curve Unfolding

    We propose a novel approach to automatically fill holes in triangulated models. Each hole is filled with a minimum-energy surface obtained in three steps. First, we unfold the hole boundary onto a plane using energy minimization. Second, we triangulate the unfolded hole using a constrained Delaunay triangulation. Third, we embed the triangular mesh as a minimum-energy surface in R^3. When embedding the triangular mesh, any energy function can be used to estimate the missing data; we use a variational multi-view approach. The running time of the method depends primarily on the size of the hole boundary rather than on the size of the model, thereby making it practical even for large models.
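    As a rough illustration of the third step, a minimum-energy embedding can be computed, in the simplest membrane (harmonic) case, by solving a Laplace system with the known 3D boundary positions as Dirichlet constraints. The sketch below uses a uniform graph Laplacian and is not the paper's variational multi-view energy.

```python
# Minimal sketch: embed the interior vertices of a triangulated hole patch in R^3
# as a harmonic (minimum membrane energy) surface, with fixed boundary positions.
import numpy as np

def embed_patch(tris, boundary_idx, boundary_xyz, n_verts):
    """tris: (t,3) triangle indices; boundary_idx: indices of boundary vertices;
    boundary_xyz: (b,3) known 3D boundary positions; returns (n_verts,3) positions."""
    # Uniform graph Laplacian of the patch (dense for clarity).
    L = np.zeros((n_verts, n_verts))
    for a, b, c in tris:
        for i, j in ((a, b), (b, c), (c, a)):
            L[i, j] = L[j, i] = -1.0
    np.fill_diagonal(L, -L.sum(axis=1))            # diagonal = vertex degree

    X = np.zeros((n_verts, 3))
    X[boundary_idx] = boundary_xyz
    interior = np.setdiff1d(np.arange(n_verts), boundary_idx)
    # Harmonic interior with fixed boundary: solve L_II x_I = -L_IB x_B.
    rhs = -L[np.ix_(interior, boundary_idx)] @ np.asarray(boundary_xyz, float)
    X[interior] = np.linalg.solve(L[np.ix_(interior, interior)], rhs)
    return X
```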

    Video normals from colored lights

    We present an algorithm and the associated single-view capture methodology to acquire the detailed 3D shape, bends, and wrinkles of deforming surfaces. Moving 3D data has been difficult to obtain with methods that rely on known surface features, structured light, or silhouettes. Multispectral photometric stereo is an attractive alternative because it can recover a dense normal field from an untextured surface. We show how to capture such data, which in turn allows us to demonstrate the strengths and limitations of our simple frame-to-frame registration over time. Experiments were performed on monocular video sequences of untextured cloth and faces with and without white makeup. Subjects were filmed under spatially separated red, green, and blue lights. Our first finding is that the color photometric stereo setup produces smoothly varying per-frame reconstructions with high detail. Second, when these 3D reconstructions are augmented with 2D tracking results, one can register the surfaces and relax the homogeneous-color restriction of the single-hue subject. Quantitative and qualitative experiments explore both the practicality and the limitations of this simple multispectral capture system.
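    For context, the Lambertian core of multispectral photometric stereo reduces, per pixel, to inverting a 3x3 linear system: the RGB measurement is I = M n, where M folds together the three colored light directions, albedo, and camera response. The sketch below assumes M has already been calibrated (a step the abstract only implies) and is not the paper's full capture and registration pipeline.

```python
# Minimal sketch of per-pixel normal recovery for RGB (multispectral) photometric
# stereo under the Lambertian assumption, given a calibrated 3x3 mixing matrix M.
import numpy as np

def normals_from_rgb(image, M):
    """image: (h,w,3) RGB frame; M: calibrated 3x3 mixing matrix. Returns (h,w,3) unit normals."""
    I = np.asarray(image, float).reshape(-1, 3)
    n = I @ np.linalg.inv(M).T                     # per-pixel n = M^-1 * I
    norm = np.linalg.norm(n, axis=1, keepdims=True)
    n = np.where(norm > 1e-8, n / norm, 0.0)       # normalize, guard dark pixels
    return n.reshape(image.shape)
```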

    A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint

    3D shape editing is widely used in a range of applications such as movie production, computer games, and computer-aided design. It is also a popular research topic in computer graphics and computer vision. Over the past decades, researchers have developed a series of editing methods to make the editing process faster, more robust, and more reliable. Traditionally, the deformed shape is determined by finding the optimal transformations and weights for an energy term. With the increasing availability of 3D shapes on the Internet, data-driven methods were proposed to improve the editing results. More recently, as deep neural networks became popular, many deep-learning-based editing methods have been developed; these are naturally data-driven. We survey recent research, from geometric approaches to emerging neural deformation techniques, and categorize the methods into organic shape editing and man-made model editing. Both traditional methods and recent neural-network-based methods are reviewed.
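    As a concrete example of the traditional energy-minimization formulation such surveys start from, the sketch below shows Laplacian surface editing with soft handle constraints, solved as a linear least-squares problem. It is one representative method among the many variants the survey covers, not a summary of the survey itself.

```python
# Minimal sketch of Laplacian surface editing: minimize ||L V' - delta||^2 plus
# weighted soft constraints pulling handle vertices toward user-specified targets.
import numpy as np

def laplacian_edit(V, L, handles, targets, w=100.0):
    """V: (n,3) rest vertices; L: (n,n) mesh Laplacian; handles: handle vertex indices;
    targets: (len(handles),3) desired handle positions."""
    n = V.shape[0]
    delta = L @ V                                   # differential coordinates of the rest shape
    C = np.zeros((len(handles), n))
    C[np.arange(len(handles)), handles] = 1.0       # selector rows for handle vertices
    A = np.vstack([L, np.sqrt(w) * C])              # stacked least-squares system
    b = np.vstack([delta, np.sqrt(w) * np.asarray(targets, float)])
    Vp, *_ = np.linalg.lstsq(A, b, rcond=None)      # solve all three coordinates at once
    return Vp
```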

    Patient-specific anatomical illustration via model-guided texture synthesis

    Medical illustrations can make powerful use of textures to attractively, effectively, and efficiently visualize the appearance of the surface or cut surface of anatomic structures. They do this by implying the anatomic structure's physical composition and clarifying its identity and 3D shape. Current visualization methods can convey detailed information about the orientation, internal structure, and other local properties of anatomical objects only for a typical individual, not for a particular patient. Although one can derive the shape of an individual patient's object from CT or MRI, it remains important to apply these illustrative techniques to those particular shapes. In this research, patient-specific anatomical illustrations are created by model-guided texture synthesis (MGTS). Given 2D exemplar textures and model-based guidance information as input, MGTS uses exemplar-based texture synthesis techniques to create patient-specific surface and solid textures. It consists of three main components. The first component is a novel texture metamorphosis approach for creating interpolated exemplar textures from two exemplar textures; it uses an energy optimization scheme derived from optimal-control principles that exploits intensity and structure information to obtain the transformation. The second component creates the model-based guidance information, such as directions and layers, for the specific model, using coordinates implied by discrete medial 3D anatomical models (m-reps). The last component performs exemplar-based texture synthesis, producing textures whose characteristics vary spatially on and inside the 3D models; it combines the exemplar textures from the first component with the guidance information from the second component to synthesize high-quality, high-resolution solid and surface textures. Patient-specific illustrations with a variety of textures for different anatomical models, such as muscles and bones, are shown to help clinicians comprehend the shape of the models under radiation dose and to distinguish the models from one another.
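    To make the third component concrete, the sketch below shows plain per-pixel exemplar-based texture synthesis (Wei-Levoy style neighborhood matching). MGTS additionally blends exemplars (first component) and steers the search with model-based directions and layers (second component); none of that machinery is reproduced here.

```python
# Minimal sketch of per-pixel exemplar-based texture synthesis by brute-force
# neighborhood matching over a grayscale exemplar. Illustrative only.
import numpy as np

def synthesize(exemplar, out_shape, k=5, seed=0):
    """exemplar: (H,W) grayscale texture; out_shape: (h,w); k: odd neighborhood size."""
    exemplar = np.asarray(exemplar, float)
    rng = np.random.default_rng(seed)
    H, W = exemplar.shape
    h, w = out_shape
    r = k // 2
    # Noise initialization by random sampling of exemplar pixels.
    out = exemplar[rng.integers(0, H, (h, w)), rng.integers(0, W, (h, w))]
    # All k x k exemplar patches and their center pixels (candidate set).
    patches = np.lib.stride_tricks.sliding_window_view(exemplar, (k, k)).reshape(-1, k * k)
    centers = exemplar[r:H - r, r:W - r].reshape(-1)
    padded = np.pad(out, r, mode='wrap')
    for y in range(h):
        for x in range(w):
            nb = padded[y:y + k, x:x + k].reshape(-1)          # current output neighborhood
            best = np.argmin(((patches - nb) ** 2).sum(axis=1))
            out[y, x] = centers[best]                          # copy best-matching center
            padded[y + r, x + r] = centers[best]
    return out
```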

    Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control

    We propose Neural Actor (NA), a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses. Our method builds on recent neural scene representation and rendering work that learns representations of geometry and appearance from 2D images alone. While existing works demonstrate compelling rendering of static scenes and playback of dynamic scenes, photo-realistic reconstruction and rendering of humans with neural implicit methods, in particular under user-controlled novel poses, remains difficult. To address this problem, we use a coarse body model as a proxy to unwarp the surrounding 3D space into a canonical pose. A neural radiance field learns pose-dependent geometric deformations and pose- and view-dependent appearance effects in the canonical space from multi-view video input. To synthesize novel views of high-fidelity dynamic geometry and appearance, we leverage 2D texture maps defined on the body model as latent variables for predicting residual deformations and dynamic appearance. Experiments demonstrate that our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and it generalizes well even to new poses that differ starkly from the training poses. Furthermore, our method supports body shape control of the synthesized results.
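    A minimal sketch of the kind of pose-conditioned radiance field described above is given below, assuming PyTorch: a point unwarped into canonical space is concatenated with a per-sample latent code (in Neural Actor this code is read from 2D texture maps on the body model) and decoded into density and view-dependent color. The architecture is illustrative only, without positional encoding or the paper's texture-map prediction network, and is not the released Neural Actor model.

```python
# Minimal sketch (assumed architecture) of a pose-conditioned NeRF-style decoder.
import torch
import torch.nn as nn

class PoseConditionedNeRF(nn.Module):
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared feature trunk
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)         # volume density at the canonical point
        self.color = nn.Sequential(                 # view-dependent RGB head
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_canonical, view_dir, latent):
        h = self.trunk(torch.cat([x_canonical, latent], dim=-1))
        sigma = torch.relu(self.density(h))
        rgb = self.color(torch.cat([h, view_dir], dim=-1))
        return sigma, rgb
```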