
    Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes

    We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images computed during encoding and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve significant savings in data transmission by avoiding texture coordinates altogether: they are generated automatically by an unwrapping scheme agreed upon by both encoder and decoder.
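
    The progressive reconstruction described above can be pictured as upsampling the coarsest texture and adding one residual (refinement) image per level. The NumPy sketch below is a minimal illustration under that assumption; the actual codec, predictor, and refinement-image format are not specified here.

        # Minimal sketch of progressive texture refinement. Assumption: each
        # refinement image stores the residual between the upsampled coarser
        # LOD and the next finer LOD; the real encoder may use a different
        # predictor and entropy coding.
        import numpy as np

        def upsample2x(img):
            # Nearest-neighbour upsampling stands in for the codec's predictor.
            return img.repeat(2, axis=0).repeat(2, axis=1)

        def encode_lods(finest, num_levels):
            """Build coarse-to-fine texture LODs plus residual refinement images."""
            lods = [finest]
            for _ in range(num_levels - 1):
                f = lods[-1]
                # 2x2 box-filter downsampling creates the next coarser LOD.
                coarser = f.reshape(f.shape[0] // 2, 2, f.shape[1] // 2, 2, -1).mean(axis=(1, 3))
                lods.append(coarser)
            lods.reverse()  # coarsest first
            residuals = [lods[i + 1] - upsample2x(lods[i]) for i in range(len(lods) - 1)]
            return lods[0], residuals

        def decode_progressive(coarsest, residuals, levels_needed):
            """Reconstruct finer LODs only as far as the client requests them."""
            tex = coarsest
            for r in residuals[:levels_needed]:
                tex = upsample2x(tex) + r
            return tex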

    HDHumans: A Hybrid Approach for High-fidelity Digital Humans

    Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication over the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel view synthesis, generalization to novel motions, reproduction of loose clothing, or they cannot render characters at the high resolution offered by modern displays. To this end, we propose HDHumans, which is the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images of arbitrary novel views and of motions not seen at training time. At the technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF). Our method is carefully designed to achieve a synergy between classical surface deformation and NeRF. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic and articulated character and even enables the synthesis of novel motions. Second, we also leverage the dense point clouds resulting from NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in terms of synthesis quality and resolution, as well as the quality of 3D surface reconstruction.
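
    One way to read the template/NeRF coupling described above is that each ray sample is expressed relative to the posed template, so the radiance field is queried in a pose-normalized space. The sketch below illustrates that idea only; it is an assumption about the mechanism, and all names are placeholders rather than the authors' code.

        # Conceptual sketch (not the authors' code): express each ray sample
        # relative to its nearest posed template vertex, then re-attach the
        # offset to the canonical template before querying the radiance field.
        import numpy as np

        def template_guided_query(samples, posed_vertices, canonical_vertices, nerf_mlp):
            """samples: (N, 3) ray sample points in world space.
            posed_vertices / canonical_vertices: (V, 3) template geometry.
            nerf_mlp: callable mapping canonical-space points to (rgb, density)."""
            # Nearest posed template vertex for every sample point.
            d2 = ((samples[:, None, :] - posed_vertices[None, :, :]) ** 2).sum(-1)
            nearest = d2.argmin(axis=1)
            # Offset relative to the posed surface, re-attached in canonical space.
            offsets = samples - posed_vertices[nearest]
            canonical_points = canonical_vertices[nearest] + offsets
            return nerf_mlp(canonical_points)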

    Vector field processing on triangle meshes

    While scalar fields on surfaces have long been staples of geometry processing, the use of tangent vector fields has steadily grown over the last two decades: they are crucial to encoding directions and sizing on surfaces, as commonly required in tasks such as texture synthesis, non-photorealistic rendering, digital grooming, and meshing. There are, however, a variety of discrete representations of tangent vector fields on triangle meshes, and each approach offers different tradeoffs among simplicity, efficiency, and accuracy depending on the targeted application. This course reviews the three main families of discretizations used to design computational tools for vector field processing on triangle meshes: face-based, edge-based, and vertex-based representations. In the process of reviewing the computational tools offered by these representations, we go over a large body of recent developments in vector field processing in the area of discrete differential geometry. We also discuss the theoretical and practical limitations of each type of discretization, and cover increasingly common extensions such as n-direction and n-vector fields. While the course will focus on explaining the key approaches to practical encoding (including data structures) and manipulation (including discrete operators) of finite-dimensional vector fields, important differential geometric notions will also be covered: as is often the case in Discrete Differential Geometry, the discrete picture will be used to illustrate deep continuous concepts such as covariant derivatives, metric connections, or Bochner Laplacians.
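
    As a concrete illustration of the face-based family mentioned above, a tangent vector field can be stored as one 2D coefficient pair per triangle, expressed in a per-face orthonormal frame. The sketch below assumes simple (V, 3) vertex and (F, 3) face arrays; edge- and vertex-based representations store their degrees of freedom differently.

        # Minimal sketch of a face-based tangent vector field: one vector per
        # triangle, stored as 2D coefficients in a per-face orthonormal frame.
        import numpy as np

        def face_frames(vertices, faces):
            """Orthonormal tangent frame (e1, e2) and normal n per triangle."""
            p0, p1, p2 = (vertices[faces[:, i]] for i in range(3))
            e1 = p1 - p0
            e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
            n = np.cross(e1, p2 - p0)
            n /= np.linalg.norm(n, axis=1, keepdims=True)
            e2 = np.cross(n, e1)
            return e1, e2, n

        def embed_field(coeffs, e1, e2):
            """Turn per-face 2D coefficients (F, 2) into 3D tangent vectors (F, 3)."""
            return coeffs[:, :1] * e1 + coeffs[:, 1:] * e2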

    Breathing New Life into 3D Assets with Generative Repainting

    Diffusion-based text-to-image models have ignited immense attention from the vision community, artists, and content creators. Broad adoption of these models is due to significant improvements in the quality of generations and efficient conditioning on various modalities, not just text. However, lifting the rich generative priors of these 2D models into 3D is challenging. Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields. We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools and demonstrate their ability to work together in a non-learned fashion. Such modularity has the intrinsic advantage of eased partial upgrades, which has become an important property in such a fast-paced domain. Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools, and outputs a painted input geometry in several formats. We conduct a large-scale study on a wide range of objects and categories from the ShapeNetSem dataset and demonstrate the advantages of our approach, both qualitatively and quantitatively. Project page: https://www.obukhov.ai/repainting_3d_asset
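
    The abstract describes an alternation between 2D generative refinement and 3D consistency enforcement. The sketch below is a high-level, hypothetical rendering of that loop: every callable is a placeholder standing in for the pretrained diffusion model, the radiance-field fitting step, and the renderer, not the project's actual API.

        # High-level sketch of the refine-then-reconcile alternation; all
        # functions passed in are hypothetical placeholders for real tools.
        def repaint(geometry, prompt, views, diffusion_refine, fit_radiance_field,
                    render, bake_texture, num_rounds=4):
            texture = None
            for _ in range(num_rounds):
                # 1) Render the current asset from several viewpoints.
                images = [render(geometry, texture, v) for v in views]
                # 2) Let the pretrained 2D diffusion model repaint each view.
                refined = [diffusion_refine(img, prompt) for img in images]
                # 3) Enforce 3D consistency by fitting a radiance field to the
                #    refined views, then bake it back onto the input geometry.
                field = fit_radiance_field(refined, views)
                texture = bake_texture(field, geometry)
            return texture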

    Text-guided Image-and-Shape Editing and Generation: A Short Survey

    Image and shape editing are ubiquitous among digital artworks. Graphics algorithms help artists and designers achieve their editing intents without tedious manual retouching. With recent advances in machine learning, artists' editing intents can even be driven by text, using a variety of well-trained neural networks. These methods have achieved extensive success in tasks such as generating photorealistic images, artworks, and human poses, stylizing meshes from text, or auto-completion given image and shape priors. In this short survey, we provide an overview of over 50 papers on state-of-the-art (text-guided) image-and-shape generation techniques. We start with an overview of recent editing algorithms in the introduction. Then, we provide a comprehensive review of text-guided editing techniques for 2D and 3D independently, where each sub-section begins with a brief background introduction. We also contextualize editing algorithms under recent implicit neural representations. Finally, we conclude the survey with a discussion of existing methods and potential research ideas.

    Texture Generation on 3D Meshes with Point-UV Diffusion

    In this work, we focus on synthesizing high-quality textures on 3D meshes. We present Point-UV diffusion, a coarse-to-fine pipeline that marries the denoising diffusion model with UV mapping to generate 3D-consistent and high-quality texture images in UV space. We start by introducing a point diffusion model to synthesize low-frequency texture components with our tailored style guidance to tackle the biased color distribution. The derived coarse texture offers global consistency and serves as a condition for the subsequent UV diffusion stage, aiding in regularizing the model to generate a 3D-consistent UV texture image. Then, a UV diffusion model with hybrid conditions is developed to enhance the texture fidelity in the 2D UV space. Our method can process meshes of any genus, generating diversified, geometry-compatible, and high-fidelity textures. Code is available at https://cvmi-lab.github.io/Point-UV-Diffusion. Accepted to ICCV 2023.
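
    The coarse-to-fine flow above can be summarized as: sample surface points, diffuse low-frequency point colors, rasterize them into a coarse UV texture, then refine in UV space. The sketch below is purely illustrative; every callable and helper is a hypothetical placeholder for the corresponding trained model or utility.

        # Sketch of the two-stage inference flow; point_diffusion, uv_diffusion,
        # rasterize_to_uv, and sample_surface_points are assumed placeholders.
        def synthesize_texture(mesh, point_diffusion, uv_diffusion, rasterize_to_uv,
                               style_guidance=None):
            # Stage 1: per-point colors on the surface (low-frequency, 3D-consistent).
            points = mesh.sample_surface_points()  # assumed helper on the mesh object
            point_colors = point_diffusion(points, guidance=style_guidance)
            # Project the colored points into UV space to get a coarse texture image.
            coarse_texture = rasterize_to_uv(mesh, points, point_colors)
            # Stage 2: UV diffusion sharpens the result in 2D UV space,
            # conditioned on the coarse texture.
            fine_texture = uv_diffusion(condition=coarse_texture)
            return fine_texture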

    Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar

    Rendering photorealistic and dynamically moving human heads is crucial for ensuring a pleasant and immersive experience in AR/VR and video conferencing applications. However, existing methods often struggle to model challenging facial regions (e.g., mouth interior, eyes, hair/beard), resulting in unrealistic and blurry results. In this paper, we propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation as well as the neural volume rendering process and discards the predefined connectivity and hard correspondence imposed by mesh-based approaches. Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map, achieving increased modeling capacity and more accurate control. We introduce three technical innovations to improve the rendering and training efficiency: a patch-wise depth-guided (shading point) sampling strategy, a lightweight radiance decoding process, and a Grid-Error-Patch (GEP) ray sampling strategy during training. By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars. Experiments conducted on three subjects from the Multiface dataset demonstrate the effectiveness of our designs, outperforming previous state-of-the-art methods, especially in handling challenging facial regions.
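
    The constraint of neural points around the target expression's surface via a UV displacement map can be pictured as follows. This is an assumption-laden illustration, not the authors' implementation: the attribute maps, sampling pattern, and displacement parameterization are all placeholders.

        # Illustrative sketch: place neural points by sampling a UV grid, looking
        # up the target expression's surface position per texel, and offsetting
        # along the surface normal by a predicted displacement.
        import numpy as np

        def place_neural_points(uv_grid, surface_pos_map, normal_map, displacement_map):
            """uv_grid: (N, 2) integer texel coordinates to sample.
            surface_pos_map / normal_map: (H, W, 3) rasterized surface attributes.
            displacement_map: (H, W) predicted offsets for the target expression."""
            u, v = uv_grid[:, 0], uv_grid[:, 1]
            base = surface_pos_map[v, u]
            normal = normal_map[v, u]
            offset = displacement_map[v, u][:, None]
            return base + offset * normal  # (N, 3) neural point positions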