
    Twisted-mass reweighting for O(a) improved Wilson fermions

    We test the reweighting of the quark determinant of O(a) improved Wilson fermions in the domain-decomposed hybrid Monte Carlo algorithm. Specifically, we implement a reweighting in a twisted-mass parameter proposed by Palombi and Lüscher in N_f = 2 QCD. We find that, at equal acceptance rate, the algorithm is significantly more stable on a 32 × 64^3 lattice once the reweighting parameter is switched on. At the same time, the reweighting factor does not fluctuate strongly and is therefore under control. At equal statistics, the uncertainty on the pion correlator is comparable to that of the standard, unreweighted algorithm. Comment: 7 pages, 5 figures, XXIX International Symposium on Lattice Field Theory
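    A minimal sketch of the underlying idea, written for a single twisted-mass parameter (the practical scheme of Lüscher and Palombi additionally uses even-odd preconditioning and splits the factor over several intermediate masses): the simulated weight uses det(D†D + μ²) instead of det(D†D), and expectation values in the target theory are recovered by reweighting,

        \langle O \rangle = \frac{\langle O\,W \rangle_\mu}{\langle W \rangle_\mu},
        \qquad
        W = \frac{\det(D^\dagger D)}{\det(D^\dagger D + \mu^2)}
          = \Big\langle \exp\!\big(-\mu^2\,\eta^\dagger (D^\dagger D)^{-1}\eta\big) \Big\rangle_{\eta},

    with D the O(a) improved Wilson-Dirac operator and η a complex Gaussian noise vector. Since D†D is positive, each stochastic estimate of W lies in (0, 1], which is one reason the fluctuations of the reweighting factor remain moderate.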

    BakedAvatar: Baking Neural Fields for Real-Time Head Avatar Synthesis

    Synthesizing photorealistic 4D human head avatars from videos is essential for VR/AR, telepresence, and video game applications. Although existing Neural Radiance Fields (NeRF)-based methods achieve high-fidelity results, their computational expense limits use in real-time applications. To overcome this limitation, we introduce BakedAvatar, a novel representation for real-time neural head avatar synthesis, deployable in a standard polygon rasterization pipeline. Our approach extracts deformable multi-layer meshes from learned isosurfaces of the head and computes expression-, pose-, and view-dependent appearances that can be baked into static textures for efficient rasterization. We thus propose a three-stage pipeline for neural head avatar synthesis, which includes learning continuous deformation, manifold, and radiance fields, extracting layered meshes and textures, and fine-tuning texture details with differentiable rasterization. Experimental results demonstrate that our representation generates synthesis results of comparable quality to other state-of-the-art methods while significantly reducing the required inference time. We further showcase various head avatar synthesis results from monocular videos, including view synthesis, face reenactment, expression editing, and pose editing, all at interactive frame rates. Comment: ACM Transactions on Graphics (SIGGRAPH Asia 2023). Project Page: https://buaavrcg.github.io/BakedAvata
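    A rough sketch of the kind of compositing step such a layered representation enables (hypothetical NumPy code, not the paper's implementation): once per-layer appearance has been baked into static textures, each extracted mesh layer can be rasterized to an RGBA image and the layers blended front to back,

        import numpy as np

        def composite_layers(layer_rgba):
            """Front-to-back alpha compositing of rasterized mesh layers.

            layer_rgba: list of (H, W, 4) float arrays with straight (un-premultiplied)
            alpha, ordered from the layer nearest the camera to the farthest.
            """
            height, width, _ = layer_rgba[0].shape
            out_rgb = np.zeros((height, width, 3))
            transmittance = np.ones((height, width, 1))    # light not yet absorbed
            for rgba in layer_rgba:
                rgb, alpha = rgba[..., :3], rgba[..., 3:4]
                out_rgb += transmittance * alpha * rgb     # contribution visible through nearer layers
                transmittance *= 1.0 - alpha               # attenuate what reaches layers behind
            return out_rgb

    In a real-time deployment, the equivalent blend would typically run on the GPU as ordinary sorted alpha blending inside the rasterization pipeline.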

    Language Embedded 3D Gaussians for Open-Vocabulary Scene Understanding

    Open-vocabulary querying in 3D space is challenging but essential for scene understanding tasks such as object localization and segmentation. Language-embedded scene representations have made progress by incorporating language features into 3D spaces. However, their efficacy heavily depends on neural networks that are resource-intensive in training and rendering. Although recent 3D Gaussians offer efficient and high-quality novel view synthesis, directly embedding language features in them leads to prohibitive memory usage and decreased performance. In this work, we introduce Language Embedded 3D Gaussians, a novel scene representation for open-vocabulary query tasks. Instead of embedding high-dimensional raw semantic features on 3D Gaussians, we propose a dedicated quantization scheme that drastically alleviates the memory requirement, and a novel embedding procedure that yields smoother yet accurate queries, countering the multi-view feature inconsistencies and the high-frequency inductive bias of point-based representations. Our comprehensive experiments show that our representation achieves the best visual quality and language querying accuracy among current language-embedded representations, while maintaining real-time rendering frame rates on a single desktop GPU.
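    To make the memory argument concrete (a toy sketch, not the paper's dedicated quantization scheme): rather than storing a D-dimensional semantic feature on every Gaussian, one can vector-quantize the features to a small codebook and keep only a compact index per Gaussian,

        import numpy as np

        def quantize_features(features, k=256, iters=10, seed=0):
            """Toy k-means vector quantization of per-Gaussian language features.

            features: (N, D) array of high-dimensional semantic embeddings.
            Returns a (k, D) codebook and an (N,) index array; storing one index
            (a single byte for k <= 256) instead of D floats per Gaussian is the
            memory saving.
            """
            rng = np.random.default_rng(seed)
            codebook = features[rng.choice(len(features), size=k, replace=False)].copy()
            for _ in range(iters):
                # Squared Euclidean distance from every feature to every codebook entry.
                dists = ((features ** 2).sum(axis=1, keepdims=True)
                         - 2.0 * features @ codebook.T
                         + (codebook ** 2).sum(axis=1))
                idx = dists.argmin(axis=1)
                # Move each codebook entry to the mean of its assigned features.
                for j in range(k):
                    members = features[idx == j]
                    if len(members) > 0:
                        codebook[j] = members.mean(axis=0)
            return codebook, idx

    In this toy scheme, a query embedding only needs to be compared against the k codebook entries, and each Gaussian inherits the score of its assigned entry.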