
    Improved Young and Heinz inequalities with the Kantorovich constant

    In this article, we study further refinements and reverses of the Young and Heinz inequalities involving the Kantorovich constant. These refined inequalities are then used to establish corresponding operator inequalities on Hilbert space and Hilbert–Schmidt norm inequalities. Comment: 11 pages.
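    As background (these are the standard scalar forms, not the article's refined versions), the Young and Heinz inequalities and the Kantorovich constant can be stated as follows; refinements in this line of work typically sharpen the Young bound by a power of K evaluated at the ratio of the two terms.

        \[
          a^{\nu} b^{1-\nu} \le \nu a + (1-\nu) b,
          \qquad a, b > 0,\ \nu \in [0,1]
          \qquad \text{(Young)}
        \]
        \[
          2\sqrt{ab} \le a^{\nu} b^{1-\nu} + a^{1-\nu} b^{\nu} \le a + b
          \qquad \text{(Heinz)}
        \]
        \[
          K(t) = \frac{(t+1)^{2}}{4t}, \qquad t > 0, \qquad K(t) \ge 1
          \qquad \text{(Kantorovich constant)}
        \]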

    Static and dynamic crushing of novel porous crochet-sintered metal and its filled composite tube

    © 2018 Elsevier Ltd. A novel porous crochet-sintered metal (PCSM) is fabricated by rolling a crocheted porous cloth and subsequently vacuum sintering it, using a single continuous super-fine soft 304 rope twisted from 49 fibers as the raw material. This work investigates the quasi-static and dynamic axial crushing response of PCSMs and their filled composite tubes. The pore structures of PCSMs are formed by inter-crocheted, multiply inter-locked rope skeletons and metallurgical bonds. The PCSMs show almost no initial impact effect and a high crushing force efficiency. Filling a 6063 tube with PCSM changes its deformation mode and improves its static crashworthiness parameters by 8–25% with almost no increase in the initial impact effect, but does not always play a positive role in dynamic energy absorption. Porosity has an obvious influence on the quasi-static and dynamic behavior and crashworthiness of PCSMs and their filled composite tubes, and its effect on the dynamic crashworthiness of the composite tube is greater than on the quasi-static crashworthiness. The PCSMs and their composite tubes show great potential for application as energy absorbers. Filling a PCSM into a bare tube can improve the energy absorption ability of a thin-walled tube with almost no increase in the initial peak force.
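    The crashworthiness quantities mentioned above (initial peak crushing force, crushing force efficiency, energy absorption) have standard definitions; the sketch below shows how they could be computed from a measured force-displacement curve. The function and variable names are illustrative and not taken from the paper.

        import numpy as np

        def crashworthiness_metrics(force, displacement, mass=None):
            """Standard axial-crushing metrics from a force-displacement curve.

            force        : crushing force samples [N]
            displacement : crush displacements [m], same length as force
            mass         : crushed mass [kg], optional (for specific energy absorption)
            """
            force = np.asarray(force, dtype=float)
            displacement = np.asarray(displacement, dtype=float)

            # Absorbed energy: trapezoidal area under the force-displacement curve.
            ea = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement)))

            # Initial peak crushing force and mean crushing force.
            pcf = float(force.max())
            mcf = ea / (displacement[-1] - displacement[0])

            # Crushing force efficiency: mean force relative to peak force
            # (values near 1 correspond to "almost no initial impact effect").
            cfe = mcf / pcf

            metrics = {"EA [J]": ea, "PCF [N]": pcf, "MCF [N]": mcf, "CFE [-]": cfe}
            if mass is not None:
                metrics["SEA [J/kg]"] = ea / mass  # specific energy absorption
            return metrics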

    MMFace4D: A Large-Scale Multi-Modal 4D Face Dataset for Audio-Driven 3D Face Animation

    Audio-driven face animation is an eagerly anticipated technique for applications such as VR/AR, games, and movie making. With the rapid development of 3D engines, there is an increasing demand for driving 3D faces with audio. However, currently available 3D face animation datasets are either limited in scale or of unsatisfactory quality, which hampers further development of audio-driven 3D face animation. To address this challenge, we propose MMFace4D, a large-scale multi-modal 4D (3D sequence) face dataset consisting of 431 identities, 35,904 sequences, and 3.9 million frames. MMFace4D exhibits two compelling characteristics: 1) a remarkably diverse set of subjects and corpus, encompassing actors aged 15 to 68 and recorded sentences with durations ranging from 0.7 to 11.4 seconds; 2) synchronized audio and 3D mesh sequences with high-resolution face details. To capture the subtle nuances of 3D facial expressions, we leverage three synchronized RGBD cameras during the recording process. Upon MMFace4D, we construct a non-autoregressive framework for audio-driven 3D face animation. Our framework considers the regional and composite natures of facial animations, and surpasses contemporary state-of-the-art approaches both qualitatively and quantitatively. The code, model, and dataset will be publicly available. Comment: 10 pages, 8 figures. This paper has been submitted to IEEE Transactions on Multimedia and is an extension of our MM 2023 paper arXiv:2308.05428. The dataset is now publicly available; see the project page at https://wuhaozhe.github.io/mmface4d
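    A hypothetical sketch of how one paired sample from such a dataset might be represented is given below; all field names and shapes are assumptions for illustration, and the actual MMFace4D format is described on the project page linked above.

        from dataclasses import dataclass
        import numpy as np

        @dataclass
        class Face4DSample:
            """One synchronized audio / 3D-mesh-sequence pair (illustrative layout only)."""
            subject_id: str        # one of the recorded identities
            audio: np.ndarray      # waveform, shape (num_audio_samples,)
            sample_rate: int       # audio sampling rate in Hz
            vertices: np.ndarray   # mesh sequence, shape (num_frames, num_vertices, 3)
            fps: float             # mesh frame rate

            def duration_seconds(self) -> float:
                # Clip length implied by the mesh sequence.
                return self.vertices.shape[0] / self.fps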

    Speech-Driven 3D Face Animation with Composite and Regional Facial Movements

    Speech-driven 3D face animation poses significant challenges due to the intricacy and variability inherent in human facial movements. This paper emphasizes the importance of considering both the composite and regional natures of facial movements in speech-driven 3D face animation. The composite nature pertains to how speech-independent factors globally modulate speech-driven facial movements along the temporal dimension. The regional nature refers to the fact that facial movements are not globally correlated but are actuated by local musculature along the spatial dimension. Incorporating both natures is therefore indispensable for generating vivid animation. To address the composite nature, we introduce an adaptive modulation module that employs arbitrary facial movements to dynamically adjust speech-driven facial movements across frames on a global scale. To accommodate the regional nature, our approach ensures that each constituent of the facial features for every frame focuses on the local spatial movements of 3D faces. Moreover, we present a non-autoregressive backbone for translating audio to 3D facial movements, which preserves high-frequency nuances of facial movements and enables efficient inference. Comprehensive experiments and user studies demonstrate that our method surpasses contemporary state-of-the-art approaches both qualitatively and quantitatively. Comment: Accepted by MM 2023, 9 pages, 7 figures. arXiv admin note: text overlap with arXiv:2303.0979
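    One way to read the adaptive modulation idea is as a learned, per-channel scale-and-shift of the speech-driven frame features conditioned on a global, speech-independent style code. The sketch below is an interpretation in that spirit (FiLM-style conditioning), not the authors' implementation; the module name, parameter names, and shapes are assumptions.

        import torch
        import torch.nn as nn

        class AdaptiveModulation(nn.Module):
            """Global modulation of per-frame motion features (illustrative only)."""

            def __init__(self, feat_dim: int, style_dim: int):
                super().__init__()
                # Predict a scale and a shift for every feature channel from the style code.
                self.to_scale_shift = nn.Linear(style_dim, 2 * feat_dim)

            def forward(self, frame_feats: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
                # frame_feats: (batch, num_frames, feat_dim) speech-driven features
                # style:       (batch, style_dim) speech-independent code, e.g. pooled
                #              from an arbitrary reference motion clip
                scale, shift = self.to_scale_shift(style).chunk(2, dim=-1)
                # Apply the same scale/shift to every frame (global, temporal-wide modulation).
                return frame_feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)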