5 research outputs found

    NeuS-PIR: Learning Relightable Neural Surface using Pre-Integrated Rendering

    Recent advances in neural implicit fields enable rapid reconstruction of 3D geometry from multi-view images. Beyond geometry, recovering physical properties such as material and illumination is essential for enabling more applications. This paper presents a new method that effectively learns a relightable neural surface using pre-integrated rendering, simultaneously learning geometry, material, and illumination within the neural implicit field. The key insight of our work is that these properties are closely related to each other, and optimizing them collaboratively leads to consistent improvements. Specifically, we propose NeuS-PIR, a method that factorizes the radiance field into a spatially varying material field and a differentiable environment cubemap, and jointly learns them with the geometry represented by a neural surface. Our experiments demonstrate that the proposed method outperforms state-of-the-art methods on both synthetic and real datasets.
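    The factorization described in the abstract can be illustrated with a minimal toy: shading a point becomes a lookup of a spatially varying material at that point combined with a pre-integrated (pre-filtered) environment cubemap, replacing the per-sample lighting integral with a texture fetch. All function names, shapes, and the diffuse-only shading model below are illustrative assumptions, not NeuS-PIR's actual code.

    ```python
    import numpy as np

    CUBE_RES = 16  # per-face resolution of the toy environment cubemap

    def material_field(x):
        """Toy spatially varying material: albedo and roughness from position."""
        albedo = 0.5 + 0.5 * np.sin(x)                      # (..., 3), in [0, 1]
        roughness = np.clip(np.abs(x[..., :1]), 0.1, 1.0)   # (..., 1)
        return albedo, roughness

    def cubemap_face_uv(d):
        """Map a direction to (face index, u, v) for a cubemap lookup."""
        ax = np.abs(d)
        axis = int(np.argmax(ax))
        m = ax[axis]
        if axis == 0:
            u, v = d[1], d[2]
        elif axis == 1:
            u, v = d[0], d[2]
        else:
            u, v = d[0], d[1]
        sign = 0 if d[axis] >= 0 else 1
        return 2 * axis + sign, (u / m + 1) / 2, (v / m + 1) / 2

    def prefiltered_env(cubemap, d, roughness):
        """Pre-integrated lookup: fetch a texel of an already-filtered cubemap."""
        face, u, v = cubemap_face_uv(d)
        i = min(int(u * CUBE_RES), CUBE_RES - 1)
        j = min(int(v * CUBE_RES), CUBE_RES - 1)
        # roughness would normally select a mip level; here it just scales contrast
        return cubemap[face, j, i] * (1.0 - 0.5 * roughness)

    def shade(x, view_dir, cubemap):
        """Radiance = material albedo * pre-integrated environment term."""
        albedo, roughness = material_field(x)
        env = prefiltered_env(cubemap, view_dir, float(roughness[0]))
        return albedo * env

    rng = np.random.default_rng(0)
    cubemap = rng.uniform(0.5, 1.0, size=(6, CUBE_RES, CUBE_RES, 3))
    rgb = shade(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]), cubemap)
    print(rgb.shape)  # one RGB sample
    ```

    Because both the material field and the cubemap are plain arrays of parameters here, the same structure is differentiable end to end when expressed in an autodiff framework, which is what allows geometry, material, and illumination to be optimized jointly.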

    LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset, Framework, and Benchmark

    Large language models have become a potential pathway toward achieving artificial general intelligence. Recent works on multi-modal large language models (MLLMs) have demonstrated their effectiveness in handling visual modalities. In this work, we extend the research on MLLMs to point clouds and present the LAMM-Dataset and LAMM-Benchmark for 2D image and 3D point cloud understanding. We also establish an extensible framework to facilitate the extension of MLLMs to additional modalities. Our main contribution is three-fold: 1) We present the LAMM-Dataset and LAMM-Benchmark, which cover almost all high-level vision tasks for 2D and 3D vision. Extensive experiments validate the effectiveness of our dataset and benchmark. 2) We detail the methods of constructing instruction-tuning datasets and benchmarks for MLLMs, which will enable future research on MLLMs to scale up and extend to other domains, tasks, and modalities faster. 3) We provide a primary but promising MLLM training framework optimized for modality extension. We also provide baseline models, comprehensive experimental observations, and analysis to accelerate future research. Code and datasets are available at https://github.com/OpenLAMM/LAMM.
    Comment: 37 pages, 33 figures. Code available at https://github.com/OpenLAMM/LAMM ; Project page: https://openlamm.github.io
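    To make the idea of a multi-modal instruction-tuning record concrete, here is a hypothetical example of the kind of (modality input, instruction, response) triple such a dataset pairs. The field names and file path are illustrative assumptions, not LAMM's actual schema.

    ```python
    import json

    # Hypothetical instruction-tuning record; schema is illustrative only.
    record = {
        "id": "example-0001",
        "modality": "point_cloud",        # or "image" for the 2D tasks
        "source": "scene_0001.ply",       # hypothetical path to the 3D input
        "conversations": [
            {"role": "user",
             "content": "How many chairs are visible in this scene?"},
            {"role": "assistant",
             "content": "There are three chairs arranged around the table."},
        ],
    }
    print(json.dumps(record, indent=2))
    ```

    A benchmark entry would then reuse the same structure but hold out the assistant turn, scoring the model's generated answer against it.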

    DanceFormer: Music Conditioned 3D Dance Generation with Parametric Motion Transformer

    Generating 3D dances from music is an emerging research task that benefits many applications in vision and graphics. Previous works treat this task as sequence generation; however, it is challenging to render a music-aligned long-term sequence with high kinematic complexity and coherent movements. In this paper, we reformulate it as a two-stage process, i.e., key pose generation followed by in-between parametric motion curve prediction, where the key poses are easier to synchronize with the music beats and the parametric curves can be efficiently regressed to render fluent, rhythm-aligned movements. We name the proposed method DanceFormer; it includes two cascading kinematics-enhanced transformer-guided networks (called DanTrans) that tackle the two stages, respectively. Furthermore, we propose a large-scale music-conditioned 3D dance dataset, called PhantomDance, that is accurately labeled by experienced animators rather than by reconstruction or motion capture. This dataset also encodes dances as key poses and parametric motion curves in addition to pose sequences, thus benefiting the training of our DanceFormer. Extensive experiments demonstrate that the proposed method, even when trained on existing datasets, can generate fluent, performative, and music-matched 3D dances that surpass previous works quantitatively and qualitatively. Moreover, the proposed DanceFormer, together with the PhantomDance dataset, is seamlessly compatible with industrial animation software, thus facilitating adaptation for various downstream applications.
    Comment: This is the version accepted by AAAI-2
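    The two-stage formulation above can be sketched in miniature: stage one places key poses on beat timestamps, and stage two fills the in-betweens with parametric motion curves rather than predicting every frame. The cubic Hermite curves, the toy 1-D "pose" representation, and all names below are assumptions for illustration, not DanceFormer's actual model.

    ```python
    import numpy as np

    def hermite(p0, p1, m0, m1, t):
        """Cubic Hermite curve between poses p0 and p1, evaluated at t in [0, 1]."""
        t2, t3 = t * t, t * t * t
        return ((2 * t3 - 3 * t2 + 1) * p0 + (t3 - 2 * t2 + t) * m0
                + (-2 * t3 + 3 * t2) * p1 + (t3 - t2) * m1)

    def inbetween(beat_times, key_poses, fps=30):
        """Render a dense pose sequence from beat-aligned key poses."""
        key_poses = np.asarray(key_poses, dtype=float)          # (K, D)
        # Finite-difference tangents (Catmull-Rom style) keep transitions smooth.
        tangents = np.gradient(key_poses, beat_times, axis=0)   # (K, D)
        frames = []
        for k in range(len(beat_times) - 1):
            t0, t1 = beat_times[k], beat_times[k + 1]
            n = max(int((t1 - t0) * fps), 1)
            for s in range(n):
                t = s / n
                frames.append(hermite(key_poses[k], key_poses[k + 1],
                                      (t1 - t0) * tangents[k],
                                      (t1 - t0) * tangents[k + 1], t))
        frames.append(key_poses[-1])
        return np.stack(frames)  # (T, D) dense motion, aligned to the beats

    beats = [0.0, 0.5, 1.0, 1.5]          # beat timestamps in seconds
    keys = [[0.0], [1.0], [0.2], [0.8]]   # toy 1-D "poses" at each beat
    motion = inbetween(beats, keys, fps=30)
    print(motion.shape)  # (46, 1): 15 frames per half-second interval + final pose
    ```

    Because each interval is a low-dimensional curve, a network only has to regress a handful of curve parameters per beat instead of every frame, which is what makes the rendered motion both efficient to predict and easy to keep rhythm-aligned.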

    Orthogeriatric co-managements lower early mortality in long-lived elderly hip fracture: a post-hoc analysis of a prospective study

    Abstract Objective: To evaluate the clinical effectiveness of orthogeriatric co-management care in long-lived elderly hip fracture patients (age ≥ 90). Methods: A secondary analysis was conducted on long-lived hip fracture patients treated between 2018 and 2019 in 6 hospitals in Beijing, China. Patients were divided into an orthogeriatric co-management group (CM group) and a traditional consultation mode group (TC group) according to the management mode. With 30-day mortality as the primary outcome, multivariate regression analyses were performed after adjusting for potential covariates. Thirty-day mobility and quality of life were compared between groups. Results: A total of 233 patients were included, 223 of whom completed follow-up (125 in the CM group, 98 in the TC group). The average age was 92.4 ± 2.5 years (range 90–102). After adjustment for potential covariates, 30-day mortality in the CM group was significantly lower than in the TC group (2.4% vs. 10.2%; OR = 0.231; 95% CI 0.059 ~ 0.896; P = 0.034). The proportion of patients undergoing surgery and of surgery performed within 48 h also favored the CM group (97.6% vs. 85.7%, P = 0.002; 74.4% vs. 24.5%, P < 0.05). Conclusions: For long-lived elderly hip fracture patients, orthogeriatric co-management care lowered early mortality and improved early mobility compared with the traditional consultation mode.