5 research outputs found
HiFi-123: Towards High-fidelity One Image to 3D Content Generation
Recent advances in text-to-image diffusion models have enabled 3D generation
from a single image. However, current image-to-3D methods often produce
suboptimal results for novel views, with blurred textures and deviations from
the reference image, limiting their practical applications. In this paper, we
introduce HiFi-123, a method designed for high-fidelity and multi-view
consistent 3D generation. Our contributions are twofold: First, we propose a
reference-guided novel view enhancement technique that substantially reduces
the quality gap between synthesized and reference views. Second, building on
this enhancement, we present a novel reference-guided state distillation loss.
When incorporated into the optimization-based image-to-3D
pipeline, our method significantly improves 3D generation quality, achieving
state-of-the-art performance. Comprehensive evaluations demonstrate the
effectiveness of our approach over existing methods, both qualitatively and
quantitatively.
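The optimization-based image-to-3D pipeline mentioned above typically updates a 3D representation with a score-distillation-style gradient from a pretrained diffusion model. The following is a minimal NumPy sketch of that update rule under strong simplifying assumptions: `toy_denoiser` is a hypothetical stand-in for the diffusion model, and the paper's reference-guided conditioning is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(x_noisy, t):
    # Hypothetical stand-in for a pretrained diffusion model's noise
    # prediction; a real pipeline would run a text/image-conditioned UNet.
    return 0.9 * x_noisy

def sds_grad(rendered, t, w=1.0):
    # Score-distillation-style gradient: perturb the rendered view with
    # noise, query the denoiser, and push the render so the predicted
    # noise moves toward the injected noise.
    eps = rng.standard_normal(rendered.shape)
    x_noisy = rendered + t * eps          # simplified forward process
    eps_pred = toy_denoiser(x_noisy, t)
    return w * (eps_pred - eps)

# Toy optimization loop over a "rendered view" (standing in for the
# parameters of a 3D representation such as a NeRF).
view = np.full((4, 4), 5.0)
for _ in range(200):
    view -= 0.05 * sds_grad(view, t=0.1)
```

With the toy denoiser, the loop contracts the view toward the denoiser's fixed point; in the real method the gradient instead pulls the rendered novel views toward the diffusion prior.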
DynamiCrafter: Animating Open-domain Images with Video Diffusion Priors
Animating a still image offers an engaging visual experience. Traditional
image animation techniques mainly focus on animating natural scenes with
stochastic dynamics (e.g. clouds and fluid) or domain-specific motions (e.g.
human hair or body motions), which limits their applicability to more
general visual content. To overcome this limitation, we explore the synthesis
of dynamic content for open-domain images, converting them into animated
videos. The key idea is to utilize the motion prior of text-to-video diffusion
models by incorporating the image into the generative process as guidance.
Given an image, we first project it into a text-aligned rich context
representation space using a query transformer, which helps the video model
digest the image content in a compatible fashion. However, some visual
details are still not well preserved in the resulting videos. To supply
more precise image information, we further feed the full image to the
diffusion model by concatenating it with the initial noise. Experimental
results show that our proposed method can produce visually convincing and more
logical and natural motions, as well as higher conformity to the input image.
Comparative evaluation demonstrates the notable superiority of our approach
over existing competitors. Project page: https://doubiiu.github.io/projects/DynamiCrafte
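The two conditioning paths described in the abstract, projecting the image into context tokens with a query transformer and concatenating the image with the initial noise, can be sketched in NumPy. All shapes, weight matrices, and the single-head cross-attention below are illustrative placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                     # illustrative feature dimension
n_feats, n_queries = 16, 4

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def query_transformer(image_feats, queries, Wq, Wk, Wv):
    # Single-head cross-attention: learnable queries attend to image
    # features, yielding a fixed-size context representation the video
    # model can consume alongside text conditioning.
    q, k, v = queries @ Wq, image_feats @ Wk, image_feats @ Wv
    attn = softmax(q @ k.T / np.sqrt(d))
    return attn @ v                        # (n_queries, d) context tokens

image_feats = rng.standard_normal((n_feats, d))
queries = rng.standard_normal((n_queries, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
context = query_transformer(image_feats, queries, Wq, Wk, Wv)

# Second path: concatenate the image latent with the initial noise along
# the channel axis so the diffusion model also sees precise pixel detail.
c, h, w = 4, 8, 8
noise = rng.standard_normal((c, h, w))
image_latent = rng.standard_normal((c, h, w))
unet_input = np.concatenate([noise, image_latent], axis=0)   # (2c, h, w)
```

The design intent is complementary: the context tokens give the model a semantic summary of the image, while the channel-wise concatenation preserves exact spatial detail.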
NOFA: NeRF-based One-shot Facial Avatar Reconstruction
3D facial avatar reconstruction has been a significant research topic in
computer graphics and computer vision, where photo-realistic rendering and
flexible controls over poses and expressions are necessary for many related
applications. Recently, its performance has been greatly improved with the
development of neural radiance fields (NeRF). However, most existing NeRF-based
facial avatars focus on subject-specific reconstruction and reenactment,
requiring multi-shot images containing different views of the specific subject
for training, and the learned model cannot generalize to new identities,
which limits its wider application. In this work, we propose a one-shot 3D
facial avatar reconstruction framework that only requires a single source image
to reconstruct a high-fidelity 3D facial avatar. To address the challenges of
limited generalization ability and missing multi-view information, we leverage
the generative prior of a 3D GAN and develop an efficient encoder-decoder
network to
reconstruct the canonical neural volume of the source image, and further
propose a compensation network to complement facial details. To enable
fine-grained control over facial dynamics, we propose a deformation field to
warp the canonical volume into driven expressions. Through extensive
experimental comparisons, we achieve superior synthesis results compared to
several state-of-the-art methods.
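The deformation-field idea in the abstract, warping a canonical volume so one avatar can be driven by many expressions, can be sketched as follows. This is a toy NumPy version: the dense density grid, the sinusoidal offset field, and the nearest-neighbor lookup are all hypothetical stand-ins for the learned networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative canonical neural volume: a dense grid of densities in [0, 1).
D = 16
canonical = rng.random((D, D, D))

def deformation(points, expr):
    # Hypothetical deformation field; a real system would predict these
    # offsets with an MLP conditioned on expression parameters.
    return 0.1 * np.sin(points + expr)        # (N, 3) offsets

def sample_volume(points):
    # Nearest-neighbor lookup into the canonical grid; a real NeRF would
    # query an MLP or use trilinear interpolation instead.
    idx = np.clip(np.round(points).astype(int), 0, D - 1)
    return canonical[idx[:, 0], idx[:, 1], idx[:, 2]]

def render_query(points, expr):
    # Warp query points into canonical space, then sample the volume:
    # the canonical avatar stays fixed while the expression drives the warp.
    return sample_volume(points + deformation(points, expr))

pts = rng.uniform(0, D - 1, size=(32, 3))
densities = render_query(pts, expr=0.5)
```

Keeping the appearance in a single canonical volume and pushing all expression dependence into the warp is what lets a one-shot reconstruction be reanimated without retraining.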
Analysis of the parathyroid function in maintenance hemodialysis patients from Changchun, China
Objective: To evaluate the parathyroid function in maintenance hemodialysis patients from 4 hemodialysis centers and to analyze the causes of the dysfunction. Methods: This cross-sectional study included patients with chronic renal disease undergoing maintenance hemodialysis treatment at 4 hemodialysis centers in Changchun, China, between March 2014 and August 2015. A total of 337 patients were asked to complete a questionnaire covering their name, gender, age, hemodialysis duration, use of calcium carbonate and vitamin D3 supplements, health education status, hemofiltration frequency, appetite, and education level. Serum intact parathyroid hormone (iPTH), phosphorus, total calcium, blood urea nitrogen (BUN), and creatinine (Cre) levels were obtained from clinical records. Patients with iPTH data were divided into 2 groups according to their iPTH level: a Normal group (28 subjects) and an Abnormal group (136 subjects). Intergroup differences were analyzed using the t-test, and categorical data were analyzed with the χ² test. Results: iPTH levels had not been monitored in 173 of the maintenance hemodialysis patients (51.3%) and were available for the remaining 164 (48.7%). Of these 164 patients, 28 (17.1%) had a normal serum iPTH level, while the other 136 (82.9%) had an abnormal iPTH level. The maintenance hemodialysis duration and phosphorus levels in the Abnormal group were higher than those in the Normal group (P < 0.05). The appetites of patients in the Abnormal group were better than those of patients in the Normal group (P < 0.05). Conclusions: Only a small proportion of patients on hemodialysis had a normal iPTH level. The phosphorus levels of patients on hemodialysis should be controlled via dietary interventions. Keywords: Maintenance hemodialysis, Intact parathyroid hormone, Serum phosphorus
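The two statistical tests named in the Methods, a t-test for comparing group means and a χ² test for categorical counts, can be reproduced in plain NumPy. The data below are synthetic placeholders, not the study's measurements, and Welch's unequal-variance form of the t statistic is assumed.

```python
import numpy as np

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances), as would be
    # used to compare e.g. phosphorus levels between the two groups.
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se

def chi_square(table):
    # Pearson chi-square statistic for a contingency table of counts,
    # comparing observed cells against independence-based expectations.
    table = np.asarray(table, float)
    expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
    return ((table - expected) ** 2 / expected).sum()

# Synthetic example data (placeholders, not the study's values).
t_stat = welch_t([1, 2, 3, 4], [2, 4, 6, 8])
chi2 = chi_square([[10, 20], [30, 40]])
```

In practice the statistics would be referred to the t and χ² distributions to obtain the P values reported in the Results.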