OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has
increased interest in 3D scene editing. An essential task in editing is
removing objects from a scene while ensuring visual plausibility and multiview
consistency. However, current methods face challenges such as time-consuming
object labeling, limited capability to remove specific targets, and compromised
rendering quality after removal. This paper proposes a novel object-removal
pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given
points or text prompts on a single view, achieving better performance in less
time than previous works. Our method spreads user annotations to all views
through 3D geometry and sparse correspondence, ensuring 3D consistency with
less processing burden. The recent 2D segmentation model Segment Anything
(SAM) is then applied to predict masks, and a 2D inpainting model is used to
generate color supervision. Finally, our algorithm applies depth supervision
and perceptual loss to maintain consistency in geometry and appearance after
object removal. Experiments demonstrate that our method achieves better
editing quality in less time than previous works, both qualitatively and
quantitatively.
Comment: project site: https://ornerf.github.io/ (codes available)
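To make the final optimization step concrete, here is a minimal sketch of how the color, depth, and perceptual terms described above might be combined, assuming rendered and inpainted image tensors are already available. The tensor names, loss weights, and the use of the lpips package are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the combined training objective described in the
# OR-NeRF abstract: color supervision from a 2D inpainter, depth supervision,
# and a perceptual (LPIPS) loss. Names and weights are assumptions.
import torch
import torch.nn.functional as F
import lpips  # perceptual similarity metric, `pip install lpips`

perceptual = lpips.LPIPS(net="vgg")

def removal_loss(rendered_rgb, inpainted_rgb, rendered_depth, target_depth,
                 mask, w_depth=0.1, w_perc=0.01):
    """Combine color, depth, and perceptual terms for the edited region.

    rendered_rgb, inpainted_rgb: (N, 3, H, W) images in [0, 1] (assumed).
    rendered_depth, target_depth: (N, 1, H, W) depth maps (assumed).
    mask: (N, 1, H, W) binary mask of the removed object (assumed).
    """
    # Color supervision: match the 2D-inpainted image where the object was.
    l_rgb = F.mse_loss(rendered_rgb, inpainted_rgb)
    # Depth supervision keeps geometry consistent after removal.
    l_depth = F.l1_loss(rendered_depth * mask, target_depth * mask)
    # Perceptual loss preserves texture statistics; lpips expects
    # NCHW images scaled to [-1, 1].
    l_perc = perceptual(rendered_rgb * 2 - 1, inpainted_rgb * 2 - 1).mean()
    return l_rgb + w_depth * l_depth + w_perc * l_perc
```

In a full pipeline, the mask would presumably come from the SAM predictions propagated across views, as the abstract describes.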
Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior
Recent works on text-to-3D generation show that using only 2D diffusion
supervision for 3D generation tends to produce results with inconsistent
appearances (e.g., faces on the back view) and inaccurate shapes (e.g., animals
with extra legs). Existing methods mainly address this issue by retraining
diffusion models with images rendered from 3D data to ensure multi-view
consistency while struggling to balance 2D generation quality with 3D
consistency. In this paper, we present a new framework Sculpt3D that equips the
current pipeline with explicit injection of 3D priors from retrieved reference
objects without re-training the 2D diffusion model. Specifically, we
demonstrate that high-quality and diverse 3D geometry can be guaranteed by
keypoint supervision through a sparse ray sampling approach. Moreover, to
ensure accurate appearances across different views, we further modulate the
output of the 2D diffusion model toward the correct patterns of the template
views without
altering the generated object's style. These two decoupled designs effectively
harness 3D information from reference objects to generate 3D objects while
preserving the generation quality of the 2D diffusion model. Extensive
experiments show our method can largely improve the multi-view consistency
while retaining fidelity and diversity. Our project page is available at:
https://stellarcheng.github.io/Sculpt3D/.
Comment: Accepted by CVPR 2024. Project Page: https://stellarcheng.github.io/Sculpt3D
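As an illustration of "keypoint supervision through a sparse ray sampling approach", the sketch below casts sparse rays through 3D keypoints retrieved from a reference object and penalizes low density at the keypoint locations. The density_fn callable, the hinge target, and the sampling bounds are hypothetical stand-ins, not the paper's actual procedure.

```python
# Hypothetical sketch of keypoint supervision via sparse ray sampling:
# encourage the generated density field to be occupied at 3D keypoints
# taken from a retrieved reference shape.
import torch

def keypoint_ray_loss(density_fn, keypoints, cam_origin,
                      n_samples=32, near=0.1, far=4.0):
    """Penalize low density where reference keypoints lie along sparse rays.

    density_fn: callable mapping (N, 3) points -> (N,) raw densities (assumed).
    keypoints:  (K, 3) 3D keypoints from a retrieved reference object.
    cam_origin: (3,) camera center from which the sparse rays are cast.
    """
    # One ray per keypoint, aimed from the camera through the keypoint.
    dirs = keypoints - cam_origin
    dirs = dirs / dirs.norm(dim=-1, keepdim=True)            # (K, 3)
    t = torch.linspace(near, far, n_samples)                 # sample depths
    pts = cam_origin + t[None, :, None] * dirs[:, None, :]   # (K, S, 3)
    sigma = density_fn(pts.reshape(-1, 3)).reshape(len(keypoints), n_samples)
    # Pick the ray sample nearest to each keypoint; it should be dense.
    d = (pts - keypoints[:, None, :]).norm(dim=-1)           # (K, S)
    nearest = d.argmin(dim=-1)                               # (K,)
    sigma_at_kp = sigma[torch.arange(len(keypoints)), nearest]
    # Hinge-style loss: push density at keypoints above a target occupancy.
    return torch.relu(1.0 - sigma_at_kp).mean()
```

In practice, a term like this would presumably be added with a small weight to the usual 2D diffusion (score-distillation) objective, leaving the appearance modulation step untouched.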