Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation

Abstract

We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack an explicit 3D scene representation, classical editing operations such as shape manipulation or combining scenes are not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
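To make the described decoupling concrete, the sketch below shows one way such a hybrid representation could be wired up: a scene-specific learnable 3D feature grid queried by trilinear interpolation, and a shared rendering MLP that maps sampled features and view directions to density and color. This is a hedged illustration under our own assumptions, not the authors' implementation; the class names, grid resolution, and feature dimensionality are all hypothetical.

```python
# Illustrative sketch (not the paper's code): a scene-specific feature volume
# plus a scene-agnostic rendering MLP. For a new scene, only the volume is
# optimized while the decoder stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureVolume(nn.Module):
    """Scene-specific 3D grid of latent features (the editable representation)."""
    def __init__(self, resolution=64, feat_dim=16):
        super().__init__()
        # Shape (1, C, D, H, W) so it can be sampled with F.grid_sample.
        self.grid = nn.Parameter(0.1 * torch.randn(1, feat_dim, resolution,
                                                    resolution, resolution))

    def forward(self, xyz):
        # xyz: (N, 3) query points normalized to [-1, 1]^3.
        coords = xyz.view(1, -1, 1, 1, 3)                      # (1, N, 1, 1, 3)
        feats = F.grid_sample(self.grid, coords,                # trilinear lookup
                              mode='bilinear', align_corners=True)
        return feats.view(self.grid.shape[1], -1).t()           # (N, C)

class RenderMLP(nn.Module):
    """Scene-agnostic decoder: (feature, view direction) -> (density, rgb)."""
    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                               # sigma + rgb
        )

    def forward(self, feats, view_dirs):
        out = self.net(torch.cat([feats, view_dirs], dim=-1))
        sigma = F.softplus(out[..., :1])                        # non-negative density
        rgb = torch.sigmoid(out[..., 1:])                       # color in [0, 1]
        return sigma, rgb

# Generalizing to a new scene: freeze the shared decoder, fit only the volume.
volume, decoder = FeatureVolume(), RenderMLP()
for p in decoder.parameters():
    p.requires_grad_(False)

pts = torch.rand(1024, 3) * 2 - 1                    # points sampled along rays
dirs = F.normalize(torch.randn(1024, 3), dim=-1)     # unit viewing directions
sigma, rgb = decoder(volume(pts), dirs)              # then volume-render as in NeRF
```

Because the feature grid is an explicit tensor, edits such as cropping, translating, or splicing regions of one grid into another amount to ordinary tensor operations on `volume.grid`, after which the frozen decoder can render the edited scene.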
