Text-to-image diffusion models are gradually being introduced into computer graphics, recently enabling the development of open-domain Text-to-3D pipelines. However, for interactive editing purposes, local manipulation of content through a simplistic textual interface can be arduous. Incorporating user-guided sketches into Text-to-image pipelines offers users more intuitive control. Still, as state-of-the-art Text-to-3D pipelines rely on optimizing
Neural Radiance Fields (NeRF) through gradients from arbitrary rendering views,
conditioning on sketches is not straightforward. In this paper, we present
SKED, a technique for editing 3D shapes represented by NeRFs. Our technique
utilizes as few as two guiding sketches from different views to alter an
existing neural field. The edited region respects the prompt semantics through
a pre-trained diffusion model. To ensure the generated output adheres to the
provided sketches, we propose novel loss functions to generate the desired
edits while preserving the density and radiance of the base instance. We
demonstrate the effectiveness of our proposed method through several
qualitative and quantitative experiments.
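The abstract does not specify the form of the proposed loss functions; as a rough illustration only, a preservation term of the kind described might penalize deviations in density and radiance outside the sketch-defined edit region. The names below (preservation_loss, edit_mask, and the tensor layout) are hypothetical and not taken from the paper.

```python
import torch

def preservation_loss(base_density, base_color,
                      edited_density, edited_color,
                      edit_mask):
    """Penalize changes to the base NeRF outside the edited region.

    base_density / edited_density: (N,) per-sample volume densities
    base_color   / edited_color:   (N, 3) per-sample radiance values
    edit_mask:                     (N,) 1.0 inside the sketch-defined
                                   edit region, 0.0 outside (hypothetical)
    """
    keep = 1.0 - edit_mask  # samples that should remain unchanged
    density_term = (keep * (edited_density - base_density) ** 2).mean()
    color_term = (keep.unsqueeze(-1)
                  * (edited_color - base_color) ** 2).mean()
    return density_term + color_term
```

In a pipeline of this kind, such a term would typically be weighted and summed with the diffusion-guided generation loss (e.g., a score-distillation objective) so that the edit region is free to change while the rest of the base instance is anchored to its original geometry and appearance.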