Using synthesized images to boost the performance of perception models is a
long-standing research challenge in computer vision. It becomes more prominent in
vision-centric autonomous driving systems with multi-view cameras, as some
long-tail scenarios can never be collected. Guided by the BEV segmentation
layouts, the existing generative networks seem to synthesize photo-realistic
street-view images when evaluated solely on scene-level metrics. However, once
zoomed in, they usually fail to produce accurate foreground and background
details, such as object heading. To this end, we propose a two-stage generative method,
dubbed BEVControl, that can generate accurate foreground and background
contents. Beyond segmentation-like input, it also supports sketch-style
input, which is more flexible for humans to edit. In addition, we propose a
comprehensive multi-level evaluation protocol to fairly compare the quality of
the generated scene, foreground object, and background geometry. Our extensive
experiments show that our BEVControl surpasses the state-of-the-art method,
BEVGen, by a significant margin, from 5.89 to 26.80 on foreground segmentation
mIoU. Furthermore, we show that training the downstream perception model with
images generated by BEVControl yields an average improvement of 1.29 in NDS
score.