Estimating positions of world points from features observed in images is a
key problem in 3D reconstruction, image mosaicking, simultaneous localization
and mapping, and structure from motion. We consider a special instance in which
there is a dominant ground plane $G$ viewed from a parallel viewing
plane $S$ above it. Such instances commonly arise, for example, in
aerial photography. Consider a world point $g \in G$ and its worst-case
reconstruction uncertainty $\varepsilon(g, S)$ obtained by
merging \emph{all} possible views of $g$ chosen from $S$. We first
show that one can pick two views $s_p$ and $s_q$ such that the uncertainty
$\varepsilon(g, \{s_p, s_q\})$ obtained using only these two views is almost as
good as (i.e., within a small constant factor of) $\varepsilon(g, S)$.
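As a rough numerical illustration of the two-view setting (this is an assumed toy model, not the paper's construction), one can bound the reconstruction error of a ground point $g$ from two views by perturbing each bearing measurement by up to $\pm\delta$ and triangulating; the grid search below then reports the worst resulting error. All function names and the noise model are illustrative assumptions.

```python
# Toy 2D sketch: worst-case triangulation error for a ground point g
# observed from two views, assuming each bearing is off by at most
# +/- delta radians. Not the paper's algorithm; an illustrative model only.
import math

def triangulate(s1, s2, th1, th2):
    """Intersect two 2D rays from positions s1, s2 with direction angles
    th1, th2 (measured from the +x axis); returns the intersection point."""
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    # t solves s1 + t*d1 = s2 + u*d2 via the 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((s2[0] - s1[0]) * d2[1] - (s2[1] - s1[1]) * d2[0]) / denom
    return (s1[0] + t * d1[0], s1[1] + t * d1[1])

def worst_case_error(g, s1, s2, delta, steps=20):
    """Worst-case distance from g to the triangulated point when each
    bearing is perturbed by up to +/- delta (grid search over perturbations)."""
    th1 = math.atan2(g[1] - s1[1], g[0] - s1[0])
    th2 = math.atan2(g[1] - s2[1], g[0] - s2[0])
    worst = 0.0
    for i in range(steps + 1):
        for j in range(steps + 1):
            e1 = -delta + 2 * delta * i / steps
            e2 = -delta + 2 * delta * j / steps
            p = triangulate(s1, s2, th1 + e1, th2 + e2)
            worst = max(worst, math.hypot(p[0] - g[0], p[1] - g[1]))
    return worst
```

For instance, with cameras at altitude 10, a narrow baseline (`(-1, 10)` and `(1, 10)`) yields a much larger worst-case error for `g = (0, 0)` than a wide baseline (`(-10, 10)` and `(10, 10)`), which is the intuition behind choosing the two views carefully.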
Next, we extend the result to the entire ground plane $G$ and show
that one can pick a small subset $S' \subset S$ (whose size
grows only linearly with the area of $G$) and still obtain a constant-factor
approximation, for every point $g \in G$, to the minimum worst-case
estimate obtained by merging all views in $S$. Finally, we
present a multi-resolution view selection method that extends our techniques
to non-planar scenes. We show that the method can produce rich and accurate
dense reconstructions with a small number of views. Our results provide a view
selection mechanism with provable performance guarantees that can drastically
increase the speed of scene reconstruction algorithms. In addition to these
theoretical results, we demonstrate their effectiveness in an application in which
aerial imagery is used for monitoring farms and orchards.
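To make the linear-size claim concrete, a simple way to realize a view set whose cardinality grows linearly with the area of the ground region is a uniform grid on the viewing plane with spacing proportional to the altitude. The sketch below is an assumed illustration, not the paper's selection rule; the spacing constant `c` is a hypothetical tuning parameter.

```python
# Illustrative sketch: cover a width x depth ground rectangle with views on
# a uniform grid at altitude h, spaced c*h apart. The number of views scales
# linearly with the rectangle's area. Assumed model, not the paper's method.
def grid_views(width, depth, h, c=1.0):
    """Return (x, y, h) view positions on an axis-aligned grid with
    spacing c*h covering a width x depth ground rectangle."""
    step = c * h
    nx = int(width / step) + 1   # samples along x, including both ends
    ny = int(depth / step) + 1   # samples along y, including both ends
    return [(i * step, j * step, h) for j in range(ny) for i in range(nx)]
```

Doubling the ground area (e.g. doubling `width`) roughly doubles the number of selected views, matching the linear growth stated above.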