This paper proposes a new method for selecting sets of omnidirectional views that together contribute to an efficient representation of a 3D scene. When the 3D surface is modelled as a function on the unit sphere, the view selection problem is mostly governed by the accuracy of the 3D surface reconstruction from non-uniformly sampled datasets. A novel method is proposed for the reconstruction of signals on the sphere from scattered data, using a generalization of the Spherical Fourier Transform. With that reconstruction strategy, an algorithm is then proposed to select the best subset of n views, from a predefined set of viewpoints, in order to minimize the overall reconstruction error. Starting from initial viewpoints determined by the frequency distribution of the 3D scene, the algorithm iteratively refines the selection of each viewpoint in order to maximize the quality of the representation. Experiments show that the algorithm converges towards a minimal distortion, and demonstrate that the selection of omnidirectional views is consistent with the frequency characteristics of the 3D scene.

Index Terms— image-based rendering, camera positioning, non-uniform sampling, omnidirectional vision, 3D scene representation
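The iterative refinement described above can be sketched as a coordinate-descent loop: each selected viewpoint is in turn swapped for whichever candidate viewpoint most reduces the overall error, until a full pass yields no improvement. The sketch below is a toy illustration under strong assumptions: the `reconstruction_error` function here is a hypothetical stand-in that scores a view set by angular coverage of sample directions on the sphere, not the paper's actual Spherical-Fourier-Transform-based reconstruction error, and the frequency-driven initialization is replaced by a plain initial subset.

```python
import math
import random

def angle(a, b):
    """Angular distance between two unit vectors on the sphere."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.acos(dot)

def reconstruction_error(selected, samples):
    """Toy stand-in for the reconstruction error (assumption): for each
    sample direction, take the angular distance to the nearest selected
    viewpoint and sum the results. A real system would reconstruct the
    scene from the views and measure the resulting distortion instead."""
    return sum(min(angle(s, v) for v in selected) for s in samples)

def select_views(candidates, n, samples, max_iters=20):
    """Select n viewpoints from a predefined candidate set by iterative
    refinement: visit each slot in turn, replace its view with whichever
    candidate minimizes the overall error, and stop once a full pass
    brings no improvement (so the error is non-increasing)."""
    selected = candidates[:n]  # placeholder initialization (assumption)
    best_err = reconstruction_error(selected, samples)
    for _ in range(max_iters):
        improved = False
        for i in range(n):
            for c in candidates:
                trial = selected[:i] + [c] + selected[i + 1:]
                err = reconstruction_error(trial, samples)
                if err < best_err:
                    selected, best_err = trial, err
                    improved = True
        if not improved:
            break
    return selected, best_err

def random_unit():
    """Random direction on the unit sphere (for the toy example)."""
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

if __name__ == "__main__":
    random.seed(0)
    candidates = [random_unit() for _ in range(12)]
    samples = [random_unit() for _ in range(50)]
    views, err = select_views(candidates, 3, samples)
    print(len(views), err <= reconstruction_error(candidates[:3], samples))
```

Because every swap is only accepted when it strictly lowers the error, the loop converges to a local minimum of the distortion, mirroring the convergence behaviour reported in the experiments.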