    3D Building Synthesis Based on Images and Affine Invariant Salient Features

    In this thesis, we introduce a method to synthesize and recognize buildings from a set of at least two 2D images taken from different views. Starting from a coarse set of affine invariant salient feature points (corner points) in the images, a high-resolution 3D building model is obtained in accordance with the observed images. Corresponding salient points are matched using the ratio of the areas of the two triangles formed by four consecutive ordered salient points; the ordering is obtained from the vertices of the convex hull of the salient points. The salient points are then tessellated into a high-resolution triangular mesh, and the appearance of each triangular patch in the image is mapped onto the resulting 3D model. With multiple images, all coordinates and appearances are reconstructed in accordance with the observed images. The reconstruction also enables 3D classification of a test building against the buildings stored in a database. Classification is based on a geometric 3D point cloud error; for buildings with very close point cloud errors, a further classification is performed using the mean squared error (MSE) of the appearance at corresponding points on the test and base models. The method can also be used for localization when the location of each model in the database is stored, helping an observer navigate without a GPS system.

    M.S., Electrical Engineering -- Drexel University, 201
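
    The matching step rests on a basic property of affine maps: a transform x -> A x + t scales every triangle area by |det A|, so the ratio of two triangle areas is unchanged across views related by an affine transform. The Python sketch below illustrates this invariant; the particular split of four convex-hull-ordered points into the triangles (p0, p1, p2) and (p0, p2, p3) is an illustrative assumption, not a detail taken from the thesis.

    import numpy as np

    def triangle_area(p, q, r):
        """Unsigned area of the triangle with 2D vertices p, q, r."""
        u, v = q - p, r - p
        return 0.5 * abs(u[0] * v[1] - u[1] * v[0])

    def area_ratio(pts):
        """Ratio of the areas of the two triangles formed by four consecutive
        convex-hull-ordered points: (p0, p1, p2) and (p0, p2, p3)."""
        p0, p1, p2, p3 = pts
        return triangle_area(p0, p1, p2) / triangle_area(p0, p2, p3)

    # Demo: the ratio survives an arbitrary (invertible) affine transform of the image plane.
    pts = np.array([[0.0, 0.0], [4.0, 0.0], [5.0, 3.0], [1.0, 4.0]])  # convex order
    A = np.array([[2.0, 0.5], [-0.3, 1.5]])   # linear part, det = 3.15
    t = np.array([10.0, -4.0])                # translation
    pts_affine = pts @ A.T + t

    print(area_ratio(pts))          # ~0.70588
    print(area_ratio(pts_affine))   # same value up to floating-point error

    Matching candidate correspondences can then proceed by comparing such ratios between the two images, since a genuine correspondence yields nearly equal ratios under the affine approximation.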
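
    The classification stage can be sketched in the same spirit. The snippet below is a minimal, assumed formulation rather than the thesis implementation: the geometric error is taken to be a symmetric mean nearest-neighbour distance between point clouds, near-ties are broken by the appearance MSE at corresponding points, and the data layout (a dictionary mapping model names to point clouds and appearance samples) and the tie tolerance tie_tol are illustrative assumptions.

    import numpy as np

    def point_cloud_error(P, Q):
        """Symmetric mean nearest-neighbour distance between (N, 3) and (M, 3) clouds."""
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise distances
        return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

    def appearance_mse(a, b):
        """MSE between appearance samples (e.g. RGB) at corresponding points."""
        return np.mean((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2)

    def classify(test_cloud, test_appearance, database, tie_tol=1e-2):
        """Pick the best-matching model; database maps name -> (point_cloud, appearance)."""
        errors = {name: point_cloud_error(test_cloud, cloud)
                  for name, (cloud, _) in database.items()}
        best = min(errors, key=errors.get)
        # Near-ties on geometry are resolved by the appearance MSE.
        ties = [name for name, e in errors.items() if e <= errors[best] * (1 + tie_tol)]
        if len(ties) > 1:
            best = min(ties, key=lambda name: appearance_mse(test_appearance, database[name][1]))
        return best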