Dense Point Cloud Extraction From Oblique Imagery
With the increasing availability of low-cost digital cameras with small or medium-sized sensors, high-resolution airborne images are becoming more and more common, improving the prospects for building three-dimensional models of urban areas. Highly accurate representations of buildings are required for applications such as asset valuation and disaster recovery. Many automatic modeling and reconstruction methods combine aerial images with Light Detection and Ranging (LiDAR) data. When LiDAR data are not available, manual steps must be introduced, resulting in a semi-automated technique.
The automated extraction of 3D urban models can be aided by the automatic extraction of dense point clouds: the denser the point cloud, the easier the modeling and the higher the accuracy. Oblique aerial imagery also provides more facade information than nadir imagery, such as building height and texture. A method for automatic dense point cloud extraction from oblique images is therefore desirable.
In this thesis, a modified workflow for the automated extraction of dense point clouds from oblique images is proposed and tested. The results show that the modified workflow performs well: a very dense point cloud can be extracted from only two oblique images, with slightly higher accuracy in flat areas than the cloud extracted by the original workflow.
The original workflow was established by previous research at the Rochester Institute of Technology (RIT) for point cloud extraction from nadir images. For oblique images, the first modification replaces the Scale-Invariant Feature Transform (SIFT) algorithm in the feature detection stage with the Affine Scale-Invariant Feature Transform (ASIFT) algorithm, which tolerates the large viewpoint changes between oblique views. To realize a very dense point cloud, the second modification implements the Semi-Global Matching (SGM) algorithm to compute a disparity map from a stereo image pair, from which pixels are reprojected back into a point cloud (a sketch of this step follows). The third modification adds a noise removal step. The point cloud from the modified workflow is much denser than the result from the original workflow.
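As an illustration of the disparity-to-point-cloud step, the following is a minimal sketch using OpenCV's StereoSGBM, a semi-global matching implementation. It assumes an already rectified stereo pair and a known 4x4 reprojection matrix Q from stereo rectification; the file names and SGM parameters are illustrative assumptions, not the thesis' exact settings.

import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global (block) matching over a 128-pixel disparity range with 5x5 blocks.
sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,          # must be divisible by 16
    blockSize=5,
    P1=8 * 5 * 5,                # smoothness penalty for +/-1 disparity changes
    P2=32 * 5 * 5,               # stronger penalty for larger disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)
disparity = sgm.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Q is produced by cv2.stereoRectify during calibration (assumed precomputed).
Q = np.load("Q.npy")                                    # hypothetical file
points = cv2.reprojectImageTo3D(disparity, Q)           # HxWx3 coordinates
cloud = points[disparity > 0]                           # keep valid disparities only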
Finally, an accuracy assessment is made to evaluate the point cloud extracted from the modified workflow. In two flat areas, subsets of points are selected from the outputs of both the original and the modified workflow, a plane is fitted to each subset, and the Mean Squared Error (MSE) of the points about the fitted plane is compared (sketched below). The subsets from the modified workflow have slightly lower MSEs than their counterparts from the original workflow. This suggests that a much denser and more accurate point cloud can yield clearer roof borders for roof extraction and improve the prospects of 3D feature detection for 3D point cloud registration.
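The plane-fit check itself is compact; below is a hedged sketch assuming each subset is an Nx3 NumPy array and that the flat areas are roughly horizontal, so a plane z = ax + by + c can be fitted by ordinary least squares (a total-least-squares fit via SVD would be preferable for steep surfaces).

import numpy as np

def plane_fit_mse(pts):
    # Fit z = a*x + b*y + c by least squares and return the MSE of the residuals.
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)  # [a, b, c]
    residuals = pts[:, 2] - A @ coeffs
    return np.mean(residuals ** 2)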
Dense Point-Cloud Representation of a Scene using Monocular Vision
We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on the 3-D reconstruction of a scene using only a single moving camera: video frames captured at different points in time allow the depths of the scene to be recovered. In this way, the system can construct a point-cloud model of its unknown surroundings.
We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique.
We present a reconstruction framework that generates a primitive point cloud computed from feature matching and depth triangulation. To populate the reconstruction, we use optical flow features to create an extremely dense representation model (sketched below). As a third algorithmic modification, we introduce a preprocessing step of nonlinear single-image super resolution; because the depth accuracy of the point cloud relies on precise disparity measurement, this addition significantly increases accuracy.
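A minimal sketch of the densification idea follows. It assumes known camera intrinsics K and a recovered relative pose (R, t) between two frames, uses Farneback optical flow as a stand-in for whichever flow method the paper employs, and omits the super-resolution preprocessing.

import cv2
import numpy as np

def dense_cloud(frame1, frame2, K, R, t):
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)

    # Dense optical flow yields a correspondence for every pixel.
    flow = cv2.calcOpticalFlowFarneback(g1, g2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g1.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pts1 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    pts2 = pts1 + flow.reshape(-1, 2)

    # Triangulate every correspondence against the two camera matrices.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # 4xN homogeneous
    return (X[:3] / X[3]).T                            # Nx3 point cloud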
Our final contribution is a postprocessing step that filters noise points and mismatched features, completing the dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and compare it with two state-of-the-art techniques.
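One common way to realize such a noise filter is statistical outlier removal, which drops points whose mean distance to their k nearest neighbors is far above the global average. The sketch below is a generic version of this idea (as found in libraries such as PCL and Open3D), not necessarily the paper's exact filter.

import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(cloud, k=8, std_ratio=2.0):
    # Mean distance from each point to its k nearest neighbors.
    tree = cKDTree(cloud)
    dists, _ = tree.query(cloud, k=k + 1)    # first neighbor is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    # Keep points whose neighborhood distance is within std_ratio sigmas of the mean.
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return cloud[keep]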