
    Deriving 3D point clouds from terrestrial photographs - comparison of different sensors and software

    Terrestrial photogrammetry nowadays offers a reasonably cheap, intuitive and effective approach to 3D modelling. However, the important choice of which sensor and which software to use is not straightforward and needs consideration, as it will affect the resulting 3D point cloud and its derivatives.

    We compare five different sensors as well as four different state-of-the-art software packages for a single application, the modelling of a vegetated rock face. The five sensors represent different resolutions, sensor sizes and price segments of the cameras. The software packages used are: (1) Agisoft PhotoScan Pro (1.16), (2) Pix4D (2.0.89), (3) a combination of Visual SFM (V0.5.22) and SURE (1.2.0.286), and (4) MicMac (1.0). We took photos of a vegetated rock face from identical positions with all sensors. Then we compared the results of the different software packages regarding the ease of the workflow, visual appeal, and the similarity and quality of the point cloud.

    While PhotoScan and Pix4D offer the most user-friendly workflows, they are also “black-box” programmes giving little insight into their processing. Unsatisfying results can only be addressed by modifying settings within a module. The combined workflow of Visual SFM, SURE and CloudCompare is just as simple but requires more user interaction. MicMac turned out to be the most challenging software as it is less user-friendly. However, MicMac offers the most possibilities to influence the processing workflow. The resulting point clouds of PhotoScan and MicMac are the most appealing.
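
    The point-cloud similarity comparison described above was done with dedicated tools (the authors mention CloudCompare). Purely as a hedged illustration of a cloud-to-cloud similarity check, the sketch below computes nearest-neighbour distances between two exported clouds; the file names and the use of the Open3D library are assumptions, not part of the study.

```python
# Hedged sketch, not the authors' workflow: a cloud-to-cloud comparison of two
# point clouds exported from different photogrammetry packages. The file names
# and the use of Open3D are assumptions for illustration.
import numpy as np
import open3d as o3d

reference = o3d.io.read_point_cloud("photoscan_rockface.ply")  # hypothetical export
candidate = o3d.io.read_point_cloud("micmac_rockface.ply")     # hypothetical export

# nearest-neighbour distance from every candidate point to the reference cloud
distances = np.asarray(candidate.compute_point_cloud_distance(reference))

print(f"mean cloud-to-cloud distance: {distances.mean():.4f} m")
print(f"median:                       {np.median(distances):.4f} m")
print(f"95th percentile:              {np.percentile(distances, 95):.4f} m")
```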

    THE REVIEWING PROCESS FOR ISPRS EVENTS

    Following the first initiatives taken by the International Programme Committee of the XXIIIrd ISPRS Congress in Prague (Czech Republic) in 2016, modifications of the reviewing process of ISPRS events were further considered during the years 2017 and 2018. This evolution first aims to better align the process with the current requirements and expectations of the ISPRS community. Secondly, it aims to provide unified guidelines for the different steps of the process. Under the aegis of the 2020 Congress Director and the ISAC (International Science Advisory Committee) chair, several discussions were held between September 2017 and June 2018 with ISAC members, Technical Commission Presidents (TCPs), council members, and the 2016 and 2020 Congress Programme Chairs. This document serves as a unique, transparent basis that applies to all kinds of ISPRS events (from the Congress and the Geospatial Week to smaller workshops) and to all categories of people involved in the evaluation process of scientific contributions (authors, reviewers, TCPs, …). It also specifies the evaluation criteria for the works submitted to ISPRS events, both for full papers and abstracts. It thereby helps authors to improve the content and form of their contributions. Finally, this paper is intended to help new chairs smoothly prepare their future events. The following guidelines were first adopted for the 2018 Technical Commission Symposia.

    Vehicle localization by LiDAR point correlation improved by change detection

    LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect such objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it, and every point belonging to it, as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, whereby we only use static points. Furthermore, we use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and thus to detect dynamic objects online. The localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.
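
    The abstract mentions a random forest classifier that separates static from dynamic scan points, trained on labels carried over from the reference-data generation. The sketch below is only a minimal, assumed version of such a classifier; the per-point features, the input file and the scikit-learn usage are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch, not the paper's implementation: a random forest separating
# static from dynamic LiDAR scan points. The per-point features and the input
# file are hypothetical placeholders; labels are assumed to come from the
# reference-data generation step, as the abstract describes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = np.load("labelled_scan_points.npz")      # hypothetical file
X, y = data["features"], data["labels"]         # y: 1 = static, 0 = dynamic

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print("static/dynamic test accuracy:", clf.score(X_test, y_test))
```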

    Pléiades project: Assessment of georeferencing accuracy, image quality, pansharpening performance and DSM/DTM quality

    Pléiades 1A and 1B are twin optical satellites of the Optical and Radar Federated Earth Observation (ORFEO) program jointly run by France and Italy. They are the first European satellites with sub-metre resolution. Airbus DS (formerly Astrium Geo) runs the MyGIC (formerly Pléiades Users Group) program to validate Pléiades images worldwide for various application purposes. The authors conducted three projects: one within this program, the second supported by the BEU Scientific Research Project Program, and the third supported by TÜBİTAK. Georeferencing accuracy, image quality, pansharpening performance and Digital Surface Model/Digital Terrain Model (DSM/DTM) quality are investigated in these projects. For these purposes, triplet panchromatic (50 cm Ground Sampling Distance (GSD)) and VNIR (2 m GSD) Pléiades 1A images were investigated over the Zonguldak test site (Turkey), which is urbanised, mountainous and covered by dense forest. The georeferencing accuracy was estimated with a standard deviation in X and Y (SX, SY) in the range of 0.45 m by bias-corrected Rational Polynomial Coefficient (RPC) orientation, using ~170 Ground Control Points (GCPs). 3D standard deviations of ±0.44 m in X, ±0.51 m in Y, and ±1.82 m in Z have been reached with bias-corrected RPC orientation, in spite of the very narrow angle of convergence. The image quality was also investigated with respect to effective resolution, Signal to Noise Ratio (SNR) and blur coefficient. The effective resolution was estimated with a factor slightly below 1.0, meaning that the image quality corresponds to the nominal resolution of 50 cm. Blur coefficients between 0.39 and 0.46 were achieved for the triplet panchromatic images, indicating a satisfying image quality. The SNR is in the range of other comparable spaceborne images, which may be caused by de-noising of the Pléiades images. The pansharpened images were generated by various methods and validated by the most common statistical metrics as well as visual interpretation. The generated DSM and DTM were achieved with a ±1.6 m standard deviation in Z (SZ) in relation to a reference DTM.
    Airbus Defence and Space; BEU/2014-47912266-01; TÜBİTAK/114Y38
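
    The SX/SY/SZ figures above are empirical standard deviations of check-point residuals after a bias-corrected RPC orientation. Purely as a hedged illustration of how such per-axis statistics are obtained, the sketch below computes them from a hypothetical residual file; the file name and format are assumptions.

```python
# Illustrative sketch only: computing per-axis standard deviations (SX, SY, SZ)
# from check-point residuals after removing a constant bias, in the spirit of a
# bias-corrected RPC orientation. The residual file and its format are assumptions.
import numpy as np

# columns: residuals in X, Y, Z (metres) at ground control / check points
residuals = np.loadtxt("checkpoint_residuals.txt")   # hypothetical input

bias = residuals.mean(axis=0)          # constant (systematic) offset per axis
corrected = residuals - bias           # bias-corrected residuals
sx, sy, sz = corrected.std(axis=0, ddof=1)

print(f"SX = {sx:.2f} m, SY = {sy:.2f} m, SZ = {sz:.2f} m")
```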

    Global bundle adjustment with variable orientation point distance for precise Mars Express orbit reconstruction

    The photogrammetric bundle adjustment of line scanner image data requires a precise description of the time-dependent image orientation. For this task, exterior orientation parameters at discrete orientation points are used to model the position and viewing direction of the camera trajectory via polynomials. This paper investigates the influence of the distance between these orientation points on the quality of the trajectory modelling. A new method adapts the distance along the trajectory to the available image information. Compared to a constant distance as used previously, a better reconstruction of the exterior orientation is possible, especially when the image quality changes within a strip. In our research we use image strips of the High Resolution Stereo Camera (HRSC), taken to map the Martian surface. Several experiments on the global image dataset have been carried out to investigate how the bundle adjustment improves the image orientation when the new method is employed. For evaluation, the forward intersection errors of 3D points derived from HRSC images, as well as their remaining height differences with respect to the MOLA DTM, are used. In 13.5 % (515 of 3,828) of the image strips, taken during this ongoing mission over the last 12 years, high-frequency image distortions were found. Bundle adjustment with a constant orientation point distance was able to reconstruct the orbit in 239 (46.4 %) of these cases. A variable orientation point distance increased this number to 507 (98.6 %).
    German Federal Ministry for Economic Affairs and Energy (BMWi); German Aerospace Center (DLR)/50 QM 130
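
    The core idea above is to place the discrete orientation points more densely where the image data supports it and to interpolate the exterior orientation between them. The sketch below is only a hedged, simplified illustration of that idea; the spacing rule, the thresholds, the tie-point-density criterion and the use of a cubic spline (as a stand-in for the polynomials of the actual approach) are assumptions.

```python
# Hedged, simplified illustration of the idea above, not the HRSC processing
# chain: orientation points are placed more densely where more image information
# (approximated here by tie-point density) is available, and the trajectory is
# interpolated between them. Thresholds, names and the cubic spline are assumptions.
import numpy as np
from scipy.interpolate import CubicSpline

def choose_orientation_times(tie_point_times, t0, t1,
                             base_dt=10.0, dense_dt=2.0, density_thresh=5.0):
    """Return orientation-point times, denser where tie-point density is high."""
    times, t = [t0], t0
    while t < t1:
        n = np.sum((tie_point_times >= t) & (tie_point_times < t + base_dt))
        dt = dense_dt if n / base_dt > density_thresh else base_dt
        t = min(t + dt, t1)
        times.append(t)
    return np.array(times)

# dummy data standing in for adjusted exterior orientations along one strip
tie_times = np.random.uniform(0.0, 100.0, 2000)
support_times = choose_orientation_times(tie_times, 0.0, 100.0)
support_xyz = np.cumsum(np.random.randn(len(support_times), 3), axis=0)

trajectory = CubicSpline(support_times, support_xyz, axis=0)
print("interpolated camera position at t = 42.0 s:", trajectory(42.0))
```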

    Orientation of oblique airborne image sets - Experiences from the ISPRS/EuroSDR benchmark on multi-platform photogrammetry

    During the last decade the use of airborne multi-camera systems increased significantly. The development in digital camera technology allows mounting several mid- or small-format cameras efficiently onto one platform and thus enables image capture under different angles. Those oblique images turn out to be interesting for a number of applications, since lateral parts of elevated objects, like buildings or trees, are visible. However, occlusion or illumination differences might challenge image processing. From an image orientation point of view, those multi-camera systems bring the advantage of a better ray intersection geometry compared to nadir-only image blocks. On the other hand, varying scale, occlusion and atmospheric influences, which are difficult to model, pose problems for the image matching and bundle adjustment tasks. In order to understand current limitations of image orientation approaches and the influence of different parameters such as image overlap or GCP distribution, a commonly available dataset was released. The originally captured data comprises a state-of-the-art image block with very high overlap, but in the first stage of the so-called ISPRS/EuroSDR benchmark on multi-platform photogrammetry only a reduced set of images was released. In this paper some first results obtained with this dataset are presented. They refer to different aspects like tie point matching across the viewing directions, the influence of the oblique images on the bundle adjustment, and the role of image overlap and GCP distribution. As far as tie point matching is concerned, we observed that matching of overlapping images pointing in the same cardinal direction, or between nadir and oblique views in general, is quite successful. Due to the quite different perspective between images of different viewing directions, standard tie point matching, for instance based on interest points, does not work well. How to address occlusion and ambiguities due to different views onto objects is clearly an unsolved research problem so far. In our experiments we also confirm that the obtainable height accuracy is better when all images are used in the bundle block adjustment; this was also shown in earlier research and is confirmed here. Not surprisingly, the large overlap of 80/80 % provides much better object space accuracy: random errors seem to be about two to three times smaller compared to the 60/60 % overlap. A comparison of different software approaches shows that newly emerged commercial packages, initially intended to work with small-frame image blocks, perform very well.
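
    Regarding the interest-point-based tie matching discussed above, the following hedged sketch shows a basic nadir-to-oblique matching attempt with SIFT and a ratio test. The image file names and the choice of OpenCV are assumptions for illustration; the benchmark itself evaluated dedicated photogrammetric packages.

```python
# Hedged illustration, not the benchmark's evaluation code: interest-point tie
# matching between a nadir and an oblique image using SIFT and Lowe's ratio test.
# The image file names and the use of OpenCV are assumptions.
import cv2

nadir = cv2.imread("nadir.tif", cv2.IMREAD_GRAYSCALE)           # hypothetical image
oblique = cv2.imread("oblique_east.tif", cv2.IMREAD_GRAYSCALE)  # hypothetical image

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(nadir, None)
kp2, des2 = sift.detectAndCompute(oblique, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
raw_matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only matches clearly better than the second-best candidate
good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} tentative tie points between the nadir and oblique view")
```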

    USE OF AERIAL IMAGES FOR REGULAR UPDATES OF BUILDINGS IN THE FUNDAMENTAL BASE OF GEOGRAPHIC DATA OF THE CZECH REPUBLIC

    Digital aerial images (DAI) include position, elevation and also spectral information (visible bands and a near-infrared band) about the captured area. The aim of this paper is to present the possibilities of automatic analysis of DAI for updating the Fundamental Base of Geographic Data of the Czech Republic, with a focus on buildings. Regular updates of buildings (automatic detection of new and demolished buildings) are based on the analysis of coloured point clouds created by an automatic image matching technique for each time period. The developed approach compares point clouds from different time periods with each other. The advantage of this solution is that it is independent of the manner in which buildings are kept in the database: it does not matter whether the buildings in the database have correct positions or whether their footprints correspond to roof shapes or external walls. The method has to be robust because a digital surface model generated by image matching techniques can contain numerous errors. Shaded areas and objects with blurred textures are problematic for automatic image correlation algorithms and lead to false results. For this reason, derived layers containing additional information are used. Shadow masks (layers with modelled shadows) are used to verify change indications and to filter out errors in shaded areas using a contextual evaluation. Furthermore, additional information about the road and railway networks and the morphological operations of opening and closing were used to achieve more accurate results. All these information sources are then evaluated using decision logic, which applies generally applicable rules that work for different datasets without the need for modification. The method was tested on different datasets with various types of buildings (villages, suburbs and city centres), covering more than 20 square kilometres. The developed solution leads to very promising results without the need to acquire new data.
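
    The abstract combines height differencing, shadow masking and the morphological operations of opening and closing. The sketch below is a hedged, toy illustration of that combination on raster DSMs; the thresholds, array shapes and function names are assumptions and this is not the production workflow.

```python
# Hedged toy illustration of the combination described above (height differencing,
# shadow masking, morphological opening and closing); it is not the production
# workflow, and the thresholds and inputs are assumptions.
import numpy as np
from scipy import ndimage

def building_change_mask(dsm_old, dsm_new, shadow_mask, dz=2.5, struct_size=3):
    """Binary mask of plausible building changes between two epochs (heights in metres)."""
    changed = np.abs(dsm_new - dsm_old) > dz     # significant height change
    changed &= ~shadow_mask                      # discard unreliable shaded areas
    struct = np.ones((struct_size, struct_size), dtype=bool)
    changed = ndimage.binary_opening(changed, structure=struct)  # remove small specks
    changed = ndimage.binary_closing(changed, structure=struct)  # fill small gaps
    return changed

# toy example: a 5 m high "new building" appears in the second epoch
old_dsm = np.zeros((100, 100))
new_dsm = old_dsm.copy()
new_dsm[40:60, 40:60] += 5.0
shadows = np.zeros_like(old_dsm, dtype=bool)

print("changed cells:", int(building_change_mask(old_dsm, new_dsm, shadows).sum()))
```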

    Superpixel cut for figure-ground image segmentation


    NATIONWIDE HYBRID CHANGE DETECTION OF BUILDINGS

    The Fundamental Base of Geographic Data of the Czech Republic (hereinafter FBGD) is a national 2D geodatabase at a 1:10,000 scale with more than 100 geographic object types. This paper describes the design of the permanent updating mechanism for buildings in FBGD. The proposed procedure belongs to the category of hybrid change detection (HCD) techniques, which combine pixel-based and object-based evaluation. The main sources of information for HCD are cadastral information and bi-temporal vertical digital aerial photographs. These photographs have great information potential because they contain multispectral, positional and elevation information. The elevation information takes the form of a digital surface model (DSM), which can be obtained using image matching. Pixel-based evaluation of the bi-temporal DSMs enables fast localization of places with potential building changes. These coarse results are subsequently classified through object-based image analysis (OBIA) using spectral, textural and contextual features and GIS tools. The advantage of the two-stage evaluation is the pre-selection of locations where image segmentation (a computationally demanding part of OBIA) is performed. It is not necessary to apply image segmentation to the entire scene, but only to the surroundings of detected changes, which contributes to significantly faster processing and lower hardware requirements. The created technology is based on open-source software, which allows easy portability across multiple computers and parallelization of the processing. This leads to significant savings of financial resources, which can be expended on the further development of FBGD.
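
    To illustrate the two-stage pixel-based/object-based idea described above, the hedged sketch below reduces a coarse DSM difference to labelled candidate regions whose bounding boxes alone would be passed to the expensive segmentation step. The threshold values, variable names and the segmentation call are hypothetical.

```python
# Hedged sketch of the two-stage idea above, not the FBGD code: a coarse
# pixel-based DSM difference is reduced to labelled candidate regions, and only
# their bounding boxes would be handed to the expensive object-based segmentation.
# Thresholds, variable names and the segmentation call are hypothetical.
import numpy as np
from scipy import ndimage

def candidate_regions(dsm_t1, dsm_t2, dz=2.5, min_pixels=50):
    """Return bounding-box slices of connected areas with a large height change."""
    change = np.abs(dsm_t2 - dsm_t1) > dz
    labels, _ = ndimage.label(change)
    boxes = []
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        if sl is not None and (labels[sl] == i).sum() >= min_pixels:
            boxes.append(sl)
    return boxes

# only the candidate windows would then be segmented (OBIA), for example:
# for box in candidate_regions(dsm_t1, dsm_t2):
#     segments = segment(orthophoto[box])      # hypothetical OBIA routine
```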