Airborne photogrammetry and LIDAR for DSM extraction and 3D change detection over an urban area : a comparative study
A digital surface model (DSM) extracted from stereoscopic aerial images, acquired in March 2000, is compared with a DSM derived from airborne light detection and ranging (lidar) data collected in July 2009. Three densely built-up study areas in the city centre of Ghent, Belgium, are selected, each covering approximately 0.4 km². The surface models, generated by the two different 3D acquisition methods, are compared qualitatively and quantitatively to assess their suitability for modelling an urban environment, in particular for the 3D reconstruction of buildings. The data sets, acquired at two different epochs t₁ and t₂, are then investigated to determine to what extent 3D (building) changes can be detected and modelled over the time interval. A difference model, generated by pixel-wise subtraction of the two DSMs, indicates changes in elevation. Filters are proposed to differentiate 'real' building changes from false alarms caused by model noise, outliers, vegetation, etc. A final 3D building change model maps all demolished and newly constructed buildings within the time interval t₂ − t₁. Based on the change model, the surface area and volume of the building changes can be quantified.
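The core of the change-detection pipeline described above (pixel-wise DSM subtraction, a height-threshold filter to suppress noise, and quantification of surface area and volume) can be sketched as follows. The DSM arrays, the 2.5 m threshold, and the 1 m pixel size are illustrative assumptions, not values from the study.

```python
import numpy as np

# Two toy 5x5 DSMs (elevations in metres); a building appears at epoch t2.
dsm_t1 = np.zeros((5, 5))
dsm_t2 = np.zeros((5, 5))
dsm_t2[1:4, 1:4] = 10.0  # hypothetical new building, 10 m high
dsm_t2 += np.random.default_rng(0).normal(0.0, 0.2, (5, 5))  # model noise

# Pixel-wise difference model: positive values = construction, negative = demolition.
diff = dsm_t2 - dsm_t1

# Height-threshold filter (assumed 2.5 m) separating real building changes
# from noise, outliers and vegetation artefacts.
change_mask = np.abs(diff) > 2.5

# Quantify surface area and volume of the changes (assumed 1 m pixel size).
pixel_area = 1.0
surface = change_mask.sum() * pixel_area
volume = np.abs(diff[change_mask]).sum() * pixel_area
```

In practice the filtering stage would also use morphological operations and a vegetation mask, but the threshold on the difference model is the essential step.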
Assessment of a photogrammetric approach for urban DSM extraction from tri-stereoscopic satellite imagery
Built-up environments are extremely complex for 3D surface modelling purposes. The main distortions that hamper 3D reconstruction from 2D imagery are image dissimilarities, concealed areas, shadows, height discontinuities and discrepancies between smooth terrain and man-made features. A methodology is proposed to improve automatic photogrammetric extraction of an urban surface model from high-resolution satellite imagery, with an emphasis on strategies to reduce the effects of the cited distortions and to make image matching more robust. Instead of a standard stereoscopic approach, a digital surface model is derived from tri-stereoscopic satellite imagery. This is based on an extensive multi-image matching strategy that fully benefits from the geometric and radiometric information contained in the three images. The bundled triplet consists of an IKONOS along-track pair and an additional near-nadir IKONOS image. For the tri-stereoscopic study a densely built-up area, extending from the centre of Istanbul to the urban fringe, is selected. The accuracy of the model extracted from the IKONOS triplet, as well as that of the model extracted from only the along-track stereo pair, is assessed by comparison with 3D check points and 3D building vector data.
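The accuracy assessment against 3D check points mentioned at the end of the abstract typically reduces to computing vertical residuals between surveyed heights and the heights interpolated from the DSM. A minimal sketch, with purely hypothetical check-point values:

```python
import numpy as np

# Hypothetical check-point heights (metres) vs DSM-interpolated heights.
checkpoints_z = np.array([12.0, 35.5, 8.2, 21.0, 14.7])
dsm_z = np.array([12.6, 34.9, 8.9, 20.4, 15.3])

# Vertical residuals, RMSE and mean error (bias) of the extracted model.
errors = dsm_z - checkpoints_z
rmse = float(np.sqrt(np.mean(errors ** 2)))
bias = float(errors.mean())
```

The RMSE summarises overall vertical accuracy, while the bias reveals a systematic height offset; comparing these figures for the triplet-derived and pair-derived models quantifies the gain from multi-image matching.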
MAMUD : contribution of HR satellite imagery to a better monitoring, modeling and understanding of urban dynamics
This paper discusses a methodology and results for semi-automatic city DSM extraction from an IKONOS triplet. Built-up areas are known to be complex for photogrammetric purposes, partly because of the steep changes in elevation caused by buildings and urban features. To make DSM extraction more robust and to cope with the specific problems of height displacement, concealed areas and shadow, a multi-image based approach is followed. For the VHR tri-stereoscopic study an area extending from the centre of Istanbul to the urban fringe is chosen. Research will concentrate, in a first phase, on the development of methods to optimize the extraction of photogrammetric products from the bundled IKONOS triplet. Optimal methods need to be found to improve the radiometry and geometry of the imagery, to improve the semi-automatic derivation of DSMs and to improve the post-processing of the products. Secondly, we will also investigate the possibilities of creating stereo models out of images from the same sensor taken on a different date, e.g. one image of the stereo pair combined with the third image. Finally, the photogrammetric products derived from the IKONOS stereo pair, as well as the products created from the triplet and the constructed stereo models, will be evaluated by comparison with a 3D reference. This evaluation should show the increase in accuracy when multi-image matching is used instead of stereo pairs.
Accuracy Assessment of Low Cost UAV Based City Modelling for Urban Planning
This paper presents an Unmanned Aerial Vehicle (UAV) based 3D city modelling approach for managing and planning urban areas. While urban growth is rapidly increasing in many parts of the world, conventional surveying techniques cannot keep pace with the changing environment. For effective planning, high-resolution remote sensing is a tool for the production of 3D digital city models. This study designs a UAV remote sensing campaign over urban terrain. Using the information produced from UAV imagery, highly accurate 3D city models are obtained. The analysis of XYZ data derived from the 3D model using UAV photogrammetry revealed products similar to the terrestrial surveys commonly used for development plans and city maps. The experimental results show the effectiveness of UAV-based 3D city modelling. The assessed accuracy of the UAV photogrammetry demonstrates that urban planners can use it as the main data collection tool for boundary mapping, change monitoring and topographical surveying instead of GPS/GNSS surveying.
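Comparing UAV-derived coordinates against a GNSS ground survey, as this study does, usually means computing horizontal residuals over a set of control points. A minimal sketch with hypothetical coordinates (not data from the paper):

```python
import numpy as np

# Hypothetical XY coordinates (metres) of control points:
# GNSS survey (reference) vs UAV-photogrammetry model.
gnss = np.array([[100.00, 200.00], [150.00, 250.00], [120.00, 230.00]])
uav = np.array([[100.04, 199.97], [150.05, 250.03], [119.96, 230.02]])

# Horizontal residual per point and overall planimetric RMSE.
residuals = np.linalg.norm(uav - gnss, axis=1)
rmse_xy = float(np.sqrt(np.mean(residuals ** 2)))
```

If the planimetric RMSE is within the tolerance of the mapping standard being used (here a few centimetres), the UAV model can substitute for point-by-point GNSS surveying.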
Semantically Informed Multiview Surface Refinement
We present a method to jointly refine the geometry and semantic segmentation of 3D surface meshes. Our method alternates between updating the shape and the semantic labels. In the geometry refinement step, the mesh is deformed with variational energy minimization, such that it simultaneously maximizes photo-consistency and the compatibility of the semantic segmentations across a set of calibrated images. Label-specific shape priors account for interactions between the geometry and the semantic labels in 3D. In the semantic segmentation step, the labels on the mesh are updated with MRF inference, such that they are compatible with the semantic segmentations in the input images. Also, this step includes prior assumptions about the surface shape of different semantic classes. The priors induce a tight coupling, where semantic information influences the shape update and vice versa. Specifically, we introduce priors that favor (i) adaptive smoothing, depending on the class label; (ii) straightness of class boundaries; and (iii) semantic labels that are consistent with the surface orientation. The novel mesh-based reconstruction is evaluated in a series of experiments with real and synthetic data. We compare both to state-of-the-art, voxel-based semantic 3D reconstruction, and to purely geometric mesh refinement, and demonstrate that the proposed scheme yields improved 3D geometry as well as an improved semantic segmentation.
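The first prior, adaptive smoothing depending on the class label, can be illustrated on a toy 1-D height profile: vertices labelled "building" receive a small smoothing weight to keep edges crisp, while "ground" vertices are smoothed more strongly. The heights, labels and weights below are illustrative assumptions, not the paper's actual energy terms.

```python
import numpy as np

# Toy 1-D "surface": heights along a profile, with a per-vertex semantic label.
heights = np.array([0.0, 0.1, 5.0, 5.1, 4.9, 5.0, 0.1, 0.0])
labels = np.array(["ground", "ground", "building", "building",
                   "building", "building", "ground", "ground"])

# Hypothetical label-specific smoothing weights: buildings are kept crisp,
# ground is regularized more strongly.
weight = {"ground": 0.8, "building": 0.1}

# One step of label-weighted Laplacian smoothing (interior vertices only).
smoothed = heights.copy()
for i in range(1, len(heights) - 1):
    laplacian = 0.5 * (heights[i - 1] + heights[i + 1]) - heights[i]
    smoothed[i] = heights[i] + weight[labels[i]] * laplacian
```

In the full method this weighting enters a variational energy over a triangle mesh rather than an explicit update loop, but the effect is the same: the regularization strength adapts to the semantic class.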