7 research outputs found

    QUALITY OF DIGITAL TERRAIN MODELS DERIVED BY AUTOMATIC CORRELATION

    Digital Terrain Models play an important role as an information layer, particularly with the development of geographic information systems, since they describe the topographic surface of the terrain and therefore constitute a valuable support for the study of a variety of geographical and environmental phenomena. With the advent of digital techniques and the advantages they offer in terms of automation and precision, users are adopting image matching techniques to derive Digital Terrain Models automatically. The quality of these DTMs is determined by different factors (photo scale, scanning resolution and software parameterization). This paper is a contribution to the evaluation of the influence of some of these factors on the final accuracy of DTMs derived by correlation. In this respect, different tests were carried out on two photo scales (1/7500 and 1/20000) flown over varying topography. The photos were scanned at 20, 25, 32 and 42 micron pixel sizes, and digital terrain models were derived using the VirtuoZo software from Supresoft. The quality of the derived DTMs was assessed using qualitative (visual comparison of contours) and quantitative (RMS computed from residuals at ground check points) criteria. Results showed that, in rugged terrain, DTMs derived from 1/20000 photos are accurate to 32 cm, which may enable deriving contours with a 1 m interval. The introduction of break lines prior to correlation seems to have less influence on the accuracy of the derived DTM when the generated grid is very dense, but it contributes to reducing the editing burden. The high accuracy of automatically derived DTMs may help relax the map-to-photo scale ratio: for instance, mapping at 1/5000 from 1/20000 photos can preserve the height accuracy, whereas with conventional methods the height accuracy at 1/5000 map scale is usually preserved only when mapping from 1/12000 photos.
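
    The quantitative criterion above reduces to a single figure per test: the root mean square of the height residuals at the ground check points. The short Python sketch below shows that computation; the function name and the sample values are illustrative, not taken from the paper.

    import numpy as np

    def rms_error(dtm_heights, checkpoint_heights):
        # Root-mean-square error of DTM heights against surveyed check points.
        # dtm_heights: heights interpolated from the derived DTM at the
        # check-point locations (metres).
        # checkpoint_heights: reference heights measured on the ground (metres).
        residuals = np.asarray(dtm_heights) - np.asarray(checkpoint_heights)
        return float(np.sqrt(np.mean(residuals ** 2)))

    # Hypothetical check-point values, purely for illustration (metres).
    rms = rms_error([102.31, 98.07, 110.52], [102.10, 98.40, 110.20])
    print(f"RMS = {rms:.2f} m")

    A common photogrammetric rule of thumb limits the contour interval to roughly three times the height RMS, which is broadly consistent with a 32 cm accuracy supporting 1 m contours.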

    The contribution of deep learning to the semantic segmentation of 3D point-clouds in urban areas

    Semantic segmentation in a large-scale urban environment is crucial for a deep and rigorous understanding of urban environments. The development of Lidar tools in terms of resolution and precision offers a good opportunity to satisfy the need for 3D city models. In this context, deep learning has revolutionized the field of computer vision and demonstrates good performance in semantic segmentation. To achieve this objective, we propose to design a scientific methodology involving a deep learning method that integrates several data sources (Lidar data, aerial images, etc.) to recognize objects semantically and automatically. We aim to automatically extract as much semantic information as possible from an urban environment with high accuracy and performance.
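
    As an illustration of the kind of pipeline the abstract describes, the sketch below classifies each Lidar point from a fused per-point feature vector (coordinates, intensity and RGB sampled from an aerial image) with a small shared MLP, a much-simplified stand-in for the point-based deep networks typically used for this task. The feature layout, class list and network size are assumptions, not the authors' design.

    import torch
    import torch.nn as nn

    # Assumed per-point features: x, y, z, Lidar intensity, and R, G, B
    # sampled from the aerial image at each point's planimetric position.
    NUM_FEATURES = 7
    NUM_CLASSES = 5   # e.g. ground, building, vegetation, road, other (assumed)

    class PointwiseSegmenter(nn.Module):
        # Minimal per-point classifier: a shared MLP applied to every point.
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(NUM_FEATURES, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, NUM_CLASSES),
            )

        def forward(self, points):      # points: (N, NUM_FEATURES)
            return self.mlp(points)     # per-point class logits: (N, NUM_CLASSES)

    # Toy usage with random data standing in for a fused Lidar + image point cloud.
    points = torch.randn(1024, NUM_FEATURES)
    logits = PointwiseSegmenter()(points)
    labels = logits.argmax(dim=1)       # predicted class per point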

    Automatic modeling of LIDAR data

    Despite progress in computer vision and aerial photogrammetry, the automatic reconstruction of three-dimensional scenes from images or LIDAR data remains a complex problem and a wide field of research. Accurate and detailed 3D models of buildings are of major interest in many fields such as city planning, navigation, telecommunication network planning and military simulation, and these models should be maintained and updated periodically. Many 3D modeling approaches have been proposed in recent decades; they can be classified according to the data used (satellite imagery, digital surface model, point cloud), the type of processing (parametric or non-parametric) and the degree of human intervention (automatic, semi-automatic or manual). Modeling 3D information in an automated manner is an essential step for the implementation of several current applications that require a high level of LASER data interpretation. Consequently, there is growing interest in this area of research and an extensive literature. Through this paper, we propose a survey of the state of the art of the different modeling approaches reported in the literature. Keywords: Modeling, LIDAR, 3D, Automatic.
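
    The three classification axes mentioned in this survey (input data, type of processing, degree of human intervention) can be made concrete with a small record type; the axis values follow the abstract, while the example entry is hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class InputData(Enum):
        SATELLITE_IMAGE = "satellite image"
        DSM = "digital surface model"
        POINT_CLOUD = "point cloud"

    class Processing(Enum):
        PARAMETRIC = "parametric"
        NON_PARAMETRIC = "non-parametric"

    class Automation(Enum):
        AUTOMATIC = "automatic"
        SEMI_AUTOMATIC = "semi-automatic"
        MANUAL = "manual"

    @dataclass
    class ModelingApproach:
        # One entry of the taxonomy used to organize the surveyed methods.
        name: str
        data: InputData
        processing: Processing
        automation: Automation

    # Hypothetical entry, purely for illustration.
    example = ModelingApproach("model-driven roof fitting",
                               InputData.POINT_CLOUD,
                               Processing.PARAMETRIC,
                               Automation.AUTOMATIC)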

    Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information about the terrain surface, and LiDAR is becoming one of the important tools in the geosciences for studying objects and the Earth's surface. Classification of Lidar data to extract ground, vegetation and buildings is a very important step needed in numerous applications such as 3D city modelling, derivation of data for geographical information systems (GIS), mapping and navigation. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for the automatic classification of aerial Lidar data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return Lidar data and processing the point cloud without generating a DEM. Topological relationship and height variation analysis is adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and, on the other hand, to extract the point cloud of uniform surfaces, which contains roofs, roads and ground and is used in the second phase of classification. A second algorithm, also based on topological relationship and height variation analysis, is developed to segment the uniform surfaces into building roofs, roads and ground. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. It led to successful classification results for the building, vegetation and road classes.
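
    To make the first-phase idea concrete, the sketch below performs a coarse split of a point cloud into uniform and non-uniform surfaces by looking at the height spread inside planimetric grid cells. This is only a loose illustration of height variation analysis; the cell size, tolerance and labels are assumptions, and the paper's actual topological tests are not reproduced.

    import numpy as np

    def height_variation_labels(points, cell=1.0, flat_tol=0.15):
        # Coarse first-phase split of a Lidar point cloud.
        # points   : (N, 3) array of x, y, z coordinates
        # cell     : planimetric grid cell size in metres (assumed value)
        # flat_tol : max height spread in a cell still considered 'uniform'
        # Returns one label per point: 'uniform' (candidate roof/road/ground)
        # or 'non_uniform' (candidate vegetation, facade or clutter).
        xy_cells = np.floor(points[:, :2] / cell).astype(int)
        labels = np.empty(len(points), dtype=object)
        # Group points by grid cell and inspect the height spread in each cell.
        _, inverse = np.unique(xy_cells, axis=0, return_inverse=True)
        for cell_id in np.unique(inverse):
            idx = np.where(inverse == cell_id)[0]
            spread = points[idx, 2].max() - points[idx, 2].min()
            labels[idx] = "uniform" if spread <= flat_tol else "non_uniform"
        return labels

    # Toy usage with random points standing in for a real tile.
    pts = np.random.rand(1000, 3) * [50.0, 50.0, 10.0]
    labels = height_variation_labels(pts)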