26 research outputs found

    Updating a large-scale land-cover database in natural environments from a VHR satellite image (Mise à jour d’une base de données d’occupation du sol à grande échelle en milieux naturels à partir d’une image satellite THR)

    Get PDF
    Land-cover geospatial databases (LC-DBs) are mandatory inputs for various purposes such as natural resource monitoring, land planning, and public policy management. To improve this monitoring, users look for both finer geometric and finer semantic levels of detail. To fulfill such requirements, a large-scale LC-DB is being established at the French National Mapping Agency (IGN). However, to meet users' needs, this DB must be updated as regularly as possible while keeping its initial accuracy. Consequently, automatic updating methods should be set up to allow such large-scale computation. Furthermore, Earth observation satellites have been successfully used for the constitution of LC-DBs at various scales, such as Corine Land Cover (CLC). Nowadays, very high resolution (VHR) sensors, such as the Pléiades satellite, make it possible to produce large-scale LC-DBs. Consequently, the purpose of this thesis is to propose an automatic method for updating such a large-scale LC-DB from a monoscopic VHR satellite image (to limit acquisition costs) while ensuring the robustness of the detected changes. Our method is based on a multilevel supervised learning algorithm, MLMOL, which best takes into account the possibly multiple appearances of each DB class. This algorithm can be applied to various images and DB data sets, independently of the classifier and of the attributes extracted from the input image. Moreover, stacking the classifications improves the robustness of the method, especially on classes having multiple appearances (e.g., plowed or unplowed fields, stand-alone houses or industrial warehouse buildings, ...). In addition, the learning algorithm is integrated into a processing chain (LUPIN) that, first, automatically adapts to the different existing DB themes and, second, is robust to inhomogeneous areas. As a result, the method is successfully applied to a Pléiades image of an area near Tarbes (southern France) covered by the IGN large-scale LC-DB. Results show the contribution of Pléiades images in terms of sub-meter resolution and spectral dynamics. Indeed, thanks to texture and shape attributes (morphological profiles, SFS, ...), VHR satellite images give good classification results, even on classes such as roads and buildings that usually require specific methods. Moreover, the proposed method provides relevant change indicators over the area. In addition, our method provides significant support for the creation of an LC-DB obtained by merging several existing DBs: it makes it possible to take a decision when the fusion of the initial DBs generates overlapping areas, particularly when such DBs come from different sources with their own specifications, to fill potential gaps in the coverage of the generated DB, and to extend the data to the footprint of a larger image. Finally, the proposed workflow is applied to different remote sensing data sets in order to assess its versatility and the relevance of such data. Results show that our method can handle data sets of different spatial resolutions (Pléiades at 0.5 m, SPOT 6 at 1.5 m, and RapidEye at 5 m) and exploit the strengths of each sensor, e.g., the RapidEye red-edge channel for discriminating the forest theme, the good resolution trade-off of SPOT 6 for built-up classes, and the VHR capability of Pléiades images to discriminate objects of small spatial extent such as roads or hedges.
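
As an illustration of the classification-stacking and change-flagging idea described above, the following Python sketch trains several classifiers on pixel attributes labelled by the existing DB, takes a majority vote, and flags pixels whose confident prediction disagrees with the DB. It is a minimal sketch with hypothetical names (`stacked_change_indicator`, `features`, `db_labels`), not the MLMOL/LUPIN implementation, and it assumes the attributes and integer class codes have already been extracted.

```python
# Minimal sketch of the idea only, not the MLMOL/LUPIN implementation.
# `features` is an (n_pixels, n_attributes) array of spectral/texture/shape
# attributes and `db_labels` an (n_pixels,) array of integer class codes
# taken from the existing LC-DB (both hypothetical inputs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stacked_change_indicator(features, db_labels, n_runs=5, agreement=0.8, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_runs, features.shape[0]), dtype=int)
    for run in range(n_runs):
        # Each run is trained on a different random subset of the DB-labelled
        # pixels, which is what makes the stacked result more robust.
        idx = rng.choice(features.shape[0], size=features.shape[0] // 2, replace=False)
        clf = RandomForestClassifier(n_estimators=100, random_state=run)
        clf.fit(features[idx], db_labels[idx])
        votes[run] = clf.predict(features)
    # Majority vote across runs and the fraction of runs that support it.
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    support = (votes == majority).mean(axis=0)
    # Change candidates: pixels whose confident stacked prediction disagrees
    # with the class stored in the database.
    return (majority != db_labels) & (support >= agreement), majority

# change_mask, predicted = stacked_change_indicator(features, db_labels)
```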

    Updating large-scale land-use database on natural environments from a VHR satellite image

    No full text

    Geolocation of a panoramic camera by reference pairing

    No full text
    Panoramic cameras are now available to a large audience. They provide good results in photogrammetry applications, but they are still limited by their positioning. This project aims to geolocate a commercial 360° camera in an urban environment by extracting points in fisheye images and matching them with reference points from a LiDAR (Light Detection and Ranging) dataset. Such reference points are located on the horizon line visible from the camera point of view. The matched points are then introduced as Ground Control Points to improve the camera positioning accuracy. A fully automatic solution for position refinement, based on LiDAR data, is proposed in this paper.
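
The position-refinement step can be illustrated with a small least-squares sketch: given LiDAR reference points matched to the directions under which they appear in the panoramic image, the camera position minimising the angular residuals is estimated. The function and variable names (`refine_position`, `lidar_points`, `observed_dirs`) are hypothetical, the camera orientation is assumed known, and this is not the paper's implementation.

```python
# Illustrative sketch, not the paper's implementation: `lidar_points` are the
# matched LiDAR horizon points (N, 3) and `observed_dirs` the unit viewing
# directions (N, 3) derived from the fisheye images, with the camera
# orientation assumed already known.
import numpy as np
from scipy.optimize import least_squares

def refine_position(p0, lidar_points, observed_dirs):
    def residuals(p):
        to_refs = lidar_points - p                     # candidate position -> references
        to_refs /= np.linalg.norm(to_refs, axis=1, keepdims=True)
        # Angular misfit between predicted and observed viewing directions.
        cos_ang = np.clip(np.sum(to_refs * observed_dirs, axis=1), -1.0, 1.0)
        return np.arccos(cos_ang)
    return least_squares(residuals, p0).x              # refined XYZ position

# refined_xyz = refine_position(approx_xyz, lidar_points, observed_dirs)
```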

    Automating the underground cadastral survey: a processing chain proposal

    No full text
    In order to ensure the proper functioning and evolution of underground networks (water, gas, etc.) over time, municipal services need to maintain accurate and up-to-date maps. Such maps are generally updated using traditional data acquisition methods (total station or GNSS), which are time-consuming, expensive, and require several teams of surveyors in the field. In this context, an important research topic is the automation of underground cadastre updating in order to save time, money, and human effort. In this paper, we present a new method ranging from the choice of the acquisition system and the tests carried out in the field to object detection and automatic segmentation of a 3D point cloud. We chose to use a convolutional neural network on images to detect the objects that are part of the underground cadastre. In the next step, the detections are projected to obtain a 3D point cloud segmented by object type. The vectorization step is still under development, so that objects can be converted to vector format and thus used for updating the cadastre. The results on excavation sites with objects that are well represented in our training database are excellent, approaching 96% accuracy. However, the detection of rare objects is much weaker and remains a topic for future research. Overall, this paper presents a complete processing chain that automates the updating of an underground cadastre as much as possible.
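
The projection step (transferring per-image CNN detections onto the 3D point cloud) could look like the sketch below, which assumes a pinhole camera model with known intrinsics `K` and pose `R`, `t`, plus one boolean mask per detected class; names are illustrative and this is not the authors' code.

```python
# Hedged sketch of the projection step only: transferring per-image class masks
# (e.g. from a CNN detector) onto a 3D point cloud via a pinhole camera model.
import numpy as np

def label_points_from_masks(points, K, R, t, class_masks):
    """points: (N, 3) world XYZ; class_masks: dict {class_id: (H, W) boolean mask}."""
    cam = R @ points.T + t.reshape(3, 1)                 # world -> camera frame, (3, N)
    in_front = cam[2] > 1e-6                             # keep points in front of the camera
    uvw = K @ cam                                        # apply intrinsics
    uv = (uvw[:2] / np.where(in_front, uvw[2], 1.0)).T   # (N, 2) pixel coordinates
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    labels = np.full(len(points), -1, dtype=int)         # -1 = unlabelled
    for class_id, mask in class_masks.items():
        h, w = mask.shape
        inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = inside.copy()
        hit[inside] = mask[v[inside], u[inside]]         # pixel falls inside this class mask
        labels[hit] = class_id
    return labels
```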

    Toward a low-cost, multispectral, high accuracy mapping system for vineyard inspection

    No full text
    Confronted with the challenge of climate change and with territorial constraints, agriculture has to modernize. The use of georeferenced data and remote sensing imagery is a major step in this direction. Such precision mapping of crops requires powerful and accurate acquisition systems that remain financially attractive. The development of multispectral sensors and low-cost GNSS makes it possible to consider systems able to map at the plant scale. However, these positioning systems do not yet guarantee a precise overlap of data acquired at different times. We therefore propose in this paper a method to register terrestrial image data acquired on vineyard plots. Our method seeks to avoid image registration problems, such as illumination changes, by detecting the vine stocks, reconstructing them in 3D, and registering them individually. The 3D detection method is based on an image-based object detector (Faster R-CNN) and a structure-from-motion reconstruction of object-masked images. The results obtained on a vineyard plot allowed us to validate the method, with a precision better than 10 cm, making it possible to map the vineyard stock by stock.
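
Once the vine stocks have been detected and reconstructed as 3D points at two acquisition dates, registering them individually can be illustrated by a rigid (Kabsch) alignment on nearest-neighbour correspondences, as in the hedged sketch below; the function and variable names are hypothetical and this is only one plausible reading of the registration step, not the paper's pipeline.

```python
# Hedged sketch of one possible registration step, not the paper's pipeline:
# `stocks_epoch1` and `stocks_epoch2` are (N, 3)/(M, 3) arrays of reconstructed
# vine-stock positions at two dates.
import numpy as np
from scipy.spatial import cKDTree

def rigid_align(src, dst):
    # Pair each source stock with its closest destination stock.
    _, idx = cKDTree(dst).query(src)
    paired = dst[idx]
    src_c, dst_c = src - src.mean(0), paired - paired.mean(0)
    # Kabsch: the SVD of the cross-covariance gives the optimal rotation.
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = paired.mean(0) - R @ src.mean(0)
    return R, t

# R, t = rigid_align(stocks_epoch1, stocks_epoch2)
# aligned_epoch1 = stocks_epoch1 @ R.T + t
```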

    About photogrammetric UAV mapping: which accuracy for which application?

    No full text
    UAV surveys have become more and more popular over the last few years, driven by manufacturers and software suppliers who promise high accuracy at low cost. But what are the real possibilities offered by this kind of sensor? In this article, we investigate in detail the possibilities offered by photogrammetric UAV mapping solutions through numerous practical experiments and compare them to a reference high-grade LiDAR-photogrammetric acquisition. The paper first compares the aerial triangulation and dense-matching accuracy of different data acquisition units (two camera types) and processing software packages (one open-source and two proprietary). Finally, the opportunities offered by these different approaches are studied in detail on standard aerial applications such as power line detection and forest and urban area mapping, in comparison with our reference dataset.
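
A typical accuracy check of the kind discussed above compares a photogrammetric dense-matching cloud against the LiDAR-photogrammetric reference, e.g. via nearest-neighbour distances summarised as error statistics. The sketch below is illustrative only; the file names and the cloud-to-cloud metric are assumptions, not the paper's exact protocol.

```python
# Illustrative error statistics between a UAV photogrammetric cloud and the
# LiDAR-photogrammetric reference; file names and the metric are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_stats(uav_points, reference_points):
    # Nearest-neighbour distance from every UAV point to the reference cloud.
    dists, _ = cKDTree(reference_points).query(uav_points)
    return {
        "rmse": float(np.sqrt(np.mean(dists ** 2))),
        "mean": float(dists.mean()),
        "p95": float(np.percentile(dists, 95)),
    }

# stats = cloud_to_reference_stats(np.loadtxt("uav_cloud.xyz"), np.loadtxt("lidar_reference.xyz"))
```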

    Automating image labeling for remote sensing using cadastral database and video game engine simulation

    No full text
    In remote sensing, the use of deep learning algorithms such as Convolutional Neural Networks (CNNs) for automated detection is a commonly adopted approach, as reported by [1]. These techniques have proven powerful and effective, largely due to the availability of increasingly large datasets and the rapid advancement of computing technology. However, the preparation of these datasets requires a substantial amount of manual labor, which is often outsourced to low-cost labor forces. In this paper, we present two methods developed to automate the labeling work for semantic segmentation and object detection tasks. We analyze the results in terms of accuracy and time saved, and show how we successfully applied them to two real-life projects.
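
For the semantic segmentation case, labels can be derived from a cadastral database by burning its polygons into a raster mask aligned with an orthoimage, as in the hedged sketch below (using geopandas and rasterio); the file paths and single class value are placeholders and this is not the authors' pipeline.

```python
# Hedged sketch: rasterize cadastral polygons into a label mask aligned with an
# orthoimage. File paths and the class value are placeholders.
import geopandas as gpd
import rasterio
from rasterio.features import rasterize

def cadastre_to_label_mask(vector_path, image_path, class_value=1):
    with rasterio.open(image_path) as src:
        out_shape, transform, crs = (src.height, src.width), src.transform, src.crs
    polygons = gpd.read_file(vector_path).to_crs(crs)
    # Burn every cadastral polygon with the class value; background stays 0.
    return rasterize(
        ((geom, class_value) for geom in polygons.geometry),
        out_shape=out_shape,
        transform=transform,
        fill=0,
        dtype="uint8",
    )

# mask = cadastre_to_label_mask("cadastre_buildings.gpkg", "orthoimage.tif")
```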

    CIMEMountainBot: a Telegram bot to collect mountain images and to communicate information with mountain guides

    No full text
    Advancements in technology have led to an increase in the number of Volunteered Geographic Information (VGI) applications, and new smartphone functionalities have made collecting VGI data easier. However, getting volunteers to install and use new VGI applications can be challenging. This article introduces a possible solution: using existing applications that people already use on a daily basis for VGI data collection. Accordingly, a prototype Telegram chatbot is developed to collect mountain images from volunteers while also providing them with information such as weather conditions and avalanche risk at a given location. The article concludes that using existing platforms like Telegram has benefits, but that it is important to consider the specific goals, participants' needs, and interface of a project, and to strike a balance between creating a new application and using existing ones.
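
The photo-collection part of such a chatbot can be sketched with the python-telegram-bot library (v20+ API assumed); the token placeholder, storage path, and reply text are illustrative, the weather and avalanche-information features are omitted, and this is not the CIMEMountainBot code.

```python
# Minimal sketch of a Telegram bot that accepts photo submissions and stores
# them locally; assumes python-telegram-bot v20+. Token and paths are placeholders.
import os

from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters

async def handle_photo(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    photo = update.message.photo[-1]                     # highest-resolution version
    tg_file = await photo.get_file()
    os.makedirs("submissions", exist_ok=True)
    await tg_file.download_to_drive(f"submissions/{photo.file_unique_id}.jpg")
    await update.message.reply_text("Thanks, your mountain photo has been recorded.")

def main() -> None:
    app = Application.builder().token("TELEGRAM_BOT_TOKEN").build()
    app.add_handler(MessageHandler(filters.PHOTO, handle_photo))
    app.run_polling()

if __name__ == "__main__":
    main()
```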