7 research outputs found
Map-Repair: Deep Cadastre Maps Alignment and Temporal Inconsistencies Fix in Satellite Images
In fast-developing countries it is hard to track the construction of new
buildings or the demolition of old structures and, as a result, to keep
cadastre maps up to date. Moreover, due to the complexity of urban regions or
inconsistency in the data used for cadastre-map extraction, misalignment
errors are a common problem. In this work, we propose an end-to-end deep
learning approach that resolves inconsistencies between the input
intensity image and the available building footprints by correcting label
noise and, at the same time, misalignments where needed. The obtained results
demonstrate the robustness of the proposed method even to severely misaligned
examples, which makes it potentially suitable for real applications such as
OpenStreetMap correction.
AutoCorrect: Deep Inductive Alignment of Noisy Geometric Annotations
We propose AutoCorrect, a method to automatically learn object-annotation
alignments from a dataset with annotations affected by geometric noise. The
method is based on a consistency loss that enables deep neural networks to be
trained, given only noisy annotations as input, to correct the annotations.
When some noise-free annotations are available, we show that the consistency
loss reduces to a stricter self-supervised loss. We also show that the method
can implicitly leverage object symmetries to reduce the ambiguity arising in
correcting noisy annotations. When multiple object-annotation pairs are present
in an image, we introduce a spatial memory map that allows the network to
correct annotations sequentially, one at a time, while accounting for all other
annotations in the image and corrections performed so far. Through ablation, we
show the benefit of these contributions, demonstrating excellent results on
geo-spatial imagery. Specifically, we show results using a new Railway tracks
dataset as well as the public INRIA Buildings benchmark, achieving new
state-of-the-art results for the latter. Comment: BMVC 2019 (Spotlight)
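The consistency idea above can be illustrated with a toy numerical sketch (all names here are hypothetical, and the corrector is a plain function rather than the paper's deep network): corrections of independently perturbed copies of the same annotation should agree with one another, and a corrector that always snaps back to the true geometry incurs zero consistency loss.

```python
import numpy as np

def perturb(annotation, rng, scale=5.0):
    """Apply random geometric noise (here just a translation, an
    illustrative simplification) to a box annotation (x1, y1, x2, y2)."""
    shift = rng.normal(0.0, scale, size=2)
    return annotation + np.tile(shift, 2)  # shift both corners by (dx, dy)

def consistency_loss(correct_fn, annotation, rng, n_views=4):
    """Consistency-loss sketch: correct several independently perturbed
    copies of one annotation and penalise their disagreement."""
    corrected = np.stack([correct_fn(perturb(annotation, rng))
                          for _ in range(n_views)])
    mean = corrected.mean(axis=0)
    return float(((corrected - mean) ** 2).mean())

gt_box = np.array([10.0, 10.0, 50.0, 40.0])
rng = np.random.default_rng(0)

# A corrector that snaps any box to the ground truth is perfectly consistent;
# the identity "corrector" just passes the noise through.
loss_perfect = consistency_loss(lambda a: gt_box, gt_box, rng)
loss_identity = consistency_loss(lambda a: a, gt_box, rng)
```

In the paper this loss trains a network from noisy annotations alone; the toy above only shows why consistency is a usable supervision signal without clean labels.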
Smart City Digital Twin Framework for Real-Time Multi-Data Integration and Wide Public Distribution
Digital Twins are digital replicas of real entities and are becoming
fundamental tools to monitor and control the status of those entities, predict
their future evolution, and simulate alternative scenarios to understand the
impact of changes. Thanks to the large-scale deployment of sensors and the
increasing amount of information, it is possible to build accurate
reproductions of urban environments that include structural data and real-time
information. Such solutions help city councils and decision makers face
challenges in urban development and improve citizens' quality of life by
analysing current conditions, evaluating in advance, through simulations and
what-if analysis, the outcomes of infrastructural or political changes, and
predicting the effects of human and/or natural events. The Snap4City Smart
City Digital Twin framework is capable of responding to the requirements
identified in the literature and by international forums. Differently from
other solutions, the proposed architecture provides an integrated solution for
data gathering, indexing, computing, and information distribution offered by
the Snap4City IoT platform, thereby realizing a continuously updated Digital
Twin. 3D building models, road networks, IoT devices, WoT Entities, points of
interest, routes, paths, etc., as well as the results of data analytics
processes for traffic-density reconstruction, pollutant dispersion,
predictions of any kind, what-if analysis, etc., are all integrated into an
accessible web interface to support citizen participation in the city's
decision processes, with what-if analysis letting users perform simulations
and observe possible outcomes. As a case study, the Digital Twin of the city
of Florence (Italy) is presented. The Snap4City platform is released as open
source and made available through GitHub and via Docker Compose.
Aligning and Updating Cadaster Maps with Aerial Images by Multi-Task, Multi-Resolution Deep Learning
A large part of the world is already covered by maps of buildings, through projects such as OpenStreetMap. However, when a new image of an already covered area is captured, it does not align perfectly with the buildings of the existing map, due to a change of capture angle, atmospheric perturbations, human error when annotating buildings, or lack of precision in the map data. Some of these deformations can be partially corrected, but not perfectly, which leads to misalignments. Additionally, new buildings can appear in the image. Leveraging multi-task learning, our deep learning model aligns the existing building polygons to the new image through a displacement output, and also detects new buildings that do not appear in the cadaster through a segmentation output. It uses multiple neural networks at successive resolutions to output a displacement field and a pixel-wise segmentation of the new buildings from coarser to finer scales. We also apply our method to building height estimation, by aligning cadaster data to the rooftops of stereo images. The code is available at https://github.com/Lydorn/mapalignment
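The coarse-to-fine displacement idea can be sketched as follows (a toy sketch, not the repository's API: function names are hypothetical, the pyramid is assumed dyadic, and the field lookup uses nearest-neighbour sampling where the real model would interpolate):

```python
import numpy as np

def sample_displacement(field, points):
    """Nearest-neighbour lookup of a per-pixel displacement field of shape
    (H, W, 2) at floating-point polygon vertices given as (x, y) rows."""
    h, w, _ = field.shape
    idx = np.clip(np.round(points).astype(int), 0, [w - 1, h - 1])
    return field[idx[:, 1], idx[:, 0]]

def align_polygon(polygon, fields):
    """Coarse-to-fine alignment sketch: each resolution level predicts a
    displacement field; vertices are moved by each field in turn, with
    coarse-level coordinates rescaled (assumption: factor-of-two levels)."""
    pts = polygon.astype(float)
    for level, field in enumerate(fields):       # coarsest level first
        scale = 2 ** (len(fields) - 1 - level)   # pixels per coarse cell
        pts += scale * sample_displacement(field, pts / scale)
    return pts

# Single-level example: a constant field shifting every vertex by (1, 2).
field = np.zeros((4, 4, 2))
field[..., 0] = 1.0
field[..., 1] = 2.0
moved = align_polygon(np.array([[0.0, 0.0], [3.0, 3.0]]), [field])
```

In the actual method each field is predicted by a neural network conditioned on the image and the current polygon rendering; the sketch only shows how displacement outputs compose across resolutions.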
Comparative Analysis of the Use of Deep Learning on Satellite Images
The analysis of satellite images is a field of geomatics that enables many observations about the Earth.
An important step in any observation is identifying the content of the image.
This step is normally performed by hand, which costs time and money.
With the advent of deep neural networks, of GPUs with high computing capacity, and of a growing amount of annotated satellite data, learning algorithms are now the most promising tools for the automatic analysis of satellite images.
This thesis presents a preliminary study of the application of convolutional networks to satellite images, as well as two new methods for training neural networks with poorly annotated satellite data.
For this, we used two datasets from the International Society for Photogrammetry and Remote Sensing comprising 40 images labelled with six classes.
The two major assets of these datasets are the wide variety of channels composing their images and the different places (and therefore contexts) where the images were acquired.
We then present empirical answers to several practical questions related to the expected performance of deep neural networks applied to satellite imagery.
Towards the end of the report, we present two techniques for combining several datasets using hierarchical class labels.
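The hierarchical-label idea can be sketched as projecting each dataset's fine classes onto shared coarse classes, so that datasets annotated with different taxonomies can be trained together at the coarse level. The six fine labels below are the classes of the ISPRS semantic-labelling benchmark; the coarse grouping is an illustrative assumption, not the thesis's actual mapping.

```python
# Illustrative hierarchy: fine ISPRS classes -> assumed coarse classes.
HIERARCHY = {
    "impervious_surface": "built",
    "building": "built",
    "low_vegetation": "vegetation",
    "tree": "vegetation",
    "car": "object",
    "clutter": "object",
}

def to_coarse(labels):
    """Project fine per-pixel (or per-region) labels onto the shared
    coarse classes, letting two differently labelled datasets contribute
    to the same coarse-level training objective."""
    return [HIERARCHY[label] for label in labels]

coarse = to_coarse(["tree", "building", "car"])
```

A model can then be supervised at the coarse level on every dataset and at the fine level only where fine labels exist.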