46 research outputs found

    An Approach Of Automatic Reconstruction Of Building Models For Virtual Cities From Open Resources

    With the ever-increasing popularity of virtual reality technology in recent years, 3D city models have been used in many applications, such as urban planning, disaster management, tourism, entertainment, and video games. Currently, those models are mainly reconstructed from access-restricted data sources such as LiDAR point clouds, airborne images, satellite images, and UAV (uncrewed aerial vehicle) images, with a focus on the structural illustration of buildings’ contours and layouts. To help make 3D models closer to their real-life counterparts, this thesis research proposes a new approach for the automatic reconstruction of building models from open resources. In this approach, building shapes are first reconstructed using the structural and geographic information retrievable from the open repository of OpenStreetMap (OSM). Then, images available from the street view of Google Maps are used to extract information about the exterior appearance of buildings for texture mapping onto their boundaries. The constructed 3D environment is used as prior knowledge for navigation purposes in a self-driving car: the static objects from the 3D model are compared with real-time images of static objects, reducing computation time by eliminating them from the detection process.
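The shape-reconstruction step described above, turning a 2D OSM building footprint into a simple block model before street-view textures are mapped onto its walls, can be sketched as follows. This is a minimal illustration, not the thesis's implementation; `extrude_footprint` is a hypothetical helper, and the footprint coordinates and height are made up (a real pipeline would first project lat/lon to metric coordinates).

```python
# Hypothetical sketch: extrude a 2D building footprint (e.g. as retrieved
# from OpenStreetMap) into a simple 3D block model.

def extrude_footprint(footprint, height):
    """Turn a closed 2D polygon into wall quads plus a flat roof polygon.

    footprint: list of (x, y) vertices, first point not repeated at the end.
    height:    extrusion height in the same units as the footprint.
    Returns (walls, roof): each wall is a list of four (x, y, z) vertices,
    and the roof is one polygon at z = height.
    """
    n = len(footprint)
    walls = []
    for i in range(n):
        (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
        # One vertical quad per footprint edge, from ground to roof level.
        walls.append([(x1, y1, 0.0), (x2, y2, 0.0),
                      (x2, y2, height), (x1, y1, height)])
    roof = [(x, y, height) for x, y in footprint]
    return walls, roof

# Toy rectangular footprint, 10 x 6 units, extruded to 9 m.
walls, roof = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 9.0)
```

Each wall quad is a natural target for the texture-mapping step, since one street-view crop can be assigned per facade.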

    Developing a 3D geometry for Urban energy modelling of Indian cities

    Advances in the field of Urban Building Energy Modelling (UBEM) are helping urban planners and managers design and operate cities that meet environmental emission targets. The usefulness of a UBEM depends on the quality and level of detail (LoD) of its inputs, and the inadequacy and quality of relevant input data pose challenges. This paper analyses the usefulness of different methodologies for developing a 3D building stock model of Ahmedabad, India, recognizing data gaps and the heterogeneous development of the city over time. It evaluates the potential, limitations, and challenges of remote sensing techniques, namely (a) satellite imagery, (b) LiDAR, and (c) photogrammetry, for this application. Further, the details and benefits of data capture through UAV-assisted photogrammetry for the development of the 3D city model are discussed. The research develops potential techniques for feature detection and model reconstruction using computer vision on the photogrammetry reality mesh. Preliminary results indicate that supervised learning for image-based segmentation on the reality mesh detects building footprints with higher accuracy than geometry-based segmentation of the point cloud. This methodology has the potential to detect complex building features and remove redundant objects so as to develop the semantic model at different LoDs for urban simulations. The framework deployed and demonstrated for part of Ahmedabad has the potential to scale up to other parts of the city and to other Indian cities with similar urban morphology and no previous data for developing a UBEM.
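The accuracy comparison between image-based and geometry-based footprint detection reported above presumably rests on some footprint-accuracy metric; a common choice is intersection-over-union (IoU) between a predicted and a reference footprint mask. A minimal sketch, where the function name and the toy masks are illustrative rather than taken from the paper:

```python
import numpy as np

def footprint_iou(pred_mask, gt_mask):
    """Intersection-over-union between a predicted and a reference
    building-footprint mask (boolean arrays of equal shape)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True      # reference footprint
pred = np.zeros((8, 8), bool); pred[2:6, 2:7] = True  # slightly oversized detection
score = footprint_iou(pred, gt)  # 16 / 20 = 0.8
```

Comparing such scores per building, for the image-based and the geometry-based pipelines respectively, is one straightforward way to substantiate the "higher accuracy" claim.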

    Aprendizado de máquina aplicado a dados geográficos abertos (Machine learning applied to open geographical data)

    Advisor: Alexandre Xavier Falcão. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. (The abstract was given in Portuguese and English; the English version follows.)
    Geographical data are used in several applications, such as mapping, navigation, and urban planning. In particular, mapping services are routinely used and require up-to-date geographical data. However, due to budget limitations, authoritative maps suffer from completeness and temporal inaccuracies. In this context, crowdsourcing projects, such as Volunteer Geographical Information (VGI) systems, have emerged as an alternative for obtaining up-to-date geographical data. OpenStreetMap (OSM) is one of the largest VGI projects, with millions of users (consumers and producers of information) around the world, and the data collected in OSM are freely available. OSM is edited by volunteers with different annotation skills, which makes the annotation quality heterogeneous across geographical regions. Despite these quality issues, OSM data have been used extensively in several applications (e.g., land-use mapping). At the same time, it is crucial to improve the quality of the data in OSM so that applications that depend on accurate information (e.g., car routing) become more effective. In this thesis, we review and propose machine-learning methods to improve the quality of the data in OSM. We present automatic and interactive methods focused on improving OSM data for humanitarian purposes. The methods can correct OSM annotations of building footprints in rural areas and can provide efficient annotation of coconut trees from aerial images. The former is helpful in the response to crises that affect vulnerable areas, while the latter is useful for environmental monitoring and post-disaster assessment. Our methodology for automatic correction of existing OSM annotations of rural buildings consists of three tasks: alignment correction, removal of incorrect annotations, and addition of missing building annotations. This methodology obtains better results than supervised semantic segmentation methods and, more importantly, outputs vectorial footprints suitable for geographical data processing. Since this automatic strategy may not attain accurate results in some regions, we also propose an interactive approach that reduces human effort when correcting rural building annotations in OSM. This strategy drastically reduces the amount of data that users need to analyze by automatically finding most of the existing annotation errors. Finally, because annotating objects in aerial imagery is time-consuming, especially when the number of objects is high, we propose a methodology in which the annotation process is performed in a 2D space obtained by projecting the image feature space. This method makes it possible to annotate more objects efficiently than traditional photointerpretation, collecting more effective labeled samples to train a classifier for object detection.
    Doctorate in Computer Science. Grants 2016/14760-5 and 2017/10086-0. CAPES, FAPESP.
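The alignment-correction task mentioned in the thesis abstract above can be illustrated with a toy version: exhaustively search small integer shifts of an OSM footprint mask and keep the shift that best overlaps the building detected in imagery. This is only a sketch under simplified assumptions (rasterized masks, pure translation, overlap count as the score); `best_shift` is a hypothetical helper, not the thesis's method.

```python
import numpy as np

def best_shift(osm_mask, detected_mask, max_shift=3):
    """Exhaustively search integer (dy, dx) shifts of the OSM footprint
    mask and return the one maximising overlap with the detected mask."""
    best, best_score = (0, 0), -1.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Translate the annotation by (dy, dx) and count overlapping pixels.
            shifted = np.roll(np.roll(osm_mask, dy, axis=0), dx, axis=1)
            score = np.logical_and(shifted, detected_mask).sum()
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

osm = np.zeros((12, 12), bool); osm[2:6, 2:6] = True  # misaligned OSM annotation
det = np.zeros((12, 12), bool); det[4:8, 3:7] = True  # building found in imagery
shift = best_shift(osm, det)  # → (2, 1)
```

A real pipeline would work on vector polygons rather than raster masks and would also handle rotation and scale, but the core idea of maximizing agreement between annotation and detection is the same.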

    Structural Building Damage Detection with Deep Learning: Assessment of a State-of-the-Art CNN in Operational Conditions

    Remotely sensed data can provide the basis for timely and efficient building damage maps, which are of fundamental importance for supporting the response activities that follow disaster events. However, the generation of these maps is still mainly based on the manual extraction of relevant information in operational frameworks. For the identification of visible structural damage caused by earthquakes and explosions, several recent works have shown that Convolutional Neural Networks (CNN) outperform traditional methods. However, the limited availability of public image datasets depicting structural disaster damage, and the wide variety of sensors and spatial resolutions used for these acquisitions (from space, aerial and UAV platforms), have made it unclear how well these networks can serve first-responder needs and emergency mapping service requirements. In this paper, an advanced CNN for visible structural damage detection is tested to shed some light on what deep learning networks can currently deliver, and its adoption in realistic operational conditions after earthquakes and explosions is critically discussed. The heterogeneous and large datasets collected by the authors, covering different locations, spatial resolutions and platforms, were used to assess network performance in terms of transfer learning, with specific regard to the geographical transferability of the trained network to imagery acquired in different locations. The computational time needed to deliver these maps is also assessed. Results show that the quality metrics are influenced by the composition of the training samples used in the network. To promote their wider use, three pre-trained networks, optimized for satellite, airborne and UAV image spatial resolutions and viewing angles, are made freely available to the scientific community.
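Assessing geographical transferability, as described above, amounts to comparing detection metrics between regions seen and unseen during training. A hedged sketch of such a per-region summary; the function, the region names, and the counts are invented for illustration, and the paper's actual evaluation protocol may differ:

```python
# Illustrative sketch: summarise how a damage-detection network transfers
# geographically by computing precision/recall per test region.

def region_metrics(counts):
    """counts maps region name -> (true_pos, false_pos, false_neg)."""
    out = {}
    for region, (tp, fp, fn) in counts.items():
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        out[region] = (round(precision, 3), round(recall, 3))
    return out

metrics = region_metrics({
    "training-region": (90, 10, 5),   # network evaluated where it was trained
    "unseen-region":   (60, 25, 30),  # same network, new geography
})
```

A drop in precision and recall from the training region to the unseen region would quantify the geographical-transferability gap the paper investigates.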

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application in remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping methods has been further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, yet they were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the varying basic analysis units for landcover mapping tasks, and moving on to challenges and solutions on three aspects of scalability and transferability with a remote sensing classification focus: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each of these categories of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures

