238 research outputs found

    Supervised / unsupervised change detection

    The aim of this deliverable is to provide an overview of the state of the art in change detection techniques and a critique of what could be programmed to derive SENSUM products. It is the product of a collaboration between UCAM and EUCENTRE. The document includes, as a necessary prerequisite, a discussion of a proposed technique for co-registration: because change detection assesses a series of images by comparing and contrasting their similarities and differences to spot changes, co-registration is the first step, ensuring that the user is comparing like for like. The developed programs would then be applied to remotely sensed images for vulnerability assessment and for post-disaster recovery assessment and monitoring. One key criterion is to develop semi-automated and automated techniques. A series of available techniques is presented along with the advantages and disadvantages of each method. Descriptions of the implemented methods are included in deliverable D2.7 "Software Package SW2.3". In reviewing the available change detection techniques, the focus was on ways to exploit medium-resolution imagery such as Landsat, owing to its free-to-use license and the rich historical coverage of the satellite series. Change detection with high-resolution images was also examined, and a recovery-specific change detection index is discussed in the report.
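
    As a minimal illustration of the compare-and-threshold idea these techniques build on (a hedged sketch, not the deliverable's implementation), the following assumes two already co-registered single-band arrays; the array names and the mean-plus-k-standard-deviations threshold rule are illustrative choices.

```python
import numpy as np

def difference_change_map(before, after, k=2.0):
    """Minimal image-differencing change detection.

    Assumes `before` and `after` are co-registered single-band arrays of
    identical shape (e.g. the same Landsat band from two dates). Pixels
    whose absolute difference exceeds mean + k * std of the absolute
    difference image are flagged as change.
    """
    diff = np.abs(after.astype(np.float64) - before.astype(np.float64))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold

# Synthetic stand-in for two image dates
rng = np.random.default_rng(0)
t1 = rng.normal(100.0, 5.0, (512, 512))
t2 = t1 + rng.normal(0.0, 1.0, (512, 512))
t2[200:260, 300:360] += 40.0          # simulated change patch
print(difference_change_map(t1, t2).sum(), "pixels flagged as changed")
```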

    Understanding High Resolution Aerial Imagery Using Computer Vision Techniques

    Computer vision can make important contributions to the analysis of remote sensing satellite or aerial imagery. However, the resolution of early satellite imagery was not sufficient to provide useful spatial features. The situation is changing with the advent of very-high-spatial-resolution (VHR) imaging sensors, which makes it possible to use computer vision techniques to analyse man-made structures. Meanwhile, the development of multi-view imaging techniques allows the generation of accurate point clouds as ancillary knowledge. This dissertation aims at developing computer vision and machine learning algorithms for high resolution aerial imagery analysis in the context of application problems including debris detection, building detection and roof condition assessment. High resolution aerial imagery and point clouds were provided by Pictometry International for this study. Debris detection after natural disasters such as tornadoes, hurricanes or tsunamis is needed for effective debris removal and allocation of limited resources. Significant advances in aerial image acquisition have greatly enabled the possibilities for rapid and automated detection of debris. In this dissertation, a robust debris detection algorithm is proposed: large-scale aerial images are partitioned into homogeneous regions by interactive segmentation, and debris areas are identified based on extracted texture features. Robust building detection is another important part of high resolution aerial imagery understanding. This dissertation develops a 3D scene classification algorithm for building detection using point clouds derived from multi-view imagery. Point clouds are divided into point clusters using Euclidean clustering, and individual point clusters are identified based on extracted spectral and 3D structural features. The inspection of roof condition is an important step in damage claim processing in the insurance industry, and automated roof condition assessment from remotely sensed images is proposed in this dissertation. Initially, texture classification and a bag-of-words model were applied to assess the roof condition using features derived from the whole rooftop. However, considering the complexity of residential rooftops, a more sophisticated method is proposed that divides the task into two stages: 1) roof segmentation, followed by 2) classification of the segmented roof regions. Deep learning techniques are investigated for both segmentation and classification. A deep-learned feature is proposed and applied in a region-merging segmentation algorithm, and a fine-tuned deep network is adopted for roof segment classification and found to achieve higher accuracy than traditional methods using hand-crafted features. Contributions of this study include the development of algorithms for debris detection using 2D images and building detection using 3D point clouds. For roof condition assessment, solutions are explored in two directions: features derived from the whole rooftop and features extracted from each roof segment. Roof segmentation followed by segment classification was found to be the more promising method, and the corresponding workflow was developed and tested. More unsupervised feature extraction techniques using deep learning can be explored in future work.
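
    The Euclidean clustering step used here for building detection can be sketched generically as follows; this is an assumption-laden illustration (the radius and minimum cluster size are placeholders, and a KD-tree region query stands in for whatever library the dissertation actually used), not the author's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_clusters(points, radius=1.0, min_size=50):
    """Group 3D points into clusters: two points share a cluster if they
    are linked by a chain of neighbours closer than `radius`. Returns a
    list of index arrays, one per cluster of at least `min_size` points."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        frontier, members = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if nb in unvisited:          # grow the current cluster
                    unvisited.remove(nb)
                    frontier.append(nb)
                    members.append(nb)
        if len(members) >= min_size:
            clusters.append(np.array(members))
    return clusters

# Toy cloud: two well-separated blobs of 200 points each
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0, 0.3, (200, 3)), rng.normal(10, 0.3, (200, 3))])
print([len(c) for c in euclidean_clusters(cloud, radius=1.0, min_size=50)])
```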

    A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends

    Computer vision (CV) is a large and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.
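
    To make the EC-for-image-analysis idea concrete, here is a toy sketch (not taken from the survey) in which a tiny evolutionary loop with truncation selection and Gaussian mutation searches for a grey-level segmentation threshold that maximises an Otsu-style between-class variance; the population size, mutation scale and fitness choice are all illustrative assumptions.

```python
import numpy as np

def between_class_variance(image, t):
    """Otsu-style fitness: variance between the two classes produced by
    threshold t (higher is better)."""
    fg, bg = image[image > t], image[image <= t]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def evolve_threshold(image, pop_size=20, generations=40, seed=0):
    """Tiny evolutionary search (truncation selection + Gaussian mutation)
    for a grey-level threshold that maximises between-class variance."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(image.min(), image.max(), pop_size)
    for _ in range(generations):
        fitness = np.array([between_class_variance(image, t) for t in pop])
        parents = pop[np.argsort(fitness)][-(pop_size // 2):]      # keep the best half
        children = parents + rng.normal(0.0, image.std() * 0.05, parents.size)
        pop = np.concatenate([parents, children])
    fitness = np.array([between_class_variance(image, t) for t in pop])
    return pop[fitness.argmax()]

# Bimodal toy "image": dark background plus a brighter object
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(50, 5, 5000), rng.normal(150, 5, 5000)])
print(round(evolve_threshold(img), 1))   # expected to land between the two modes
```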

    Mapping regional land cover and land use change using MODIS time series

    Coarse resolution satellite observations of the Earth provide critical data in support of land cover and land use monitoring at regional to global scales. This dissertation focuses on methodology and dataset development that exploit multi-temporal data from the Moderate Resolution Imaging Spectroradiometer (MODIS) to improve current information related to regional forest cover change and urban extent. In the first element of this dissertation, I develop a novel distance metric-based change detection method to map annual forest cover change at 500 m spatial resolution. Evaluations based on a global network of test sites and two regional case studies in Brazil and the United States demonstrate the efficiency and effectiveness of this methodology, where estimated changes in forest cover are comparable to reference data derived from higher spatial resolution data sources. In the second element of this dissertation, I develop methods to estimate fractional urban cover for temperate and tropical regions of China at 250 m spatial resolution by fusing MODIS data with nighttime lights using the Random Forest regression algorithm. Assessment of results for nine cities in Eastern, Central, and Southern China shows good agreement between the estimated urban percentages from MODIS and reference urban percentages derived from higher resolution Landsat data. In the final element of this dissertation, I assess the capability of a new nighttime lights dataset from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB) for urban mapping applications. This dataset provides higher spatial resolution and improved radiometric quality in nighttime lights observations relative to previous datasets. Analyses for a study area in the Yangtze River Delta in China show that this new source of data significantly improves representation of urban areas, and that fractional urban estimation based on DNB can be further improved by fusion with MODIS data. Overall, the research in this dissertation contributes new methods and understanding for remote sensing-based change detection methodologies. The results suggest that land cover change products from coarse spatial resolution sensors such as MODIS and VIIRS can benefit from regional optimization, and that urban extent mapping from nighttime lights should exploit complementary information from conventional visible and near infrared observations.
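
    The fractional urban cover estimation step lends itself to a brief sketch: a Random Forest regressor trained on per-pixel predictors (MODIS bands plus a nighttime-lights value) against reference urban fractions. The sketch below uses synthetic stand-in data and an assumed feature layout; it is not the dissertation's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training table: one row per coarse-resolution pixel.
# Columns: 7 MODIS reflectance bands + 1 nighttime-lights value (synthetic
# stand-ins); target: fractional urban cover in [0, 1] taken from a
# higher-resolution reference map such as a Landsat classification.
rng = np.random.default_rng(42)
n_pixels = 2000
modis_bands = rng.random((n_pixels, 7))
night_lights = rng.random((n_pixels, 1))
X = np.hstack([modis_bands, night_lights])
y = np.clip(0.6 * night_lights[:, 0] + 0.1 * rng.normal(size=n_pixels), 0.0, 1.0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)                                   # learn bands/lights -> urban fraction
print(np.round(model.predict(X[:5]), 3))          # predicted urban fractions
```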

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    A methodology to produce geographical information for land planning using very-high resolution images

    Currently, Portuguese municipalities are required to produce homologated cartography under the Territorial Management Instruments framework. The Municipal Master Plan (PDM) has to be revised every 10 years, as do the topographic and thematic maps that describe the municipal territory. However, this period is inadequate for representing counties where urban pressure is high and where land use changes are very dynamic. Consequently, the need emerges for a more efficient mapping process that allows recent geographic information to be obtained more often. Several countries, including Portugal, continue to use aerial photography for large-scale mapping. Although these data enable highly accurate maps, their acquisition and visual interpretation are very costly and time consuming. Very-high-resolution (VHR) satellite imagery can be an alternative data source, without replacing aerial images, for producing large-scale thematic cartography. The focus of the thesis is the demand for updated geographic information in the land planning process. To better understand the value and usefulness of this information, a survey of all Portuguese municipalities was carried out. This step was essential for assessing the relevance and usefulness of introducing VHR satellite imagery into the chain of procedures for updating land information. The proposed methodology is based on the use of VHR satellite imagery, and other digital data, in a Geographic Information Systems (GIS) environment. Different algorithms for feature extraction that take into account the variation in texture, color and shape of objects in the image were tested. The trials aimed at automatic extraction of features of municipal interest from aerial and satellite high-resolution data (orthophotos, QuickBird and IKONOS imagery) as well as elevation data (altimetric information and LiDAR data). To evaluate the potential of geographic information extracted from VHR images, two areas of application were identified: mapping and analytical purposes. Four case studies that reflect different uses of geographic data at the municipal level, with different accuracy requirements, were considered. The first case study presents a methodology for periodic updating of large-scale maps based on orthophotos, in the area of Alta de Lisboa. This is a situation where the positional and geometric accuracy of the extracted information is more demanding, since technical mapping standards must be complied with. In the second case study, an alarm system that indicates the location of potential changes in building areas, using a QuickBird image and LiDAR data, was developed for the area of Bairro da Madre de Deus. The goal of the system is to assist the updating of large-scale mapping, providing a layer that municipal technicians can use as the basis for manual editing. In the third case study, an analysis of the roof-tops most suitable for installing solar systems, using LiDAR data, was performed in the area of Avenidas Novas. In the fourth case study, a set of urban environment indicators obtained from VHR imagery is presented; the concept is demonstrated for the entire city of Lisbon through IKONOS imagery processing. In this analytical application, the positional quality of the extraction is less relevant. Funding: GEOSAT – Methodologies to extract large scale GEOgraphical information from very high resolution SATellite images (PTDC/GEO/64826/2006); e-GEO – Centro de Estudos de Geografia e Planeamento Regional, Faculdade de Ciências Sociais e Humanas, Grupo de Investigação Modelação Geográfica, Cidades e Ordenamento do Território.
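
    The building-change alarm idea in the second case study can be illustrated with a hedged sketch: subtract a terrain model from a LiDAR surface model to obtain a normalised DSM, then flag elevated blobs that fall outside the existing building layer. The thresholds, array names and the omission of a vegetation mask are assumptions, not the thesis workflow.

```python
import numpy as np
from scipy import ndimage

def potential_change_mask(dsm, dtm, footprints, min_height=2.5, min_pixels=20):
    """Label candidate new-building 'blobs'.

    dsm, dtm   : 2D elevation arrays (surface and terrain, in metres)
    footprints : boolean 2D array, True where buildings are already mapped
    Returns a labelled array where each candidate change area has its own id.
    """
    ndsm = dsm - dtm                               # normalised surface model
    elevated = (ndsm > min_height) & ~footprints   # tall objects outside known buildings
    labels, n = ndimage.label(elevated)
    sizes = ndimage.sum(elevated, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))
    return np.where(keep, labels, 0)

# Toy scene: flat terrain with one unmapped 30 x 30 pixel "building" 6 m tall
dtm = np.zeros((200, 200))
dsm = dtm.copy()
dsm[50:80, 50:80] = 6.0
alarms = potential_change_mask(dsm, dtm, np.zeros_like(dsm, dtype=bool))
print(int(alarms.max()), "candidate change area(s) flagged")
```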

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on the application of the combination of synthetic aperture radar and deep learning technology, and aims to further promote the development of SAR image intelligent interpretation technology. Synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather working capacity gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR, in various manuscript types, e.g., articles, letters, reviews and technical reports.

    Merging digital surface models sourced from multi-satellite imagery and their consequent application in automating 3D building modelling

    Recently, especially within the last two decades, the demand for DSMs (Digital Surface Models) and 3D city models has increased dramatically. This has arisen from the emergence of new applications beyond construction or analysis and, consequently, a focus on accuracy and cost. This thesis addresses two linked subjects: first, improving the quality of the DSM by merging different source DSMs using a Bayesian approach; and second, extracting building footprints using approaches including Bayesian approaches, and producing 3D models. Regarding the first topic, a probabilistic model has been generated based on the Bayesian approach in order to merge DSMs from different sensors. The Bayesian approach is well suited to cases where the data are limited, since this limitation can be compensated for by introducing a priori information. The implemented prior is based on the hypothesis that building roof outlines are smooth; for that reason, local entropy has been used to infer the a priori data. In addition to the a priori estimation, the quality of the DSMs is assessed using field checkpoints from differential GNSS. The validation results have shown that the model was able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. The developed model has also been compared with a Maximum Likelihood model, showing similar quantitative statistical results and better qualitative results. It is worth mentioning that, although the DSMs used in the merging were produced from satellite images, the model can be applied to any type of DSM. The second topic is building footprint extraction from satellite imagery. An efficient flow-line for automatic building footprint extraction and 3D model construction, from both stereo panchromatic and multispectral satellite imagery, was developed and applied in an area containing different building types, with both hipped and sloped roofs. The flow-line consists of multiple stages. First, data preparation: digital orthoimagery and DSMs are created from WorldView-1, and Pleiades imagery is used to create a vegetation mask. The orthoimagery then undergoes binary classification into ‘foreground’ (including buildings, shadows, open water, roads and trees) and ‘background’ (including grass, bare soil, and clay). From the foreground class, shadows and open water are removed after creating a shadow mask by thresholding the same orthoimagery. Likewise, roads are removed, for the time being, after interactively creating a mask using the orthoimagery. NDVI processing of the Pleiades imagery is used to create a mask for removing the trees. An ‘edge map’ is produced using Canny edge detection to define the exact building boundary outlines from enhanced orthoimagery. A normalised digital surface model (nDSM) is produced from the original DSM using smoothing and subtracting techniques. Second, building detection and extraction: buildings can be detected, in part, in the nDSM as isolated, relatively elevated ‘blobs’. These nDSM ‘blobs’ are uniquely labelled to identify rudimentary buildings, and each ‘blob’ is paired with its corresponding ‘foreground’ area from the orthoimagery. Each ‘foreground’ area is used as an initial building boundary, which is then vectorised and simplified. Some unnecessary details in the ‘edge map’, particularly on the roofs of the buildings, can be removed using mathematical morphology. Some building edges are not detected in the ‘edge map’ due to low contrast in parts of the orthoimagery, so the ‘edge map’ is subsequently further improved, also using mathematical morphology, leading to the ‘modified edge map’. Finally, a Bayesian approach is used to find the most probable coordinates of the building footprints, based on the ‘modified edge map’. The proposed a priori model for the footprint is a PDF which assumes that the most probable footprint angle at a corner is 90° and along an edge is 180°, with lower probability given to other angles such as 45° and 135°. The 3D model is constructed by extracting the elevation of the buildings from the DSM and combining it with the regularised building boundary. Validation, both quantitative and qualitative, has shown that the developed process and associated algorithms have successfully extracted building footprints and created 3D models.
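
    As a toy illustration of the kind of footprint angle prior described above (not the thesis's actual PDF; the Gaussian form, its width, and the preferred-angle set are assumptions), the sketch below scores a polygon by how close its vertex angles are to 90° or 180°, so a rectangle scores higher than a skewed quadrilateral.

```python
import numpy as np

def vertex_angles(polygon):
    """Interior angle (degrees) at each vertex of a closed 2D polygon
    given as an (N, 2) array of x, y coordinates."""
    prev = np.roll(polygon, 1, axis=0) - polygon    # vectors to previous vertices
    nxt = np.roll(polygon, -1, axis=0) - polygon    # vectors to next vertices
    cosang = np.sum(prev * nxt, axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def angle_prior(angles, preferred=(90.0, 180.0), sigma=10.0):
    """Un-normalised prior score: a mixture of Gaussians centred on the
    preferred footprint angles, so right angles and straight edges score
    highest and oblique corners (e.g. 45 or 135 degrees) score lower."""
    a = np.asarray(angles)[:, None]
    return np.exp(-0.5 * ((a - np.asarray(preferred)) / sigma) ** 2).max(axis=1).prod()

# Toy comparison: a rectangle versus a skewed quadrilateral
rect = np.array([[0, 0], [10, 0], [10, 5], [0, 5]], float)
skew = np.array([[0, 0], [10, 2], [9, 6], [1, 5]], float)
print(angle_prior(vertex_angles(rect)), angle_prior(vertex_angles(skew)))
```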

    Road Information Extraction from Mobile LiDAR Point Clouds using Deep Neural Networks

    Urban roads, as one of the essential transportation infrastructures, provide considerable motivation for rapid urban sprawl and bring notable economic and social benefits. Accurate and efficient extraction of road information plays a significant role in the development of autonomous vehicles (AVs) and high-definition (HD) maps. Mobile laser scanning (MLS) systems have been widely used for many transportation-related studies and applications in road inventory, including road object detection, pavement inspection, road marking segmentation and classification, and road boundary extraction, benefiting from their large-scale data coverage, high surveying flexibility, high measurement accuracy, and reduced weather sensitivity. Road information from MLS point clouds is significant for road infrastructure planning and maintenance, and has an important impact on transportation-related policymaking, driving behaviour regulation, and traffic efficiency enhancement. Compared to existing threshold-based and rule-based road information extraction methods, deep learning methods have demonstrated superior performance in 3D road object segmentation and classification tasks. However, three main challenges remain that impede deep learning methods from precisely and robustly extracting road information from MLS point clouds. (1) Point clouds obtained from MLS systems are always large-volume and irregular, which presents significant challenges for managing and processing such massive unstructured points. (2) Variations in point density and intensity are inevitable because of the profiling scanning mechanism of MLS systems. (3) Due to occlusions and the limited scanning range of onboard sensors, some road objects are incomplete, which considerably degrades the performance of threshold-based methods for extracting road information. To deal with these challenges, this doctoral thesis proposes several deep neural networks that encode inherent point cloud features and extract road information. These novel deep learning models have been tested on several datasets and deliver robust and accurate road information extraction results, compared to state-of-the-art deep learning methods, in complex urban environments. First, an end-to-end feature extraction framework for 3D point cloud segmentation is proposed using dynamic point-wise convolutional operations at multiple scales. This framework is less sensitive to data distribution and computational power. Second, a capsule-based deep learning framework to extract and classify road markings is developed to update road information and support HD maps. It demonstrates the practical application of combining capsule networks with hierarchical feature encodings of georeferenced feature images. Third, a novel deep learning framework for road boundary completion is developed using MLS point clouds and satellite imagery, based on the U-shaped network and the conditional deep convolutional generative adversarial network (c-DCGAN). Empirical evidence obtained from experiments compared with state-of-the-art methods demonstrates the superior performance of the proposed models in road object semantic segmentation, road marking extraction and classification, and road boundary completion tasks.
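
    To illustrate what per-point ("point-wise") convolution means in such segmentation networks, here is a generic PointNet-style baseline in PyTorch. This is an assumption-heavy sketch: the layer widths, class count and max-pooled global feature are illustrative, and it is not the thesis's dynamic point-wise convolution framework.

```python
import torch
import torch.nn as nn

class SharedMLPSegmenter(nn.Module):
    """Minimal PointNet-style segmenter: the same 1x1 convolution (a
    'shared MLP') is applied to every point independently, a global
    max-pooled feature is concatenated back to each point, and a final
    shared MLP predicts a per-point class label."""

    def __init__(self, in_dim=3, num_classes=5):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU())
        self.head = nn.Sequential(
            nn.Conv1d(128 + 128, 128, 1), nn.ReLU(),
            nn.Conv1d(128, num_classes, 1))

    def forward(self, xyz):                                   # xyz: (B, in_dim, N)
        feat = self.point_mlp(xyz)                            # (B, 128, N) per-point features
        global_feat = feat.max(dim=2, keepdim=True).values    # (B, 128, 1) scene context
        fused = torch.cat([feat, global_feat.expand_as(feat)], dim=1)
        return self.head(fused)                               # (B, num_classes, N) logits

# Toy forward pass on a random batch of 1024 points
logits = SharedMLPSegmenter()(torch.randn(2, 3, 1024))
print(logits.shape)   # torch.Size([2, 5, 1024])
```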

    Deep Learning for Building Footprint Generation from Optical Imagery

    Deep learning-based methods have shown promising results for the task of building footprint generation, but they have two inherent limitations. First, the extracted buildings exhibit blurred boundaries and blob-like shapes. Second, massive pixel-level annotations are required for network training. This dissertation has developed a series of methods to address the problems mentioned above. Furthermore, the developed methods are translated into practical applications.
    • 
