
    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a valuable opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are then used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice.
The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
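The image velocimetry step described above can be illustrated with a minimal cross-correlation sketch in Python/NumPy. This is a generic illustration of the LSPIV principle, not the thesis workflow; the window size, search margin, frame interval and ground sampling distance below are hypothetical values.

```python
# Minimal LSPIV sketch: track one surface patch between two consecutive video
# frames by zero-mean cross-correlation. Window size, search margin, frame
# interval (dt) and ground sampling distance (gsd) are illustrative only.
import numpy as np
from scipy.signal import correlate2d

def lspiv_velocity(frame_a, frame_b, row, col, win=32, margin=16,
                   dt=0.04, gsd=0.05):
    """Estimate the surface velocity (m/s) at one interrogation window.

    frame_a, frame_b : consecutive grayscale frames as 2-D float arrays
    row, col         : top-left corner of the window in frame_a (must lie at
                       least `margin` pixels inside the frame)
    win              : interrogation window size in pixels
    margin           : search margin around the window in frame_b (pixels)
    dt               : time between frames (s), e.g. 0.04 s for 25 fps video
    gsd              : ground sampling distance (m per pixel)
    """
    window = frame_a[row:row + win, col:col + win]
    search = frame_b[row - margin:row + win + margin,
                     col - margin:col + win + margin]

    # Zero-mean cross-correlation; the peak gives the most likely displacement.
    corr = correlate2d(search - search.mean(), window - window.mean(),
                       mode="valid")
    di, dj = np.unravel_index(np.argmax(corr), corr.shape)

    # Peak offset (pixels) relative to zero displacement, converted to metres.
    dy, dx = (di - margin) * gsd, (dj - margin) * gsd
    return dx / dt, dy / dt   # velocity components along image columns / rows
```

In a full LSPIV workflow this step would be repeated over a grid of windows on stabilised, orthorectified frames, and the resulting velocity field filtered before being compared with hydraulic model output.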

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops and journals since the dissemination of the fourth volume in 2015, or they are new. The contributions of each part of this volume are chronologically ordered. The first part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the publication of the fourth volume in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents contributions related to belief functions in general that have been published or presented since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
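For readers unfamiliar with the PCR rules that recur throughout the volume, the standard two-source form of PCR5 keeps the conjunctive consensus and redistributes each partial conflict back to the masses that produced it. It is recalled below as a generic statement of the published rule, not as a contribution of this volume.

```latex
% Two-source PCR5 combination of basic belief assignments m_1, m_2 over a
% frame of discernment Theta: conjunctive consensus plus proportional
% redistribution of every partial conflict to the masses involved in it.
\[
m_{\mathrm{PCR5}}(X) \;=\;
\sum_{\substack{X_1, X_2 \in 2^{\Theta} \\ X_1 \cap X_2 = X}} m_1(X_1)\, m_2(X_2)
\;+\;
\sum_{\substack{Y \in 2^{\Theta} \\ X \cap Y = \emptyset}}
\left[
\frac{m_1(X)^2\, m_2(Y)}{m_1(X) + m_2(Y)}
+
\frac{m_2(X)^2\, m_1(Y)}{m_2(X) + m_1(Y)}
\right],
\qquad X \in 2^{\Theta} \setminus \{\emptyset\},
\]
% with m_PCR5(empty set) = 0, and each fraction taken as zero whenever its
% denominator vanishes.
```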

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. Comment: 145 pages with 32 figures
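As an illustration of the pre-processing steps mentioned above (normalization and chipping), the sketch below shows one common way to prepare an Earth Observation scene for a segmentation network. The chip size, overlap and percentile-based scaling are hypothetical choices, not prescriptions from the review.

```python
# Sketch of two common EO pre-processing steps for semantic segmentation:
# per-band normalization and chipping a large scene into fixed-size tiles.
# Chip size, stride and the 2nd/98th percentile stretch are illustrative.
import numpy as np

def normalize_bands(scene, low_pct=2, high_pct=98):
    """Rescale each band of a (bands, H, W) array to [0, 1] using percentiles,
    which is more robust to sensor outliers than a plain min-max stretch."""
    out = np.empty_like(scene, dtype=np.float32)
    for b, band in enumerate(scene):
        lo, hi = np.percentile(band, [low_pct, high_pct])
        out[b] = np.clip((band - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    return out

def chip_scene(scene, chip=256, stride=128):
    """Cut a (bands, H, W) scene into overlapping (bands, chip, chip) tiles.
    Overlap (stride < chip) reduces edge artefacts at inference time."""
    _, h, w = scene.shape
    chips = []
    for r in range(0, h - chip + 1, stride):
        for c in range(0, w - chip + 1, stride):
            chips.append(scene[:, r:r + chip, c:c + chip])
    return np.stack(chips)

# Example: a synthetic 4-band 1024x1024 scene -> 49 normalized, overlapping chips.
scene = np.random.randint(0, 10000, size=(4, 1024, 1024)).astype(np.float32)
tiles = chip_scene(normalize_bands(scene))
```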

    Estimating Solar Energy Production in Urban Areas for Electric Vehicles

    Cities have a high potential for solar energy from PVs installed on buildings' rooftops. There is an increased demand for solar energy in cities to reduce the negative effects of climate change. The thesis investigates solar energy potential in urban areas. It seeks to determine how to detect and identify available rooftop areas, how to calculate suitable ones after excluding the effects of shade, and how to estimate the energy generated from PVs. Geographic Information Sciences (GIS) and Remote Sensing (RS) are used in solar city planning. The goal of this research is to assess available and suitable rooftop areas for installing PVs using different GIS and RS techniques, to estimate solar energy production for a sample of six compounds in New Cairo, and to explore how to map urban areas on the city scale. The study area is New Cairo city, which has a high potential for harvesting solar energy; buildings in each compound have the same height, so they do not cast shade on other buildings and reduce PV efficiency. When applying GIS and RS techniques in New Cairo city, it is found that environmental factors, such as bare soil, affect the accuracy of the result, which reached 67% on the city scale. Working at finer scales, such as individual compounds, required Very High Resolution (VHR) satellite images with a spatial resolution of up to 0.5 m. The RS techniques applied in this research included supervised classification and feature extraction on Pleiades-1b VHR imagery. On the compound scale, the accuracy assessment for the samples ranged between 74.6% and 96.875%. Estimating the PV energy production requires solar data, which were collected over three years using a weather station and a pyranometer at the American University in Cairo, a site typical of the neighboring compounds in the New Cairo region. The Hay-Davies, Klucher, and Reindl (HDKR) model is then employed to extrapolate the solar radiation measured on horizontal surfaces (β = 0°) to that on tilted surfaces with inclination angles β = 10°, 20°, 30° and 45°. The net rooftop area available for capturing solar radiation was calculated (with the help of GIS and solar radiation models) for the sample New Cairo compounds. The available rooftop areas were subject to the restrictions that all the PVs would be coplanar, that none of the PVs would protrude outside the rooftop boundaries, and that no shading of PVs would occur at any time of the year; moreover, typically occupied rooftop areas and the actual dimensions of typical rooftop PVs were taken into consideration. From those calculations, both the realistic total annual electrical energy produced by the PVs and their daily energy production by month are deduced. The former is relevant if the PVs are tied to the grid, whereas the latter is more relevant if they are not; the optimization differs between the two cases. Results were extended to estimate, for different scenarios, the number of cars per home that may be driven off PV-converted solar radiation.
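The HDKR transposition step mentioned above combines beam, circumsolar, isotropic-diffuse, horizon-brightening and ground-reflected contributions on the tilted plane. The sketch below follows the commonly cited textbook form of the model (as presented, for example, by Duffie and Beckman); the irradiance inputs, beam ratio and ground albedo are hypothetical, and the thesis' own implementation may differ.

```python
# Sketch of the HDKR (Hay-Davies-Klucher-Reindl) transposition model: estimate
# total irradiance on a plane tilted at beta degrees from horizontal beam and
# diffuse measurements. Textbook formulation; all input values are hypothetical.
from math import radians, sin, cos, sqrt

def hdkr_tilted_irradiance(i_beam, i_diff, i_extra, r_b, beta_deg, albedo=0.2):
    """Total irradiance on the tilted surface (W/m^2).

    i_beam  : beam (direct) irradiance on the horizontal plane
    i_diff  : diffuse irradiance on the horizontal plane
    i_extra : extraterrestrial irradiance on the horizontal plane
    r_b     : ratio of beam irradiance on the tilted vs horizontal plane
    albedo  : ground reflectance (0.2 is a common default assumption)
    """
    beta = radians(beta_deg)
    i_glob = i_beam + i_diff                        # global horizontal irradiance
    a_i = i_beam / i_extra                          # anisotropy index
    f = sqrt(i_beam / i_glob) if i_glob > 0 else 0  # horizon-brightening factor

    beam_and_circumsolar = (i_beam + i_diff * a_i) * r_b
    isotropic_diffuse = (i_diff * (1 - a_i) * (1 + cos(beta)) / 2
                         * (1 + f * sin(beta / 2) ** 3))
    ground_reflected = i_glob * albedo * (1 - cos(beta)) / 2
    return beam_and_circumsolar + isotropic_diffuse + ground_reflected

# Hypothetical mid-day values for a 30-degree tilt:
print(hdkr_tilted_irradiance(i_beam=600, i_diff=200, i_extra=1100,
                             r_b=1.05, beta_deg=30))
```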

    Investigating the role of UAVs and convolutional neural networks in the identification of invasive plant species in the Albany Thicket

    The study aimed to determine whether plant species could be classified by using high-resolution aerial imagery and a convolutional neural network (CNN). The full capabilities of a CNN were examined, including testing whether the approach could be used for land cover mapping and the evaluation of land change over time. A drone or unmanned aerial vehicle (UAV) was used to collect the aerial data of the study area, and 45 subplots were used for the image analysis. The CNN was coded and operated in RStudio, and digitised data from the input imagery were used as training and validation data by the programme to learn features. Four classifications were performed using various quantities of input data to assess the performance of the neural network. In addition, tests were performed to understand whether the CNN could be used as a land cover and land change detection tool. Accuracy assessments were performed on the results to test their reliability. The best-performing classification achieved average user and producer accuracies above 90%, an overall accuracy of 93%, and a kappa coefficient of 0.86. The CNN was also able to predict the land coverage area of Opuntia to within 4% of the ground-truthing data area. A change in land cover over time was detected by the programme after the manual clearing of the invasive plant had been undertaken. This research has determined that the use of a CNN in remote sensing is a very powerful tool for supervised image classification and that it can be used for monitoring land cover by accurately estimating the spatial distribution of plant species and by monitoring the species' growth or decline over time. A CNN could also be used as a tool for landowners to prove that they are making efforts to clear invasive species from their land. Thesis (MSc) -- Faculty of Science, School of Environmental Sciences, 202
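The accuracy figures quoted above (overall accuracy, user and producer accuracy, and the kappa coefficient) are all derived from the classification's confusion matrix. The sketch below shows the standard calculations on a hypothetical two-class matrix; it is an illustration of the metrics, not the study's own assessment code.

```python
# Standard accuracy metrics from a confusion matrix (rows = reference classes,
# columns = predicted classes). The 2x2 matrix below is hypothetical.
import numpy as np

cm = np.array([[180, 12],    # reference class 0 (e.g. Opuntia)
               [  9, 99]])   # reference class 1 (e.g. other cover)

n = cm.sum()
overall = np.trace(cm) / n                # overall accuracy
producer = np.diag(cm) / cm.sum(axis=1)   # per-class, 1 - omission error
user = np.diag(cm) / cm.sum(axis=0)       # per-class, 1 - commission error

# Cohen's kappa: agreement beyond what the class proportions would give by chance.
expected = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
kappa = (overall - expected) / (1 - expected)

print(f"overall={overall:.3f}, producer={producer}, user={user}, kappa={kappa:.3f}")
```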

    Image Segmentation Approaches Applied for the Earth's Surface

    An analytical review of papers on remote sensing, and of the semantic segmentation and classification methods used to process these data, is carried out. Approaches such as template matching-based methods, machine learning and neural networks, as well as the application of knowledge about the analyzed objects, are considered. The use of vegetation indices for segmenting satellite images is also examined. Advantages and disadvantages are noted, and recommendations are given for a more accurate classification of the detected areas in the image sequence.

    An uncertainty prediction approach for active learning - application to earth observation

    Mapping land cover and land use dynamics is crucial in remote sensing, since farmers are encouraged to either intensify or extend crop use due to the ongoing rise in the world's population. A major issue in this area is interpreting and classifying a scene captured in high-resolution satellite imagery. Several methods have been put forth, including neural networks, which generate data-dependent models (i.e. the model is biased toward the data), and static rule-based approaches with thresholds, which are limited in terms of diversity (i.e. the model lacks diversity in terms of rules). However, the problem of having a machine learning model that, given a large amount of training data, can classify multiple classes over Sentinel-2 imagery from different geographic regions at a scale beyond existing approaches remains open. On the other hand, supervised machine learning has evolved into an essential part of many areas due to the increasing number of labeled datasets. Examples include creating classifiers for applications that recognize images and voices, anticipate traffic, propose products, act as virtual personal assistants and detect online fraud, among many more. Since these classifiers are highly dependent on the training datasets, without human interaction or accurate labels, the performance of the generated classifiers on unseen observations is uncertain. Thus, researchers have attempted to evaluate a number of independent models using a statistical distance. However, the problem of, given a train-test split and classifiers modeled over the training set, identifying a prediction error using the relation between the training and test sets remains open. Moreover, while some training data is essential for supervised machine learning, what happens if there is insufficient labeled data? After all, assigning labels to unlabeled datasets is a time-consuming process that may need significant expert human involvement. When there are not enough expert manual labels accessible for the vast amount of openly available data, active learning becomes crucial. However, given large training and unlabeled datasets, having an active learning model that can reduce the training cost of the classifier and at the same time assist in labeling new data points remains an open problem. From the experimental approaches and findings, the main research contributions, which concentrate on the issue of optical satellite image scene classification, include: building labeled Sentinel-2 datasets with surface reflectance values; the proposal of machine learning models for pixel-based image scene classification; the proposal of a statistical distance-based Evidence Function Model (EFM) to detect ML model misclassification; and the proposal of a generalised sampling approach for active learning that, together with the EFM, enables a way of determining the most informative examples. Firstly, using a manually annotated Sentinel-2 dataset, Machine Learning (ML) models for scene classification were developed and their performance was compared to Sen2Cor, the reference package from the European Space Agency: a micro-F1 value of 84% was attained by the ML model, which is a significant improvement over the corresponding Sen2Cor performance of 59%. Secondly, to quantify the misclassification of the ML models, the Mahalanobis distance-based EFM was devised. This model achieved, for the labeled Sentinel-2 dataset, a micro-F1 of 67.89% for misclassification detection.
Lastly, the EFM was engineered as a sampling strategy for active learning, leading to an approach that attains the same level of accuracy with only 0.02% of the total training samples when compared to a classifier trained with the full training set. With the help of the above-mentioned research contributions, we were able to provide an open-source Sentinel-2 image scene classification package which consists of ready-to-use Python scripts and an ML model that classifies Sentinel-2 L1C images, generating a 20 m-resolution RGB image with the six studied classes (Cloud, Cirrus, Shadow, Snow, Water, and Other), giving academics a straightforward method for rapidly and effectively classifying Sentinel-2 scene images. Additionally, an active learning approach that uses, as its sampling strategy, the observed prediction uncertainty given by the EFM will allow labeling only the most informative points to be used as input to build classifiers.
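The Evidence Function Model itself is specific to this thesis, but its two building blocks, a Mahalanobis distance between new observations and a class distribution fitted on training data, and uncertainty-driven selection of the samples to label next, can be illustrated generically. The sketch below is such an illustration, not the thesis' EFM; the class statistics, synthetic data and labeling budget are hypothetical.

```python
# Generic sketch: (1) Mahalanobis distance of unlabeled observations to a class
# distribution estimated from training data, and (2) using that distance as an
# uncertainty score to pick the most informative points for annotation.
# This is NOT the thesis' EFM; all numbers are hypothetical.
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of each row of x to the distribution (mean, cov)."""
    d = x - mean
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

rng = np.random.default_rng(0)
train = rng.normal(loc=0.3, scale=0.05, size=(500, 4))     # e.g. band reflectances
unlabeled = rng.normal(loc=0.3, scale=0.15, size=(1000, 4))

mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

# A larger distance means the fitted class model explains the observation
# poorly, so its prediction is less trustworthy and labeling it is more useful.
scores = mahalanobis(unlabeled, mean, cov_inv)
budget = 20                                  # hypothetical labeling budget
query_idx = np.argsort(scores)[-budget:]     # most informative samples to label
print(query_idx[:5], scores[query_idx].min())
```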

    Remote sensing-based assessment of mangrove ecosystems in the Gulf Cooperation Council countries: a systematic review

    Mangrove forests in the Gulf Cooperation Council (GCC) countries are facing multiple threats from natural and anthropogenic land use change stressors, contributing to altered ecosystem conditions. Remote sensing tools can be used to monitor mangroves and to measure mangrove forest- and tree-level attributes and vegetation indices at different spatial and temporal scales, allowing a detailed and comprehensive understanding of these important ecosystems. Using a systematic literature review approach, we reviewed 58 remote sensing-based mangrove assessment articles published from 2010 through 2022. The main objectives of the study were to examine the extent of mangrove distribution and cover, and the remotely sensed data sources used to assess mangrove forest/tree attributes. The key importance of and threats to mangroves that were specific to the region were also examined. Mangrove distribution and cover were mainly estimated from satellite images (75.2%), using NDVI (Normalized Difference Vegetation Index) derived from Landsat (73.3%), IKONOS (15%), Sentinel (11.7%), WorldView (10%), QuickBird (8.3%), SPOT-5 (6.7%), MODIS (5%) and others (5%) such as PlanetScope. Remotely sensed data from aerial photographs/images (6.7%), LiDAR (Light Detection and Ranging) (5%) and UAVs (Unmanned Aerial Vehicles)/drones (3.3%) were the least used. Mangrove cover decreased in Saudi Arabia, Oman, Bahrain, and Kuwait between 1996 and 2020. However, mangrove cover increased appreciably in Qatar and remained relatively stable for the United Arab Emirates (UAE) over the same period, which was attributed to government conservation initiatives toward expanding mangrove afforestation and restoration through direct seeding and seedling planting. The reported country-level mangrove distribution and cover change results varied between studies due to the lack of a standardized methodology and differences in satellite imagery resolution and the classification approaches used. There is a need for UAV-LiDAR ground truthing to validate country- and local-level satellite data. Urban development-driven coastal land reclamation and pollution, climate change-driven temperature and sea level rise, and drought and hypersalinity from extreme evaporation are serious threats to mangrove ecosystems. Thus, we encourage the prioritization of mangrove conservation and restoration schemes to support the achievement of the related UN Sustainable Development Goals (13: Climate Action, 14: Life Below Water, and 15: Life on Land) in the GCC countries.
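Most of the mapping studies reviewed rely on NDVI computed from red and near-infrared reflectance. A minimal sketch of the index and a simple threshold-based vegetation mask is given below; the synthetic bands and the 0.3 threshold are illustrative assumptions, not values recommended by the review.

```python
# NDVI from red and near-infrared reflectance bands, followed by a simple
# threshold mask. Operational mangrove mapping combines NDVI with other
# indices and classifiers; the values here are illustrative only.
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float32)
    nir = nir.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)

# Hypothetical reflectance tiles (e.g. Landsat surface reflectance scaled 0-1).
red = np.random.uniform(0.02, 0.25, size=(512, 512))
nir = np.random.uniform(0.05, 0.60, size=(512, 512))

index = ndvi(red, nir)
vegetation_mask = index > 0.3          # crude vegetated / non-vegetated split
print(f"vegetated fraction: {vegetation_mask.mean():.2%}")
```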