
    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast video datasets of high resolution. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence are enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights into datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high resolution topographic data. 
In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, in which river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate, contextually informed flood segmentation, this work demonstrates the potential value of satellite video for validating two-dimensional hydraulic model simulations. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows represents a methodological advance, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
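The LSPIV step described above rests on a simple idea: track the displacement of surface patterns between successive video frames and convert the pixel shift into a velocity. A minimal sketch of that core operation, using FFT-based cross-correlation, is shown below; this is illustrative only, and the thesis's actual processing chain (orthorectification, seeding, filtering) is not represented.

```python
import numpy as np

def lspiv_patch_velocity(frame_a, frame_b, dt_s, pixel_size_m):
    """Estimate the surface velocity of an image patch between two
    frames via FFT-based cross-correlation -- the core of LSPIV."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Circular cross-correlation: the peak location gives the pixel
    # shift of the surface pattern from frame_a to frame_b.
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts beyond half the patch size wrap around to negative values.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    # Pixel displacement over the frame interval -> velocity in m/s.
    return dx * pixel_size_m / dt_s, dy * pixel_size_m / dt_s
```

In practice this correlation is run per interrogation window over a grid, producing the surface velocity field that the hydraulic model is then calibrated against.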

    Satellite remote sensing of surface winds, waves, and currents: Where are we now?

    This review paper reports on the state of the art in observations of surface winds, waves, and currents from space and their use for scientific research and subsequent applications. The development of observations of sea-state parameters from space dates back to the 1970s, with a significant increase in the number and diversity of space missions since the 1990s. Sensors used to monitor sea-state parameters from space are mainly based on microwave techniques. They are either specifically designed to monitor surface parameters or are used for their ability to provide opportunistic measurements complementary to their primary purpose. The principles on which the estimation of sea surface parameters is based are first described, including the performance and limitations of each method. Numerous examples and references on the use of these observations for scientific and operational applications are then given. The richness and diversity of these applications are linked to the importance of knowledge of the sea state in many fields. Firstly, surface wind, waves, and currents are significant factors influencing exchanges at the air/sea interface, impacting oceanic and atmospheric boundary layers, contributing to sea level rise at the coasts, and interacting with sea-ice formation or destruction in the polar zones. Secondly, ocean surface currents combined with wind- and wave-induced drift contribute to the transport of heat, salt, and pollutants. Waves and surface currents also impact sediment transport and erosion in coastal areas. For operational applications, observations of surface parameters are necessary on the one hand to constrain the numerical solutions of predictive models (numerical wave, oceanic, or atmospheric models), and on the other hand to validate their results. 
In turn, these predictive models are used to guarantee safe, efficient, and successful offshore operations, including commercial shipping and the energy sector, as well as tourism and coastal activities. Long time series of global sea-state observations are also becoming increasingly important for analyzing the impact of climate change on our environment. All these aspects are recalled in the article, relating to both historical and contemporary activities in these fields.

    Automatic wide area land cover mapping using Sentinel-1 multitemporal data

    This study introduces a methodology for land cover mapping across extensive areas, utilizing multitemporal Sentinel-1 Synthetic Aperture Radar (SAR) data. The objective is to effectively process SAR data to extract spatio-temporal features that encapsulate the temporal patterns of various land cover classes. The paper outlines the approach for processing multitemporal SAR data and presents an innovative technique for selecting training points from an existing Medium Resolution Land Cover (MRLC) map. The methodology was tested across four distinct regions of interest, each spanning 100 × 100 km, located in Siberia, Italy, Brazil, and Africa. These regions were chosen to evaluate the methodology’s applicability in diverse climate environments. The study reports both qualitative and quantitative results, showcasing the validity of the proposed procedure and the potential of SAR data for land cover mapping. The experimental outcomes demonstrate an average increase of 16% in overall accuracy compared to existing global products. The results suggest that the presented approach holds promise for enhancing land cover mapping accuracy, particularly when applied to extensive areas with varying land cover classes and environmental conditions. The ability to leverage multitemporal SAR data for this purpose opens new possibilities for improving global land cover maps and their applications.
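The spatio-temporal features mentioned above can be illustrated with a minimal sketch: for each pixel, summarize the temporal profile of backscatter across the Sentinel-1 stack with a few statistics. The feature set used here (temporal mean, standard deviation, minimum, maximum) is an assumption for illustration; the study's actual feature extraction is not specified in the abstract.

```python
import numpy as np

def temporal_features(stack_db):
    """Per-pixel temporal statistics from a multitemporal SAR stack.

    stack_db: array of shape (T, H, W), backscatter in dB.
    Returns an (H, W, 4) feature cube -- simple descriptors of the
    temporal signature that helps separate land cover classes.
    """
    return np.stack([
        stack_db.mean(axis=0),  # average backscatter level
        stack_db.std(axis=0),   # temporal variability (e.g. crops vs. forest)
        stack_db.min(axis=0),   # seasonal low (e.g. bare soil, water)
        stack_db.max(axis=0),   # seasonal high
    ], axis=-1)
```

A per-pixel classifier is then trained on these features using the training points drawn from the MRLC map.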

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    This reprint focuses on applications that combine synthetic aperture radar with deep learning technology. It aims to further promote the development of SAR image intelligent interpretation technology. Synthetic aperture radar (SAR) is an important active microwave imaging sensor, whose all-day and all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in the remote sensing community, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles, and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to address these significant challenges and present their innovative and cutting-edge research results when applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews, and technical reports.

    A review of technical factors to consider when designing neural networks for semantic segmentation of Earth Observation imagery

    Semantic segmentation (classification) of Earth Observation imagery is a crucial task in remote sensing. This paper presents a comprehensive review of technical factors to consider when designing neural networks for this purpose. The review focuses on Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), and transformer models, discussing prominent design patterns for these ANN families and their implications for semantic segmentation. Common pre-processing techniques for ensuring optimal data preparation are also covered. These include methods for image normalization and chipping, strategies for addressing data imbalance in training samples, and techniques for overcoming limited data, including augmentation, transfer learning, and domain adaptation. By encompassing both the technical aspects of neural network design and the data-related considerations, this review provides researchers and practitioners with a comprehensive and up-to-date understanding of the factors involved in designing effective neural networks for semantic segmentation of Earth Observation imagery. (145 pages, 32 figures)
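Two of the pre-processing steps listed above, normalization and chipping, can be sketched in a few lines. The z-score scheme and the window/stride values below are illustrative choices, not prescriptions from the review.

```python
import numpy as np

def normalize_bands(img):
    """Per-band z-score normalization of an (H, W, C) image:
    each band is shifted to zero mean and scaled to unit variance,
    which stabilizes neural network training across sensors."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / np.maximum(std, 1e-8)

def chip(img, size, stride):
    """Cut an (H, W, C) image into square chips of side `size`,
    stepping by `stride` pixels (stride < size gives overlap).
    Returns an (N, size, size, C) array of training tiles."""
    chips = []
    for y in range(0, img.shape[0] - size + 1, stride):
        for x in range(0, img.shape[1] - size + 1, stride):
            chips.append(img[y:y + size, x:x + size])
    return np.stack(chips)
```

Chipping is needed because full Earth Observation scenes are far larger than the input size a segmentation network can ingest; overlapping strides also act as a mild form of augmentation.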

    BDS GNSS for Earth Observation

    For millennia, human communities have wondered about the possibility of observing phenomena in their surroundings, in particular those affecting the Earth on which they live. This activity can be conceptually defined as Earth observation (EO): the collection of information about the biological, chemical, and physical systems of planet Earth. It can be undertaken through sensors in direct contact with the ground, through airborne platforms (such as weather balloons and stations), or through remote-sensing technologies. However, EO has only become significant in the last 50 years, since it became possible to place artificial satellites in Earth orbit. Referring strictly to civil applications, satellites of this type were initially designed to provide satellite images; later, their purpose expanded to include the study of information on land characteristics, growing vegetation, crops, and environmental pollution. The data collected are used for several purposes, including the identification of natural resources and the production of accurate cartography. Satellite observations can cover the land, the atmosphere, and the oceans. Remote-sensing satellites may be equipped with passive instrumentation, such as infrared sensors or cameras imaging in the visible spectrum, or active instrumentation, such as radar. Generally, such satellites are non-geostationary: they move at a certain speed along orbits inclined with respect to the Earth’s equatorial plane, often polar orbits, at low or medium altitude (Low Earth Orbit, LEO, and Medium Earth Orbit, MEO), thus covering the entire Earth’s surface in a certain scan time (properly called ’temporal resolution’), i.e., in a certain number of orbits around the Earth. 
The first remote-sensing satellites belonged to the American NASA/USGS Landsat Program; subsequently, the European ENVISAT (ENVironmental SATellite), ERS (European Remote-Sensing satellite), and RapidEye, the French SPOT (Satellite Pour l’Observation de la Terre), and the Canadian RADARSAT satellites were launched. The IKONOS, QuickBird, and GeoEye-1 satellites were dedicated to cartography. The WorldView-1 and WorldView-2 satellites and the COSMO-SkyMed system are more recent. The latest generation are low-payload spacecraft called Small Satellites, e.g., the Chinese BuFeng-1 and Fengyun-3 series. Global Navigation Satellite Systems (GNSSs) have also captured the attention of researchers worldwide for a multitude of Earth monitoring and exploration applications. Over the past 40 years, GNSSs have become an essential part of many human activities. As is widely noted, there are currently four fully operational GNSSs; two of these were developed for military purposes (the American NAVSTAR GPS and the Russian GLONASS), whilst the other two were developed for civil purposes: the Chinese BeiDou satellite navigation system (BDS) and the European Galileo. In addition, other regional navigation satellite systems, such as the South Korean Positioning System (KPS), the Japanese Quasi-Zenith Satellite System (QZSS), and the Indian Regional Navigation Satellite System (IRNSS/NavIC), will become available in the next few years, with enormous potential for scientific applications and geomatics professionals. In addition to their traditional role of providing global positioning, navigation, and timing (PNT) information, GNSS navigation signals are now being used in new and innovative ways. Across the globe, new fields of scientific study are opening up to examine how these signals can provide information about the characteristics of the atmosphere and even of the surfaces from which they are reflected before being collected by a receiver. 
EO researchers monitor global environmental systems using in situ and remote monitoring tools. Their findings provide tools to support decision makers in various areas of interest, from security to the natural environment. GNSS signals are considered an important new source of information because they are a free, real-time, and globally available resource for the EO community.

    A Review of Selected Applications of GNSS CORS and Related Experiences at the University of Palermo (Italy)

    Services from the Continuously Operating Reference Stations (CORS) of the Global Navigation Satellite System (GNSS) provide data and insights to a range of research areas, including the physical sciences, engineering, earth and planetary sciences, computer science, and environmental science. Though varied, these fields are all linked through the operational application of GNSS. GNSS CORS have historically been deployed for three-dimensional positioning, but also for the establishment of local and global reference systems and the measurement of ionospheric and tropospheric errors. Beyond these studies, CORS are enabling new, emerging scientific applications. These include real-time monitoring of land subsidence via network real-time kinematics (NRTK) or precise point positioning (PPP), structural health monitoring (SHM), earthquake and volcanology monitoring, GNSS reflectometry (GNSS-R) for mapping soil moisture content, precision farming with affordable receivers, and zenith total delay to aid hydrology and meteorology. The flexibility of CORS infrastructure and services has paved the way for new research areas. The aim of this study is to present a curated selection of scientific papers on prevalent topics such as network monitoring, reference frames, and structure monitoring (such as dams), along with an evaluation of CORS performance. Concurrently, it reports on the scientific endeavours undertaken by the Geomatics Research Group at the University of Palermo in the realm of GNSS CORS over the past 15 years.

    An uncertainty prediction approach for active learning - application to earth observation

    Mapping land cover and land-use dynamics is crucial in remote sensing, since farmers are encouraged to either intensify or extend crop use due to the ongoing rise in the world’s population. A major issue in this area is interpreting and classifying a scene captured in high-resolution satellite imagery. Several methods have been put forth, including neural networks, which generate data-dependent models (i.e., the model is biased toward the data), and static rule-based approaches with thresholds, which are limited in terms of diversity (i.e., the model lacks diversity in terms of rules). However, the problem of having a machine learning model that, given a large amount of training data, can classify multiple classes over Sentinel-2 imagery of different geographic areas and outperform existing approaches remains open. On the other hand, supervised machine learning has evolved into an essential part of many areas due to the increasing number of labeled datasets. Examples include creating classifiers for applications that recognize images and voices, anticipate traffic, propose products, act as virtual personal assistants, and detect online fraud, among many more. Since these classifiers are highly dependent on the training datasets, without human interaction or accurate labels, the performance of these classifiers on unseen observations is uncertain. Thus, researchers have attempted to evaluate a number of independent models using a statistical distance. However, the problem of, given a train-test split and classifiers modeled over the train set, identifying a prediction error using the relation between train and test sets remains open. Moreover, while some training data is essential for supervised machine learning, what happens if there is insufficient labeled data? After all, assigning labels to unlabeled datasets is a time-consuming process that may require significant expert human involvement. 
When not enough expert manual labels are available for the vast amount of openly accessible data, active learning becomes crucial. However, given large training and unlabeled datasets, an active learning model that can reduce the training cost of the classifier and at the same time assist in labeling new data points remains an open problem. From the experimental approaches and findings, the main research contributions, which concentrate on the issue of optical satellite image scene classification, include: building labeled Sentinel-2 datasets with surface reflectance values; machine learning models for pixel-based image scene classification; a statistical-distance-based Evidence Function Model (EFM) to detect ML model misclassification; and a generalised sampling approach for active learning that, together with the EFM, enables a way of determining the most informative examples. Firstly, using a manually annotated Sentinel-2 dataset, Machine Learning (ML) models for scene classification were developed and their performance compared to Sen2Cor, the reference package from the European Space Agency: a micro-F1 value of 84% was attained by the ML model, a significant improvement over the corresponding Sen2Cor performance of 59%. Secondly, to quantify the misclassification of the ML models, the Mahalanobis-distance-based EFM was devised. This model achieved, for the labeled Sentinel-2 dataset, a micro-F1 of 67.89% for misclassification detection. Lastly, the EFM was engineered as a sampling strategy for active learning, leading to an approach that attains the same level of accuracy with only 0.02% of the total training samples when compared to a classifier trained with the full training set. 
With the help of the above-mentioned research contributions, we were able to provide an open-source Sentinel-2 image scene classification package consisting of ready-to-use Python scripts and an ML model that classifies Sentinel-2 L1C images, generating a 20 m-resolution RGB image with the six studied classes (Cloud, Cirrus, Shadow, Snow, Water, and Other), giving academics a straightforward method for rapidly and effectively classifying Sentinel-2 scene images. Additionally, an active learning approach that uses, as its sampling strategy, the prediction uncertainty given by the EFM allows labeling only the most informative points to be used as input to build classifiers.
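The abstract does not give the EFM's internals, so the following is a hedged sketch of the general idea only: score each prediction by the Mahalanobis distance between the sample and the training-set statistics of its predicted class, then treat the highest-scoring (least evidence-supported) samples as the most informative active-learning queries. The function names and the ranking design are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of feature vector x from a class
    distribution with the given mean and inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def rank_for_labeling(X_unlab, pred, class_stats):
    """Rank unlabeled samples for active learning.

    X_unlab:     (N, D) unlabeled feature vectors
    pred:        predicted class label for each sample
    class_stats: {class: (mean, inv_covariance)} fitted on the train set

    Samples far from the centroid of their predicted class are the
    ones the training evidence least supports -- likely
    misclassifications, hence the most informative to label next.
    """
    scores = np.array([
        mahalanobis(x, *class_stats[c]) for x, c in zip(X_unlab, pred)
    ])
    return np.argsort(scores)[::-1]  # most uncertain first
```

Labeling only the top of this ranking is what lets the classifier reach full-training-set accuracy from a small fraction of the samples.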

    InSAR‐Derived Horizontal Velocities in a Global Reference Frame

    Interferometric Synthetic Aperture Radar (InSAR) is used to measure deformation rates over continents to constrain tectonic processes. The resulting velocity measurements are only relative, however, due to unknown integer ambiguities introduced during propagation of the signal through the atmosphere. These ambiguities mostly cancel when using spectral diversity to estimate along-track motion, allowing measurements to be made with respect to a global reference frame. Here, we calculate along-track velocities for a partial global data set of Sentinel-1 acquisitions and find good agreement with the ITRF2014 plate motion model and with measurements from GPS stations. We include corrections for solid-Earth tides and for gradients of ionospheric total electron content. Combining data from ascending and descending orbits, we are able to estimate north and east velocities over 250 × 250 km areas with accuracies of 4 and 23 mm/year, respectively. These “absolute” measurements can be particularly useful for global velocity and strain rate estimation where GNSS measurements are sparse.
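The ascending/descending combination can be illustrated with a small sketch: each along-track measurement is the projection of the horizontal velocity onto the satellite heading, so two passes with different headings give a 2×2 linear system for the north and east components. The heading angles below are illustrative values for a Sentinel-1-like geometry, not taken from the paper.

```python
import numpy as np

def ne_from_along_track(v_asc, v_desc,
                        head_asc_deg=-13.0, head_desc_deg=-167.0):
    """Recover north/east velocity from along-track measurements on
    ascending and descending passes.

    Each measurement is the projection of the horizontal velocity
    onto the satellite heading (clockwise from north):
        v_at = v_n * cos(heading) + v_e * sin(heading)
    The two headings are illustrative, assumed values.
    """
    h = np.radians([head_asc_deg, head_desc_deg])
    A = np.column_stack([np.cos(h), np.sin(h)])  # 2x2 design matrix
    v_n, v_e = np.linalg.solve(A, [v_asc, v_desc])
    return float(v_n), float(v_e)
```

Because the two headings are close to north-south mirror images, the system is well conditioned for the north component but amplifies noise in the east component, which is broadly consistent with the differing north and east accuracies reported in the abstract.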