417 research outputs found

    Unsupervised multi-scale change detection from SAR imagery for monitoring natural and anthropogenic disasters

    Thesis (Ph.D.), University of Alaska Fairbanks, 2017.
    Radar remote sensing can play a critical role in the operational monitoring of natural and anthropogenic disasters. Despite its all-weather capabilities and its high performance in mapping and monitoring change, the application of radar remote sensing in operational monitoring activities has been limited. This has largely been due to: (1) the historically high costs associated with obtaining radar data; (2) slow data processing and delivery procedures; and (3) the limited temporal sampling provided by spaceborne radar satellites. Recent advances in the capabilities of spaceborne Synthetic Aperture Radar (SAR) sensors have created an environment that now allows SAR to make significant contributions to disaster monitoring. New SAR processing strategies that can take full advantage of these new sensor capabilities are currently being developed. Hence, with this PhD dissertation, I aim to: (i) investigate unsupervised change detection techniques that can reliably extract change signatures from time series of SAR images and provide the flexibility necessary for application to a variety of natural and anthropogenic hazard situations; (ii) investigate effective methods to reduce the effects of speckle and other noise on change detection performance; (iii) automate change detection algorithms using probabilistic Bayesian inference; and (iv) ensure that the developed technology is applicable to current and future SAR sensors to maximize temporal sampling of a hazardous event. This is achieved by developing new algorithms that rely on image amplitude information only, the sole image parameter that is available for every SAR acquisition. The motivation and implementation of the change detection concept are described in detail in Chapter 3. In the same chapter, I demonstrate the technique's performance using synthetic data as well as a real-data application to mapping wildfire progression.
I applied Radiometric Terrain Correction (RTC) to the data so that acquisitions from different imaging geometries could be combined, increasing the sampling frequency, while the developed multiscale-driven approach reliably identified changes embedded in largely stationary background scenes. With this technique, I was able to identify the extent of burn scars with high accuracy. I then applied the change detection technology to oil spill mapping. The analysis highlights that the approach described in Chapter 3 can be applied to this drastically different change detection problem with only minor modification. While the core of the change detection technique remained unchanged, I modified the pre-processing step to enable change detection from scenes with continuously varying backgrounds. I introduced the Lipschitz regularity (LR) transformation as a technique to normalize the typically dynamic ocean surface, facilitating high-performance oil spill detection independent of the environmental conditions during image acquisition. For instance, I show that LR processing reduces the sensitivity of change detection performance to variations in surface winds, a known limitation of oil spill detection from SAR. Finally, I applied the change detection technique to aufeis flood mapping along the Sagavanirktok River. Due to the complex nature of aufeis-flooded areas, I substituted the resolution-preserving speckle filter used in Chapter 3 with curvelet filters. In addition to validating the performance of the change detection results, I also provide evidence of the wealth of information that can be extracted about aufeis flooding events once a time series of change detection information has been extracted from SAR imagery. Chapter 6 summarizes the developed change detection techniques and presents suggested future work.
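The amplitude-only core of such an approach can be illustrated with a minimal sketch: a log-ratio of two co-registered, despeckled SAR amplitude images followed by a simple global threshold. The function names and the k-sigma threshold below are illustrative stand-ins; the dissertation's actual method adds multiscale analysis and Bayesian threshold selection.

```python
import numpy as np

def log_ratio(amp_pre, amp_post, eps=1e-6):
    """Log-ratio change indicator between two co-registered SAR amplitude images."""
    return np.abs(np.log((amp_post + eps) / (amp_pre + eps)))

def detect_changes(amp_pre, amp_post, k=3.0):
    """Flag pixels whose log-ratio exceeds the scene mean by k standard deviations.

    A simple stand-in for the Bayesian threshold selection described in the text.
    """
    lr = log_ratio(amp_pre, amp_post)
    return lr > lr.mean() + k * lr.std()
```

The log-ratio is the classical amplitude change statistic for SAR because it turns multiplicative speckle into an additive term and is symmetric with respect to increases and decreases in backscatter.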

    HED-UNet: Combined Segmentation and Edge Detection for Monitoring the Antarctic Coastline

    Deep learning-based coastline detection algorithms have begun to outshine traditional statistical methods in recent years. However, they are usually trained only as single-purpose models, to either segment land and water or delineate the coastline. In contrast, a human annotator will usually keep a mental map of both segmentation and delineation when performing manual coastline detection. To account for this task duality, we devise a new model that unites these two approaches within a single deep learning framework. By taking inspiration from the main building blocks of a semantic segmentation framework (UNet) and an edge detection framework (HED), both tasks are combined in a natural way. Training is made efficient by employing deep supervision on side predictions at multiple resolutions. Finally, a hierarchical attention mechanism is introduced to adaptively merge these multiscale predictions into the final model output. The advantages of this approach over other traditional and deep learning-based methods for coastline detection are demonstrated on a dataset of Sentinel-1 imagery covering parts of the Antarctic coast, where coastline detection is notoriously difficult. An implementation of our method is available at \url{https://github.com/khdlr/HED-UNet}.
    Comment: This work has been accepted by IEEE TGRS for publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
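The hierarchical merging step can be sketched in a few lines of numpy: side predictions at several resolutions are upsampled to the finest grid and combined with a per-pixel softmax over scale-wise attention weights. This is a schematic illustration only; in the actual model both the side predictions and the attention logits are learned, and upsampling is part of the network rather than nearest-neighbour.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a 2D map by an integer factor."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def attention_merge(side_preds, attn_logits):
    """Merge multiscale side predictions with per-pixel softmax attention.

    side_preds / attn_logits: lists of square 2D maps, finest first; each
    coarser map's size must evenly divide the finest resolution.
    """
    full = side_preds[0].shape[0]
    preds = np.stack([upsample_nearest(p, full // p.shape[0]) for p in side_preds])
    logits = np.stack([upsample_nearest(a, full // a.shape[0]) for a in attn_logits])
    w = np.exp(logits - logits.max(axis=0, keepdims=True))  # per-pixel softmax over scales
    w /= w.sum(axis=0, keepdims=True)
    return (w * preds).sum(axis=0)
```

Because the weights are computed per pixel, the merge can favour coarse, context-rich predictions in homogeneous regions while trusting fine-scale predictions near the coastline edge.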

    Enhancing spatial resolution of remotely sensed data for mapping freshwater environments

    Freshwater environments are important for ecosystem services and biodiversity. These environments are subject to many natural and anthropogenic changes that influence their quality; therefore, regular monitoring is required for their effective management. High biotic heterogeneity, elongated land/water interaction zones, and logistic difficulties with access make field-based monitoring on a large scale expensive, inconsistent, and often impractical. Remote sensing (RS) is an established mapping tool that overcomes these barriers. However, complex and heterogeneous vegetation and spectral variability due to water make freshwater environments challenging to map using remote sensing technology. Satellite images available for New Zealand were reviewed in terms of cost and spectral and spatial resolution. Particularly promising image data sets for freshwater mapping include QuickBird (QB) and SPOT-5. However, for mapping freshwater environments a combination of images is required to obtain high spatial, spectral, radiometric, and temporal resolution. Data fusion (DF) is a framework of data processing tools and algorithms that combines images to improve spectral and spatial qualities. A range of DF techniques were reviewed and tested for performance using panchromatic and multispectral QB images of a semi-aquatic environment on the southern shores of Lake Taupo, New Zealand. In order to discuss the mechanics of different DF techniques, a classification consisting of three groups was used: (i) spatially-centric, (ii) spectrally-centric, and (iii) hybrid. Subtract resolution merge (SRM) is a hybrid technique, and this research demonstrated that for a semi-aquatic QB image it outperformed the Brovey transformation (BT), principal component substitution (PCS), local mean and variance matching (LMVM), and optimised high pass filter addition (OHPFA).
However, some limitations were identified with SRM, including the requirement for predetermined band weights and the over-representation of spatial edges in the NIR bands due to their high spectral variance. This research developed three modifications to the SRM technique that addressed these limitations. These were tested on QB, SPOT-5, and Vexcel aerial digital images, as well as a scanned colour aerial photograph. A visual qualitative assessment and a range of spectral and spatial quantitative metrics were used to evaluate these modifications, including spectral correlation and root mean squared error (RMSE), Sobel filter based spatial edge RMSE, and unsupervised classification. The first modification addressed the issue of predetermined spectral weights and explored two alternative regression methods, Least Absolute Deviation (LAD) and Ordinary Least Squares (OLS), to derive image-specific band weights for use in SRM. Both methods were found to be equally effective; however, OLS was preferred as it computed band weights more efficiently than LAD. The second modification used a pixel block averaging function on high resolution panchromatic images to derive spatial edges for data fusion. This eliminated the need for spectral band weights, minimised spectral infidelity, and enabled the fusion of multi-platform data. The third modification addressed the issue of over-represented spatial edges by introducing a contrast and luminance index to develop a new normalising function. This improved the spatial representation of the NIR band, which is particularly important for mapping vegetation. A combination of the second and third modifications of SRM was effective in simultaneously minimising the overall spectral infidelity and the undesired spatial errors for the NIR band of the fused image.
This new method has been labelled Contrast and Luminance Normalised (CLN) data fusion, and has been demonstrated to make a significant contribution to the fusion of multi-platform, multi-sensor, multi-resolution, and multi-temporal data. This contributes to improvements in the classification and monitoring of freshwater environments using remote sensing.
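The second modification described above, deriving the spatial edge layer by pixel-block averaging of the panchromatic band, can be sketched as follows. This is a simplified illustration under assumed naming, and it omits the contrast-and-luminance normalisation introduced by the third modification.

```python
import numpy as np

def block_average(pan, factor):
    """Degrade a panchromatic band by pixel-block averaging."""
    h, w = pan.shape
    return pan.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def expand(x, factor):
    """Duplicate each pixel into a factor-by-factor block."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def fuse_band(ms_band, pan, factor):
    """Add the panchromatic edge layer (pan minus its block-averaged version)
    to one upsampled multispectral band; no spectral band weights required."""
    edges = pan - expand(block_average(pan, factor), factor)
    return expand(ms_band, factor) + edges
```

A useful property of this construction is that the edge layer averages to zero within each block, so block-averaging the fused band recovers the original multispectral values, i.e. the spectral content is preserved at the native multispectral resolution.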

    Earthquake damage assessment in urban area from Very High Resolution satellite data

    The use of remote sensing within the domain of natural hazards and disaster management has become increasingly popular, due in part to increased awareness of environmental issues, including climate change, but also to the improvement of geospatial technologies and the ability to provide high-quality imagery to the public through the media and the internet. As technology is enhanced, demand and expectations increase for near-real-time monitoring and for images to be relayed to emergency services in the event of a natural disaster. During a seismic event, in particular, it is fundamental to obtain a fast and reliable map of the damage to urban areas in order to manage civil protection interventions. Moreover, the identification of the destruction caused by an earthquake provides seismologists and earthquake engineers with informative and valuable data, experiences, and lessons in the long term. An accurate survey of damage is also important to assess the economic losses, and to manage and share the resources to be allocated during the reconstruction phase. Satellite remote sensing can provide valuable information in this regard, thanks to its capability of providing an instantaneous synoptic view of the scene, especially if the seismic event is located in a remote region or if the main communication systems are damaged. Many works on this topic exist in the literature, considering both optical and radar data; these, however, put in evidence some limitations of the nadir-looking view, of the achievable level of detail and response time, and the criticality of radiometric and geometric image corrections. The visual interpretation of optical images collected before and after a seismic event is the approach followed in many cases, especially for an operational and rapid release of the damage extension map.
Many papers have evaluated change detection approaches to estimate damage within large areas (e.g., city blocks), trying to quantify not only the extension of the affected area but also the level of damage, for instance by correlating the collapse ratio (the percentage of collapsed buildings in an area) measured on the ground with change parameters derived from two images taken before and after the earthquake. Nowadays, remotely sensed images at Very High Resolution (VHR) may in principle enable the production of earthquake damage maps at the single-building scale. However, the complexity of the image-forming mechanisms within urban settlements, especially for radar images, still makes the interpretation and analysis of VHR images a challenging task. Discrimination of lower grades of damage is particularly difficult using nadir-looking sensors. Automatic algorithms to detect damage are being developed, although, as a matter of fact, these works very often focus on specific test cases and somewhat canonical situations. In order to make the delivered product suitable for the user community, such as Civil Protection Departments, it is important to assess its reliability over a large area and in different and challenging situations. Moreover, the assessment should be directly compared to the data the final user adopts when carrying out operational tasks. This kind of assessment can hardly be found in the literature, especially when the main focus is on the development of sophisticated and advanced algorithms. In this work, the feasibility of earthquake damage products at the scale of individual buildings, relying on a damage scale recognized as a standard, is investigated. To this aim, damage maps derived from VHR satellite images collected by Synthetic Aperture Radar (SAR) and optical sensors were systematically compared to ground surveys carried out by different teams with different purposes and protocols.
Moreover, the inclusion of a priori information, such as vulnerability models for buildings and soil geophysical properties, to improve the reliability of the resulting damage products was also considered in this study. The research activity presented in this thesis was carried out in the framework of the APhoRISM (Advanced PRocedures for volcanIc Seismic Monitoring) project, funded by the European Union under the EC-FP7 call. APhoRISM aimed to demonstrate that appropriate management and integration of satellite and ground data can provide new, improved products useful for seismic and volcanic crisis management.

    Airborne laser sensors and integrated systems

    The underlying principles and technologies enabling the design and operation of airborne laser sensors are introduced, and a detailed review of state-of-the-art avionic systems for civil and military applications is presented. Airborne lasers, including Light Detection and Ranging (LIDAR), Laser Range Finders (LRF), and Laser Weapon Systems (LWS), are extensively used today, and new promising technologies are being explored. Most laser systems are active devices that operate in a manner very similar to microwave radars but at much higher frequencies (e.g., LIDAR and LRF). Other devices (e.g., laser target designators and beam-riders) are used to precisely direct Laser Guided Weapons (LGW) against ground targets. The integration of both functions is often encountered in modern military avionics navigation-attack systems. The beneficial characteristics of airborne lasers, including the use of smaller components and remarkable angular resolution, have resulted in a host of manned and unmanned aircraft applications. On the other hand, laser sensor performance is much more sensitive to the vagaries of the atmosphere, and laser systems are thus generally restricted to shorter ranges than microwave systems. Hence it is of paramount importance to analyse the performance of laser sensors and systems in various weather and environmental conditions. Additionally, it is important to define airborne laser safety criteria, since several systems currently in service operate in the near infrared with considerable risk for the naked human eye. Therefore, appropriate methods for predicting and evaluating the performance of infrared laser sensors/systems are presented, taking into account laser safety issues. For aircraft experimental activities with laser systems, it is essential to define test requirements that take into account the specific conditions for operational employment of the systems in the intended scenarios, and to verify performance in realistic environments at the test ranges.
To support the development of such requirements, useful guidelines are provided for the test and evaluation of airborne laser systems, including laboratory, ground, and flight test activities.

    Land Surface Monitoring Based on Satellite Imagery

    This book focuses attention on significant novel approaches developed to monitor the land surface by exploiting satellite data in the infrared and visible ranges. Unlike in situ measurements, satellite data provide global coverage and higher temporal resolution, with very accurate retrievals of land parameters. This is fundamental in the study of climate change and global warming. The authors offer an overview of different methodologies to retrieve land surface parameters: evapotranspiration, emissivity contrast and water deficit indices, land subsidence, leaf area index, vegetation height, and crop coefficient, all of which play a significant role in the study of land cover and land use, the monitoring of vegetation and soil water stress, and the early warning and detection of forest fires and drought.

    Data-driven Regularization and Uncertainty Estimation to Improve Sea Ice Data Assimilation

    Accurate estimates of sea ice conditions, such as ice thickness and ice concentration in ice-covered regions, are critical for shipping activities, ice operations, and weather forecasting. The need for this information has increased due to the recent record decline in Arctic ice extent and the thinning of the ice cover, which have resulted in more shipping activity and more climate studies. Despite extensive studies and progress in improving the quality of sea ice forecasts from prognostic models, there is still significant room for improvement. For example, ice-ocean models have difficulty estimating the ice thickness distribution accurately. To help improve model forecasts, data assimilation is used to combine observational data with model forecasts and produce more accurate estimates. The assimilation of ice thickness observations, compared to other ice parameters such as ice concentration, is still relatively unexplored, since satellite-based ice thickness observations have only recently become common. Also, preserving sharp features of the ice cover, such as leads and ridges, can be difficult due to the spatial correlations in the background error covariance matrices. At the same time, current ice concentration assimilation systems do not directly assimilate high resolution sea ice information from synthetic aperture radar (SAR), even though SAR is the main source of information for the operational production of ice chart products at the Canadian Ice Service. The key challenge in SAR data assimilation is automating the interpretation of SAR images. To address the problem of assimilating ice thickness observations while preserving sharp features, two different objective functions are studied: one with a conventional l2-norm, and one imposing an additional l1-norm on the derivative of the ice thickness state estimate as a sparse regularization.
The latter is motivated by analysis of high resolution ice thickness observations derived from an airborne electromagnetic sensor, which demonstrates the sparsity of the ice thickness in the derivative domain. The data fusion and data assimilation experiments are performed over a wide range of background and observation error correlation length scales. Results demonstrate the superiority of the combined l1-l2 regularization framework, especially when the background error correlation length scale is relatively short (approximately five times the analysis grid spacing). The problem of automated information retrieval from SAR images is explored for ice/water classification. The selected classification approach takes advantage of neural networks to produce results comparable to a previous study using logistic regression. Both studies employ a comprehensive dataset of 15,405 SAR images acquired over a seven-year period, covering all months and a variety of locations. In addition, recent neural network uncertainty estimation approaches are employed to estimate the uncertainty associated with the classification of ice/water labels, which had not previously been explored in this problem domain. These predicted uncertainties can improve the automated classification process by identifying regions in the predictions that should be checked manually by an analyst.
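In standard variational assimilation notation (the symbols here are assumed, not taken from the thesis), a combined l1-l2 objective of the kind described above can be written as:

```latex
% x_b: background state, y: observations, H: observation operator,
% B, R: background and observation error covariances (assumed notation);
% D is a first-order difference (derivative) operator on the thickness state.
J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \tfrac{1}{2}(\mathbf{y}-\mathbf{H}\mathbf{x})^{\mathsf{T}}\mathbf{R}^{-1}(\mathbf{y}-\mathbf{H}\mathbf{x})
              + \lambda\,\lVert \mathbf{D}\mathbf{x} \rVert_{1}
```

The two quadratic terms are the conventional l2 penalties on departures from the background and the observations; the l1 penalty on Dx promotes sparsity of the thickness derivative, which is what preserves sharp features such as leads and ridges in the analysis.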

    Flood mapping from radar remote sensing using automated image classification techniques


    Research theme reports from April 1, 2019 - March 31, 2020

