
    Novel pattern recognition methods for classification and detection in remote sensing and power generation applications


    Earthquake damage assessment in urban area from Very High Resolution satellite data

    The use of remote sensing within the domain of natural hazards and disaster management has become increasingly popular, due in part to increased awareness of environmental issues, including climate change, but also to the improvement of geospatial technologies and the ability to provide high-quality imagery to the public through the media and internet. As technology advances, demand and expectations increase for near-real-time monitoring and for images to be relayed to emergency services in the event of a natural disaster. During a seismic event, in particular, it is fundamental to obtain a fast and reliable map of the damage to urban areas in order to manage civil protection interventions. Moreover, the identification of the destruction caused by an earthquake provides seismologists and earthquake engineers with informative and valuable data, experiences and lessons in the long term. An accurate survey of damage is also important to assess the economic losses, and to manage and share the resources to be allocated during the reconstruction phase. Satellite remote sensing can provide valuable information in this regard, thanks to its capability of providing an instantaneous synoptic view of the scene, especially if the seismic event is located in a remote region or if the main communication systems are damaged. Many works exist in the literature on this topic, considering both optical and radar data; these however highlight some limitations of the nadir-looking view, of the achievable level of detail and response time, and the criticality of image radiometric and geometric corrections. The visual interpretation of optical images collected before and after a seismic event is the approach followed in many cases, especially for an operational and rapid release of the damage extension map.
Many papers have evaluated change detection approaches to estimate damage within large areas (e.g., city blocks), trying to quantify not only the extension of the affected area but also the level of damage, for instance by correlating the collapse ratio (percentage of collapsed buildings in an area) measured on the ground with change parameters derived from two images taken before and after the earthquake. Nowadays, remotely sensed images at Very High Resolution (VHR) may in principle enable production of earthquake damage maps at the single-building scale. The complexity of the image-forming mechanisms within urban settlements, especially for radar images, still makes the interpretation and analysis of VHR images a challenging task. Discrimination of lower grades of damage is particularly difficult using nadir-looking sensors. Automatic algorithms to detect the damage are being developed, although these works very often focus on specific test cases and somewhat canonical situations. In order to make the delivered product suitable for the user community, such as, for example, Civil Protection Departments, it is important to assess its reliability over a large area and in different and challenging situations. Moreover, the assessment should be directly compared to the data the final user adopts when carrying out their operational tasks. This kind of assessment can hardly be found in the literature, especially when the main focus is on the development of sophisticated and advanced algorithms. In this work, the feasibility of earthquake damage products at the scale of individual buildings, relying on a damage scale recognized as a standard, is investigated. To this aim, damage maps derived from VHR satellite images collected by Synthetic Aperture Radar (SAR) and optical sensors were systematically compared to ground surveys carried out by different teams with different purposes and protocols.
Moreover, this study considered the inclusion of a priori information, such as vulnerability models for buildings and soil geophysical properties, to improve the reliability of the resulting damage products. The research activity presented in this thesis was carried out in the framework of the APhoRISM (Advanced PRocedures for volcanIc Seismic Monitoring) project, funded by the European Union under the EC-FP7 call. APhoRISM aimed to demonstrate that an appropriate management and integration of satellite and ground data can provide new, improved products useful for seismic and volcanic crisis management.
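The collapse-ratio correlation idea described above can be illustrated with a deliberately simplified sketch. The change parameter (a normalized absolute difference of mean block intensity), the toy pre/post intensities, and the surveyed collapse ratios below are all invented for illustration; operational products use calibrated, co-registered imagery and far more sophisticated change metrics.

```python
# Toy sketch: correlate a block-level change parameter computed from
# pre/post-event images with ground-surveyed collapse ratios.
from statistics import mean

def change_parameter(pre_block, post_block):
    """Normalized absolute difference of mean intensity for one city block."""
    m_pre, m_post = mean(pre_block), mean(post_block)
    return abs(m_post - m_pre) / (m_pre + m_post)

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Invented data: three blocks of pixel intensities before/after the event,
# and the fraction of collapsed buildings per block from a ground survey.
pre  = [[100, 110, 105], [90, 95, 92], [120, 118, 122]]
post = [[60, 55, 58],    [88, 94, 90], [70, 65, 72]]
collapse_ratio = [0.8, 0.05, 0.6]

params = [change_parameter(p, q) for p, q in zip(pre, post)]
r = pearson(params, collapse_ratio)  # strong positive correlation on this toy data
```

In a real pipeline the correlation fitted on training events would then be inverted to predict collapse ratios for blocks where no ground survey exists.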

    Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements

    This book is a reprint of the Special Issue entitled "Statistical and Machine Learning Models for Remote Sensing Data Mining - Recent Advancements" that was published in Remote Sensing, MDPI. It provides insights into both core technical challenges and selected critical applications of satellite remote sensing image analytics.

    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These qualities give hyperspectral imaging wide applications such as mineral exploration, agricultural monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in the fields of classification and clustering. There are three main types of fusion strategies, namely low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g.
Very High Resolution (VHR) optical sensors, LiDAR, etc., fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector with coordinates corresponding to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose the contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes, with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows for convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection and data alignment. In this thesis, graph-based approaches applied to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
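As a minimal illustration of the graph-based framework, the sketch below treats pixels as nodes, connects spectrally similar pixels with edges, and reads off clusters as connected components of the graph. The toy three-band "spectra" and the distance threshold are invented for illustration; real HSI pipelines typically build weighted k-NN graphs and apply spectral (Laplacian-based) clustering instead of simple thresholding.

```python
# Toy graph-based clustering of pixels by spectral similarity.
from math import dist  # Euclidean distance, Python 3.8+

def build_graph(spectra, threshold):
    """Adjacency list: connect pixels whose spectral distance is below threshold."""
    n = len(spectra)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if dist(spectra[i], spectra[j]) < threshold:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def connected_components(adj):
    """Depth-first search: each connected component becomes one cluster."""
    seen, clusters = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            comp.append(u)
            stack.extend(adj[u])
        clusters.append(sorted(comp))
    return clusters

# Invented 3-band reflectance spectra for six pixels of two materials.
spectra = [(0.10, 0.20, 0.10), (0.12, 0.19, 0.11), (0.11, 0.21, 0.09),
           (0.80, 0.70, 0.90), (0.82, 0.68, 0.88), (0.79, 0.72, 0.91)]
clusters = connected_components(build_graph(spectra, threshold=0.1))
```

The same node/edge abstraction extends naturally to fusion: edge weights can combine spectral, spatial, and LiDAR-derived similarities before clustering.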

    A review of machine learning applications in wildfire science and management

    Artificial intelligence has been applied in wildfire science and management since the 1990s, with early applications including neural networks and expert systems. Since then, the field has progressed rapidly alongside the wide adoption of machine learning (ML) in the environmental sciences. Here, we present a scoping review of ML in wildfire science and management. Our objective is to improve awareness of ML among wildfire scientists and managers, as well as to illustrate the challenging range of problems in wildfire science available to data scientists. We first present an overview of popular ML approaches used in wildfire science to date, and then review their use in wildfire science within six problem domains: 1) fuels characterization, fire detection, and mapping; 2) fire weather and climate change; 3) fire occurrence, susceptibility, and risk; 4) fire behavior prediction; 5) fire effects; and 6) fire management. We also discuss the advantages and limitations of various ML approaches and identify opportunities for future advances in wildfire science and management within a data science context. We identified 298 relevant publications, where the most frequently used ML methods included random forests, MaxEnt, artificial neural networks, decision trees, support vector machines, and genetic algorithms. There exist opportunities to apply more current ML methods (e.g., deep learning and agent-based learning) in wildfire science. However, despite the ability of ML models to learn on their own, expertise in wildfire science is necessary to ensure realistic modelling of fire processes across multiple scales, while the complexity of some ML methods requires sophisticated knowledge for their application. Finally, we stress that the wildfire research and management community plays an active role in providing relevant, high-quality data for use by practitioners of ML methods.
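To give a flavor of the ensemble idea behind random forests, the review's most frequently used method, the sketch below lets a few hand-crafted decision stumps on weather and fuel features vote on fire occurrence. The features, thresholds, and rules are invented for illustration only; a real random forest learns many such trees from data with bootstrapping and random feature selection.

```python
# Toy majority-vote ensemble of decision stumps for fire occurrence.
def stump(idx, threshold, above=True):
    """One weak rule: vote 'fire' (1) when feature idx is above (or below) threshold."""
    if above:
        return lambda x: 1 if x[idx] > threshold else 0
    return lambda x: 1 if x[idx] < threshold else 0

def predict(forest, x):
    """Majority vote over all stumps: 1 = fire predicted, 0 = no fire."""
    votes = sum(rule(x) for rule in forest)
    return 1 if votes > len(forest) / 2 else 0

# Features per observation: (temperature degC, wind km/h, fuel moisture %)
forest = [stump(0, 30, above=True),   # hot
          stump(1, 25, above=True),   # windy
          stump(2, 10, above=False)]  # dry fuels

hot_dry_windy = (35, 40, 6)
cool_wet_calm = (18, 5, 25)
```

In an actual random forest each tree is far deeper and trained on a bootstrap sample, but the prediction step is the same majority vote shown here.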