
    Edge Detection in UAV Remote Sensing Images Using the Method Integrating Zernike Moments with Clustering Algorithms

    Because unmanned aerial vehicle remote sensing images (UAVRSI) contain rich texture detail of ground objects and exhibit a pronounced "same object, different spectra" phenomenon, traditional edge detection operators struggle to acquire edge information effectively. To solve this problem, an edge detection method for UAVRSI that combines Zernike moments with clustering algorithms is proposed in this study. First, two typical clustering algorithms, fuzzy c-means (FCM) and K-means, are used to cluster the original remote sensing images so as to form homogeneous regions of ground objects. Then, Zernike moments are applied to perform edge detection on the clustered images. Finally, visual comparison and sensitivity methods are adopted to evaluate the accuracy of the detected edge information, and two groups of experimental data are selected to verify the proposed method. Results show that the proposed method effectively improves the accuracy of edge information extracted from remote sensing images.
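The clustering stage described in this abstract can be sketched as a simple 1-D K-means over pixel intensities. This is an illustrative stand-in only: the paper's actual FCM/K-means implementation and the subsequent Zernike-moment edge step are not reproduced here, and the function name and defaults are assumptions.

```python
import numpy as np

def kmeans_intensity(img, k=3, iters=20, seed=0):
    """Cluster pixel intensities into k levels (1-D K-means), producing
    the homogeneous regions on which edge detection is later run."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1).astype(float)
    # Initialise centroids from randomly chosen pixel values
    centroids = rng.choice(pixels, size=k, replace=False)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid
        labels = np.abs(pixels[:, None] - centroids[None, :]).argmin(axis=1)
        # Update each centroid as the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    # Replace each pixel by its cluster centroid -> piecewise-constant image
    return centroids[labels].reshape(img.shape)
```

Running an edge operator on the returned piecewise-constant image then yields edges along cluster boundaries rather than inside textured regions, which is the motivation the abstract gives for clustering first.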

    Comparative Performance Study and Analysis on Different Edge based Image Segmentation Techniques of Thermal Images

    In this work, the authors analyse the edge-based approach for thermal image segmentation, using a range of thermal images. The study covers the Prewitt, Sobel, LoG, and Canny edge detection operators for segmentation purposes and analyses their performance. Each operator is compared by checking the Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE) of the resultant image, evaluating the performance of each algorithm through image quality analysis. The paper thus presents a comparative analysis of different edge-based thermal image segmentation techniques.
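The MSE and PSNR metrics used for the comparison have standard definitions, which can be sketched as follows (the function names are illustrative; 8-bit images with a peak value of 255 are assumed):

```python
import numpy as np

def mse(original, processed):
    """Mean squared error between two equally sized images."""
    diff = original.astype(float) - processed.astype(float)
    return np.mean(diff ** 2)

def psnr(original, processed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the original."""
    err = mse(original, processed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

A lower MSE (and correspondingly higher PSNR) of the segmented result against the reference is the criterion by which the operators are ranked.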

    Assessing Building Vulnerability to Tsunami Hazard Using Integrative Remote Sensing and GIS Approaches

    Risk and vulnerability assessment for natural hazards is of high interest. Various methods focusing on building vulnerability assessment have been developed ranging from simple approaches to sophisticated ones depending on the objectives of the study, the availability of data and technology. In-situ assessment methods have been widely used to measure building vulnerability to various types of hazards while remote sensing methods, specifically developed for assessing building vulnerability to tsunami hazard, are still very limited. The combination of remote sensing approaches with in-situ methods offers unique opportunities to overcome limitations of in-situ assessments. The main objective of this research is to develop remote sensing techniques in assessing building vulnerability to tsunami hazard as one of the key elements of risk assessment. The research work has been performed in the framework of the GITEWS (German-Indonesian Tsunami Early Warning System) project. This research contributes to two major components of tsunami risk assessment: (1) the provision of infrastructure vulnerability information as an important element in the exposure assessment; (2) tsunami evacuation modelling which is a critical element for assessing immediate response and capability to evacuate as part of the coping capacity analysis. The newly developed methodology is based on the combination of in-situ measurements and remote sensing techniques in a so-called “bottom-up remote sensing approach”. Within this approach, basic information was acquired by in-situ data collection (bottom level), which was then used as input for further analysis in the remote sensing approach (upper level). The results of this research show that a combined in-situ measurement and remote sensing approach can be successfully employed to assess and classify buildings into 4 classes based on their level of vulnerability to tsunami hazard with an accuracy of more than 80 percent. 
Statistical analysis successfully revealed key spatial parameters regarded as linking parameters between the in-situ and remote sensing approaches, such as size, height, shape, regularity, orientation, and accessibility. The key spatial parameters and their specified threshold values were implemented in a decision tree algorithm for developing a remote sensing rule-set for building vulnerability classification. A large number of buildings in the study area (Cilacap city, Indonesia) were successfully classified into the building vulnerability classes. The categorization ranges from highly to less vulnerable buildings (A to C) and also includes a category of buildings which are potentially suitable for tsunami vertical evacuation (VE). A multi-criteria analysis was developed that incorporates three main components for vulnerability assessment: stability, tsunami resistance and accessibility. All the defined components were configured in a decision tree algorithm by applying weighting, scoring and threshold definition based on the building sample data. The stability component consists of structure parameters which are closely related to building stability against earthquake energy. Building stability needs to be analyzed because most tsunami events in Indonesia are preceded by major earthquakes. The stability component analysis was applied in the first step of the newly developed decision tree algorithm to evaluate building stability when an earthquake strikes. Buildings with total scores below the defined stability threshold were classified as the most vulnerable class A. Such buildings have a high probability of being damaged after earthquake events. The remaining buildings with total scores above the defined stability threshold were further analyzed using the tsunami resistance and accessibility components to classify them into the vulnerability classes B, C and VE respectively.
This research is based on very high spatial resolution satellite images (QuickBird) and object-based image analysis. Object-based image analysis was chosen because it allows the formulation of rule-sets based on image objects instead of pixels, which has significant advantages especially for the analysis of very high resolution satellite images. In the pre-processing stage, three image processing steps were performed: geometric correction, pan-sharpening and filtering. Adaptive Local Sigma and Morphological Opening filter techniques were applied as the basis for the subsequent building edge detection. The data pre-processing significantly increased the accuracy of the following image classification steps. Next, image segmentation was developed to extract adequate image objects for further classification. Image classification was carried out by grouping the resulting objects into desired classes based on the derived object features; each object was assigned the feature characteristics calculated in the segmentation process. The characteristic features of an object - grouped into spectral signature, shape, size, texture, and neighbouring relations - were analysed, selected and semantically modelled to classify objects into object classes. A fuzzy logic algorithm and object feature separation analysis were performed to set the membership values of the objects grouped into particular classes. Finally, this approach successfully detected and mapped building objects in the study area together with their spatial attributes, which provide the base information for building vulnerability classification. A building vulnerability classification rule-set was developed in this research and successfully applied to categorize building vulnerability classes. The developed approach was applied to Cilacap city, Indonesia.
In order to analyze the transferability of this newly developed approach, the algorithm was also applied to Padang City, Indonesia. The results showed that the developed methodology is in general transferable; however, it requires some adaptations (e.g. thresholds) to provide accurate results. The results of this research show that Cilacap City is very vulnerable to tsunami hazard. Class A (very vulnerable) buildings cover the biggest portion of the area in Cilacap City (63%), followed by class C (28%), class VE (6%) and class B (3%). Preventive measures should be carried out for the purpose of disaster risk reduction, especially for people living in the most vulnerable buildings. Finally, the results were applied to tsunami evacuation modeling. The buildings categorized as potential candidates for vertical evacuation were selected, and a GIS approach was applied to model evacuation time and evacuation routes. The results of this analysis provide important inputs to the disaster management authorities for future evacuation planning and disaster mitigation.
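The two-stage decision logic described in this abstract (stability screening first, then tsunami resistance and accessibility) might be sketched as below. The parameter names, weights, thresholds, and the exact mapping of the secondary checks to classes B, C and VE are hypothetical placeholders, not the study's calibrated values:

```python
def stability_score(building, weights):
    """Weighted sum of scored structural parameters (illustrative only)."""
    return sum(weights[p] * building[p] for p in weights)

def classify_vulnerability(building, weights, stability_threshold,
                           tsunami_threshold, access_threshold):
    """Screen on stability first, then refine with tsunami resistance and
    accessibility, mirroring the two-stage decision tree in the abstract."""
    if stability_score(building, weights) < stability_threshold:
        return "A"  # likely damaged by the preceding earthquake itself
    if building["tsunami_resistance"] < tsunami_threshold:
        return "B"  # stable in the quake, but weak against the tsunami
    if building["accessibility"] >= access_threshold:
        return "VE"  # strong and accessible: vertical-evacuation candidate
    return "C"
```

The ordering matters: a building never reaches the tsunami or accessibility checks unless it first passes the stability screen, exactly as described for the decision tree above.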

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book


    A Method for detection and quantification of building damage using post-disaster LiDAR data

    There is a growing need for rapid and accurate damage assessment following natural disasters, terrorist attacks, and other crisis situations. The use of light detection and ranging (LiDAR) data to detect and quantify building damage following a natural disaster was investigated in this research. Using LiDAR data collected by the Rochester Institute of Technology (RIT) just days after the January 12, 2010 Haiti earthquake, a set of processes was developed for extracting buildings in urban environments and assessing structural damage. Building points were separated from the rest of the point cloud using a combination of point classification techniques involving height, intensity, and multiple return information, as well as thresholding and morphological filtering operations. Damage was detected by measuring the deviation between building roof points and dominant planes found using a normal vector and height variance approach. The devised algorithms were incorporated into a MATLAB graphical user interface (GUI), which guided the workflow and allowed for user interaction. The semi-autonomous tool ingests a discrete-return LiDAR point cloud of a post-disaster scene and outputs a building damage map highlighting damaged and collapsed buildings. The entire approach was demonstrated on a set of six validation sites, carefully selected from the Haiti LiDAR data. A combined 85.6% of the truth buildings in all of the sites were detected, with a standard deviation of 15.3%. Damage classification results were evaluated against the Global Earth Observation - Catastrophe Assessment Network (GEO-CAN) and Earthquake Engineering Field Investigation Team (EEFIT) truth assessments. The combined overall classification accuracy for all six sites was 68.3%, with a standard deviation of 9.6%. Results were impacted by imperfect validation data, inclusion of non-building points, and very diverse environments, e.g., varying building types, sizes, and densities. Nevertheless, the processes exhibited significant potential for detecting buildings and assessing building-level damage.
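The plane-deviation idea behind the damage measure can be sketched as a least-squares roof-plane fit followed by a count of points deviating from that plane. This is a simplified stand-in for the thesis's normal vector and height variance approach, and the 0.2 m tolerance is an arbitrary placeholder, not a calibrated parameter:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through an Nx3 array of roof points."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def roof_damage_ratio(points, tolerance=0.2):
    """Fraction of roof points deviating from the dominant fitted plane by
    more than `tolerance` (metres); a high ratio suggests structural damage."""
    a, b, c = fit_plane(points)
    predicted = a * points[:, 0] + b * points[:, 1] + c
    deviation = np.abs(points[:, 2] - predicted)
    return np.mean(deviation > tolerance)
```

An intact flat roof yields a ratio near zero, while a partially collapsed roof, whose points scatter away from any single dominant plane, yields a high ratio.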

    Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS


    A Study of Types of Sensors used in Remote Sensing

    Of late, the science of remote sensing has been gaining considerable interest and attention due to its wide variety of applications. Remotely sensed data can be used in fields such as medicine, agriculture, engineering, weather forecasting, military tactics and disaster management, to name only a few. This article presents a study of the two categories of sensors, optical and microwave, which are used for remotely sensing the occurrence of disasters such as earthquakes, floods, landslides, avalanches, tropical cyclones and suspicious movements. The remotely sensed data, acquired either through satellites or through ground-based synthetic aperture radar systems, could be used to avert or mitigate a disaster or to perform a post-disaster analysis.

    Assessment of high resolution SAR imagery for mapping floodplain water bodies: a comparison between Radarsat-2 and TerraSAR-X

    Flooding is a world-wide problem that is considered one of the most devastating natural hazards. New commercially available high spatial resolution Synthetic Aperture Radar (SAR) satellite imagery provides new potential for flood mapping. This research provides a quantitative assessment of high spatial resolution RADARSAT-2 and TerraSAR-X products for mapping water bodies, in order to help validate products that can be used to assist flood disaster management. An area near Dhaka in Bangladesh is used as a test site because of its large number of water bodies of different sizes and its history of frequent flooding associated with annual monsoon rainfall. Sample water bodies were delineated in the field using kinematic differential GPS to train and test automatic methods for water body mapping. SAR products were acquired concurrently with the field visits; images were acquired with similar polarization, look direction and incidence angle in an experimental design to evaluate which sensor has the best accuracy for mapping flood water extent. A methodology for separating water areas from non-water areas was developed based on radar backscatter texture analysis. Texture filters, based on Haralick occurrence and co-occurrence measures, were compared, and images were classified using supervised, unsupervised and contextual classifiers. The evaluation of image products is based on an error-matrix accuracy assessment using randomly selected ground truth data. An accuracy comparison was performed between classified images from the TerraSAR-X and Radarsat-2 sensors in order to identify any differences in mapping floods. Results were validated using information from field inspections conducted in good conditions in February 2009, and by applying a model-assisted difference estimator for estimating flood area to derive Confidence Interval (CI) statistics at the 95% Confidence Level (CL) for the area mapped as water.
For Radarsat-2 Ultrafine, TerraSAR-X Stripmap and Spotlight imagery, overall classification accuracy was greater than 93%. Results demonstrate that water bodies with areas as small as 150 m² can be identified routinely from 3-metre-resolution SAR imagery. The results further showed that TerraSAR-X Stripmap and Spotlight images have better overall accuracy than RADARSAT-2 Ultrafine beam mode images. The expected benefits of the research will be to improve the provision of data to assess flood risk and vulnerability, thus assisting in disaster management and post-flood recovery.
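A much-simplified stand-in for the texture-based water mapping might look like the following, using local backscatter variance in place of the full Haralick occurrence/co-occurrence measures and trained classifiers used in the study; the function names and thresholds are illustrative assumptions:

```python
import numpy as np

def local_variance(img, win=3):
    """Backscatter variance in a sliding window: smooth open water
    typically shows low variance, rough land surfaces higher."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

def water_mask(img, var_thresh, intensity_thresh):
    """Label pixels as water when both backscatter and texture are low."""
    return (img < intensity_thresh) & (local_variance(img) < var_thresh)
```

Combining a low-backscatter test with a low-texture test is what distinguishes calm water from other dark SAR targets; the study's actual pipeline replaces both thresholds with texture filters and supervised, unsupervised and contextual classifiers.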

    Scalable and Extensible Augmented Reality with Applications in Civil Infrastructure Systems.

    In Civil Infrastructure System (CIS) applications, the requirement of blending synthetic and physical objects distinguishes Augmented Reality (AR) from other visualization technologies in three aspects: 1) it reinforces the connections between people and objects, and promotes engineers' appreciation of their working context; 2) it allows engineers to perform field tasks with awareness of both the physical and synthetic environment; 3) it offsets the significant cost of 3D model engineering by including the real-world background. The research has successfully overcome several long-standing technical obstacles in AR and investigated technical approaches to address fundamental challenges that prevent the technology from being usefully deployed in CIS applications, such as aligning virtual objects with the real environment continuously across time and space; blending virtual entities with their real background faithfully to create a sustained illusion of co-existence; and integrating these methods into a scalable and extensible AR computing framework that is openly accessible to the teaching and research community and can be readily reused and extended by other researchers and engineers. The research findings have been evaluated in several challenging CIS applications where the potential for a significant economic and social impact is high.
Examples of validation test beds implemented include an AR visual excavator-utility collision avoidance system that enables spotters to "see" buried utilities hidden under the ground surface, thus helping prevent accidental utility strikes; an AR post-disaster reconnaissance framework that enables building inspectors to rapidly evaluate and quantify structural damage sustained by buildings in seismic events such as earthquakes or blasts; and a tabletop collaborative AR visualization framework that allows multiple users to observe and interact with visual simulations of engineering processes.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/96145/1/dsuyang_1.pd