
    The use of logistic model tree (LMT) for pixel- and object-based classifications using high-resolution WorldView-2 imagery

    Logistic model tree (LMT), a relatively new method that integrates standard decision tree (DT) induction and linear logistic regression in a single tree, has recently been proposed as an alternative to DT-based learning algorithms. In this study, the LMT was applied to pixel- and object-based classifications of high-resolution WorldView-2 imagery, and its performance was compared with C4.5, random forest and AdaBoost. Results showed that the LMT generally produced more accurate classifications than the other methods in both settings, with accuracy improvements reaching 3% for pixel-based and 5% for object-based classification. The LMT algorithm also produced the most accurate results in terms of allocation and overall disagreement errors. Based on Wilcoxon's signed-rank tests, the performance differences between the LMT and the other methods were statistically significant for both pixel- and object-based image classification.
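
    As a rough illustration of the comparison workflow described in this abstract, the sketch below cross-validates scikit-learn stand-ins (a C4.5-like decision tree, random forest and AdaBoost) and applies Wilcoxon's signed-rank test to the paired fold accuracies. LMT itself is not available in scikit-learn (the study's LMT is typically run in Weka), and the pixel features and labels here are random placeholders rather than WorldView-2 data.

```python
# Hedged sketch of the classifier comparison; LMT is replaced by a plain
# decision tree placeholder, and X/y are hypothetical pixel spectra and labels.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X = np.random.rand(500, 8)           # hypothetical pixel spectra (8 WV-2 bands)
y = np.random.randint(0, 5, 500)     # hypothetical land-cover labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
models = {
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=cv) for name, m in models.items()}

# Paired per-fold accuracies can be compared with Wilcoxon's signed-rank test,
# as done in the study to check whether accuracy differences are significant.
stat, p = wilcoxon(scores["Random forest"], scores["AdaBoost"])
print({name: s.mean() for name, s in scores.items()}, p)
```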

    Google Earth Engine for Monitoring Marine Mucilage: Izmit Bay in Spring 2021

    Global warming, together with environmental pollution, threatens marine habitats and causes an increasing number of environmental disasters. Periodic monitoring of coastal water quality is of critical importance for the effective management of water resources and the sustainability of marine ecosystems. Remote sensing technologies provide significant benefits for detecting, monitoring, and analyzing rapidly occurring and moving natural phenomena, including mucilage events. In this study, five water indices estimated from cloud-free and partly cloudy Sentinel-2 images acquired from May to July 2021 were employed to map mucilage aggregates on the sea surface in Izmit Bay using the cloud-based Google Earth Engine (GEE) platform. Results showed that mucilage aggregates covered about 6 km² of the sea surface on 14 May, reached their highest level on 24 May and diminished at the end of July. Among the applied indices, the Adjusted Floating Algae Index (AFAI) was superior for producing mucilage maps, even for the partly cloudy image, followed by the Normalized Difference Turbidity Index (NDTI) and the Mucilage Index (MI). In contrast, indices using the green channel were found to be inferior for extracting mucilage information from the satellite images.
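
    A minimal sketch of this kind of GEE workflow, using the Earth Engine Python API, is given below. The Izmit Bay rectangle, the cloud threshold, the band choices (B3/B4/B8A/B11) and the baseline-style AFAI expression are assumptions for illustration, not the exact parameters or index definitions used in the study.

```python
# Hedged sketch: compute NDTI and an AFAI-style index over Sentinel-2 scenes
# of Izmit Bay for May-July 2021 in Google Earth Engine.
import ee

ee.Initialize()  # assumes Earth Engine authentication has already been set up

izmit_bay = ee.Geometry.Rectangle([29.3, 40.68, 29.95, 40.80])  # rough bay extent

s2 = (ee.ImageCollection('COPERNICUS/S2_SR')
      .filterBounds(izmit_bay)
      .filterDate('2021-05-01', '2021-07-31')
      .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE', 30)))

def add_indices(img):
    # NDTI = (Red - Green) / (Red + Green)
    ndti = img.normalizedDifference(['B4', 'B3']).rename('NDTI')
    # Floating-algae-style index: NIR reflectance minus a red-SWIR baseline
    afai = img.expression(
        'NIR - (RED + (SWIR - RED) * (865.0 - 665.0) / (1610.0 - 665.0))',
        {'NIR': img.select('B8A'), 'RED': img.select('B4'), 'SWIR': img.select('B11')}
    ).rename('AFAI')
    return img.addBands(ndti).addBands(afai)

with_indices = s2.map(add_indices)
# Single-date maps (e.g. 14 or 24 May) can be made by narrowing filterDate();
# here a median composite is built just to confirm the index bands exist.
print(with_indices.median().bandNames().getInfo())
```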

    Design of Feedforward Neural Networks in the Classification of Hyperspectral Imagery Using Superstructural Optimization

    Artificial Neural Networks (ANNs) have been used in a wide range of applications on complex datasets thanks to their flexible mathematical architecture. This flexibility generally comes from introducing a larger number of connections and variables. However, over-parameterization of the ANN equations and the presence of redundant input variables usually result in poor test performance. This paper proposes a superstructure-based mixed-integer nonlinear programming method for the optimal structural design of multilayer perceptron (MLP) ANNs, covering neuron number selection, pruning, and input selection. In addition, the method uses statistical measures such as the parameter covariance matrix to increase test performance while permitting reduced training performance. The approach was applied to the classification of two public hyperspectral datasets, Indian Pines and Pavia University, with 10% and 50% sampling ratios. The test results revealed promising performance compared to standard fully connected neural networks in terms of the estimated overall and individual class accuracies. With the proposed superstructural optimization, the fully connected networks were pruned by over 60% in terms of the total number of connections, resulting in an accuracy increase of 4% for the 10% sampling ratio and a 1% decrease for the 50% sampling ratio. Moreover, over 20% of the spectral bands in the Indian Pines data and 30% in the Pavia University data were found to be statistically insignificant and were therefore removed from the MLP networks. The proposed method was thus found effective in optimizing the architectural design with high generalization capability, particularly for smaller numbers of training samples. Analysis of the eliminated spectral bands revealed that the algorithm mostly removed bands adjacent to the pre-eliminated noisy bands and highly correlated bands carrying similar information.
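
    The study's mixed-integer nonlinear programming formulation is not reproduced here; the hedged sketch below only illustrates the effect it targets, pruning weak MLP connections and flagging input bands whose connections all disappear, using simple magnitude-based thresholding on a scikit-learn MLP with hypothetical data shapes.

```python
# Hedged stand-in for the paper's superstructural optimization: prune the
# weakest input-to-hidden weights of a trained MLP and list candidate bands
# to drop. Data shapes are placeholders for the hyperspectral cubes.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.random.rand(1000, 103)          # e.g. 103 spectral bands (Pavia University)
y = np.random.randint(0, 9, 1000)      # 9 hypothetical classes

mlp = MLPClassifier(hidden_layer_sizes=(30,), max_iter=300, random_state=0)
mlp.fit(X, y)

W_in = mlp.coefs_[0]                               # (n_bands, n_hidden) weights
threshold = np.quantile(np.abs(W_in), 0.60)        # drop the weakest 60% of links
mask = np.abs(W_in) >= threshold
pruned_bands = np.where(~mask.any(axis=1))[0]      # bands with no surviving link

print(f"connections kept: {mask.sum()} / {mask.size}")
print(f"candidate bands to remove: {len(pruned_bands)}")
```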

    A comparative assessment of canonical correlation forest, random forest, rotation forest and logistic regression methods for landslide susceptibility mapping

    In recent years, ensemble learning methods have become popular in landslide susceptibility mapping (LSM) with varying degrees of success. Within the classifier ensemble concept, decision tree-based ensemble learners such as random forest (RF, i.e. a decision forest) and rotation forest (RotFor) have attracted great interest due to their robustness compared with conventional statistical methods. This study proposes canonical correlation forest (CCF), a recent member of the ensemble learning family, for the prediction of landslide susceptibility in the Yenice district of Karabuk, Turkey. To test the robustness and suitability of the CCF method, its prediction performance was compared to two well-known machine learning ensemble algorithms, RF and RotFor, and a commonly used statistical method, logistic regression (LR). Furthermore, the effect of varying the ratio of training to testing data on the performance of the RF, CCF, RotFor and LR models was assessed using the root-mean-square error (RMSE). The quality of the resulting landslide susceptibility maps was evaluated using overall accuracy (OA), the Kappa coefficient (KC), success rate curves and receiver operating characteristic (ROC) curves. Wilcoxon's signed-rank test was also applied to measure the statistical significance of differences between the accuracies of the susceptibility maps. The estimated area under the curve (AUC) values for the RF, CCF, RotFor and LR models were 0.982, 0.970, 0.966 and 0.826, respectively, showing that the ensemble learning algorithms clearly outperformed the LR method. The results also showed that the choice of sampling ratio had a significant effect on the performance of the RF, CCF, RotFor and LR models, and the lowest RMSE values were obtained with a 70:30 split between training and test data.
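
    The sketch below outlines the benchmarking set-up with scikit-learn: different training/testing ratios, and AUC and RMSE per model. Canonical correlation forest and rotation forest have no standard scikit-learn implementations, so only RF and LR are shown, and the conditioning factors and landslide inventory are random placeholders.

```python
# Hedged sketch of the train/test-ratio experiment with RF and LR only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.rand(800, 12)           # 12 conditioning factors (slope, aspect, ...)
y = np.random.randint(0, 2, 800)      # 1 = landslide, 0 = non-landslide

for test_size in (0.5, 0.4, 0.3):     # e.g. 50:50, 60:40, 70:30 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    for name, model in (("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                        ("LR", LogisticRegression(max_iter=1000))):
        prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        auc = roc_auc_score(y_te, prob)
        rmse = mean_squared_error(y_te, prob) ** 0.5   # RMSE of predicted susceptibility
        split = f"{int((1 - test_size) * 100)}:{int(test_size * 100)}"
        print(f"{split} {name}  AUC={auc:.3f}  RMSE={rmse:.3f}")
```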

    A case study for an airport information system

    The active use of information systems provides easy and faster access to data and helps deliver better-quality services to users in many fields. In this study, the role of information systems is investigated for the professional management and service quality improvement of airports, which to some extent reflect the level of development of a country and its integration with the rest of the world. Considering its size and air traffic capacity, Ataturk Airport in Turkey was chosen for this research. For the design and creation of the Ataturk Airport information system, the required spatial data were collected and digitized, and then combined with the attribute data. When the system becomes fully functional, it will provide considerable benefits to users and airport management by improving operational efficiency and productivity.

    Shared Blocks-Based Ensemble Deep Learning for Shallow Landslide Susceptibility Mapping

    Natural disaster impact assessment is of the utmost significance for post-disaster recovery, environmental protection, and hazard mitigation plans. With their recent use in landslide susceptibility mapping, deep learning (DL) architectures have proven their efficiency in many scientific studies. However, some restrictions, including insufficient model variance and limited generalization capability, have been reported in the literature. To overcome these restrictions, ensembling DL models has often been preferred as a practical solution. In this study, an ensemble DL architecture based on shared blocks was proposed to improve the prediction capability of individual DL models. For this purpose, three DL models, namely a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network, together with their ensemble form (CNN–RNN–LSTM), were used to model landslide susceptibility in Trabzon province, Turkey. The proposed architecture produced the highest modeling performance of 0.93, followed by the CNN (0.92), RNN (0.91), and LSTM (0.86). The proposed model exceeded the performance of the individual DL models by up to 7% in terms of overall accuracy, which was also confirmed by the Wilcoxon signed-rank test. Area under the curve analysis likewise showed a significant improvement (~4%) in susceptibility map accuracy with the proposed strategy.
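
    A hedged Keras sketch of a shared-blocks ensemble in the spirit of this architecture is shown below. The layer sizes, the treatment of conditioning factors as a short 1-D sequence, and the definition of the shared block are assumptions for illustration rather than the authors' exact design.

```python
# Hedged sketch: CNN, RNN and LSTM branches that pass through a shared dense
# block before their outputs are concatenated into one susceptibility score.
from tensorflow.keras import layers, models

n_factors = 12                                  # hypothetical conditioning factors
inputs = layers.Input(shape=(n_factors, 1))     # factors treated as a 1-D sequence

# Shared block: reused by all three branches so they learn a common representation.
shared = layers.Dense(32, activation="relu")

cnn = layers.Conv1D(16, 3, padding="same", activation="relu")(inputs)
cnn = layers.GlobalAveragePooling1D()(cnn)
cnn = shared(cnn)

rnn = layers.SimpleRNN(16)(inputs)
rnn = shared(rnn)

lstm = layers.LSTM(16)(inputs)
lstm = shared(lstm)

merged = layers.concatenate([cnn, rnn, lstm])
output = layers.Dense(1, activation="sigmoid")(merged)   # landslide probability

model = models.Model(inputs, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```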

    Investigation of automatic feature weighting methods (Fisher, Chi-square and Relief-F) for landslide susceptibility mapping

    In landslide susceptibility mapping, factor weights have usually been determined by expert judgement. This study proposes a novel methodology for weighting landslide causative factors by integrating statistical feature weighting algorithms. Its primary focus is to investigate the effectiveness of automatic feature weighting algorithms, namely the Fisher, Chi-square and Relief-F algorithms. The analytic hierarchy process (AHP) was used as a benchmark to compare the performances of the weighting algorithms. All weighted factors were combined using a factor-weighted overlay, and the quality of the resulting maps was assessed using overall accuracy, the area under the ROC curve (AUC) and success rate curves. In addition, Wilcoxon's signed-rank test was applied to evaluate the statistical significance of the differences in the estimated overall accuracies and AUCs. Results showed that the weights determined by the feature weighting methods outperformed the conventional AHP method by about 6%, and this difference was found to be statistically significant.
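
    The sketch below illustrates two of the named weighting schemes, a manually computed Fisher score and scikit-learn's chi-square statistic, normalised into weights suitable for a factor-weighted overlay. Relief-F is omitted, and the data and normalisation are assumptions rather than the study's exact procedure.

```python
# Hedged sketch of automatic factor weighting for a weighted overlay.
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.preprocessing import MinMaxScaler

X = np.random.rand(1000, 8)          # 8 causative factors (slope, lithology, ...)
y = np.random.randint(0, 2, 1000)    # 1 = landslide pixel, 0 = non-landslide

def fisher_scores(X, y):
    """Between-class variance over within-class variance, per factor."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / den

fisher_w = fisher_scores(X, y)
chi2_w, _ = chi2(MinMaxScaler().fit_transform(X), y)   # chi2 needs non-negative inputs

# Normalise each score vector so it can act as a set of overlay weights.
fisher_w /= fisher_w.sum()
chi2_w /= chi2_w.sum()
print(np.round(fisher_w, 3), np.round(chi2_w, 3))
```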