132 research outputs found

    Enhanced K-means Color Clustering Based on SLIC Superpixels Merging incorporated within the Entomology Software: AInsectID

    Superpixel-based segmentation is an important pre-processing step that simplifies subsequent image processing. The subjective choice of the optimal number of clusters in segmentation algorithms can result in either under- or over-segmentation, depending on the image type. Insect wings, with their intricate color patterns, pose significant challenges for the accurate capture of color diversity in clustering algorithms that assume a spherical, isotropic cluster distribution. This paper introduces a hybrid approach to color clustering in insect wings, integrating the Simple Linear Iterative Clustering (SLIC) method, which generates the initial superpixels, with a DeltaE 2000 function that precisely discriminates which superpixels to merge. Color differences between superpixels serve as the homogeneity measure during the merging process. The proposed algorithm demonstrates enhanced segmentation, overcoming both over- and under-segmentation, as evidenced by Boundary Recall, Rand index, Under-segmentation Error, and Bhattacharyya distance computed against ground truth data. The Silhouette score and Dunn Index are also used to quantitatively evaluate the efficacy of the proposed clustering technique.
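
    As a rough illustration of this pipeline (not the exact AInsectID implementation), the sketch below generates SLIC superpixels with scikit-image and merges adjacent superpixels whose mean-color CIEDE2000 (DeltaE 2000) difference falls below a threshold; the threshold value, adjacency test, and union-find merging scheme are illustrative assumptions.

```python
# Minimal sketch: SLIC superpixels merged by CIEDE2000 color difference.
# The merge threshold and merging scheme are illustrative assumptions,
# not the paper's exact procedure.
import numpy as np
from skimage import color, segmentation

def merge_superpixels(image_rgb, n_segments=400, delta_e_threshold=10.0):
    labels = segmentation.slic(image_rgb, n_segments=n_segments,
                               compactness=10, start_label=0)
    lab = color.rgb2lab(image_rgb)

    # Mean Lab color of each superpixel.
    n = labels.max() + 1
    means = np.array([lab[labels == i].mean(axis=0) for i in range(n)])

    # Union-find over superpixel labels.
    parent = np.arange(n)
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Adjacent label pairs (horizontal and vertical neighbours).
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        diff = a != b
        pairs.update(map(tuple, np.sort(np.stack([a[diff], b[diff]], axis=1), axis=1)))

    # Merge neighbouring superpixels whose CIEDE2000 distance is small.
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj and color.deltaE_ciede2000(means[i], means[j]) < delta_e_threshold:
            parent[rj] = ri

    return np.vectorize(find)(labels)  # merged superpixel label map
```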

    Artificial Intelligence Based Classification for Urban Surface Water Modelling

    Estimates and predictions of surface water runoff can provide very useful insights into flood risk in urban areas. To automatically predict the flow behaviour of rainfall-runoff water in real-world satellite images, it is important to precisely identify permeable and impermeable areas. This identification helps to calculate the amount of surface water by accounting for the water absorbed by permeable areas and the water that remains on impermeable areas. In this research, a surface water model has been established to predict the flow behaviour of rainfall-runoff water. The study employs a combination of image processing, artificial intelligence and machine learning techniques for the automatic segmentation and classification of permeable and impermeable areas in satellite images. These techniques investigate image classification approaches for three land-use categories (roofs, roads, and pervious areas) commonly found in satellite images of the Earth's surface. Three classification scenarios are investigated to select the best classification model. The first scenario involves pixel-by-pixel classification of images using Classification Tree and Random Forest classifiers, in two different settings of sequential and parallel execution. In the second scenario, the image is divided into objects using the SLIC superpixel segmentation method, and three kinds of feature sets are extracted from the segmented objects. The performance of eight supervised machine learning classifiers is probed using 5-fold cross-validation for multiple SLIC values, and detailed performance comparisons lead to conclusions about object-based versus pixel-based classification schemes. Pareto analysis and knee-point selection are used to choose the SLIC value and the more suitable of the two classification types. Furthermore, a new diversity- and weighted-sum-based ensemble classification model, called ParetoEnsemble, is proposed in this scenario. Weights are applied to the selected component classifiers of the ensemble to create a strong classifier, in which classification is based on multiple votes from the candidate classifiers rather than on a single vote from one classifier. Classification results on unbalanced and balanced data are also evaluated to determine the most suitable mode for satellite image classification in this study. Convolutional Neural Networks based on semantic segmentation are employed as a third scenario, to evaluate the strength of the deep learning model SegNet for the classification of satellite imagery. The best results from the three classification scenarios are compared, and the best classification method is used in the next phase of water modelling with the InfoWorks ICM software, to explore the potential of a partially automated surface water network modelling process. Using the chosen parameter settings, with a specified amount of simulated rain falling onto the imaged area, the amount of surface water flow is estimated to obtain predictions about runoff situations in urban areas, since runoff in such situations can be high enough to pose a dangerous flood risk.
    The area of Feock, in Cornwall, is used as the simulation area of study in this research, where promising results have been derived regarding the classification and modelling of runoff. Estimation of the correlation coefficient between classification accuracy and runoff accuracy provides useful insight into the dependence of runoff performance on classification performance. The trained system was also tested on images of unknown areas, demonstrating reasonable performance given the training and classification limitations and conditions. For these unknown-area images, reasonable estimates of surface water runoff were also derived. An analysis of classification and runoff estimates on unbalanced and balanced data, for multiple parameter configurations, aids the selection of classification and modelling parameter values to be used in future predictions on unknown data. This research is founded on the incorporation of satellite imaging into water modelling, using selected images for analysis and assessment of results. The system can be further improved, and higher-precision runoff predictions achieved, by adding more high-resolution images to the classifier training. The added variety in the trained model can lead to better classification of unknown images, which could eventually provide better modelling and better insights into surface water modelling. Moreover, the modelling phase can be extended in future research to deal with real-time parameters by calibrating the model after the classification phase, in order to observe the impact of classification on the actual calibration.
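
    As a rough illustration of the weighted-vote idea behind the proposed ParetoEnsemble (the thesis's Pareto- and diversity-based weight selection is not reproduced here), the sketch below builds a soft-voting ensemble in scikit-learn whose member weights are, as a simplifying assumption, their 5-fold cross-validated accuracies.

```python
# Illustrative weighted-vote ensemble sketch. Feature matrix X and labels y
# are assumed to come from SLIC-segmented image objects; the member
# classifiers and weighting rule are assumptions, not the thesis's method.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def build_weighted_ensemble(X, y):
    members = [
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ]
    # Assumption: weight each member by its 5-fold cross-validated accuracy.
    weights = [cross_val_score(clf, X, y, cv=5).mean() for _, clf in members]
    # Soft voting combines the members' class probabilities with these weights,
    # so the final decision rests on multiple weighted votes, not a single classifier.
    ensemble = VotingClassifier(estimators=members, voting="soft", weights=weights)
    return ensemble.fit(X, y)
```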

    GEOBIA 2016: Solutions and Synergies, 14-16 September 2016, University of Twente Faculty of Geo-Information and Earth Observation (ITC): open access e-book

    Combining semantic information and image segmentation via Markov random fields

    The formulation of the image segmentation problem has evolved considerably, from the early years of computer vision in the 1970s to the 2010s. While the initial studies offered mostly unsupervised approaches, a great deal of recent work shifts towards supervised solutions. This is due to advancements in cognitive science and their influence on computer vision research. Accelerated availability of computational power also enables researchers to develop complex algorithms. Despite the great effort devoted to image segmentation research, state-of-the-art techniques still fall short of satisfying the needs of the further processing steps of computer vision. This study is another attempt to generate a “substantially complete” segmentation output for the consumption of object classification, recognition and detection steps. Our approach is to fuse multiple segmentation outputs in order to achieve the “best” result with respect to a cost function. The proposed approach, called Boosted-MRF, elegantly formulates the segmentation fusion problem as a Markov Random Field (MRF) model in an unsupervised framework. For this purpose, a set of initial segmentation outputs is obtained, and the consensus among the segmentation partitions is formulated in the energy function of the Markov Random Field model. Finally, minimization of the energy function yields the “best” consensus among the segmentation ensemble. We proceed one step further to improve the performance of the Boosted-MRF by introducing auxiliary domain information into the segmentation fusion process. This enhanced segmentation fusion method, called the Domain Specific MRF, updates the energy function of the MRF model with information received from a domain expert. For this purpose, a top-down segmentation method is employed to obtain a set of Domain Specific Segmentation Maps, which are incomplete segmentations of a given image. Therefore, in this second segmentation fusion method, in addition to the bottom-up segmentation ensemble, we generate an ensemble of top-down Domain Specific Segmentation Maps. Based on the bottom-up and top-down segmentation ensembles, a new MRF energy function is defined. Minimization of this energy function yields the “best” consensus that is consistent with the domain-specific information. Experiments performed on various datasets show that the proposed segmentation fusion methods improve the performance of the segmentation outputs in the ensemble, as measured with various indices such as the Probabilistic Rand Index and Mutual Information. The Boosted-MRF method is also compared to a popular segmentation fusion method, namely Best of K, and is found to be slightly better. The suggested Domain Specific MRF method is applied to a set of outdoor images with vegetation, where vegetation information is utilized as domain-specific information; a slight improvement in performance is recorded in this experiment. The method is also applied to a remotely sensed dataset of building images, where more advanced domain-specific information is available. Here the segmentation performance is evaluated with a measure specifically defined for building images. In these two experiments with the Domain Specific MRF method, it is observed that, as long as reliable domain-specific information is available, the segmentation performance improves significantly. Ph.D. - Doctoral Program
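
    The abstract does not state the energy function explicitly; as a hedged illustration, a consensus energy over a pixel labeling x for an ensemble of M segmentations S^1, ..., S^M could take the Potts-like form below, where the pairwise weight d_pq is the fraction of ensemble members that separate neighbouring pixels p and q, and [.] denotes the Iverson bracket. This is an assumed formulation, not the thesis's exact definition; minimizing E(x), e.g. with graph cuts or iterated conditional modes, would yield a consensus labeling.

```latex
% Illustrative consensus energy for segmentation fusion (assumed form)
E(x) = \sum_{(p,q)\in\mathcal{N}} \Big( d_{pq}\,[x_p = x_q] + (1 - d_{pq})\,[x_p \neq x_q] \Big),
\qquad
d_{pq} = \frac{1}{M}\sum_{m=1}^{M} \big[\, S^m_p \neq S^m_q \,\big].
```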

    Recognition and Reconstruction of Transparent Objects for Augmented Reality

    Learning to Segment Breast Biopsy Whole Slide Images

    We trained and applied an encoder-decoder model to semantically segment breast biopsy images into biologically meaningful tissue labels. Since conventional encoder-decoder networks cannot be applied directly to large biopsy images, and the differently sized structures in biopsies present novel challenges, we propose four modifications: (1) an input-aware encoding block to compensate for information loss, (2) a new dense connection pattern between encoder and decoder, (3) dense and sparse decoders to combine multi-level features, and (4) a multi-resolution network that fuses the results of encoder-decoders run on different resolutions. Our model outperforms a feature-based approach and conventional encoder-decoders from the literature. We use semantic segmentations produced with our model in an automated diagnosis task and obtain higher accuracies than a baseline approach that employs an SVM for feature-based segmentation, both using the same segmentation-based diagnostic features. Comment: Added more WSI images in appendix.
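
    As a generic illustration of encoder-decoder segmentation with encoder-to-decoder skip connections (not the paper's input-aware, densely connected, multi-resolution design), the PyTorch sketch below shows how decoder stages concatenate encoder features before predicting per-pixel tissue labels; the depth, channel widths, and number of classes are illustrative assumptions.

```python
# Generic encoder-decoder sketch with skip connections; NOT the paper's
# architecture. Channel sizes, depth, and class count are assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinySegNet(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc1, self.enc2 = conv_block(3, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)           # 64 (upsampled) + 64 (skip)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 (upsampled) + 32 (skip)
        self.head = nn.Conv2d(32, n_classes, 1)   # per-pixel tissue logits

    def forward(self, x):
        e1 = self.enc1(x)                    # full-resolution features
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# logits = TinySegNet()(torch.randn(1, 3, 256, 256))  # -> (1, n_classes, 256, 256)
```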

    UAV-Enabled Surface and Subsurface Characterization for Post-Earthquake Geotechnical Reconnaissance

    Major earthquakes continue to cause significant damage to infrastructure systems and loss of life (e.g. 2016 Kaikoura, New Zealand; 2016 Muisne, Ecuador; 2015 Gorkha, Nepal). Following an earthquake, costly human-led reconnaissance studies are conducted to document structural or geotechnical damage and to collect perishable field data. Such efforts face many daunting challenges, including safety, resource limitations, and inaccessibility of sites. Unmanned Aerial Vehicles (UAVs) represent a transformative tool for mitigating the effects of these challenges and generating spatially distributed, overall higher-quality data than current manual approaches. UAVs enable multi-sensor data collection and offer a computational decision-making platform that could significantly influence post-earthquake reconnaissance approaches. As demonstrated in this research, UAVs can be used to document earthquake-affected geosystems by creating 3D geometric models of target sites, generate 2D and 3D imagery outputs to perform geomechanical assessments of exposed rock masses, and characterize subsurface field conditions using techniques such as in situ seismic surface wave testing. UAV-camera systems were used to collect images of geotechnical sites to model their 3D geometry using Structure-from-Motion (SfM). Key examples of lessons learned from applying UAV-based SfM to reconnaissance of earthquake-affected sites are presented. The results of 3D modeling and the input imagery were used to assess the mechanical properties of landslides and rock masses. An automatic and semi-automatic 2D fracture detection method was developed and integrated with the 3D SfM imaging framework. A UAV was then integrated with seismic surface wave testing to estimate the shear wave velocity of the subsurface materials, a critical input parameter in the seismic response of geosystems. The UAV was outfitted with a payload release system to autonomously deliver an impulsive seismic source to the ground surface for multichannel analysis of surface waves (MASW) tests. The UAV was found to offer a mobile yet higher-energy source than conventional seismic surface wave techniques and is the foundational component for developing the framework for fully autonomous in situ shear wave velocity profiling. PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145793/1/wwgreen_1.pd
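
    As a hedged illustration of MASW dispersion processing (a generic phase-shift method, not the dissertation's own processing code), the NumPy sketch below builds a dispersion image from multichannel surface wave records and picks the phase velocity of maximum energy at each frequency; the trial velocities, frequency band, and array geometry are assumptions supplied by the caller.

```python
# Hedged sketch of phase-shift dispersion imaging for MASW.
# traces: (n_channels, n_samples) geophone records; offsets in metres;
# dt sample interval in seconds; velocities and freqs are trial grids.
import numpy as np

def dispersion_image(traces, offsets, dt, velocities, freqs):
    velocities = np.asarray(velocities, dtype=float)
    n_samples = traces.shape[1]
    spectra = np.fft.rfft(traces, axis=1)          # per-channel spectra
    fft_freqs = np.fft.rfftfreq(n_samples, d=dt)

    # Keep only the requested frequencies (nearest FFT bins).
    idx = np.array([np.argmin(np.abs(fft_freqs - f)) for f in freqs])
    phase = spectra[:, idx] / (np.abs(spectra[:, idx]) + 1e-12)  # unit-amplitude phases

    image = np.zeros((len(velocities), len(idx)))
    for i, c in enumerate(velocities):
        # Phase shift that aligns a wave travelling at trial velocity c.
        shift = np.exp(1j * 2 * np.pi * np.outer(offsets, fft_freqs[idx]) / c)
        image[i] = np.abs(np.sum(shift * phase, axis=0))

    # Experimental dispersion curve: velocity of maximum energy per frequency.
    picked = velocities[np.argmax(image, axis=0)]
    return image, picked
```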

    Context-Based Filtering of Noisy Labels for Automatic Basemap Updating From UAV Data

    Unmanned aerial vehicles (UAVs) have the potential to obtain high-resolution aerial imagery at frequent intervals, making them a valuable tool for urban planners who require up-to-date basemaps. Supervised classification methods can be exploited to translate the UAV data into such basemaps. However, these methods require labeled training samples, the collection of which may be complex and time consuming. Existing spatial datasets can be exploited to provide the training labels, but these often contain errors due to differences in the date or resolution of the dataset from which the outdated labels were obtained. In this paper, we propose an approach for updating basemaps that uses global and local contextual cues to automatically remove unreliable samples from the training set and thereby improve the classification accuracy. Using UAV datasets over Kigali, Rwanda, and Dar es Salaam, Tanzania, we demonstrate how the amount of mislabeled training samples can be reduced by 44.1% and 35.5%, respectively, leading to a classification accuracy of 92.1% in Kigali and 91.3% in Dar es Salaam. To achieve the same accuracy in Dar es Salaam, between 50,000 and 60,000 manually labeled image segments would be needed. This demonstrates that the proposed approach of using outdated spatial data to provide labels and iteratively removing unreliable samples is a viable method for obtaining high classification accuracies while reducing the costly step of acquiring labeled training samples.
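
    As a simplified illustration of iteratively removing unreliable training labels (the paper's global and local contextual cues are replaced here, as an assumption, by classifier self-disagreement), the sketch below retrains a random forest on the remaining samples and drops those whose outdated label is confidently contradicted.

```python
# Hedged sketch of iterative noisy-label filtering; not the paper's
# context-based method. X: (n_samples, n_features); y: outdated labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def filter_noisy_labels(X, y, n_rounds=3, confidence=0.9):
    keep = np.ones(len(y), dtype=bool)
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[keep], y[keep])
        proba = clf.predict_proba(X)
        pred = clf.classes_[np.argmax(proba, axis=1)]
        # Flag samples whose given label is confidently contradicted.
        unreliable = (pred != y) & (np.max(proba, axis=1) >= confidence)
        if not unreliable[keep].any():
            break
        keep &= ~unreliable
    return keep  # boolean mask of samples considered reliable
```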