
    Hierarchical Image Segmentation Using the Watershed Algorithm with a Streaming Implementation

    We have implemented a graphical user interface (GUI) based semi-automatic hierarchical segmentation scheme, which works in three stages. In the first stage, we process the original image by filtering and thresholding the gradient to reduce the level of noise. In the second stage, we compute the watershed segmentation of the image using the rainfalling simulation approach. In the third stage, we apply two region merging schemes, namely implicit region merging and seeded region merging, to the result of the watershed algorithm. Both region merging schemes are based on the watershed depth of regions and serve to reduce the over-segmentation produced by the watershed algorithm. Implicit region merging automatically produces a hierarchy of regions. In seeded region merging, a selected seed region can be grown from the watershed result, producing a hierarchy. A meaningful segmentation can simply be chosen from the hierarchy produced. We have also proposed and tested a streaming algorithm based on the watershed algorithm, which computes the segmentation of an image without iterative processing of adjacent blocks. We have proved that the streaming algorithm produces the same result as the serial watershed algorithm. We have also discussed the extensibility of the streaming algorithm to efficient parallel implementations.
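    The three-stage scheme described above (gradient preprocessing, watershed, depth-based region merging) can be sketched with scikit-image. The sketch below is only an approximation under stated assumptions: it uses the library's flooding watershed rather than the paper's rainfalling simulation, and it emulates watershed-depth region merging with h-minima markers at increasing depths; the function name and parameter values are illustrative.

    ```python
    from skimage import filters, measure, morphology, segmentation, util

    def watershed_hierarchy(image, grad_thresh=0.02, depths=(0.02, 0.05, 0.1)):
        """Sketch for a grayscale image: (1) smooth and threshold the gradient,
        (2) run the watershed, (3) approximate depth-based merging by keeping
        only minima deeper than h, for increasing h, yielding a hierarchy."""
        img = util.img_as_float(image)
        grad = filters.sobel(filters.gaussian(img, sigma=1.5))   # stage 1: filtering
        grad[grad < grad_thresh] = 0.0                           # stage 1: threshold the gradient
        hierarchy = []
        for h in depths:                                         # larger h -> coarser level
            markers = measure.label(morphology.h_minima(grad, h))
            hierarchy.append(segmentation.watershed(grad, markers))  # stage 2: watershed
        return hierarchy
    ```

    Each element of the returned list is a label image; later levels absorb shallow basins into their neighbours, mimicking the hierarchy that implicit region merging produces.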

    The Spine of the Cosmic Web

    We present the SpineWeb framework for the topological analysis of the Cosmic Web and the identification of its walls, filaments and cluster nodes. Based on the watershed segmentation of the cosmic density field, the SpineWeb method invokes the local adjacency properties of the boundaries between the watershed basins to trace the critical points in the density field and the separatrices defined by them. The separatrices are classified into walls and the spine, the network of filaments and nodes in the matter distribution. Testing the method with a heuristic Voronoi model yields outstanding results. Following the discussion of the test results, we apply the SpineWeb method to a set of cosmological N-body simulations. The latter illustrates the potential for studying the structure and dynamics of the Cosmic Web. (Comment: accepted for publication; high-resolution version: http://skysrv.pha.jhu.edu/~miguel/SpineWeb)
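    As a rough illustration of the starting point of this method, the watershed basins of a density field and the boundaries between them can be computed with scikit-image; the sketch below stops at basins and boundaries and does not reproduce the SpineWeb tracing of critical points or the classification of separatrices into walls, filaments and nodes. The function name and the mock data are illustrative.

    ```python
    import numpy as np
    from skimage import segmentation

    def void_basins_and_boundaries(density):
        """Watershed basins of a density field (basins collect around density
        minima, i.e. voids) and the boundary voxels between adjacent basins,
        which are the raw material for walls and filaments in this framework."""
        basins = segmentation.watershed(density)                  # one basin per density minimum
        boundaries = segmentation.find_boundaries(basins, mode='thick')
        return basins, boundaries

    # mock 3-D density cube, only to show the call pattern
    density = np.random.default_rng(0).random((64, 64, 64))
    basins, boundaries = void_basins_and_boundaries(density)
    ```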

    The geometry of large Arctic tundra lakes observed in historical maps and satellite images

    The climate of the Arctic is warming rapidly and this is causing major changes to the cycling of carbon and the distribution of permafrost in this region. Tundra lakes are key components of the Arctic climate system because they represent a source of methane to the atmosphere. In this paper, we aim to analyze the geometry of the patterns formed by large (>0.8 km²) tundra lakes in the Russian High Arctic. We have studied images of tundra lakes in historical maps from the State Hydrological Institute, Russia (date 1977; scale 0.21166 km/pixel) and in Landsat satellite images derived from the Google Earth Engine (G.E.E.; date 2016; scale 0.1503 km/pixel). The G.E.E. is a cloud-based platform for planetary-scale geospatial analysis on over four decades of Landsat data. We developed an image-processing algorithm to segment these maps and images, measure the area and perimeter of each lake, and compute the fractal dimension of the lakes in the images we have studied. Our results indicate that as lake size increases, the fractal dimension of the lakes bifurcates. For lakes observed in historical maps, this bifurcation occurs among lakes larger than 100 km² (fractal dimension 1.43 to 1.87). For lakes observed in satellite images, this bifurcation occurs among lakes larger than ∼100 km² (fractal dimension 1.31 to 1.95). Tundra lakes with a fractal dimension close to 2 have a tendency to be self-similar with respect to their area–perimeter relationships. Area–perimeter measurements indicate that lakes with a length scale greater than 70 km² are power-law distributed. Preliminary analysis of changes in lake size over time in paired lakes (lakes that were visually matched in both the historical map and the satellite imagery) indicates that some lakes in our study region have increased in size over time, whereas others have decreased in size over time. Lake size change during this 39-year time interval can be up to half the size of the lake as recorded in the historical map.
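    A minimal sketch of the area–perimeter step described above is given below, assuming the lakes have already been segmented into a binary mask; the fractal dimension D is estimated from the slope of log(perimeter) against log(area) via the standard relation P ∝ A^(D/2). The function name, the minimum-area cutoff and the use of pixel units are assumptions for illustration.

    ```python
    import numpy as np
    from skimage import measure

    def area_perimeter_fractal_dimension(lake_mask, min_area_px=100):
        """Estimate D from P ~ A**(D/2) over all sufficiently large lakes
        (connected components) in a binary mask; needs at least two lakes."""
        props = measure.regionprops(measure.label(lake_mask))
        areas = np.array([p.area for p in props if p.area >= min_area_px])
        perims = np.array([p.perimeter for p in props if p.area >= min_area_px])
        slope, _ = np.polyfit(np.log(areas), np.log(perims), 1)
        return 2.0 * slope
    ```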

    Analyses of the Watershed Transform

    In the framework of mathematical morphology, the watershed transform (WT) represents a key step in the image segmentation procedure. In this paper, we present a thorough analysis of some existing watershed approaches in the discrete case: WT based on flooding, WT based on path-cost minimization, watershed based on topology preservation, WT based on local condition and WT based on minimum spanning forest. For each approach, we present a detailed description of the processing procedure followed by the mathematical foundations and the algorithm of reference. Recent publications based on some of the approaches are also presented and discussed. Our study concludes with a classification of the different watershed transform algorithms according to solution uniqueness, topology preservation, prerequisite minima computation and linearity.
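    Of the approaches listed above, the flooding-type watershed is the one implemented in scikit-image; the toy example below separates two touching discs by flooding the negated distance transform from markers, which is a common reference use of that algorithm. The marker threshold of 0.7 is an arbitrary choice for this toy image.

    ```python
    import numpy as np
    from scipy import ndimage as ndi
    from skimage import measure, segmentation

    # two overlapping discs as a toy foreground mask
    yy, xx = np.ogrid[:80, :120]
    mask = ((yy - 40) ** 2 + (xx - 35) ** 2 < 30 ** 2) | \
           ((yy - 40) ** 2 + (xx - 85) ** 2 < 30 ** 2)

    distance = ndi.distance_transform_edt(mask)
    markers = measure.label(distance > 0.7 * distance.max())        # one marker per disc centre
    labels = segmentation.watershed(-distance, markers, mask=mask)  # flooding WT: two basins
    ```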

    Analysis of lesion border segmentation using watershed algorithm

    Automatic lesion segmentation is an important part of computer-based skin cancer detection. A watershed algorithm was introduced and tested on benign and melanoma images. The average of three dermatologists' manually drawn borders was used as the benchmark. Hair removal, black border removal and vignette removal methods were introduced as preprocessing steps. A new lesion ratio estimate, determined from the outer bounding box ratio, was added to the merging method. In postprocessing, small blob removal and border smoothing using a peninsula removal method as well as second-order B-spline smoothing were included. A novel threshold was developed for removing large light areas near the lesion boundary. A supervised neural network was applied to cluster the results and improve accuracy, classifying images into three clusters: proper estimate, over-estimate and under-estimate. Compared to the manually drawn average border, an overall error of 11.12% was achieved. Future work will involve reducing peninsula-shaped noise and looking for other reliable features for the classifier.
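    The postprocessing steps mentioned above can be approximated in a few lines with scikit-image; the sketch below removes small blobs, keeps the largest candidate region and smooths the border with a morphological opening/closing, which is a simpler stand-in for the peninsula removal and B-spline smoothing described in the abstract. The function name and thresholds are illustrative.

    ```python
    import numpy as np
    from skimage import measure, morphology

    def clean_lesion_mask(mask, min_blob_px=200, smooth_radius=5):
        """Drop small blobs, keep the largest region, and smooth the border."""
        mask = morphology.remove_small_objects(mask.astype(bool), min_size=min_blob_px)
        labels = measure.label(mask)
        if labels.max() > 0:
            largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
            mask = labels == largest
        footprint = morphology.disk(smooth_radius)
        return morphology.binary_closing(morphology.binary_opening(mask, footprint), footprint)
    ```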

    Local Variation as a Statistical Hypothesis Test

    The goal of image oversegmentation is to divide an image into several pieces, each of which should ideally be part of an object. One of the simplest and yet most effective oversegmentation algorithms is known as local variation (LV) (Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and show that algorithms similar to LV can be devised by applying different statistical models and decisions, thus providing further theoretical justification and a well-founded explanation for the unexpectedly high performance of the LV approach. Some of these algorithms are based on statistics of natural images and on a hypothesis testing decision; we denote these algorithms probabilistic local variation (pLV). The best pLV algorithm, which relies on censored estimation, presents state-of-the-art results while keeping the same computational complexity as the LV algorithm.
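    The baseline LV algorithm studied here is available in scikit-image as `felzenszwalb`; the snippet below only reproduces that baseline on a sample image (the pLV variants are not part of the library), and the parameter values are typical choices, not those from the paper.

    ```python
    from skimage import data, segmentation

    image = data.astronaut()
    segments = segmentation.felzenszwalb(image, scale=100, sigma=0.8, min_size=20)
    print("number of superpixels:", segments.max() + 1)
    ```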

    A Knowledge-Based Approach to Raster-Vector Conversion of Large Scale Topographic Maps

    Paper-based raster maps are primarily for human consumption, and their interpretation always requires some level of human expertise. Today's computer services in geoinformatics usually require vectorized topographic maps. The usual method of conversion has been an error-prone, manual process. In this article, the possibilities, methods and difficulties of the conversion are discussed. The results described here are partially implemented in the IRIS project, but further work remains. The work emphasizes digital image processing tools and a knowledge-based approach. The system in development separates the recognition of point-like, line-like and surface-like objects, and the most successful approach appears to be the recognition of these objects in reverse order with respect to their printing. During the recognition of surfaces, homogeneous and textured surfaces must be distinguished. The line-like objects constitute the most diverse and complicated group. The IRIS project realises a moderate but significant step towards the automation of the map recognition process, bearing in mind that full automation is unlikely. It is reasonable to assume that human experts will always be required for high-quality interpretation, but it is an exciting challenge to decrease the burden of manual work.
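    As a very rough illustration of the point-like / line-like / surface-like separation mentioned above, the sketch below classifies the connected components of a binarized map layer by size and by how close their area is to skeleton length times an assumed line width; the thresholds, function name and heuristic itself are assumptions, not the IRIS project's method.

    ```python
    from skimage import measure, morphology

    def classify_map_objects(binary, point_max_px=30, line_width_px=3):
        """Label each connected component of a binary map layer as point-like,
        line-like or surface-like using crude size/thinness heuristics."""
        labels = measure.label(binary)
        classes = {}
        for region in measure.regionprops(labels):
            component = labels[region.slice] == region.label
            skeleton_len = morphology.skeletonize(component).sum()
            if region.area <= point_max_px:
                classes[region.label] = "point"
            elif region.area <= line_width_px * skeleton_len:
                classes[region.label] = "line"
            else:
                classes[region.label] = "surface"
        return classes
    ```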

    GeoAI-enhanced Techniques to Support Geographical Knowledge Discovery from Big Geospatial Data

    Big data that contain geo-referenced attributes have significantly reformed the way that I process and analyze geospatial data. Compared with the benefits expected in a data-rich environment, more data have not always contributed to more accurate analysis. “Big but valueless” has become a critical concern in the community of GIScience and data-driven geography. As a highly utilized GeoAI technique, deep learning models designed for processing geospatial data integrate powerful computing hardware and deep neural networks into various dimensions of geography to effectively discover the representation of data. However, limitations of these deep learning models have also been reported: people may have to spend much time preparing training data before a deep learning model can be implemented. The objective of this dissertation research is to promote state-of-the-art deep learning models in discovering the representation, value and hidden knowledge of GIS and remote sensing data, through three research approaches. The first methodological framework aims to unify multifarious shadow shapes into a limited number of representative shadow patterns using convolutional neural network (CNN)-powered shape classification, for efficient shadow-based building height estimation. The second research focus integrates semantic analysis into a framework of various state-of-the-art CNNs to support human-level understanding of map content. The final research approach of this dissertation focuses on normalizing geospatial domain knowledge to promote the transferability of a CNN model to land-use/land-cover classification. This research reports a method designed to discover detailed land-use/land-cover types that might be challenging for a state-of-the-art CNN model that previously performed well on land-cover classification only.
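    The shadow-pattern classification mentioned in the first framework is CNN-based; below is a minimal, generic convolutional classifier in PyTorch to illustrate the kind of model involved. The architecture, the 64x64 single-channel patch size and the number of classes are assumptions for illustration and are not the dissertation's model.

    ```python
    import torch
    import torch.nn as nn

    class ShadowShapeCNN(nn.Module):
        """Toy CNN that maps a shadow-shape patch to one of n_classes patterns."""
        def __init__(self, n_classes=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    # one dummy forward pass on a batch of 64x64 grayscale patches
    logits = ShadowShapeCNN()(torch.randn(4, 1, 64, 64))
    ```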

    Automated, on-board terrain analysis for precision landings

    Advances in space robotics technology hinge to a large extent upon the development and deployment of sophisticated new vision-based methods for automated in-space mission operations and scientific survey. To this end, we have developed a new concept for automated terrain analysis that is based upon a generic image enhancement platform: multi-scale retinex (MSR) and visual servo (VS) processing. This pre-conditioning with the MSR and the VS produces a "canonical" visual representation that is largely independent of lighting variations and exposure errors. Enhanced imagery is then processed with a biologically inspired two-channel edge detection process, followed by a smoothness-based criterion for image segmentation. Landing sites can be automatically determined by examining the results of the smoothness-based segmentation, which shows those areas in the image that surpass a minimum degree of smoothness. Though the MSR has proven to be a very strong enhancement engine, the other elements of the approach (the VS, terrain map generation, and smoothness-based segmentation) are in early stages of development. Experimental results on data from the Mars Global Surveyor show that the imagery can be processed to automatically obtain smooth landing sites. In this paper, we describe the method used to obtain these landing sites, and also examine the smoothness criterion in terms of the imager and scene characteristics. Several examples of applying this method to simulated and real imagery are shown.
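    Multi-scale retinex, the enhancement engine named above, has a standard textbook form: an average over several scales of log(image) minus log(Gaussian-blurred image). The sketch below implements that common definition for a single-channel image; the scale values are typical choices from the retinex literature, not the authors' settings, and the visual servo (VS) stage is not included.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    def multi_scale_retinex(image, sigmas=(15, 80, 250), eps=1e-6):
        """MSR for a single-channel image: mean over scales of
        log(I) - log(G_sigma * I), stretched to [0, 1] for display."""
        img = image.astype(np.float64) + eps
        msr = np.zeros_like(img)
        for sigma in sigmas:
            msr += np.log(img) - np.log(ndi.gaussian_filter(img, sigma) + eps)
        msr /= len(sigmas)
        return (msr - msr.min()) / (msr.max() - msr.min() + eps)
    ```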