
    Voronoi image segmentation and its applications to geoinformatics

    As a growing variety of geospatial images becomes available for analysis, there is a strong need for intelligent geospatial image processing methods. Segmenting and districting digital images is a core process of great importance in many geo-related applications. We propose a flexible image segmentation framework based on generalized Voronoi diagrams computed through Euclidean distance transforms. We introduce a three-scan algorithm that segments images in O(N) time, where N is the number of pixels. The algorithm handles generators of complex types (point, line and area), Minkowski metrics and different weights. The paper also presents applications of the proposed method to various geoinformation datasets. Illustrated examples demonstrate the usefulness and robustness of the proposed method.
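
    A minimal sketch of the nearest-generator idea behind such a segmentation, assuming SciPy is available and considering point generators only; the helper name `voronoi_segment` and the use of `distance_transform_edt` are illustrative assumptions, not the authors' three-scan algorithm.

```python
# Hypothetical sketch: label every pixel with its nearest generator via a
# Euclidean distance transform (point generators only; illustrative, not the
# paper's three-scan algorithm).
import numpy as np
from scipy.ndimage import distance_transform_edt

def voronoi_segment(shape, generator_coords):
    """Assign each pixel the label of its nearest generator point."""
    mask = np.ones(shape, dtype=bool)            # True = background, False = generator
    labels = np.zeros(shape, dtype=np.int32)
    for label, (r, c) in enumerate(generator_coords, start=1):
        mask[r, c] = False
        labels[r, c] = label
    # indices[:, i, j] holds the coordinates of the nearest generator pixel
    _, indices = distance_transform_edt(mask, return_indices=True)
    return labels[indices[0], indices[1]]

seg = voronoi_segment((256, 256), [(40, 60), (200, 100), (128, 220)])
```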

    Parallelizing Scale Invariant Feature Transform on a Distributed Memory Cluster

    Scale Invariant Feature Transform (SIFT) is a computer vision algorithm that is widely used to extract features from images. We explored accelerating an existing implementation of this algorithm with message passing in order to analyze large data sets. We successfully tested two approaches to data decomposition for parallelizing SIFT on a distributed-memory cluster.
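
    A hedged sketch of one plausible data decomposition, splitting an image into row strips across MPI ranks; it assumes mpi4py and OpenCV and is not the implementation described above. A practical decomposition would overlap the strips so that features near strip boundaries are not lost.

```python
# Hypothetical tile-based decomposition for SIFT with MPI (mpi4py + OpenCV assumed).
from mpi4py import MPI
import cv2
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    image = cv2.imread("scene.tif", cv2.IMREAD_GRAYSCALE)
    tiles = np.array_split(image, size, axis=0)    # split rows across ranks
else:
    tiles = None

tile = comm.scatter(tiles, root=0)                 # each rank receives one strip
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(tile, None)

all_desc = comm.gather(descriptors, root=0)        # collect features on rank 0
if rank == 0:
    merged = np.vstack([d for d in all_desc if d is not None])
```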

    On-line data archives

    Digital libraries and other large archives of electronically retrievable and manipulable material are becoming widespread in both commercial and scientific arenas. Advances in networking technologies have led to a greater proliferation of wide-area distributed data warehousing, with associated data management challenges. We review tools and technologies for supporting distributed on-line data archives and explain our key concept of active data archives, in which data can be processed on demand before delivery. We are developing wide-area data warehousing software infrastructure for geographically distributed archives of large scientific data sets, such as satellite image data, that are stored hierarchically on disk arrays and tape silos and are accessed by a variety of scientific and decision support applications. Interoperability is a major issue for distributed data archives and requires standards for server interfaces and metadata. We review present activities and our contributions in developing such standards for different application areas.
    K. Hawick, P. Coddington, H. James, C. Patte

    An Augmentative Gaze Directing Framework for Multi-Spectral Imagery

    Modern digital imaging techniques have made imaging more prolific than ever, and the volume of imagery available through multi-spectral imaging methods now exceeds what human analysts can process on their own. The researchers proposed and developed a novel eye-movement-contingent framework and display system by adapting the demonstrated technique of subtle gaze direction, which presents modulations within the displayed image. The system sought to augment visual search performance on aerial imagery by incorporating multi-spectral image processing algorithms to determine potential regions of interest within an image. This exploratory work studied the feasibility of visual gaze direction, with the specific intent of extending the application to geospatial image analysis without the need for overt cueing to areas of potential interest, thereby maintaining the benefits of an undirected and unbiased search by the observer.
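
    A minimal sketch of the gaze-contingent rule implied above: a subtle modulation is shown only while gaze is far from the candidate region and is removed as the eye approaches, so the cue never becomes overt. The function name, radii, and hysteresis logic are assumptions for illustration, not the authors' framework.

```python
# Hypothetical per-frame rule for subtle gaze direction (names and radii assumed).
import math

def modulation_visible(gaze_xy, roi_xy, start_radius=300.0, stop_radius=150.0,
                       currently_on=False):
    """Return whether the peripheral luminance modulation should be shown this frame."""
    dist = math.dist(gaze_xy, roi_xy)
    if not currently_on:
        return dist > start_radius       # only begin while the region is peripheral
    return dist > stop_radius            # terminate before the gaze lands on it

# Example frame update: gaze at (100, 120), candidate region centered at (900, 540)
show = modulation_visible((100, 120), (900, 540))
```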

    A reconfigurable component-based problem solving environment

    Problem solving environments are an attractive approach to the integration of calculation and management tools for various scientific and engineering applications. These applications often require high performance computing components in order to be computationally feasible. It is therefore a challenge to construct integration technology, suitable for problem solving environments, that allows both flexibility and the embedding of parallel and high performance computing systems. Our DISCWorld system is designed to meet these needs and provides a Java-based middleware to integrate component applications across wide-area networks. Key features of our design are the abilities to: access remotely stored data; compose complex processing requests either graphically or through a scripting language; execute components on heterogeneous and remote platforms; and reconfigure task sub-graphs to run across multiple servers. Operators in task graphs can be slow (but portable) “pure Java” implementations or wrappers to fast (platform-specific) supercomputer implementations.
    K. Hawick, H. James, P. Coddington
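
    DISCWorld itself is Java-based middleware; as a language-neutral illustration of the task-graph idea (operators executed in dependency order, each dispatchable to a different server), here is a minimal Python sketch. The operator names and the `run` helper are assumptions, not the DISCWorld API.

```python
# Hypothetical task sub-graph executed in dependency order (illustrative only).
from graphlib import TopologicalSorter

# Each operator lists the operators whose outputs it consumes.
task_graph = {
    "load_tile":    set(),
    "georeference": {"load_tile"},
    "classify":     {"georeference"},
    "mosaic":       {"classify"},
}

def run(graph, executors):
    """Run operators in dependency order, dispatching each to an assigned executor."""
    for op in TopologicalSorter(graph).static_order():
        executors.get(op, print)(op)     # placeholder dispatch: run locally / print

run(task_graph, executors={})
```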

    Rapid visual presentation to support geospatial big data processing

    Given the limited number of human GIS/image analysts at any organization, the use of their time and organizational resources is important, especially in Big Data scenarios where organizations may be overwhelmed with vast amounts of geospatial data. This manuscript describes experimental research outlining a concept of Human-Computer Symbiosis in which computers perform tasks such as classification on a large image dataset, and humans then use Brain-Computer Interfaces (BCIs) to classify the images that machine learning had difficulty with. The BCI analysis is added to exploit the brain's ability to better answer questions such as: is the object in this image the object being sought? To determine the feasibility of such a system, a supervised multi-layer convolutional neural network (CNN) was trained to distinguish 'ship' from 'no ship' in satellite imagery. A prediction layer was then added to the trained model to output the probability that a given image belongs to each of the two classes. If the probabilities fell within one standard deviation of the mean of a Gaussian distribution centered at 0.5, the images were stored in a separate dataset for Rapid Serial Visual Presentation (RSVP), implemented with PsychoPy, to a human analyst wearing a low-cost EMOTIV Insight EEG BCI headset. During the RSVP phase, hundreds of images per minute can be presented sequentially. At that pace, human analysts cannot make conscious decisions about what is in each image; however, the subliminal 'aha' moment can still be detected by the headset. These moments are identified through Event-Related Potentials (ERPs), specifically the P300 ERP. If a P300 ERP is generated for the detection of a ship, the relevant image is moved to its designated dataset; otherwise, if the image classification is still unclear, it is set aside for another RSVP iteration in which the time afforded to the analyst for observing each image is increased. If classification is still uncertain after a reasonable number of RSVP iterations, the images in question are located within the grid matrix of their larger image scene, and the images adjacent to them on the grid are added to the presentation to give the analyst more contextual information via an expanded field of view. If classification remains uncertain, one final expansion of the field of view is afforded. Lastly, if the classification of the image is still indeterminable, the image is stored in an archive dataset.
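
    A minimal sketch of the routing step described above, assuming per-image ship probabilities from the CNN and a one-standard-deviation band around 0.5; the function name and the value of sigma are illustrative assumptions, not the authors' code.

```python
# Hypothetical routing of CNN predictions: confident images are classified directly,
# uncertain ones (within one standard deviation of 0.5) are queued for human RSVP review.
def split_by_confidence(image_ids, ship_probs, sigma=0.1):
    ships, no_ships, rsvp_queue = [], [], []
    for img, p in zip(image_ids, ship_probs):
        if abs(p - 0.5) <= sigma:        # too uncertain: send to the RSVP/BCI stage
            rsvp_queue.append(img)
        elif p > 0.5:
            ships.append(img)
        else:
            no_ships.append(img)
    return ships, no_ships, rsvp_queue

ships, no_ships, queue = split_by_confidence(["a.png", "b.png"], [0.92, 0.47])
```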

    ARTMAP Neural Networks for Information Fusion and Data Mining: Map Production and Target Recognition Methodologies

    The Sensor Exploitation Group of MIT Lincoln Laboratory incorporated an early version of the ARTMAP neural network as the recognition engine of a hierarchical system for fusion and data mining of registered geospatial images. The Lincoln Lab system has been successfully fielded, but is limited to target / non-target identifications and does not produce whole maps. Procedures defined here extend these capabilities by means of a mapping method that learns to identify and distribute arbitrarily many target classes. This new spatial data mining system is designed particularly to cope with the highly skewed class distributions of typical mapping problems. Specification of canonical algorithms and a benchmark testbed has enabled the evaluation of candidate recognition networks as well as pre- and post-processing and feature selection options. The resulting mapping methodology sets a standard for a variety of spatial data mining tasks. In particular, training pixels are drawn from a region that is spatially distinct from the mapped region, which may feature an output class mix substantially different from that of the training set. The system recognition component, default ARTMAP, with its fully specified set of canonical parameter values, has become the a priori system of choice among this family of neural networks for a wide variety of applications.
    Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); Office of Naval Research (N00014-01-1-0624)
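
    A small sketch of the evaluation protocol described above, in which training pixels come from a region spatially distinct from the mapped region. The split below simply takes a left-hand strip of the scene for training; the helper name and fraction are assumptions, and the classifier itself (default ARTMAP) is not shown.

```python
# Hypothetical spatially distinct train/map split (not ARTMAP itself).
# `features` is a row-major flattened (rows*cols, n_features) array of pixel features.
import numpy as np

def spatial_split(features, labels, cols, train_frac=0.3):
    """Train on the left strip of the scene; map (evaluate) the remaining region."""
    col_index = np.arange(features.shape[0]) % cols    # recover each pixel's column
    train = col_index < int(cols * train_frac)
    return (features[train], labels[train]), (features[~train], labels[~train])
```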

    Extracting Buildings from True Color Stereo Aerial Images Using a Decision Making Strategy

    The automatic extraction of buildings from true color stereo aerial imagery in a dense built-up area is the main focus of this paper. Our strategy aims to reduce the complexity of the image content by means of a three-step procedure that combines reliable geospatial image analysis techniques. Even if it is a rudimentary first step towards a more general approach, the method proved useful in urban sprawl studies for rapid map production in flat areas by retrieving indispensable information on buildings from scanned historic aerial photography. After the preliminary creation of a photogrammetric model to manage the Digital Surface Model and orthophotos, five intermediate mask layers (Elevation, Slope, Vegetation, Shadow, Canny Edges) were processed through the combined use of remote sensing image processing and GIS software environments. Lastly, a rectangular building block model without roof structures (Level of Detail 1, LoD1) was automatically generated. System performance was evaluated with objective criteria, showing good results in a complex urban area featuring various types of building objects.
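
    A hedged sketch of the mask-combination step, assuming the five layers have already been derived as NumPy arrays on a common grid; the array names, thresholds, and the way the edge mask is applied are illustrative assumptions, not the paper's exact decision rules.

```python
# Hypothetical combination of the intermediate masks into building candidates
# (thresholds and layer names are illustrative assumptions).
import numpy as np

def building_candidates(ndsm, slope_deg, veg_index, shadow, canny_edges,
                        min_height=2.5, max_slope=45.0, max_veg=0.3):
    elevated   = ndsm > min_height          # above-ground objects from the elevation layer
    plausible  = slope_deg < max_slope      # reject very steep, noisy DSM regions
    not_veg    = veg_index < max_veg        # greenness index from the true color orthophoto
    not_shadow = ~shadow.astype(bool)
    candidates = elevated & plausible & not_veg & not_shadow
    return candidates & ~canny_edges.astype(bool)   # split touching candidates at edges
```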

    Quality Assessment of Landsat Surface Reflectance Products Using MODIS Data

    Surface reflectance adjusted for atmospheric effects is a primary input for land cover change detection and for developing many higher level surface geophysical parameters. With the development of automated atmospheric correction algorithms, it is now feasible to produce large quantities of surface reflectance products using Landsat images. Validation of these products requires in situ measurements, which either do not exist or are difficult to obtain for most Landsat images. The surface reflectance products derived using data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS), however, have been validated more comprehensively. Because MODIS on the Terra platform and Landsat 7 follow the same orbit only half an hour apart, and each of the six Landsat spectral bands overlaps with a MODIS band, good agreement between MODIS and Landsat surface reflectance values can be considered an indicator of the reliability of the Landsat products, while disagreement may suggest potential quality problems that need to be further investigated. Here we develop a system called the Landsat-MODIS Consistency Checking System (LMCCS). This system automatically matches Landsat data with MODIS observations acquired on the same date over the same locations and uses them to calculate a set of agreement metrics. To maximize portability, Java and open-source libraries were used in developing this system, and object-oriented programming (OOP) principles were followed to make it more flexible for future expansion. As a highly automated system designed to run as a stand-alone package or as a component of other Landsat data processing systems, it can be used to assess the quality of essentially every Landsat surface reflectance image for which spatially and temporally matching MODIS data are available. The effectiveness of the system was demonstrated by using it to assess preliminary surface reflectance products derived from the Global Land Survey (GLS) Landsat images for the 2000 epoch. As surface reflectance will likely be a standard product for future Landsat missions, the approach developed in this study can be adapted as an operational quality assessment system for those missions.
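
    LMCCS itself is a Java system; the Python sketch below only illustrates the kind of agreement metrics such a comparison might compute, assuming the Landsat band has already been aggregated to the MODIS grid and the two arrays are co-registered. The metric choices and names are assumptions, not the system's actual output.

```python
# Hypothetical agreement metrics between matched Landsat and MODIS reflectance arrays.
import numpy as np

def agreement_metrics(landsat, modis):
    valid = np.isfinite(landsat) & np.isfinite(modis)   # shared valid-data mask
    x, y = landsat[valid], modis[valid]
    bias = np.mean(x - y)                               # mean Landsat-minus-MODIS difference
    rmsd = np.sqrt(np.mean((x - y) ** 2))               # root-mean-square deviation
    r = np.corrcoef(x, y)[0, 1]                         # linear correlation
    return {"bias": bias, "rmsd": rmsd, "r": r, "n": int(valid.sum())}
```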