
    Automatic Main Road Extraction from High Resolution Satellite Imagery

    Road information is essential for automatic GIS (geographical information system) data acquisition, transportation, and urban planning. Automatic road (network) detection from high-resolution satellite imagery holds great potential for significantly reducing database development/updating cost and turnaround time. Many algorithms and methodologies have been presented for this purpose, ranging from so-called low-level feature detection to high-level, context-supported grouping, yet no practical system can fully automatically extract road networks from space imagery for automatic mapping. This paper presents a methodology for automatic main road detection from high-resolution IKONOS satellite imagery. The strategies include a multiresolution (image pyramid) method, Gaussian blurring, a line finder using a one-dimensional template correlation filter, line segment grouping, and multi-layer result integration. The multi-layer or multi-resolution approach to road extraction is a very effective strategy for saving processing time and improving robustness. To realize the strategy, the original IKONOS image is downsampled to several resolutions to generate an image pyramid; the line finder, a one-dimensional template correlation filter applied after Gaussian blurring, then detects road centerlines. Extracted centerline segments may or may not belong to roads, and there are two ways to determine their attributes. The first uses segment grouping to form longer line segments and assigns each segment a probability of being a road based on its length and other geometric and photometric attributes; for example, a longer segment implies a higher probability of being a road. A perceptual-grouping-based method is used for road segment linking via a probability model that takes multiple sources of information into account, including the clues present in the gaps between segments. The second way is to verify the segments by feature detection back at a higher-resolution layer of the image pyramid.
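The pipeline described above, Gaussian blurring, pyramid generation by repeated downsampling, and a one-dimensional template correlation line finder, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the template shape, pyramid depth, and all function names are assumptions, and the response is computed along rows only (a full detector would also sweep orientations).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, correlate1d

def build_pyramid(image, levels=3, sigma=1.0):
    """Gaussian-blur then 2x-downsample to form an image pyramid."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(blurred[::2, ::2])
    return pyramid

def line_response(image, half_width=2):
    """1-D template correlation along rows: a bright bar flanked by
    darker shoulders; a high response marks a candidate road centerline."""
    bar = np.ones(2 * half_width + 1)
    template = np.concatenate([-bar / 2, bar, -bar / 2])  # zero-mean profile
    template /= np.linalg.norm(template)
    return correlate1d(image, template, axis=1, mode="nearest")
```

In the multi-layer strategy, `line_response` would be evaluated at each pyramid level and the detections integrated across levels.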

    Digital Color Imaging

    This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.

    Image segmentation with adaptive region growing based on a polynomial surface model

    A new method for segmenting intensity images into smooth surface segments is presented. The main idea is to divide the image into flat, planar, convex, concave, and saddle patches that coincide as well as possible with meaningful object features in the image. To this end, we propose an adaptive region growing algorithm based on low-degree polynomial fitting. The algorithm uses a new adaptive thresholding technique with the L∞ fitting cost as a segmentation criterion. The polynomial degree and the fitting error are automatically adapted during the region growing process. The main contribution is that the algorithm detects outliers and edges, distinguishes between strong and smooth intensity transitions, and finds surface segments that are bent in a certain way. As a result, the surface segments corresponding to meaningful object features and the contours separating the surface segments coincide with real-image object edges. Moreover, the curvature-based surface shape information facilitates many tasks in image analysis, such as object recognition performed on the polynomial representation. The polynomial representation provides good image approximation while preserving all the necessary details of the objects in the reconstructed images. The method outperforms existing techniques when segmenting images of objects with diffuse reflecting surfaces.
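The core step of such an algorithm, fitting a low-degree polynomial surface to a candidate region and scoring it by its worst-case (L∞) residual, can be sketched as below. This is a hedged illustration, not the paper's implementation: the paper uses the L∞ fitting cost directly, whereas this sketch uses an ordinary least-squares fit and then evaluates its L∞ residual as the segmentation criterion; the function name and signature are assumptions.

```python
import numpy as np

def poly_fit_linf(points, values, degree=2):
    """Fit a 2-D polynomial surface of the given total degree by least
    squares, and return the coefficients together with the L-infinity
    (maximum absolute) residual used as a segmentation criterion."""
    x, y = points[:, 0], points[:, 1]
    # Design matrix: all monomials x^i * y^j with i + j <= degree.
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    residual = np.max(np.abs(A @ coef - values))
    return coef, residual
```

A region-growing loop would tentatively add a neighboring pixel, refit, and accept the pixel only while the residual stays below the adaptive threshold, raising the polynomial degree when a planar model no longer suffices.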

    Geophysical remote sensing of North Carolina’s historic cultural landscapes: studies at House in the Horseshoe State Historic Site

    This dissertation is written in accordance with the three-article option offered by the Geography Department at UNC Greensboro. It contains three manuscripts to be submitted for publication. The articles address specific research issues within the remote sensing process described by Jensen (2016) as they apply to subsurface geophysical remote sensing of historic cultural landscapes, using the buried architectural features of House in the Horseshoe State Historic Site in Moore County, North Carolina. The first article compares instrument detection capabilities by examining subsurface structure remnants as they appear in single-band ground-penetrating radar (GPR), magnetic gradiometer, magnetic susceptibility, and conductivity images, and also demonstrates how excavation strengthens geophysical image interpretation. The second article examines the ability of GPR to estimate volumetric soil moisture (VSM) in order to improve the timing of data collection, and also examines the visible effect of variable moisture conditions on the interpretation of a large historic pit feature, while incorporating the relative soil moisture continuum concepts common to geography/geomorphology into a discussion of GPR survey hydrologic conditions. The third article examines the roles of scientific visualization and cartography in the production of knowledge and the presentation of maps using geophysical data to depict historic landscapes. This study explores visualization techniques pertaining to the private data-exploration view of the expert and to the simplified public-facing view.

    BEMDEC: An Adaptive and Robust Methodology for Digital Image Feature Extraction

    The intriguing study of feature extraction, and edge detection in particular, has, as a result of the increased use of imagery, drawn attention not just from computer science but also from a variety of scientific fields. However, challenges persist in formulating a feature extraction operator, particularly for edges, that satisfies the necessary properties of a low probability of error (i.e., of failing to mark true edges), good accuracy, and a consistent response to a single edge. Moreover, it should be pointed out that most work in the area of feature extraction has focused on improving existing approaches rather than devising or adopting new ones. In the image processing subfield, where the needs constantly change, we must equally change the way we think. In this digital world, where the use of images for a variety of purposes continues to increase, researchers who are serious about addressing the aforementioned limitations must be able to think outside the box and step away from the usual in order to overcome these challenges. In this dissertation, we propose an adaptive and robust, yet simple, digital image feature detection methodology using bidimensional empirical mode decomposition (BEMD), a sifting process that decomposes a signal into its bidimensional intrinsic mode functions (BIMFs). The method is further extended to detect corners and curves, and is accordingly dubbed BEMDEC, indicating its ability to detect edges, corners and curves. In addition to the application of BEMD, a unique combination of a flexible envelope estimation algorithm, stopping criteria, and boundary adjustment made the realization of this multi-feature detector possible. Further application of two morphological operators, binarization and thinning, adds to the quality of the operator.
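The sifting process at the heart of BEMD is easiest to illustrate in one dimension; the dissertation's contribution lies in its two-dimensional generalization with a flexible envelope estimator. The sketch below shows a single 1-D sifting pass and is not the dissertation's algorithm: the extrema handling, spline choice, and function name are assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(signal):
    """One sifting pass of 1-D empirical mode decomposition: interpolate
    upper and lower envelopes through the local extrema and subtract
    their mean.  BEMD generalizes this to 2-D envelope surfaces."""
    t = np.arange(len(signal))
    maxima = argrelextrema(signal, np.greater)[0]
    minima = argrelextrema(signal, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return signal  # too few extrema: treat as a residue
    upper = CubicSpline(maxima, signal[maxima])(t)
    lower = CubicSpline(minima, signal[minima])(t)
    return signal - (upper + lower) / 2.0
```

Repeating this pass until a stopping criterion is met yields one intrinsic mode function; subtracting it and iterating yields the full decomposition.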

    Linear Reconstruction of Non-Stationary Image Ensembles Incorporating Blur and Noise Models

    Two new linear reconstruction techniques are developed to improve the resolution of images collected by ground-based telescopes imaging through atmospheric turbulence. The classical approach involves the application of constrained least squares (CLS) to the deconvolution from wavefront sensing (DWFS) technique. The new algorithm incorporates blur and noise models to select the appropriate regularization constant automatically. In all cases examined, the Newton-Raphson minimization converged to a solution in less than 10 iterations. The non-iterative Bayesian approach involves the development of a new vector Wiener filter which is optimal with respect to mean square error (MSE) for a non-stationary object class degraded by atmospheric turbulence and measurement noise. This research involves the first extension of the Wiener filter to account properly for shot noise and an unknown, random optical transfer function (OTF). The vector Wiener filter provides superior reconstructions when compared to the traditional scalar Wiener filter for a non-stationary object class. In addition, the new filter can provide a superresolution capability when the object's Fourier domain statistics are known for spatial frequencies beyond the OTF cutoff. A generalized performance and robustness study of the vector Wiener filter showed that MSE performance is fundamentally limited by object signal-to-noise ratio (SNR) and correlation between object pixels.
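For context, the traditional scalar Wiener filter that the vector filter is compared against can be written in a few lines. This is the textbook Fourier-domain form with a constant noise-to-signal power ratio, not the paper's non-stationary vector filter; the function and parameter names are assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, otf, noise_power, signal_power):
    """Classic scalar Wiener filter in the Fourier domain:
    W = H* / (|H|^2 + Pn/Ps), applied to the observed spectrum."""
    G = np.fft.fft2(blurred)
    W = np.conj(otf) / (np.abs(otf) ** 2 + noise_power / signal_power)
    return np.real(np.fft.ifft2(W * G))
```

The scalar form assumes stationary object statistics (a single power ratio for the whole image), which is exactly the limitation the vector filter removes by modeling per-pixel, non-stationary statistics.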

    Integrated Applications of Geo-Information in Environmental Monitoring

    This book focuses on fundamental and applied research on geo-information technology, notably optical and radar remote sensing and algorithm improvements, and their applications in environmental monitoring. This Special Issue presents ten high-quality research papers covering up-to-date research in land cover change and desertification analyses, geo-disaster risk and damage evaluation, mining area restoration assessments, the improvement and development of algorithms, and coastal environmental monitoring and object targeting. The purpose of this Special Issue is to promote exchange and communication, to share the research outcomes of scientists worldwide, and to bridge the gap between scientific research and its applications for advancing and improving society.