6 research outputs found

    Guidance in feature extraction to resolve uncertainty

    Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract features better than AFE algorithms do because they use additional information. Extracting and processing that information can be more complex than the original AFE task, which leads to the "complexity trap": this can happen, for example, when shadows cast by buildings guide the extraction of the buildings and roads. This work proposes an AFE algorithm that extracts roads and trails by using GMTI/GPS tracking information and older, inaccurate maps of roads and trails as AFE guides.
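    The guidance idea in this abstract — biasing road extraction toward GMTI/GPS track evidence — can be sketched as a simple prior-weighting step. The function below is a hypothetical illustration, not the paper's algorithm: it damps a per-pixel road score away from reported track positions using a Gaussian spatial prior.

```python
import numpy as np

def guided_road_score(edge_response, gps_points, shape, sigma=5.0):
    """Combine a per-pixel road-likelihood response with a spatial prior
    derived from GPS/GMTI track points (illustrative formulation only).

    edge_response : 2-D array of raw road-likelihood scores.
    gps_points    : iterable of (row, col) track coordinates.
    sigma         : width in pixels of the Gaussian prior around each track.
    """
    rows, cols = np.indices(shape)
    prior = np.zeros(shape)
    for r, c in gps_points:
        # Gaussian bump centred on each reported track position.
        prior += np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    if prior.max() > 0:
        prior /= prior.max()
    # Pixels near tracks keep their score; far pixels are suppressed,
    # never fully zeroed (the 0.2 floor is an arbitrary choice here).
    return edge_response * (0.2 + 0.8 * prior)
```

    A real system would also fuse the old road maps as a second prior term; here a single Gaussian guide keeps the sketch short.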

    AUTOMATIC IMAGE TO MODEL ALIGNMENT FOR PHOTO-REALISTIC URBAN MODEL RECONSTRUCTION

    We introduce a hybrid approach in which images of an urban scene are automatically aligned with a base geometry of the scene to determine model-relative external camera parameters. The algorithm takes as input a model of the scene and images with approximate external camera parameters, and aligns the images to the model by extracting the facades from the images and aligning the facades with the model by minimizing a multivariate objective function. The resulting image-pose pairs can be used to render photo-realistic views of the model via texture mapping. Several natural extensions to the base hybrid reconstruction technique are also introduced. These extensions, which include vanishing-point-based calibration refinement and video-stream-based reconstruction, increase the accuracy of the base algorithm, reduce the amount of data that must be provided by the user as input to the algorithm, and provide a mechanism for automatically calibrating a large set of images for post-processing steps such as automatic model enhancement and fly-through model visualization. Traditionally, photo-realistic urban reconstruction has been approached with purely image-based or purely model-based methods. Recently, research has been conducted on hybrid approaches, which combine the use of images and models. Such approaches typically require user assistance for camera calibration. Our approach is an improvement over these methods because it does not require user assistance for camera calibration.
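    The core alignment step described above is a pose optimisation over a multivariate objective. As a toy illustration (not the paper's objective, which optimises full external camera parameters), the sketch below solves the simplest instance: a pure 2-D translation aligning extracted facade points with projected model points in the least-squares sense. All names are assumptions.

```python
import numpy as np

def refine_pose_translation(model_pts, image_pts):
    """Toy stand-in for model-relative pose refinement: find the 2-D
    translation that best aligns extracted facade points (image_pts)
    with projected model points (model_pts) in the least-squares sense.
    """
    model_pts = np.asarray(model_pts, dtype=float)
    image_pts = np.asarray(image_pts, dtype=float)
    # For pure translation the least-squares optimum is the
    # difference of the two point-set centroids.
    t = model_pts.mean(axis=0) - image_pts.mean(axis=0)
    residual = np.linalg.norm(image_pts + t - model_pts)
    return t, residual
```

    In the full problem the unknowns include rotation and camera intrinsics, so a closed form no longer exists and an iterative minimiser over the multivariate objective is used instead.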

    Digital image processing: an approach using fuzzy sets

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico. In recent decades, remote sensing techniques have evolved considerably. Special techniques are applied to process and interpret remotely sensed images in order to produce conventional maps, thematic maps, and so on. Among these techniques, digital image classification is considered one of the most important. Classification by conventional techniques does not capture degrees of membership of a pixel in a given class, only whether or not the pixel belongs to a class. Geographic information, however, is not precise: more than one class may be present in a given area of terrain. In a fuzzy-set representation, land use/land cover classes can be defined as fuzzy sets whose elements are the pixels. Introducing fuzzy set theory into the classification and analysis of digital images makes it possible to identify pixels that are highly representative of each class as well as mixed or intermediate pixels. This work presents an approach based on fuzzy set theory for processing digital images obtained by remote sensing. A classifier and techniques for handling and manipulating the resulting information are proposed. The proposed approach was implemented in a computational system that also incorporates functions normally found in other existing systems. Practical applications are developed to show some of the many kinds of information that can be obtained and handled.
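    The membership idea the thesis describes — each pixel belonging to every land-use class to some degree — can be illustrated with a fuzzy c-means style membership computation. This is a generic sketch of the concept, not the thesis' classifier; the class prototypes and the fuzzifier `m` are assumptions.

```python
import numpy as np

def fuzzy_memberships(pixels, prototypes, m=2.0):
    """Fuzzy c-means style membership degrees. Each pixel gets a degree
    of membership in every class (rows sum to 1) rather than a hard
    label, so mixed pixels show up as split memberships.

    pixels     : (n, bands) array of pixel spectra.
    prototypes : (k, bands) array of class prototype spectra.
    m          : fuzzifier; larger m gives softer memberships.
    """
    pixels = np.asarray(pixels, dtype=float)
    prototypes = np.asarray(prototypes, dtype=float)
    # Distance from every pixel to every class prototype: (n, k).
    d = np.linalg.norm(pixels[:, None, :] - prototypes[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)  # avoid division by zero at a prototype
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)
```

    A pixel sitting on a class prototype gets membership near 1 in that class; a pixel midway between two prototypes gets roughly 0.5 in each, which is exactly the "mixed pixel" information a hard classifier discards.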

    Multiresolution neural networks for image edge detection and restoration

    One method for building an automatic visual system is to borrow properties of the human visual system (HVS). Artificial neural networks are based on this doctrine, and they have been applied to image processing and computer vision. This work focuses on the plausibility of using a class of Hopfield neural networks for edge detection and image restoration. To this end, a quadratic energy-minimization framework is presented. Central to this framework are relaxation operations, which can be implemented using the class of Hopfield neural networks. The role of the uncertainty principle in vision is described, which imposes a limit on simultaneous localisation in both class and position space. It is shown how a multiresolution approach allows a trade-off between position and class resolution and ensures both robustness to noise and efficiency of computation. As edge detection and image restoration are ill-posed, some a priori knowledge is needed to regularize these problems. A multiresolution network is proposed to tackle the uncertainty problem and the regularization of these ill-posed image-processing problems. For edge detection, orientation information is used to construct a compatibility function for the strength of the links of the proposed Hopfield neural network. Edge-detection results are presented for a number of synthetic and natural images, which show that the iterative network gives robust results at low signal-to-noise ratios (0 dB) and is at least as good as many previous methods at capturing complex region shapes. For restoration, mean square error is used as the quadratic energy function of the Hopfield neural network. The results of the edge detection are used for adaptive restoration. Also shown are restoration results using the proposed iterative network framework.
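    The relaxation operation at the heart of the framework can be illustrated with a generic binary Hopfield update that never increases a quadratic energy. This is a textbook sketch, not the paper's network: in the paper the weights `W` encode an orientation-based compatibility function between edge sites, whereas here they are arbitrary symmetric weights with zero diagonal.

```python
import numpy as np

def energy(W, bias, s):
    """Quadratic Hopfield energy E(s) = -1/2 s^T W s - b^T s."""
    return -0.5 * s @ W @ s - bias @ s

def hopfield_relax(W, bias, state, iters=10):
    """Asynchronous binary Hopfield updates. With symmetric W and zero
    diagonal, each single-unit update cannot increase the energy, so
    the network relaxes toward a local minimum.
    """
    s = np.asarray(state, dtype=float).copy()
    for _ in range(iters):
        for i in range(len(s)):
            # Unit i switches on iff its net input is non-negative.
            s[i] = 1.0 if W[i] @ s + bias[i] >= 0 else 0.0
    return s
```

    In the edge-detection setting each unit would stand for an edge hypothesis at a pixel, and mutually compatible (well-aligned) edges reinforce one another through positive weights.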

    Recognizing 3-D Objects Using 2-D Images

    We discuss a strategy for visual recognition that forms groups of salient image features and then uses these groups to index into a database to find all matching groups of model features. We discuss the most space-efficient possible method of representing 3-D models for indexing from 2-D data, and show how to account for sensing error when indexing. We also present a convex grouping method that is robust and efficient, both theoretically and in practice. Finally, we combine these modules into a complete recognition system and test its performance on many real images.
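    The grouping-and-indexing strategy can be illustrated with a toy 2-D version: summarise each feature group by a key that is invariant to translation, rotation, and scale, and store model groups in a hash table keyed by it. This deliberately ignores the paper's 3-D-from-2-D analysis and its sensing-error bounds; the quantisation scheme below is an assumption.

```python
from itertools import combinations
import math

def group_key(points, scale_bins=20):
    """Similarity-invariant key for a 2-D feature group: pairwise
    distances normalised by the largest one, then quantised. Invariant
    to translation, rotation, and uniform scaling.
    """
    d = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    if d[-1] == 0:
        return ()
    return tuple(round(x / d[-1] * scale_bins) for x in d)

def build_index(model_groups):
    """Hash table mapping group keys to the model groups that produce
    them; recognition then looks up keys computed from image groups.
    """
    index = {}
    for name, pts in model_groups.items():
        index.setdefault(group_key(pts), []).append(name)
    return index
```

    A real system would probe neighbouring bins as well, so that quantisation near a bin boundary (the sensing-error issue the paper analyses) does not cause a missed match.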