1,149 research outputs found

    Structural matching by discrete relaxation

    This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. First, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
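As an illustration of the general idea (not the paper's Bayesian consistency measure), a discrete relaxation matcher can iteratively reassign each node of one graph to whichever node of the other graph best preserves adjacency among its neighbours' current assignments. The function name and toy graphs below are hypothetical:

```python
def discrete_relaxation(adj_g, adj_h, n_iters=10):
    """Match nodes of graph G to nodes of graph H by iteratively picking,
    for each G-node, the H-label that best preserves adjacency with the
    current labels of its neighbours (a simple greedy relaxation)."""
    nodes_g = list(adj_g)
    labels_h = list(adj_h)
    # start from an arbitrary assignment: everything mapped to the first H-node
    match = {g: labels_h[0] for g in nodes_g}
    for _ in range(n_iters):
        changed = False
        for g in nodes_g:
            def support(h):
                # count G-edges at g whose images are also H-edges at h
                return sum(1 for nb in adj_g[g] if match[nb] in adj_h[h])
            best = max(labels_h, key=support)
            if best != match[g]:
                match[g] = best
                changed = True
        if not changed:  # converged: no node wants to change its label
            break
    return match

# toy example: map a triangle G onto a triangle H
adj_g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
adj_h = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
match = discrete_relaxation(adj_g, adj_h)
```

On this toy pair the relaxation settles on a one-to-one assignment that maps every G-edge onto an H-edge.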

    A new straight line reconstruction methodology from multi-spectral stereo aerial images

    In this study, a new methodology for the reconstruction of line features from multispectral stereo aerial images is presented. We take full advantage of the existing multispectral information in aerial images throughout the pre-processing and edge detection steps. To accurately describe the straight line segments, a principal component analysis technique is adapted. The line-to-line correspondences between the stereo images are established using a new pair-wise stereo matching approach. The approach involves new constraints, and the redundancy inherent in pair relations makes it possible to reduce the number of false matches in a probabilistic manner. The methodology was tested on three different urban test sites and provided good results for line matching and reconstruction.
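A PCA description of a straight segment can be sketched as follows: the first principal axis of the edge-pixel coordinates gives the line direction, and the centroid gives a point on the line. This is a generic sketch under that assumption, not the paper's exact formulation; `fit_line_pca` is a hypothetical name:

```python
import numpy as np

def fit_line_pca(points):
    """Fit a straight line to Nx2 edge-pixel coordinates.
    Returns (centroid, unit direction vector of the dominant axis)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # 2x2 covariance of the centred coordinates
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # ascending eigenvalues
    direction = eigvecs[:, np.argmax(eigvals)]    # dominant principal axis
    return centroid, direction

# edge pixels lying roughly along y = 2x
pts = [(0, 0), (1, 2), (2, 4), (3, 6.1)]
centroid, direction = fit_line_pca(pts)
```

The ratio of the smaller to the larger eigenvalue also gives a natural straightness measure for deciding whether a pixel cluster is a line segment at all.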

    A Relaxation Scheme for Mesh Locality in Computer Vision.

    Parallel processing has been considered the key to building the computer systems of the future and has become a mainstream subject in Computer Science. Computer Vision applications are computationally intensive and require parallel approaches to exploit their intrinsic parallelism. This research addresses this problem for low-level and intermediate-level vision problems. The contributions of this dissertation are a unified scheme, based on probabilistic relaxation labeling, that captures localities of image data, and the ability to use this scheme to develop efficient parallel algorithms for Computer Vision problems. We begin by investigating the problem of skeletonization. The technique of pattern matching, which exhausts all the possible interaction patterns between a pixel and its neighboring pixels, captures the locality of this problem and leads to an efficient One-pass Parallel Asymmetric Thinning Algorithm (OPATA8). The use of 8-distance, or chessboard distance, in this algorithm not only improves the quality of the resulting skeletons but also improves the efficiency of the computation. This new algorithm plays an important role in a hierarchical route planning system, where it extracts high-level topological information from cross-country mobility maps and greatly speeds up route searching over large areas. We generalize the neighborhood interaction description method to include more complicated applications such as edge detection and image restoration. The proposed probabilistic relaxation labeling scheme exploits parallelism by discovering local interactions in neighboring areas and by describing them effectively. The proposed scheme consists of a transformation function and a dictionary construction method. The non-linear transformation function is derived from Markov Random Field theory. It efficiently combines evidence from neighborhood interactions. The dictionary construction method provides an efficient way to encode these localities.
A case study applies the scheme to the problem of edge detection. The relaxation step of this edge-detection algorithm greatly reduces noise effects, improves edge localization at features such as line ends and corners, and plays a crucial role in refining edge outputs. Experiments on both synthetic and natural images show that our algorithm converges quickly and is robust in noisy environments.
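For reference, the 8-distance (chessboard distance) between two pixels is simply the maximum of the absolute coordinate differences, so all eight neighbours of a pixel, including the diagonal ones, lie at distance 1. A minimal sketch (illustrative only, not the OPATA8 implementation):

```python
def chessboard_distance(p, q):
    """8-distance between pixels p and q: diagonal steps cost 1,
    so this equals the number of king moves between the two cells."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

print(chessboard_distance((0, 0), (3, 1)))  # 3
print(chessboard_distance((0, 0), (1, 1)))  # 1: a diagonal neighbour
```

Compared with the 4-connected city-block distance, this metric treats the full 8-neighbourhood uniformly, which is what allows a thinning pass to examine all eight neighbours of a pixel symmetrically.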

    Computer vision

    The field of computer vision is surveyed and assessed, key research issues are identified, and possibilities for a future vision system are discussed. The problems of describing two- and three-dimensional worlds are discussed. The representation of features such as texture, edges, curves, and corners is detailed. Recognition methods are described in which cross-correlation coefficients are maximized or numerical values for a set of features are measured. Object tracking is discussed in terms of the robust matching algorithms that must be devised. Stereo vision, camera control and calibration, and the hardware and systems architecture are discussed.
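Recognition by maximizing a cross-correlation coefficient can be sketched as sliding a template over an image and reporting the offset with the highest normalized correlation. This is a generic illustration, not any particular system from the survey, and `best_match` is a hypothetical name:

```python
import numpy as np

def best_match(image, template):
    """Exhaustive normalized cross-correlation: return the (row, col)
    offset where the template correlates best with the image, and the score."""
    th, tw = template.shape
    t = template - template.mean()          # zero-mean template
    best_score, best_pos = -np.inf, None
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()               # zero-mean window
            denom = np.sqrt((wc ** 2).sum() * (t ** 2).sum())
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# plant a distinctive 2x2 patch in an empty image and search for it
img = np.zeros((8, 8))
img[2:4, 3:5] = np.array([[1.0, 2.0], [3.0, 4.0]])
tmpl = np.array([[1.0, 2.0], [3.0, 4.0]])
pos, score = best_match(img, tmpl)  # pos == (2, 3), score == 1.0
```

Practical systems replace the double loop with FFT-based correlation, but the maximized quantity is the same coefficient.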

    Computed tomography image analysis for the detection of obstructive lung diseases

    Damage to the small airways resulting from direct lung injury or associated with many systemic disorders is not easy to identify. Non-invasive techniques such as chest radiography or conventional tests of lung function often cannot reveal the pathology. On Computed Tomography (CT) images, the signs suggesting the presence of obstructive airways disease are subtle, and inter- and intra-observer variability can be considerable. The goal of this research was to implement a system for the automated analysis of CT data of the lungs. Its function is to help clinicians establish a confident assessment of specific obstructive airways diseases and increase the precision of investigation of structure/function relationships. To help resolve the ambiguities of the CT scans, the main objectives of our system were to provide a functional description of the raster images, extract semi-quantitative measurements of the extent of obstructive airways disease, and propose a clinical diagnosis aid using a priori knowledge of CT image features of the diseased lungs. The diagnostic process presented in this thesis involves the extraction and analysis of multiple findings. Several novel low-level computer vision feature extractors and image processing algorithms were developed for extracting the extent of the hypo-attenuated areas, textural characterisation of the lung parenchyma, and morphological description of the bronchi. The fusion of the results of these extractors was achieved with a probabilistic network combining a priori knowledge of lung pathology. Creating a CT lung phantom allowed for the initial validation of the proposed methods. Performance of the techniques was then assessed in clinical trials involving other diagnostic tests and expert chest radiologists. The results of the proposed system for diagnostic decision-support demonstrated the feasibility and importance of information fusion in medical image interpretation.
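The fusion step can be illustrated with a naive-Bayes toy model that combines several findings into a disease posterior under a conditional-independence assumption. The thesis uses a full probabilistic network; this sketch only shows the underlying Bayes-rule combination, and every number below is invented for illustration:

```python
def fuse_findings(prior, likelihoods):
    """Combine observed findings into P(disease | findings).
    prior: P(disease); likelihoods: list of (P(finding | disease),
    P(finding | healthy)) pairs, assumed conditionally independent."""
    p_d, p_h = prior, 1.0 - prior
    for l_d, l_h in likelihoods:
        p_d *= l_d   # accumulate evidence for the disease hypothesis
        p_h *= l_h   # and for the healthy hypothesis
    return p_d / (p_d + p_h)  # normalize to a posterior probability

# hypothetical findings: hypo-attenuated areas present, abnormal texture
posterior = fuse_findings(0.1, [(0.8, 0.2), (0.7, 0.3)])
```

Even with a low prior, two findings that are each individually weak can push the posterior past 0.5, which is the kind of evidence accumulation a probabilistic network formalizes.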