9 research outputs found

    A regularized solution to edge detection

    Get PDF
    We assume that edge detection is the task of measuring and localizing changes of light intensity in the image. As discussed by V. Torre and T. Poggio (1984, "On Edge Detection," AI Memo 768, MIT AI Lab), edge detection, when defined in this way, is a problem of numerical differentiation, which is ill posed. This paper shows that simple regularization methods lead to filtering the image prior to an appropriate differentiation operation. In particular, we prove (1) that the variational formulation of Tikhonov regularization leads to a convolution filter, (2) that the form of this filter is similar to the Gaussian filter, and (3) that the regularizing parameter λ in the variational principle effectively controls the scale of the filter.
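    One common way to make the stated variational formulation concrete (an assumption here; the paper's own derivation may use a different stabilizer) is to penalize the second derivative of the reconstructed 1-D intensity profile f given data g:

        E[f] = \int \big(f(x) - g(x)\big)^2 \, dx \;+\; \lambda \int \big(f''(x)\big)^2 \, dx .

    The Euler-Lagrange equation f - g + \lambda f'''' = 0 gives, in the Fourier domain,

        \hat f(\omega) = \frac{\hat g(\omega)}{1 + \lambda\,\omega^{4}} = \hat R_\lambda(\omega)\,\hat g(\omega),

    so the regularized solution is obtained by convolving the data with a low-pass filter R_λ that is close in shape to a Gaussian, with λ setting its spatial scale; edges are then located from derivatives of the filtered signal.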

    A bilateral schema for interval-valued image differentiation

    Get PDF
    Differentiation of interval-valued functions is an intricate problem, since it cannot be defined as a direct generalization of differentiation of scalar ones. The literature on interval arithmetic contains proposals and definitions for differentiation, but their semantics are unclear for the cases in which intervals represent the ambiguity due to hesitancy or lack of knowledge. In this work we analyze the needs, tools and goals for interval-valued differentiation, focusing on the case of interval-valued images. This leads to the formulation of a differentiation schema inspired by bilateral filters, which accommodates most of the methods for scalar image differentiation while also being grounded in interval-valued arithmetic. This schema can produce area-, segment- and vector-valued gradients, according to the needs of the image processing task it is applied to. Our developments are put to the test in the context of edge detection.
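    As a minimal illustration of differentiating interval-valued pixels (a simplified sketch using plain interval arithmetic on a scalar derivative kernel, not the bilateral schema proposed in this work), an interval-valued image can be stored as lower- and upper-bound arrays and a linear derivative operator applied to both bounds:

        import numpy as np
        from scipy.ndimage import convolve

        def interval_gradient_x(lo, hi):
            """Horizontal derivative of an interval-valued image given as
            lower/upper bound arrays (lo <= hi elementwise)."""
            kx = np.array([[-1.0, 0.0, 1.0],
                           [-2.0, 0.0, 2.0],
                           [-1.0, 0.0, 1.0]]) / 8.0      # Sobel-style kernel
            k_pos = np.maximum(kx, 0.0)                   # positive coefficients
            k_neg = np.minimum(kx, 0.0)                   # negative coefficients
            # Interval arithmetic for a linear operator: positive weights keep
            # each bound, negative weights swap the lower and upper bounds.
            g_lo = convolve(lo, k_pos) + convolve(hi, k_neg)
            g_hi = convolve(hi, k_pos) + convolve(lo, k_neg)
            return g_lo, g_hi

    An edge detector built on top of this could threshold, for example, the midpoint or the width of the resulting gradient interval.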

    Defining the 3D geometry of thin shale units in the Sleipner reservoir using seismic attributes

    Get PDF
    Acknowledgments: The seismic interpretation and image processing were carried out in the SeisLab facility at the University of Aberdeen (sponsored by BG, BP and Chevron). Seismic imaging analysis was performed using GeoTeric (ffA), and analysis of seismic amplitudes was performed in Petrel 2015 (Schlumberger). We would like to thank the NDDC (RG11766-10) for funding this research, Statoil for the release of the Sleipner field seismic dataset utilized in this research paper, and Anne-Kari Furre and her colleagues for their assistance. We also thank the editor, Alejandro Escalona, and the two anonymous reviewers for their constructive and in-depth comments that improved the paper. Peer reviewed. Postprint.

    A Stochastic Modeling Approach to Region- and Edge-Based Image Segmentation

    Get PDF
    The purpose of image segmentation is to isolate objects in a scene from the background. This is a very important step in any computer vision system, since various tasks, such as shape analysis and object recognition, require accurate image segmentation. Image segmentation can also produce tremendous data reduction. Edge-based and region-based segmentation have been examined, and two new algorithms based on recent results in random field theory have been developed. The edge-based segmentation algorithm uses the pixel gray level intensity information to allocate object boundaries in two stages: edge enhancement, followed by edge linking. Edge enhancement is accomplished by maximum energy filters used in one-dimensional bandlimited signal analysis. The issue of optimum filter spatial support is analyzed for ideal edge models. Edge linking is performed by quantitative sequential search using the Stack algorithm. Two probabilistic search metrics are introduced, and their optimality is proven and demonstrated on both test and real scenes. Compared to other methods, this algorithm is shown to produce more accurate allocation of object boundaries.

    Region-based segmentation was modeled as a MAP estimation problem in which the actual (unknown) objects were estimated from the observed (known) image by a recursive classification algorithm. The observed image was modeled by an Autoregressive (AR) model whose parameters were estimated locally, and a Gibbs-Markov random field (GMRF) model was used to model the unknown scene. A computational study was conducted on images containing various types of textures. The issues of parameter estimation, neighborhood selection, and model orders were examined. It is concluded that the MAP approach for region segmentation generally works well on images having a large content of microtextures, which can be properly modeled by both AR and GMRF models. On these texture images, second-order AR and GMRF models were shown to be adequate.
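    The MAP formulation described above can be sketched with a deliberately simplified stand-in (not the recursive classification algorithm developed in this work): a Gaussian likelihood with assumed class means plays the role of the AR observation model, a Potts-style penalty stands in for the GMRF scene prior, and iterated conditional modes (ICM) performs the greedy MAP search:

        import numpy as np

        def icm_segment(image, means, sigma=1.0, beta=1.5, n_iter=10):
            """Greedy MAP labeling: Gaussian likelihood per class plus a
            Potts-style smoothness prior over the 4-neighbourhood (ICM)."""
            means = np.asarray(means, dtype=float)
            # Data term: negative log-likelihood of each class at each pixel.
            data = ((image[..., None] - means) ** 2) / (2.0 * sigma ** 2)
            labels = data.argmin(axis=-1)                  # ML initialisation
            for _ in range(n_iter):
                energy = data.copy()
                for k in range(len(means)):
                    same = (labels == k).astype(float)
                    # Number of 4-neighbours currently disagreeing with label k
                    # (np.roll wraps at the borders; acceptable for a sketch).
                    disagree = 4.0 - (np.roll(same, 1, 0) + np.roll(same, -1, 0)
                                      + np.roll(same, 1, 1) + np.roll(same, -1, 1))
                    energy[..., k] += beta * disagree
                labels = energy.argmin(axis=-1)
            return labels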

    Invariant surface characteristics for 3D object recognition in range images

    Full text link
    In recent years there has been a tremendous increase in computer vision research using range images (or depth maps) as sensor input data. The most attractive feature of range images is the explicitness of the surface information. Many industrial and navigational robotic tasks will be more easily accomplished if such explicit depth information can be efficiently obtained and interpreted. Intensity image understanding research has shown that the early processing of sensor data should be data-driven. The goal of early processing is to generate a rich description for later processing. Classical differential geometry provides a complete local description of smooth surfaces. The first and second fundamental forms of surfaces provide a set of differential-geometric shape descriptors that capture domain-independent surface information. Mean curvature and Gaussian curvature are the fundamental second-order surface characteristics that possess desirable invariance properties and represent extrinsic and intrinsic surface geometry respectively. The signs of these surface curvatures are used to classify range image regions into one of eight basic viewpoint-independent surface types. Experimental results for real and synthetic range images show the properties, usefulness, and importance of differential-geometric surface characteristics.
    Peer Reviewed
    http://deepblue.lib.umich.edu/bitstream/2027.42/26326/1/0000413.pd
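    The curvature-sign classification described above can be sketched for a range image treated as a graph surface z = f(x, y); the Gaussian-derivative estimator and smoothing scale below are illustrative choices, not those of the paper:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def hk_sign_map(z, sigma=2.0):
            """Mean (H) and Gaussian (K) curvature of a range image z, plus the
            curvature sign pair used to label the eight basic surface types."""
            # Gaussian-smoothed partial derivatives (axis 0 = y, axis 1 = x).
            fx  = gaussian_filter(z, sigma, order=(0, 1))
            fy  = gaussian_filter(z, sigma, order=(1, 0))
            fxx = gaussian_filter(z, sigma, order=(0, 2))
            fyy = gaussian_filter(z, sigma, order=(2, 0))
            fxy = gaussian_filter(z, sigma, order=(1, 1))
            g = 1.0 + fx ** 2 + fy ** 2
            H = ((1.0 + fy ** 2) * fxx - 2.0 * fx * fy * fxy
                 + (1.0 + fx ** 2) * fyy) / (2.0 * g ** 1.5)
            K = (fxx * fyy - fxy ** 2) / g ** 2
            # With this sign convention a peak has H < 0, K > 0; a pit has
            # H > 0, K > 0; a flat patch has H = 0, K = 0; saddles have K < 0.
            return H, K, np.sign(H), np.sign(K)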

    Estimation of edges in magnetic resonance images

    Get PDF

    Displacement and disparity representations in early vision

    Get PDF
    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1992. Includes bibliographical references (p. 211-220). By Steven James White.

    Edge detection using neural network arbitration

    Get PDF
    A human observer is able to recognise and describe most parts of an object by its contour, if this is properly traced and reflects the shape of the object itself. With a machine vision system, this recognition task has been approached using a similar technique, which has prompted the development of many diverse edge detection algorithms. The work described in this thesis is based on the visual observation that edge maps produced by different algorithms, as the image degrades, display different properties of the original image. Our objective is to improve the edge map through arbitration between edge maps produced by edge detection algorithms that are diverse in nature, approach and performance. As image processing tools are repeatedly applied to similar images, we believe this objective can be achieved by a learning process based on sample images. It is shown that such an approach is feasible, using an artificial neural network to perform the arbitration; the network is trained on sets extracted from sample images. The arbitration system is implemented on a parallel processing platform. The performance of the system is presented through examples of diverse types of images. Comparisons with a neural network edge detector (also developed within this thesis) and with conventional edge detectors show that the proposed system offers significant advantages.
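    A toy sketch of the arbitration idea (illustrative only: the detectors, the classifier standing in for the arbitration network, and the training interface are assumptions, and nothing here reflects the parallel implementation): per-pixel responses from several diverse detectors are stacked as features, and a small network is trained on sample images with reference binary edge maps to arbitrate a final edge map:

        import numpy as np
        from scipy import ndimage
        from sklearn.neural_network import MLPClassifier

        def detector_stack(image):
            """Per-pixel responses of a few diverse conventional edge detectors."""
            sobel = np.hypot(ndimage.sobel(image, axis=0), ndimage.sobel(image, axis=1))
            prewitt = np.hypot(ndimage.prewitt(image, axis=0), ndimage.prewitt(image, axis=1))
            log = np.abs(ndimage.gaussian_laplace(image, sigma=2.0))
            return np.stack([sobel, prewitt, log], axis=-1)

        def train_arbiter(sample_images, edge_labels):
            """Fit the arbitrating classifier on sample images whose reference
            edge maps (0/1 arrays) are known."""
            X = np.concatenate([detector_stack(im).reshape(-1, 3) for im in sample_images])
            y = np.concatenate([lab.reshape(-1) for lab in edge_labels])
            return MLPClassifier(hidden_layer_sizes=(16,), max_iter=300).fit(X, y)

        def arbitrated_edge_map(arbiter, image):
            """Combine the detector responses into a single binary edge map."""
            feats = detector_stack(image).reshape(-1, 3)
            return arbiter.predict(feats).reshape(image.shape)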
