14 research outputs found

    Edge Detection Based on Modified BP Algorithm of ANN


    2-D edge feature extraction to subpixel accuracy using the generalized energy approach

    Precision edge feature extraction is a very important step in vision. Researchers mainly use step edges to model an edge at the subpixel level. In this paper we describe a new technique for two-dimensional edge feature extraction to subpixel accuracy using a general edge model. Using six basic edge types to model edges, the edge parameters at the subpixel level are extracted by fitting a model to the image signal with a least-squared-error fitting technique.
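
    The paper's six parametric edge types and the generalized energy machinery are not reproduced here; the following is a minimal 1-D sketch of the underlying idea, fitting a single hypothetical ramp-edge model to pixel samples by least-squared-error minimization to recover a subpixel edge location (the ramp_edge and fit_edge names and all constants are illustrative, not from the paper).

        import numpy as np
        from scipy.optimize import least_squares

        def ramp_edge(x, x0, width, low, high):
            """Idealised ramp edge: 'low' left of the transition, 'high' right of it,
            with a linear ramp of the given width centred on the subpixel location x0."""
            t = np.clip((x - x0) / width + 0.5, 0.0, 1.0)
            return low + (high - low) * t

        def fit_edge(samples):
            """Least-squares fit of the ramp model to 1-D pixel samples.
            Returns the estimated subpixel edge location."""
            x = np.arange(len(samples), dtype=float)
            x0_init = float(np.argmax(np.abs(np.diff(samples))))   # coarse pixel-level guess
            p0 = [x0_init, 1.0, float(samples.min()), float(samples.max())]
            bounds = ([0.0, 0.1, -np.inf, -np.inf], [len(samples), 10.0, np.inf, np.inf])
            res = least_squares(lambda p: ramp_edge(x, *p) - samples, p0, bounds=bounds)
            return res.x[0]                                        # subpixel edge position

        # Synthetic test: true edge at x = 7.3
        x = np.arange(16, dtype=float)
        noisy = ramp_edge(x, 7.3, 2.0, 10.0, 200.0) + np.random.normal(0, 1.0, x.size)
        print(fit_edge(noisy))                                     # ≈ 7.3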

    Edge Detection with Sub-pixel Accuracy Based on Approximation of Edge with Erf Function

    Edge detection is an often-used procedure in digital image processing. For some practical applications it is desirable to detect edges with sub-pixel accuracy. In this paper we present an edge detection method for 1-D images based on approximation of the real image function with the Erf function. The method is verified by simulations and experiments for various numbers of samples of simulated and real images. The results of the simulations and experiments are also used to compare the proposed edge detection scheme with two often-used moment-based edge detectors with sub-pixel precision.
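
    A minimal sketch of the kind of Erf-based fit the abstract describes, assuming a 1-D Gaussian-blurred step edge and using SciPy's curve_fit; the erf_edge model, its parameter values, and the noise level are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import erf

        def erf_edge(x, x0, sigma, a, b):
            """Blurred step edge modelled with the Erf function:
            b is the base level, a the amplitude, x0 the subpixel edge position."""
            return b + 0.5 * a * (1.0 + erf((x - x0) / (np.sqrt(2.0) * sigma)))

        # 1-D samples of a blurred step whose true edge sits at x = 11.62
        x = np.arange(24, dtype=float)
        samples = erf_edge(x, 11.62, 1.4, 120.0, 30.0) + np.random.normal(0, 0.5, x.size)

        # Initial guess: coarse edge position, unit blur, signal range and base level
        p0 = [float(np.argmax(np.diff(samples))), 1.0, float(np.ptp(samples)), float(samples.min())]
        popt, _ = curve_fit(erf_edge, x, samples, p0=p0)
        print("estimated edge position:", popt[0])   # close to 11.62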

    Modeling edges at subpixel accuracy using the local energy approach

    In this paper we describe a new technique for 1-D and 2-D edge feature extraction to subpixel accuracy using edge models and the local energy approach. A candidate edge is modeled as one of a number of parametric edge models, and the fit is refined by a least-squared-error fitting technique.
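
    The local energy approach normally uses band-pass quadrature filter pairs; the sketch below is a simplified 1-D illustration that builds the quadrature pair from the mean-removed signal and its Hilbert transform, taking candidate edges at local energy maxima (the subsequent parametric model fitting is not shown, and the test signal is illustrative).

        import numpy as np
        from scipy.signal import hilbert

        def local_energy(signal):
            """Local energy of a 1-D signal: quadrature pair formed by the
            mean-removed signal (even part) and its Hilbert transform (odd part);
            peaks of the envelope mark candidate edge/line features."""
            s = signal - signal.mean()
            analytic = hilbert(s)            # real = even component, imag = odd component
            return np.abs(analytic)          # sqrt(even**2 + odd**2)

        # Step edge at sample 20, plus noise
        sig = np.concatenate([np.full(20, 10.0), np.full(20, 90.0)]) + np.random.normal(0, 1.0, 40)
        e = local_energy(sig)
        interior = e[5:-5]                   # ignore FFT wrap-around effects at the borders
        print("energy peak near sample:", int(np.argmax(interior)) + 5)   # ≈ 20, the step location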

    Annihilation-driven Localised Image Edge Models

    We propose a novel edge detection algorithm with sub-pixel accuracy based on annihilation of signals with finite rate of innovation. We show that the Fourier-domain annihilation equations can be interpreted as spatial-domain multiplications. From this new perspective, we obtain an accurate estimation of the edge model by assuming a simple parametric form within each localised block. Further, we build a locally adaptive global mask function (i.e., our edge model) for the whole image. The mask function is then used as an edge-preserving constraint in further processing. Numerical experiments on both edge localisation and image up-sampling show the effectiveness of the proposed approach, which outperforms state-of-the-art methods.
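
    The paper's spatial-domain mask construction is not reproduced here; the sketch below only illustrates the standard finite-rate-of-innovation annihilating-filter step it builds on, recovering a few edge locations from a handful of Fourier coefficients of a stream of Diracs (the function name and test values are illustrative assumptions).

        import numpy as np

        def annihilating_filter_locations(fhat, K):
            """Estimate K Dirac locations t_k in [0, 1) from Fourier coefficients
            fhat[m] = sum_k a_k * exp(-2j*pi*m*t_k), m = 0..M.  The annihilating
            filter is the null vector of a Toeplitz system built from fhat; the
            roots of that filter encode the locations."""
            M = len(fhat) - 1
            # Toeplitz system A h = 0, with A of size (M - K + 1) x (K + 1)
            A = np.array([[fhat[m - k] for k in range(K + 1)] for m in range(K, M + 1)])
            _, _, Vh = np.linalg.svd(A)
            h = Vh[-1].conj()                     # annihilating filter coefficients
            roots = np.roots(h)                   # roots lie at exp(-2j*pi*t_k)
            return np.sort(np.mod(np.angle(roots) / (-2.0 * np.pi), 1.0))

        # Two "edges" (Diracs of the derivative) at t = 0.21 and 0.64
        t_true, a = np.array([0.21, 0.64]), np.array([1.0, -0.5])
        m = np.arange(0, 8)
        fhat = (a * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
        print(annihilating_filter_locations(fhat, K=2))   # ≈ [0.21, 0.64]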

    Error analysis and planning accuracy for dimensional measurement in active vision inspection

    This paper discusses the effect of spatial quantization errors and displacement errors on precision dimensional measurements of an edge segment. A probabilistic analysis in terms of the image resolution is developed for 2D quantization errors; expressions for the mean and variance of these errors are developed, and the probability density function of the quantization error is derived. The position and orientation errors of the active head are assumed to be normally distributed, and a probabilistic analysis in terms of these errors is developed for the displacement errors. By integrating the spatial quantization errors and the displacement errors, we can compute the total error in the active vision inspection system. Based on the developed analysis, we investigate whether a given set of sensor setting parameters in an active system is suitable for obtaining a desired accuracy for specific dimensional measurements, and determine sensor positions and view directions which meet the required tolerance and accuracy of inspection.
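
    The paper's 2D analysis over edge segments is not reproduced here; the sketch below only illustrates the classical single-coordinate case it builds on, where the spatial quantization error is approximately uniform over one pixel pitch and therefore has zero mean and variance delta**2 / 12.

        import numpy as np

        # Classical single-coordinate illustration: a point at continuous position u is
        # observed at the nearest pixel centre round(u/delta)*delta, so the quantization
        # error is (approximately) uniform on [-delta/2, delta/2].
        rng = np.random.default_rng(0)
        delta = 1.0                                    # pixel pitch
        u = rng.uniform(0.0, 1000.0, size=1_000_000)   # continuous positions
        err = np.round(u / delta) * delta - u          # spatial quantization error
        print(err.mean(), err.var(), delta**2 / 12)    # ≈ 0, ≈ 0.0833, 0.0833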

    The computer vision precision. Limit precision of edge localisation

    The key question of computer vision inspection is the precision of edge localization in digital images. The method for localizing image primitives with subpixel precision is essentially interpolation in the zones of interest. The sensitivity of an image acquisition system is limited by the quantization error: even if the input signal is correctly sampled according to the Shannon theorem, there are certain input changes that cannot be detected. This creates a "black zone", and it is difficult to justify a measurement precision finer than the dimension of that "black zone". This paper analyses the precision limit of edge localization and dimensional measurement in digital grey-level images as a function of the luminance digitization interval and the system impulse response function, using the "black zone" concept. The influence of noise on the "black zone" is also studied.
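
    A small numerical illustration of the "black zone" idea, assuming a Gaussian-blurred step edge and a uniform luminance quantization step (all constants are illustrative): subpixel displacements of the edge that leave every quantized sample unchanged cannot be detected by any interpolation scheme, so the smallest detectable shift bounds the achievable localization precision.

        import numpy as np
        from scipy.special import erf

        def quantized_edge(shift, sigma=1.0, q_step=4.0, n=16):
            """Blurred step edge sampled on a pixel grid and quantized in luminance
            with step q_step (grey levels)."""
            x = np.arange(n, dtype=float)
            profile = 128.0 + 100.0 * erf((x - n / 2 - shift) / (np.sqrt(2.0) * sigma))
            return np.round(profile / q_step)

        # Smallest subpixel displacement that changes at least one quantized sample;
        # shifts below it fall inside the "black zone" and are undetectable.
        shifts = np.linspace(0.0, 0.2, 2001)
        ref = quantized_edge(0.0)
        first_detectable = next((s for s in shifts
                                 if not np.array_equal(quantized_edge(s), ref)), None)
        print("smallest detectable shift (black-zone width):", first_detectable, "pixel")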

    Feature Extraction for image super-resolution using finite rate of innovation principles

    To understand a real-world scene from several multiview pictures, it is necessary to find the disparities existing between each pair of images so that they are correctly related to one another. This process, called image registration, requires the extraction of some specific information about the scene. This is achieved by taking features out of the acquired images. Thus, the quality of the registration depends largely on the accuracy of the extracted features. Feature extraction can be formulated as a sampling problem for which perfect reconstruction of the desired features is wanted. The recent sampling theory for signals with finite rate of innovation (FRI) and the B-spline theory offer an appropriate new framework for the extraction of features in real images. This thesis first focuses on extending the sampling theory for FRI signals to a multichannel case and then presents exact sampling results for two different types of image features used for registration: moments and edges. In the first part, it is shown that the geometric moments of an observed scene can be retrieved exactly from sampled images and used as global features for registration. The second part describes how edges can also be retrieved perfectly from sampled images for registration purposes. The proposed feature extraction schemes therefore allow, in theory, the exact registration of images. Indeed, various simulations show that the proposed extraction/registration methods outperform traditional ones, especially at low resolution. These characteristics make such feature extraction techniques very appropriate for applications like image super-resolution, for which a very precise registration is needed. The quality of the super-resolved images obtained using the proposed feature extraction methods is improved in comparison with other approaches. Finally, the notion of polyphase components is used to adapt the image acquisition model to the characteristics of real digital cameras in order to run super-resolution experiments on real images.
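
    The thesis's exact moment retrieval from the image samples via FRI and B-spline kernels is not reproduced here; the sketch below only shows how first-order geometric moments (centroids), once available, give a global translation estimate between two views (function names and the toy images are illustrative assumptions).

        import numpy as np

        def centroid(img):
            """First-order geometric moments (centroid) of a grey-level image,
            returned as (x, y) in pixel units."""
            img = img.astype(float)
            ys, xs = np.indices(img.shape)
            m00 = img.sum()
            return np.array([(xs * img).sum() / m00, (ys * img).sum() / m00])

        def estimate_translation(img_a, img_b):
            """Translation between two views estimated from their centroids; the thesis
            recovers the moments exactly from the sampled images, which is not shown here."""
            return centroid(img_b) - centroid(img_a)

        # Toy example: a bright blob shifted by 3 columns and 5 rows between the views
        a = np.zeros((64, 64)); a[20:28, 30:38] = 1.0
        b = np.roll(np.roll(a, 5, axis=0), 3, axis=1)
        print(estimate_translation(a, b))      # ≈ [3., 5.]  (dx, dy)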