14 research outputs found
2-D edge feature extraction to subpixel accuracy using the generalized energy approach
Precise edge feature extraction is an important step in vision. Researchers mainly use step edges to model an edge at the subpixel level. In this paper we describe a new technique for two-dimensional edge feature extraction to subpixel accuracy using a general edge model. Using six basic edge types to model edges, the edge parameters at the subpixel level are extracted by fitting a model to the image signal with a least-squared-error fitting technique.
Edge Detection with Sub-pixel Accuracy Based on Approximation of Edge with Erf Function
Edge detection is an often-used procedure in digital image processing. For some practical applications it is desirable to detect edges with sub-pixel accuracy. In this paper we present an edge detection method for 1-D images based on approximation of the real image function with the Erf function. The method is verified by simulations and experiments for various numbers of samples of simulated and real images. The results of the simulations and experiments are also used to compare the proposed edge detection scheme with two often-used moment-based edge detectors with sub-pixel precision.
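The idea behind this kind of method can be illustrated with a minimal sketch (the parameter values and the grid-search fit are ours, not the paper's exact algorithm): model a blurred 1-D edge as f(x) = A·erf((x − x0)/(√2·σ)) + B and recover the sub-pixel position x0 by least-squares fitting to the pixel samples, here assuming σ, A, and B are known.

```python
import math
import numpy as np

def erf_edge(x, x0, sigma=0.8, A=0.5, B=0.5):
    # Erf edge model: a step edge smoothed by a Gaussian blur of width sigma.
    return np.array([A * math.erf((xi - x0) / (math.sqrt(2) * sigma)) + B
                     for xi in np.atleast_1d(x)])

x = np.arange(8)                      # integer pixel grid
x0_true = 3.37                        # true sub-pixel edge position
samples = erf_edge(x, x0_true)        # noise-free observed samples

# Scan candidate positions on a fine grid and keep the one that minimises
# the squared fitting error (a simple stand-in for a proper optimiser).
candidates = np.arange(2.0, 5.0, 0.001)
errors = [np.sum((samples - erf_edge(x, c)) ** 2) for c in candidates]
x0_est = candidates[int(np.argmin(errors))]
```

On noise-free samples the minimiser lands on the grid point nearest the true position, well below one-pixel accuracy.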
Modeling edges at subpixel accuracy using the local energy approach
In this paper we describe a new technique for 1-D and 2-D edge feature extraction to subpixel accuracy using edge models and the local energy approach. A candidate edge is modeled as one of a number of parametric edge models, and the fit is refined by a least-squared-error fitting technique.
Annihilation-driven Localised Image Edge Models
We propose a novel edge detection algorithm with sub-pixel accuracy based on annihilation of signals with finite rate of innovation. We show that the Fourier-domain annihilation equations can be interpreted as spatial-domain multiplications. From this new perspective, we obtain an accurate estimation of the edge model by assuming a simple parametric form within each localised block. Further, we build a locally adaptive global mask function (i.e., our edge model) for the whole image. The mask function is then used as an edge-preserving constraint in further processing. Numerical experiments on both edge localisation and image up-sampling show the effectiveness of the proposed approach, which outperforms state-of-the-art methods.
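The annihilation machinery underlying FRI methods can be sketched in a toy 1-D setting (our example, not the paper's 2-D block-based algorithm): a stream of K Diracs has Fourier coefficients X[m] = Σₖ aₖ·uₖ^m with uₖ = e^(−j2πtₖ), and a filter h whose roots are the uₖ annihilates them, so solving the annihilation equations and rooting h recovers the locations exactly.

```python
import numpy as np

K = 2
t_true = np.array([0.20, 0.55])          # Dirac locations in [0, 1)
a_true = np.array([1.0, 0.7])            # amplitudes

m = np.arange(-K, K + 1)                 # Fourier indices -K..K
u = np.exp(-2j * np.pi * t_true)
X = (a_true[None, :] * u[None, :] ** m[:, None]).sum(axis=1)   # X[m]

# Annihilation: X[m] + sum_{l=1..K} h[l] X[m-l] = 0, with h[0] fixed to 1.
rows, rhs = [], []
for mm in range(0, K + 1):               # equations whose indices stay in range
    rows.append([X[mm - l + K] for l in range(1, K + 1)])  # X array index = m + K
    rhs.append(-X[mm + K])
h_tail = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
h = np.concatenate(([1.0], h_tail))

# The roots of the annihilating filter encode the Dirac locations.
roots = np.roots(h)
t_est = np.sort(np.mod(-np.angle(roots) / (2 * np.pi), 1.0))
```

With noise-free coefficients the recovery is exact up to floating-point error; in practice the system is solved in a least-squares or subspace sense to tolerate noise.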
Error analysis and planning accuracy for dimensional measurement in active vision inspection
This paper discusses the effect of spatial quantization errors and displacement errors on the precision of dimensional measurements for an edge segment. A probabilistic analysis in terms of the resolution of the image is developed for 2D quantization errors. Expressions for the mean and variance of these errors are developed, and the probability density function of the quantization error is derived. The position and orientation errors of the active head are assumed to be normally distributed, and a probabilistic analysis in terms of these errors is developed for the displacement errors. By integrating the spatial quantization errors and the displacement errors, we can compute the total error in the active vision inspection system. Based on the developed analysis, we investigate whether a given set of sensor setting parameters in an active system is suitable for obtaining a desired accuracy for specific dimensional measurements, and we determine sensor positions and view directions that meet the required tolerance and accuracy of inspection.
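The flavour of such quantization-error statistics can be checked numerically (an illustrative sketch, not the paper's 2D derivation): rounding a uniformly distributed position to a grid of step q produces an error that is approximately uniform on [−q/2, q/2], with mean 0 and variance q²/12.

```python
import numpy as np

rng = np.random.default_rng(0)
q = 0.5                                   # quantization step (e.g. pixel pitch)
x = rng.uniform(0, 100, 200_000)          # true positions
err = x - np.round(x / q) * q             # spatial quantization error

mean_err = err.mean()                     # should be close to 0
var_err = err.var()                       # should be close to q**2 / 12
```

The q²/12 variance is the standard uniform-quantization result and matches the Monte Carlo estimate to within sampling error.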
The computer vision precision. Limit precision of edge localisation
The key question in computer vision inspection is the precision of edge localization in digital images. The method for localizing image primitives with subpixel precision is essentially interpolation in the significant zones.
The sensitivity of an image acquisition system is limited by the quantization error: even when the input signal is correctly sampled according to the Shannon theorem, there are certain input changes that cannot be detected. This creates a "black zone", and it is difficult to claim a measurement precision finer than the dimension of the black zone.
This paper analyses the limit precision of edge localization and dimensional measurement in digital grey-level images as a function of the dynamic luminance digitization interval and the system impulse response, with the help of the black-zone concept.
The influence of noise on the black zone is also studied.
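The black-zone effect is easy to demonstrate with a toy model (our construction, not the paper's analysis): after luminance quantization, two edge positions that are close enough produce byte-for-byte identical samples, so no algorithm can tell them apart.

```python
import math
import numpy as np

def sampled_edge(x0, levels=16):
    # Blurred unit edge at sub-pixel position x0, sampled on an integer grid
    # and quantized to a small number of grey levels.
    x = np.arange(8)
    g = np.array([0.5 * (1 + math.erf(xi - x0)) for xi in x])
    return np.round(g * (levels - 1)).astype(int)

a = sampled_edge(3.50)
b = sampled_edge(3.51)   # edge shifted by 0.01 px
# With only 16 grey levels the two quantized sample vectors coincide,
# so this 0.01-px displacement falls inside the "black zone".
```

Increasing the number of grey levels (the dynamic luminance digitization interval) shrinks the black zone, which is exactly the trade-off the paper quantifies.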
Feature Extraction for image super-resolution using finite rate of innovation principles
To understand a real-world scene from several multiview pictures, it is necessary to find
the disparities existing between each pair of images so that they are correctly related to one
another. This process, called image registration, requires the extraction of some specific
information about the scene. This is achieved by taking features out of the acquired
images. Thus, the quality of the registration depends largely on the accuracy of the
extracted features.
Feature extraction can be formulated as a sampling problem for which perfect reconstruction of the desired features is wanted. The recent sampling theory for signals with finite rate of innovation (FRI) and the B-spline theory offer an appropriate new framework for the extraction of features in real images. This thesis first focuses on extending the
sampling theory for FRI signals to a multichannel case and then presents exact sampling
results for two different types of image features used for registration: moments and edges.
In the first part, it is shown that the geometric moments of an observed scene can
be retrieved exactly from sampled images and used as global features for registration. The
second part describes how edges can also be retrieved perfectly from sampled images for
registration purposes. The proposed feature extraction schemes therefore allow in theory
the exact registration of images. Indeed, various simulations show that the proposed
extraction/registration methods outperform traditional ones, especially at low resolution.
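The moment-based registration idea can be sketched with the simplest geometric moments (a hedged toy example; the thesis retrieves the moments exactly from samples via FRI/B-spline theory, which is not reproduced here): the zeroth- and first-order moments give an image's centroid, and the difference of centroids estimates the translation between two views.

```python
import numpy as np

def centroid(img):
    # First-order geometric moments normalised by the zeroth-order moment.
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    return np.array([(xs * img).sum() / m00, (ys * img).sum() / m00])

img = np.zeros((32, 32))
img[10:18, 8:20] = 1.0                              # a bright rectangle
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))   # translate by (dy, dx)

dx, dy = centroid(shifted) - centroid(img)          # recovered translation
```

For a pure translation the centroid shift recovers the displacement exactly, which is why moments make convenient global registration features.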
These characteristics make such feature extraction techniques very appropriate for
applications like image super-resolution for which a very precise registration is needed. The
quality of the super-resolved images obtained using the proposed feature extraction methods is improved in comparison with other approaches. Finally, the notion of polyphase components is used to adapt the image acquisition model to the characteristics of real digital cameras in order to run super-resolution experiments on real images.
Machine vision techniques for inspection of dry-fibre composite preforms in the aerospace industry
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis presents the results of a three year investigation into machine vision techniques for in-process automated inspection of dry-fibre composite preforms. Efficient texture analysis based techniques have been developed, tested, and implemented in a prototype robotic assembly cell. Industrial constraints have been considered in the development of all the algorithms described. A single channel texture analysis model is described which can successfully segment images containing only a few textures. The model is based on convolution of the image with small kernels optimised for the task, and is elegant in the sense that it is computationally simple and easily
realisable in low cost hardware. A new convolution kernel optimisation algorithm is described. It is demonstrated that convolution kernels can also be optimised to perform as edge operators in simple textured images. A novel boundary refinement algorithm is described which reduces the inspection errors inherent in texture based boundary estimates. The algorithm takes the
form of a local search, using the texture estimate as a guiding template, and
selects edge points by maximising a merit function. Optimum parameters for the merit function are obtained using multiple training images in conjunction with simple function optimisation algorithms. This study is funded by the Engineering and Physical Sciences Research Council (EPSRC) and Dowty Aerospace Propellers Ltd.
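A small-kernel texture segmentation of the kind described can be sketched as follows (illustrative only: the kernel here is a generic Laplacian, not one of the thesis's task-optimised kernels): convolve the image with a small kernel and threshold the local response energy to separate a fine texture from a smooth region.

```python
import numpy as np

# Synthetic test image: smooth left half, fine checkerboard texture right half.
img = np.zeros((16, 32))
img[:, 16:] = np.indices((16, 16)).sum(axis=0) % 2

kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], float)            # small high-pass kernel

# 'valid' 2-D convolution via explicit loops (keeps the sketch dependency-free).
h, w = img.shape
resp = np.zeros((h - 2, w - 2))
for i in range(h - 2):
    for j in range(w - 2):
        resp[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()

# Texture energy and a crude two-class segmentation by thresholding.
energy = resp ** 2
mask = energy > 0.5 * energy.max()
```

On this synthetic pair the energy map is flat on the smooth half and uniformly high on the textured half, so a single threshold recovers the texture boundary, which is the role the optimised kernels play in the real inspection cell.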