
    A new Edge Detector Based on Parametric Surface Model: Regression Surface Descriptor

    In this paper we present a new methodology for edge detection in digital images. The first originality of the proposed method is to consider image content as a parametric surface. Then, an original parametric local model of this surface representing image content is proposed. The few parameters involved in the proposed model are shown to be very sensitive to discontinuities in the surface, which correspond to edges in the image content. This naturally leads to the design of an efficient edge detector. Moreover, a thorough analysis of the proposed model also allows us to explain how these parameters can be used to obtain edge descriptors such as orientations and curvatures. In practice, the proposed methodology offers two main advantages. First, it is highly customizable and can be adjusted to a wide range of problems, from coarse to fine scale edge detection. Second, it is very robust to blurring and additive noise. Numerical results are presented to emphasize these properties and to confirm the efficiency of the proposed method through a comparative study with other edge detectors. Comment: 21 pages, 13 figures and 2 tables
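
    The abstract does not spell out the parametric model itself, so the sketch below only illustrates the general idea: fit a generic quadratic surface to each pixel neighbourhood and use the fitted gradient magnitude as an edge response. The window radius and the polynomial basis are assumptions, not the paper's actual model.

    ```python
    import numpy as np

    def local_surface_edge_response(image, radius=2):
        """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to each pixel
        neighbourhood and use sqrt(b^2 + c^2), the gradient magnitude of the
        fitted surface, as an edge response (hypothetical stand-in for the
        paper's parametric model)."""
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        x, y = xs.ravel().astype(float), ys.ravel().astype(float)
        # Design matrix of the local parametric model.
        A = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
        H, W = image.shape
        response = np.zeros((H, W), dtype=float)
        for i in range(radius, H - radius):
            for j in range(radius, W - radius):
                patch = image[i - radius:i + radius + 1, j - radius:j + radius + 1]
                coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
                response[i, j] = np.hypot(coeffs[1], coeffs[2])  # |gradient| of the fit
        return response
    ```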

    Robust Feature Detection and Local Classification for Surfaces Based on Moment Analysis

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is quite difficult as well as indispensable for many surface processing applications. Usually, the feature detection is done via a local curvature analysis. When dealing with large, irregular triangular grids, e.g. generated via a marching cubes algorithm, such detectors are tedious to handle and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zeroth and first moments on the discrete surface. The corresponding integral quantities are stable to compute and give less noisy results compared to discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.
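
    As a rough illustration of moment-based classification, the sketch below computes the zeroth moment (area) and first moment (area-weighted centroid) over a ring of triangles around a vertex and uses the offset between the vertex and its local centroid as a curvature-free feature indicator. The neighbourhood definition and the indicator are assumptions; the abstract does not give the exact estimator.

    ```python
    import numpy as np

    def local_moments(vertices, faces, vertex_id, ring_faces):
        """Zeroth moment (total area) and first moment (area-weighted centroid)
        of the triangles in `ring_faces` around `vertex_id`; the offset between
        the vertex and the local centroid serves as a feature indicator
        (hypothetical illustration, not the paper's exact estimator)."""
        area_total = 0.0
        first_moment = np.zeros(3)
        for f in ring_faces:
            a, b, c = vertices[faces[f]]
            area = 0.5 * np.linalg.norm(np.cross(b - a, c - a))  # triangle area
            centroid = (a + b + c) / 3.0
            area_total += area
            first_moment += area * centroid
        local_centroid = first_moment / max(area_total, 1e-12)
        # Large offsets hint at edges/corners; the offset's sign along the
        # vertex normal would separate concave from convex regions.
        offset = np.linalg.norm(local_centroid - vertices[vertex_id])
        return area_total, local_centroid, offset
    ```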

    Image Feature Information Extraction for Interest Point Detection: A Comprehensive Review

    Interest point detection is one of the most fundamental and critical problems in computer vision and image processing. In this paper, we carry out a comprehensive review of image feature information (IFI) extraction techniques for interest point detection. To systematically introduce how the existing interest point detection methods extract IFI from an input image, we propose a taxonomy of IFI extraction techniques for interest point detection. According to this taxonomy, we discuss the different types of IFI extraction techniques for interest point detection. Furthermore, we identify the main unresolved issues related to the existing IFI extraction techniques, as well as interest point detection methods that have not been discussed before. The existing popular datasets and evaluation standards are presented, and the performances of eighteen state-of-the-art approaches are evaluated and discussed. Moreover, future research directions on IFI extraction techniques for interest point detection are elaborated.

    From 3D Point Clouds to Pose-Normalised Depth Maps

    We consider the problem of generating either pairwise-aligned or pose-normalised depth maps from noisy 3D point clouds in relatively unrestricted poses. Our system is deployed in a 3D face alignment application and consists of the following four stages: (i) data filtering, (ii) nose tip identification and sub-vertex localisation, (iii) computation of the (relative) face orientation, and (iv) generation of either a pose-aligned or a pose-normalised depth map. We generate an implicit radial basis function (RBF) model of the facial surface and this is employed within all four stages of the process. For example, in stage (ii), construction of novel invariant features is based on sampling this RBF over a set of concentric spheres to give a spherically-sampled RBF (SSR) shape histogram. In stage (iii), a second novel descriptor, called an isoradius contour curvature signal, is defined, which allows rotational alignment to be determined using a simple process of 1D correlation. We test our system on both the University of York (UoY) 3D face dataset and the Face Recognition Grand Challenge (FRGC) 3D data. For the more challenging UoY data, our SSR descriptors significantly outperform three variants of spin images, successfully identifying nose vertices at a rate of 99.6%. Nose localisation performance on the higher-quality FRGC data, which has only small pose variations, is 99.9%. Our best system successfully normalises the pose of 3D faces at rates of 99.1% (UoY data) and 99.6% (FRGC data).
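
    A minimal sketch of the stage (iii) alignment idea, assuming the isoradius contour signal has already been sampled uniformly in angle: the rotation is recovered as the peak of a 1D circular cross-correlation. The signal construction itself is not shown and the function below is a hypothetical illustration, not the system's actual code.

    ```python
    import numpy as np

    def estimate_rotation(signal_a, signal_b):
        """Estimate the rotation between two isoradius contour signals sampled
        uniformly in angle, via 1D circular cross-correlation (a sketch of the
        alignment step; the exact signal definition is assumed)."""
        a = np.asarray(signal_a, float) - np.mean(signal_a)
        b = np.asarray(signal_b, float) - np.mean(signal_b)
        # Circular correlation computed in the Fourier domain.
        corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=len(a))
        shift = int(np.argmax(corr))      # best circular shift in samples
        return 360.0 * shift / len(a)     # convert to degrees
    ```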

    Direct Observation of Cosmic Strings via their Strong Gravitational Lensing Effect: II. Results from the HST/ACS Image Archive

    We have searched 4.5 square degrees of archival HST/ACS images for cosmic strings, identifying close pairs of similar, faint galaxies and selecting groups whose alignment is consistent with gravitational lensing by a long, straight string. We find no evidence for cosmic strings in five large-area HST treasury surveys (covering a total of 2.22 square degrees), or in any of 346 multi-filter guest observer images (1.18 square degrees). Assuming that simulations accurately predict the number of cosmic strings in the universe, this non-detection allows us to place upper limits on the unitless universal cosmic string tension of G mu/c^2 < 2.3 x 10^-6, and cosmic string density of Omega_s < 2.1 x 10^-5 at the 95% confidence level (marginalising over the other parameter in each case). We find four dubious cosmic string candidates in 318 single-filter guest observer images (1.08 square degrees), which we are unable to conclusively eliminate with existing data. The confirmation of any one of these candidates as a cosmic string would imply G mu/c^2 ~ 10^-6 and Omega_s ~ 10^-5. However, we estimate that there is at least a 92% chance that these string candidates are random alignments of galaxies. If we assume that these candidates are indeed false detections, our final limits on G mu/c^2 and Omega_s fall to 6.5 x 10^-7 and 7.3 x 10^-6. Due to the extensive sky coverage of the HST/ACS image archive, the above limits are universal. They are quite sensitive to the number of fields being searched, and could be further reduced by more than a factor of two using forthcoming HST data. Comment: 21 pages, 18 figures
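
    As a toy illustration of the pair-finding step, the sketch below selects close pairs of similarly bright catalogue sources with a KD-tree; the column names, the separation threshold and the magnitude cut are assumptions and not the selection criteria actually used in the survey.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def candidate_pairs(ra_deg, dec_deg, mag, max_sep_arcsec=15.0, max_dmag=0.5):
        """Toy illustration of the pair-finding step: return indices of close
        pairs of similarly bright sources, the raw material for testing
        string-like alignments (thresholds are assumptions, not the paper's
        actual cuts)."""
        dec0 = np.radians(np.mean(dec_deg))
        # Small-field tangent-plane approximation, in arcseconds.
        x = np.asarray(ra_deg) * 3600.0 * np.cos(dec0)
        y = np.asarray(dec_deg) * 3600.0
        tree = cKDTree(np.column_stack([x, y]))
        pairs = tree.query_pairs(r=max_sep_arcsec)
        mag = np.asarray(mag)
        return [(i, j) for i, j in pairs if abs(mag[i] - mag[j]) <= max_dmag]
    ```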

    Geometric and photometric affine invariant image registration

    This thesis aims to present a solution to the correspondence problem for the registration of wide-baseline images taken from uncalibrated cameras. We propose an affine invariant descriptor that combines the geometry and photometry of the scene to find correspondences between both views. The geometric affine invariant component of the descriptor is based on the affine arc-length metric, whereas the photometry is analysed by invariant colour moments. A graph structure represents the spatial distribution of the primitive features; i.e. nodes correspond to detected high-curvature points, whereas arcs represent connectivities given by extracted contours. After matching, we refine the search for correspondences by using a maximum likelihood robust algorithm. We have evaluated the system over synthetic and real data. The method is resilient to the propagation of errors introduced by approximations in the system.
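
    A minimal sketch of the geometric component, assuming the classical affine arc-length definition ds = |x'y'' - y'x''|^(1/3) dt and simple finite differences; the thesis's exact discretisation and normalisation are not given in the abstract.

    ```python
    import numpy as np

    def affine_arc_length(contour):
        """Cumulative affine arc-length of a parametric contour of shape (N, 2),
        using ds = |x' y'' - y' x''|**(1/3) dt with finite differences (a sketch
        of the invariant metric; the thesis's discretisation is an assumption)."""
        contour = np.asarray(contour, dtype=float)
        x, y = contour[:, 0], contour[:, 1]
        dx, dy = np.gradient(x), np.gradient(y)       # first derivatives
        ddx, ddy = np.gradient(dx), np.gradient(dy)   # second derivatives
        ds = np.abs(dx * ddy - dy * ddx) ** (1.0 / 3.0)
        return np.concatenate([[0.0], np.cumsum(ds[1:])])
    ```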

    A generalised framework for saliency-based point feature detection

    Here we present a novel, histogram-based salient point feature detector that may naturally be applied to both images and 3D data. Existing point feature detectors are often modality-specific, with 2D and 3D feature detectors typically constructed in separate ways. As such, their applicability in a 2D-3D context is very limited, particularly where the 3D data is obtained by a LiDAR scanner. By contrast, our histogram-based approach is highly generalisable and, as such, may be meaningfully applied between 2D and 3D data. Using the generalised approach, we propose salient point detectors for images, and for both untextured and textured 3D data. The approach naturally allows for the detection of salient 3D points based jointly on both the geometry and texture of the scene, allowing for broader applicability. The repeatability of the feature detectors is evaluated using a range of datasets including image and LiDAR input from indoor and outdoor scenes. Experimental results demonstrate a significant improvement in terms of 2D-2D and 2D-3D repeatability compared to existing multi-modal feature detectors.
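
    The abstract does not specify the histogram or the scoring, so the sketch below uses a generic histogram-entropy saliency that applies equally to image intensities and LiDAR attributes; treat the neighbourhood input and the entropy score as assumptions made in the spirit of the approach, not as the authors' detector.

    ```python
    import numpy as np

    def histogram_entropy_saliency(values, neighbours, bins=16):
        """Generic histogram-based saliency: the Shannon entropy of the local
        value histogram around each point, applicable to image intensities or
        LiDAR attributes alike (a sketch, not the paper's actual detector)."""
        values = np.asarray(values, dtype=float)
        lo, hi = float(values.min()), float(values.max()) + 1e-9
        saliency = np.zeros(len(neighbours))
        for i, idx in enumerate(neighbours):          # idx: local neighbourhood indices
            hist, _ = np.histogram(values[idx], bins=bins, range=(lo, hi))
            p = hist / max(hist.sum(), 1)
            p = p[p > 0]
            saliency[i] = -np.sum(p * np.log2(p))     # entropy of the local histogram
        return saliency
    ```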

    A Multi-scale Bilateral Structure Tensor Based Corner Detector

    9th Asian Conference on Computer Vision, ACCV 2009, Xi'an, 23-27 September 2009.
    In this paper, a novel multi-scale nonlinear structure tensor based corner detection algorithm is proposed to effectively improve the classical Harris corner detector. By considering both the spatial and gradient distances of neighboring pixels, a nonlinear bilateral structure tensor is constructed to examine the local image pattern. It can be seen that the linear structure tensor used in the original Harris corner detector is a special case of the proposed bilateral one in which only the spatial distance is considered. Moreover, a multi-scale filtering scheme is developed to distinguish trivial structures from true corners based on their different characteristics at multiple scales. The comparison between the proposed approach and four representative, state-of-the-art corner detectors shows that our method has much better performance in terms of both detection rate and localization accuracy.
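
    A minimal sketch of the bilateral structure tensor idea described above: each neighbouring gradient is weighted by both its spatial distance and its gradient-domain distance to the centre pixel, and a Harris-style response is computed from the resulting tensor. The parameter values and the single-scale form are assumptions; the multi-scale filtering scheme is omitted.

    ```python
    import numpy as np

    def bilateral_harris(image, radius=2, sigma_s=1.5, sigma_r=0.1, k=0.04):
        """Harris-style corner response from a bilateral structure tensor: each
        neighbour's gradient outer product is weighted by its spatial distance
        and by the difference between its gradient and the centre pixel's
        (a sketch of the idea; parameter values are assumptions)."""
        img = np.asarray(image, dtype=float)
        Iy, Ix = np.gradient(img)
        grad = np.stack([Ix, Iy], axis=-1)
        H, W = img.shape
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        w_spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
        response = np.zeros_like(img)
        for i in range(radius, H - radius):
            for j in range(radius, W - radius):
                g = grad[i - radius:i + radius + 1, j - radius:j + radius + 1]
                diff = g - grad[i, j]
                w_range = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma_r ** 2))
                w = w_spatial * w_range
                # Weighted (bilateral) structure tensor entries.
                Jxx = np.sum(w * g[..., 0] ** 2)
                Jyy = np.sum(w * g[..., 1] ** 2)
                Jxy = np.sum(w * g[..., 0] * g[..., 1])
                response[i, j] = (Jxx * Jyy - Jxy ** 2) - k * (Jxx + Jyy) ** 2
        return response
    ```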
