    Rotational Projection Statistics for 3D Local Surface Description and Object Recognition

    Recognizing 3D objects in the presence of noise, varying mesh resolution, occlusion and clutter is a challenging task. This paper presents a novel method named Rotational Projection Statistics (RoPS). It has three major modules: Local Reference Frame (LRF) definition, RoPS feature description and 3D object recognition. We propose a novel technique to define the LRF by calculating the scatter matrix of all points lying on the local surface. RoPS feature descriptors are obtained by rotationally projecting the neighboring points of a feature point onto 2D planes and calculating a set of statistics (including low-order central moments and entropy) of the distribution of these projected points. Using the proposed LRF and RoPS descriptor, we present a hierarchical 3D object recognition algorithm. The performance of the proposed LRF, RoPS descriptor and object recognition algorithm was rigorously tested on a number of popular and publicly available datasets, where our techniques exhibited superior performance compared to existing ones. We also showed that our method is robust to noise and varying mesh resolution. Our RoPS-based algorithm achieved recognition rates of 100%, 98.9%, 95.4% and 96.0% on the Bologna, UWA, Queen's and Ca' Foscari Venezia datasets, respectively.
    Comment: The final publication is available at link.springer.com, International Journal of Computer Vision 201
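
    The two descriptor steps outlined in the abstract reduce to fairly compact linear algebra. Below is a minimal NumPy sketch, under assumed function and variable names, of (a) deriving an LRF from the scatter matrix of a local neighborhood and (b) computing the moment-and-entropy statistics for one projection; the published method additionally uses distance-based weighting, eigenvector sign disambiguation and multiple rotation angles, all omitted here.

        import numpy as np

        def local_reference_frame(neighbors, feature_point):
            # Scatter matrix of the neighborhood, relative to the feature point.
            diffs = neighbors - feature_point
            scatter = diffs.T @ diffs / len(neighbors)
            # Eigenvectors of the 3x3 scatter matrix, ordered from largest to
            # smallest eigenvalue, serve as the three LRF axes (sign
            # disambiguation omitted in this sketch).
            eigvals, eigvecs = np.linalg.eigh(scatter)  # ascending order
            return eigvecs[:, ::-1]

        def projection_statistics(projected, bins=5):
            # 2D distribution matrix of the points projected onto one plane.
            D, _, _ = np.histogram2d(projected[:, 0], projected[:, 1], bins=bins)
            D /= D.sum()
            i, j = np.indices(D.shape)
            ibar, jbar = (i * D).sum(), (j * D).sum()  # distribution centroid
            def mu(p, q):  # central moment of order p + q
                return ((i - ibar) ** p * (j - jbar) ** q * D).sum()
            entropy = -(D[D > 0] * np.log(D[D > 0])).sum()  # Shannon entropy
            return np.array([mu(1, 1), mu(2, 1), mu(1, 2), mu(2, 2), entropy])

    In the full descriptor, such per-projection statistics are concatenated over several rotations of the neighborhood about each LRF axis and over all three coordinate planes.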

    Feature extraction for range image interpretation using local topology statistics

    This thesis presents an approach for interpreting range images of known subject matter, such as the human face, based on the extraction and matching of local features from the images. In recent years, approaches to interpreting two-dimensional (2D) images based on local feature extraction have advanced greatly; systems such as the Scale Invariant Feature Transform (SIFT), for example, can detect and describe local features in 2D images effectively. With the aid of rapidly advancing three-dimensional (3D) imaging technology, in particular the advent of commercially available surface scanning systems based on photogrammetry, image representation has been able to extend into the third dimension. Moreover, range images confer a number of advantages over conventional 2D images, for instance invariance to lighting, pose and viewpoint changes. This work therefore attempts to establish how best to represent the local range surface with a feature descriptor, developing a matching system that takes advantage of the third dimension present in range images and casting this in the framework of an existing scale- and rotation-invariant recognition technology: SIFT.

    By exploring statistical representations of the local variation, it is possible to represent and match range images of human faces. This is achieved by extracting unique mathematical keys, known as feature descriptors, from automatically generated stable keypoint locations in the range images, thereby capturing local information about the distributions of surface types and their orientations simultaneously. Keypoints are generated through a scale-space approach, in which the (x, y) location and the appropriate scale (sigma) are detected. To achieve invariance to in-plane viewpoint rotation, a consistent canonical orientation is assigned to each keypoint and the sampling patch is rotated to this canonical orientation. The mixes of surface types, derived using the shape index, and the image gradient orientations are extracted from each sampling patch by placing nine overlapping Gaussian sub-regions over the measurement aperture. Each of the nine regions overlaps its neighbours by one standard deviation, minimising spatial aliasing during sampling and providing better continuity within the descriptor. Moreover, surface normals can be computed at each keypoint location, allowing the local 3D pose to be estimated and corrected within the feature descriptors, since the orientations in which the images were captured are unknown a priori. As a result, the formulated feature descriptors have strong discriminative power and are stable to rotational changes.
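
    Of the measurements mentioned above, the shape index is the key surface-type quantity and is easy to sketch. The following is a minimal NumPy illustration, with assumed names, of how it might be computed densely from a depth map via finite-difference curvature estimates; the thesis pipeline instead samples such values within nine overlapping Gaussian sub-regions at each keypoint's detected scale.

        import numpy as np

        def shape_index(depth):
            # First and second finite-difference derivatives of z = f(x, y);
            # rows are taken as y and columns as x.
            zy, zx = np.gradient(depth)
            zxy, zxx = np.gradient(zx)
            zyy, _ = np.gradient(zy)
            # Mean (H) and Gaussian (K) curvature of the Monge patch.
            w = 1.0 + zx**2 + zy**2
            H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
                 + (1 + zx**2) * zyy) / (2 * w**1.5)
            K = (zxx * zyy - zxy**2) / w**2
            # Principal curvatures k1 >= k2; the clip guards against small
            # negative discriminants caused by numerical noise.
            root = np.sqrt(np.clip(H**2 - K, 0.0, None))
            k1, k2 = H + root, H - root
            # Koenderink shape index in [-1, 1], spanning the surface types
            # cup, rut, saddle, ridge, cap; undefined on planar patches.
            return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

    In practice the depth map would first be smoothed with a Gaussian matched to the keypoint's scale, so that the curvature estimates respect the scale-space detection.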

    Towards Robust Visual Localization in Challenging Conditions

    Visual localization is a fundamental problem in computer vision, with a multitude of applications in robotics, augmented reality and structure-from-motion. The basic problem is to determine, from one or more images, the position and orientation of the camera that captured them, relative to some model of the environment. Current visual localization approaches typically work well when the images to be localized are captured under conditions similar to those encountered during mapping. However, when the environment exhibits large changes in visual appearance, due to e.g. variations in weather, season, time of day or viewpoint, the traditional pipelines break down. The reason is that the local image features used are based on low-level pixel-intensity information, which is not invariant to these transformations: when the environment changes, a different set of keypoints is detected and their descriptors differ, making long-term visual localization a challenging problem. In this thesis, five papers are included, which present work towards solving the problem of long-term visual localization. Two of the articles present ideas for how semantic information may be included to aid the localization process: one approach relies only on semantic information for visual localization, and the other shows how semantics can be used to detect outlier feature correspondences. The third paper considers how the output of a monocular depth-estimation network can be used to extract features that are less sensitive to viewpoint changes. The fourth article is a benchmark paper, presenting three new datasets aimed at evaluating localization algorithms in the context of long-term visual localization. Lastly, the fifth article considers how to perform convolutions on spherical imagery, which in the future might be applied to learning local image features for the localization problem.
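
    The geometric core of this problem, once 2D-3D correspondences have been established, is standardly posed as a Perspective-n-Point (PnP) problem inside a RANSAC loop. The sketch below, using OpenCV's solvePnPRansac with assumed input names, illustrates that generic structure-based step rather than any specific method from the thesis; unreliable correspondences under appearance change are exactly what break this stage.

        import numpy as np
        import cv2

        def localize(points_3d, points_2d, camera_matrix):
            # points_3d: (N, 3) model points matched to the query image via
            # local feature descriptors; points_2d: (N, 2) keypoint locations.
            ok, rvec, tvec, inliers = cv2.solvePnPRansac(
                points_3d.astype(np.float32),
                points_2d.astype(np.float32),
                camera_matrix, None,
                reprojectionError=8.0, iterationsCount=1000)
            if not ok:
                return None
            # Convert the Rodrigues vector to a world-to-camera rotation and
            # recover the camera centre in world coordinates.
            R, _ = cv2.Rodrigues(rvec)
            camera_center = -R.T @ tvec
            return R, camera_center

    The RANSAC outlier rejection is where, for example, semantic consistency checks on the correspondences can be inserted to cope with large appearance changes.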