
    Feature Level Fusion of Face and Fingerprint Biometrics

    The aim of this paper is to study fusion at the feature extraction level for face and fingerprint biometrics. The proposed approach fuses the two traits by extracting independent feature pointsets from the two modalities and making the two pointsets compatible for concatenation. Moreover, to handle the curse of dimensionality, the feature pointsets are properly reduced in dimension. Different feature reduction techniques are implemented, before and after the fusion of the feature pointsets, and the results are duly recorded. The fused feature pointsets for the database and the query face and fingerprint images are matched using techniques based on either point pattern matching or Delaunay triangulation. Comparative experiments are conducted on chimeric and real databases to assess the actual advantage of fusion performed at the feature extraction level in comparison to the matching score level. Comment: 6 pages, 7 figures, conference
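
    A minimal sketch of the fusion idea, assuming two hypothetical per-sample feature matrices (face_feats, finger_feats) produced by separate front-ends; the PCA-based reduction and the two pre-/post-fusion variants are generic stand-ins, not the authors' pipeline.

```python
# Generic sketch of feature-level fusion with dimensionality reduction.
# face_feats / finger_feats are hypothetical placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
face_feats = rng.normal(size=(100, 64))    # placeholder face feature vectors
finger_feats = rng.normal(size=(100, 80))  # placeholder fingerprint feature vectors

# Make the two feature sets compatible for concatenation by normalising scales.
face_n = StandardScaler().fit_transform(face_feats)
finger_n = StandardScaler().fit_transform(finger_feats)

# Variant A: reduce each modality first, then concatenate.
fused_pre = np.hstack([PCA(n_components=16).fit_transform(face_n),
                       PCA(n_components=16).fit_transform(finger_n)])

# Variant B: concatenate first, then reduce the joint vector
# to counter the curse of dimensionality.
fused_post = PCA(n_components=32).fit_transform(np.hstack([face_n, finger_n]))

print(fused_pre.shape, fused_post.shape)  # (100, 32) (100, 32)
```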

    Implementation of Point Pattern Alignment in a Fingerprint Recognition System Using Template Matching

    Fingerprints are one of the most widely used biometric identifiers: every person's fingerprints have a unique and distinct pattern, so identification using fingerprints is highly reliable. However, manual fingerprint recognition by humans is hard to apply because of the complexity of the ridge patterns, so an accurate fingerprint matching system is needed. A fingerprint recognition system requires three steps: image enhancement, feature extraction, and matching. In this study, the crossing number method is used for minutiae extraction and template matching is used for matching. We also add a point pattern alignment step, consisting of ridge translation and rotation, to increase system performance. The system achieves a performance of 18.54% for the matching process without point pattern alignment, and 67.40% when the point pattern alignment step is added.
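
    A minimal sketch of the crossing number step, assuming a thinned, one-pixel-wide binary ridge skeleton stored as a 0/1 NumPy array; the function name and structure are illustrative, not taken from the paper.

```python
# Crossing-number minutiae extraction on a thinned binary ridge skeleton.
# `skeleton` is a hypothetical 0/1 numpy array, not the paper's data.
import numpy as np

def crossing_number_minutiae(skeleton):
    """Return (row, col, type) for ridge endings (CN=1) and bifurcations (CN=3)."""
    minutiae = []
    rows, cols = skeleton.shape
    # 8-neighbourhood visited in circular order P1..P8 (and back to P1).
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if skeleton[r, c] != 1:
                continue
            p = [skeleton[r + dr, c + dc] for dr, dc in offsets]
            cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                minutiae.append((r, c, "ending"))
            elif cn == 3:
                minutiae.append((r, c, "bifurcation"))
    return minutiae
```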

    Feature Extraction and Matching from images / Intan Syaherra Ramli

    Feature extraction and matching are crucial in computer vision and graphics, and are fundamental to many applications such as 3D point cloud reconstruction and pattern recognition. Feature extraction is used to detect unique feature points in the images. The unique features depend on the type of image captured with different acquisition apparatus and stored in formats such as JPEG, PNG and DICOM. The feature matching algorithm is then used to find the corresponding feature points between two or more images. Figure 1 shows the various feature extraction and matching methods, while Figure 2 illustrates feature extraction and matching from 2D images in the JPEG format. In this example, the features were extracted from the images using Good Features to Track, and the feature points from the two images were then matched using Lucas-Kanade optical flow. The corresponding feature points are used to compute the 3D point cloud for computer graphics applications, where the 3D point cloud represents the real object.
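
    A short sketch of the pipeline named above (Good Features to Track followed by pyramidal Lucas-Kanade tracking) using OpenCV; the file names are placeholders, not the report's data.

```python
# Detect corners with Good Features to Track, then match them between two images
# with pyramidal Lucas-Kanade optical flow. img1.jpg / img2.jpg are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect up to 500 corner features in the first image.
pts1 = cv2.goodFeaturesToTrack(img1, maxCorners=500, qualityLevel=0.01, minDistance=7)

# Track those features into the second image.
pts2, status, err = cv2.calcOpticalFlowPyrLK(img1, img2, pts1, None,
                                             winSize=(21, 21), maxLevel=3)

# Keep only successfully tracked correspondences.
good1 = pts1[status.ravel() == 1].reshape(-1, 2)
good2 = pts2[status.ravel() == 1].reshape(-1, 2)
print(f"{len(good1)} corresponding feature points")
```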

    Image mosaicing of panoramic images

    Image mosaicing is the combining or stitching of several images of a scene or object, taken from different angles, into a single image with a greater angle of view. It is a developing field that has seen considerable advancement in recent years, and many algorithms have been developed. Our work follows a feature-based approach to image mosaicing, whose steps consist of feature point detection, feature point descriptor extraction and feature point matching. The RANSAC algorithm is applied to eliminate mismatches and obtain the transformation matrix between the images, and the input image is then transformed with the appropriate mapping model for stitching. This paper therefore proposes an algorithm for mosaicing two images efficiently using Harris corner feature detection, RANSAC-based feature matching, and then image transformation, warping and blending.
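
    A condensed mosaicing sketch along these lines; ORB is used here as a convenient stand-in for the Harris corner plus descriptor stage, the file names are placeholders, and blending is omitted for brevity.

```python
# Detect and describe features, estimate a homography with RANSAC, then warp and
# paste the second image into the first image's frame (no blending in this sketch).
import cv2
import numpy as np

imgA = cv2.imread("left.jpg")   # placeholder inputs
imgB = cv2.imread("right.jpg")

orb = cv2.ORB_create(2000)      # stand-in for Harris corners + descriptors
kpA, desA = orb.detectAndCompute(imgA, None)
kpB, desB = orb.detectAndCompute(imgB, None)

# Brute-force matching of binary descriptors.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(desA, desB)
src = np.float32([kpB[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kpA[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects mismatches while estimating the transformation matrix.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the second image into the first image's frame and paste the first on top.
h, w = imgA.shape[:2]
mosaic = cv2.warpPerspective(imgB, H, (w * 2, h))
mosaic[0:h, 0:w] = imgA
cv2.imwrite("mosaic.jpg", mosaic)
```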

    Image Matching via Saliency Region Correspondences

    We introduce the notion of co-saliency for image matching. Our matching algorithm combines the discriminative power of feature correspondences with the descriptive power of matching segments. The co-saliency matching score favors correspondences that are consistent with 'soft' image segmentation as well as with local point feature matching. We express the matching model via a joint image graph (JIG) whose edge weights represent intra- as well as inter-image relations. The dominant spectral components of this graph lead to simultaneous pixel-wise alignment of the images and saliency-based synchronization of 'soft' image segmentation. The co-saliency score function, which characterizes these spectral components, can be directly used as a similarity metric as well as a positive feedback for updating and establishing new point correspondences. We present experiments showing the extraction of matching regions and pointwise correspondences, and the utility of the global image similarity in the context of place recognition
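
    A toy illustration of the spectral machinery described above: a joint image graph whose edge weights mix intra-image affinities with inter-image correspondence weights, followed by extraction of its dominant spectral components. The random affinities are placeholders, not the authors' co-saliency weights.

```python
# Build a joint image graph over the nodes of two images and take the dominant
# eigenvectors of its degree-normalised affinity matrix. All weights are random
# placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 50, 60                                       # hypothetical node counts per image

W11 = rng.random((n1, n1)); W11 = (W11 + W11.T) / 2   # intra-image affinities, image 1
W22 = rng.random((n2, n2)); W22 = (W22 + W22.T) / 2   # intra-image affinities, image 2
W12 = 0.1 * rng.random((n1, n2))                      # inter-image correspondence weights

# Joint image graph over both images' nodes.
W = np.block([[W11, W12],
              [W12.T, W22]])

# Dominant spectral components of the normalised graph.
d = W.sum(axis=1)
W_norm = W / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(W_norm)
dominant = vecs[:, -4:]                               # four leading eigenvectors
print(dominant.shape)                                 # (110, 4)
```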

    Fast and robust 3D feature extraction from sparse point clouds

    Matching 3D point clouds, a critical operation in map building and localization, is difficult with Velodyne-type sensors due to the sparse and non-uniform point clouds that they produce. Standard methods from dense 3D point clouds are generally not effective. In this paper, we describe a feature-based approach using Principal Components Analysis (PCA) of neighborhoods of points, which results in mathematically principled line and plane features. The key contribution in this work is to show how this type of feature extraction can be done efficiently and robustly even on non-uniformly sampled point clouds. The resulting detector runs in real-time and can be easily tuned to have a low false positive rate, simplifying data association. We evaluate the performance of our algorithm on an autonomous car at the MCity Test Facility using a Velodyne HDL-32E, and we compare our results against the state-of-the-art NARF keypoint detector. © 2016 IEEE
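
    A minimal sketch of PCA over local point neighbourhoods to label line-like and plane-like structure, in the spirit of the approach above; the neighbourhood size and eigenvalue-ratio thresholds are illustrative assumptions, not the paper's values.

```python
# Label each point of a sparse cloud as line-like, plane-like, or other by the
# eigenvalue ratios of its local covariance. k and the thresholds are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def classify_neighbourhoods(points, k=10, line_ratio=10.0, plane_ratio=10.0):
    """points: (N, 3) array; returns one label per point: 'line', 'plane', or 'other'."""
    tree = cKDTree(points)
    labels = []
    for p in points:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        # Eigenvalues of the local covariance, sorted in descending order.
        e1, e2, e3 = np.sort(np.linalg.eigvalsh(nbrs.T @ nbrs / k))[::-1] + 1e-12
        if e1 / e2 > line_ratio:        # one dominant direction -> line feature
            labels.append("line")
        elif e2 / e3 > plane_ratio:     # two dominant directions -> plane feature
            labels.append("plane")
        else:
            labels.append("other")
    return labels
```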