
    A comparative evaluation of interest point detectors and local descriptors for visual SLAM

    In this paper we compare the behavior of different interest point detectors and descriptors under the conditions needed for their use as landmarks in vision-based simultaneous localization and mapping (SLAM). We evaluate the repeatability of the detectors, as well as the invariance and distinctiveness of the descriptors, under different perceptual conditions, using sequences of images representing planar objects as well as 3D scenes. We believe that this information will be useful when selecting an appropriate…
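
    As a rough illustration of the kind of measurement described in this abstract, the sketch below estimates detector repeatability between two views related by a known homography, using OpenCV detectors as stand-ins. The file names, the homography file, and the 3-pixel tolerance are placeholder assumptions, not the paper's protocol.

```python
# Minimal repeatability sketch (assumed setup, not the paper's code):
# "img1.png", "img2.png" and "H_1to2.txt" are placeholders for a view pair
# with a known ground-truth homography.
import cv2
import numpy as np

def repeatability(detector, img1, img2, H, tol=3.0):
    """Fraction of keypoints from img1 whose projection under H lies within
    `tol` pixels of some keypoint detected in img2."""
    kps1 = detector.detect(img1, None)
    kps2 = detector.detect(img2, None)
    if not kps1 or not kps2:
        return 0.0
    pts1 = np.float32([kp.pt for kp in kps1]).reshape(-1, 1, 2)
    pts2 = np.float32([kp.pt for kp in kps2])
    proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
    hits = sum(np.min(np.linalg.norm(pts2 - p, axis=1)) < tol for p in proj)
    return hits / len(proj)

img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)
H = np.loadtxt("H_1to2.txt")  # 3x3 ground-truth homography, first view -> second

for name, det in [("SIFT", cv2.SIFT_create()), ("ORB", cv2.ORB_create())]:
    print(f"{name}: repeatability = {repeatability(det, img1, img2, H):.3f}")
```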

    Local descriptors for visual SLAM

    We present a comparison of several local image descriptors in the context of visual Simultaneous Localization and Mapping (SLAM). In visual SLAM, a set of points in the environment is extracted from images and used as landmarks. The points are represented by local descriptors, which are used to resolve the association between landmarks. In this paper, we study the class separability of several descriptors under changes in viewpoint and scale. Several experiments were carried out using sequences of images of 2D and 3D scenes…
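
    A toy way to probe this kind of class separability, assuming an OpenCV detector/descriptor pair and a ground-truth homography are available (all names below are illustrative, not the paper's setup): compare descriptor distances for geometrically corresponding keypoints against distances for non-corresponding ones; well-separated distributions indicate a discriminative descriptor.

```python
# Sketch only: split descriptor distances into "same landmark" vs "different
# landmark" groups using a known 3x3 homography H between the two views.
# kps1/des1 and kps2/des2 can come from e.g. cv2.SIFT_create().detectAndCompute.
import cv2
import numpy as np

def separability(kps1, des1, kps2, des2, H, tol=3.0):
    pts1 = np.float32([kp.pt for kp in kps1]).reshape(-1, 1, 2)
    pts2 = np.float32([kp.pt for kp in kps2])
    proj = cv2.perspectiveTransform(pts1, H).reshape(-1, 2)
    same, diff = [], []
    for i, p in enumerate(proj):
        d_geom = np.linalg.norm(pts2 - p, axis=1)        # spatial distances
        d_desc = np.linalg.norm(des2 - des1[i], axis=1)  # descriptor distances
        same.extend(d_desc[d_geom < tol])                # true correspondences
        diff.extend(d_desc[d_geom >= tol])               # everything else
    # A large gap between the two means suggests good class separability.
    return np.mean(same), np.mean(diff)
```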

    A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization

    We propose a new algorithm for the reliable detection and localization of video copy-move forgeries. Discovering well-crafted video copy-moves may be very difficult, especially when some uniform background is copied to occlude foreground objects. To reliably detect both additive and occlusive copy-moves we use a dense-field approach, with invariant features that guarantee robustness to several post-processing operations. To limit complexity, a suitable video-oriented version of PatchMatch is used, with a multiresolution search strategy and a focus on volumes of interest. Performance assessment relies on a new dataset, designed ad hoc, with realistic copy-moves and a wide variety of challenging situations. Experimental results show that the proposed method detects and localizes video copy-moves with good accuracy even in adverse conditions.
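
    The sketch below is a heavily simplified, single-image PatchMatch over raw grayscale patches (my own toy version, not the paper's video-oriented, invariant-feature, multiresolution variant): it estimates a dense nearest-neighbour field in which each patch is linked to a similar patch at least some distance away, the kind of field that is then post-processed to expose copy-moved regions.

```python
# Toy single-image PatchMatch (illustrative only).
import numpy as np

def _cost(img, p, q, P):
    """SSD between the P x P patches with top-left corners p and q."""
    a = img[p[0]:p[0] + P, p[1]:p[1] + P]
    b = img[q[0]:q[0] + P, q[1]:q[1] + P]
    return float(np.sum((a - b) ** 2))

def patchmatch_nnf(img, P=8, iters=4, min_dist=16, seed=0):
    """Approximate nearest-neighbour field: nnf[y, x] is the top-left corner of
    a similar patch at least `min_dist` pixels (L1) away from (y, x)."""
    rng = np.random.default_rng(seed)
    img = img.astype(np.float32)
    hv, wv = img.shape[0] - P + 1, img.shape[1] - P + 1   # valid corners
    nnf = np.stack([rng.integers(0, hv, (hv, wv)),
                    rng.integers(0, wv, (hv, wv))], axis=-1)
    cost = np.full((hv, wv), np.inf, dtype=np.float32)

    def try_assign(y, x, cy, cx):
        cy, cx = int(cy), int(cx)
        if not (0 <= cy < hv and 0 <= cx < wv):
            return
        if abs(cy - y) + abs(cx - x) < min_dist:   # forbid near-identity matches
            return
        c = _cost(img, (y, x), (cy, cx), P)
        if c < cost[y, x]:
            cost[y, x] = c
            nnf[y, x] = (cy, cx)

    for y in range(hv):                            # score the random init
        for x in range(wv):
            try_assign(y, x, *nnf[y, x])

    for it in range(iters):                        # alternate scan directions
        step = 1 if it % 2 == 0 else -1
        order_y = range(hv) if step == 1 else range(hv - 1, -1, -1)
        order_x = range(wv) if step == 1 else range(wv - 1, -1, -1)
        for y in order_y:
            for x in order_x:
                # Propagation: reuse the neighbour's match, shifted by one pixel.
                if 0 <= y - step < hv:
                    try_assign(y, x, nnf[y - step, x, 0] + step, nnf[y - step, x, 1])
                if 0 <= x - step < wv:
                    try_assign(y, x, nnf[y, x - step, 0], nnf[y, x - step, 1] + step)
                # Random search with an exponentially shrinking radius.
                r = max(hv, wv)
                while r >= 1:
                    dy, dx = rng.integers(-r, r + 1, 2)
                    try_assign(y, x, nnf[y, x, 0] + dy, nnf[y, x, 1] + dx)
                    r //= 2
    return nnf, cost   # low-cost, coherent offsets hint at copy-moved regions
```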

    An Evaluation of Popular Copy-Move Forgery Detection Approaches

    A copy-move forgery is created by copying and pasting content within the same image, and potentially post-processing it. In recent years, the detection of copy-move forgeries has become one of the most actively researched topics in blind image forensics. A considerable number of different algorithms have been proposed, focusing on different types of post-processed copies. In this paper, we aim to answer which copy-move forgery detection algorithms and processing steps (e.g., matching, filtering, outlier detection, affine transformation estimation) perform best in various post-processing scenarios. The focus of our analysis is to evaluate the performance of previously proposed feature sets. We achieve this by casting existing algorithms into a common pipeline. We examined the 15 most prominent feature sets and analyzed the detection performance on a per-image basis and on a per-pixel basis. We created a challenging real-world copy-move dataset, and a software framework for systematic image manipulation. Experiments show that the keypoint-based features SIFT and SURF, as well as the block-based DCT, DWT, KPCA, PCA and Zernike features, perform very well. These feature sets exhibit the best robustness against various noise sources and downsampling, while reliably identifying the copied regions. Comment: main paper 14 pages, supplemental material 12 pages; the main paper appeared in IEEE Transactions on Information Forensics and Security.
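
    As an illustration of the common stages named in this abstract (feature extraction, matching, filtering, and RANSAC-based affine estimation), here is a compact keypoint-based sketch using SIFT self-matching in OpenCV; the thresholds and the file name are illustrative choices, not the benchmark's settings.

```python
# Keypoint-based copy-move sketch ("suspect.png" is a placeholder file name).
import cv2
import numpy as np

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)
sift = cv2.SIFT_create()
kps, des = sift.detectAndCompute(img, None)

# Matching: match descriptors against themselves; k=3 so that, after skipping
# the trivial self-match (distance 0), real neighbours remain.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des, des, k=3)

src, dst = [], []
for m in matches:
    for cand in m[1:]:                                   # skip the self-match
        p = np.array(kps[cand.queryIdx].pt)
        q = np.array(kps[cand.trainIdx].pt)
        # Filtering: similar descriptors that are spatially far apart.
        if cand.distance < 150 and np.linalg.norm(p - q) > 40:
            src.append(p)
            dst.append(q)
            break

# Outlier removal and affine transformation estimation via RANSAC.
if len(src) >= 3:
    A, inliers = cv2.estimateAffine2D(np.float32(src), np.float32(dst),
                                      method=cv2.RANSAC, ransacReprojThreshold=3.0)
    if A is not None:
        print("affine transform:\n", A, "\ninliers:", int(inliers.sum()))
else:
    print("too few matches for a copy-move hypothesis")
```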

    Feature Extraction Methods for Character Recognition

    (Abstract not available.)

    Local Descriptor by Zernike Moments for Real-time Keypoint Matching

    This paper presents a real-time keypoint matching algorithm using a local descriptor derived from Zernike moments. From an input image, we find a set of keypoints using an existing corner detection algorithm. At each keypoint we extract a fixed-size image patch and compute a local descriptor derived from Zernike moments. The proposed local descriptor is invariant to rotation and illumination changes. In order to speed up the computation of Zernike moments, we compute the Zernike basis functions in advance and store them in a set of lookup tables. The matching is performed with an Approximate Nearest Neighbor (ANN) method and refined by a RANSAC algorithm. In the experiments we confirmed that videos with a frame size of 320×240, containing scale, rotation, illumination, and even 3D viewpoint changes, are processed at 25–30 Hz using the proposed method. Unlike existing keypoint matching algorithms, our approach also works in real time for registering a reference image.
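
    A minimal sketch of the descriptor side of this idea follows: precompute the complex Zernike basis functions once (the lookup-table trick mentioned in the abstract) and describe each patch by the magnitudes of its Zernike moments, which are rotation invariant. The patch size and maximum order below are my own illustrative choices, and the ANN/RANSAC matching stage is not shown.

```python
# Sketch of a Zernike-moment patch descriptor (illustrative parameters).
import numpy as np
from math import factorial

def zernike_basis(patch_size, nmax):
    """Precompute complex Zernike basis functions V_nm on a patch_size grid so
    that descriptor extraction reduces to dot products against these tables."""
    c = (patch_size - 1) / 2.0
    y, x = np.mgrid[0:patch_size, 0:patch_size]
    xn, yn = (x - c) / c, (y - c) / c
    rho = np.sqrt(xn**2 + yn**2)
    theta = np.arctan2(yn, xn)
    inside = rho <= 1.0                      # restrict to the unit disk
    tables = []
    for n in range(nmax + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:                  # n - |m| must be even
                continue
            R = np.zeros_like(rho)
            for k in range((n - m) // 2 + 1):
                coef = ((-1) ** k * factorial(n - k) /
                        (factorial(k) * factorial((n + m) // 2 - k)
                                      * factorial((n - m) // 2 - k)))
                R += coef * rho ** (n - 2 * k)
            tables.append(R * np.exp(1j * m * theta) * inside)
    return np.stack(tables)                  # shape: (num_moments, P, P)

def zernike_descriptor(patch, tables):
    """Rotation-invariant descriptor: magnitudes of the Zernike moments of a
    mean/std-normalised patch (normalisation gives illumination tolerance)."""
    patch = patch.astype(np.float32)
    patch = (patch - patch.mean()) / (patch.std() + 1e-6)
    Z = np.tensordot(tables.conj(), patch, axes=([1, 2], [0, 1]))
    return np.abs(Z)

TABLES = zernike_basis(patch_size=32, nmax=8)   # computed once, reused per patch
# usage: desc = zernike_descriptor(patch_32x32, TABLES)  # patch around a keypoint
```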