
    Rethinking the sGLOH descriptor

    sGLOH (shifting GLOH) is a histogram-based keypoint descriptor that can be associated with multiple quantized rotations of the keypoint patch without any recomputation. This property can be exploited to choose the best distance between two descriptor vectors over those rotations, thus avoiding the computation of the dominant orientation. In addition, sGLOH can reject incongruous correspondences by adding a global constraint on the rotations, either as a priori knowledge or derived from the data. This paper thoroughly reconsiders sGLOH and improves it in terms of robustness, speed and descriptor dimension. The revised sGLOH embeds more quantized rotations, thus yielding more correct matches. A novel fast matching scheme is also designed, which significantly reduces both computation time and memory usage. In addition, a new binarization technique based on comparisons inside each descriptor histogram is defined, yielding a more compact, faster, yet robust alternative. Results of an exhaustive comparative experimental evaluation show that the revised sGLOH descriptor, incorporating the above ideas and combining them according to task requirements, improves on the state of the art in most cases in both image matching and object recognition.
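    To make the shifting idea concrete, the sketch below shows one way a cyclically shiftable descriptor can be matched over quantized rotations without recomputation. It is not the authors' implementation: the layout of the descriptor as rings × angular sectors × orientation bins, with as many orientation bins as sectors, is an assumption made so that a patch rotation by one quantization step reduces to a double cyclic shift.

```python
import numpy as np

def shift_descriptor(desc, k):
    """Cyclically shift a (rings, sectors, bins) descriptor by k rotation steps.

    With as many orientation bins as angular sectors, rotating the patch by one
    quantization step permutes the sectors and the orientation bins in the same
    way, so no recomputation from the image is needed.
    """
    return np.roll(np.roll(desc, k, axis=1), k, axis=2)

def best_rotation_distance(d1, d2):
    """Minimum L2 distance over all quantized relative rotations of d2."""
    n_rot = d1.shape[1]                        # number of quantized rotations
    dists = [np.linalg.norm(d1 - shift_descriptor(d2, k)) for k in range(n_rot)]
    k_best = int(np.argmin(dists))             # estimated relative rotation step
    return dists[k_best], k_best

# Example with 2 rings, 8 sectors and 8 orientation bins.
rng = np.random.default_rng(0)
a = rng.random((2, 8, 8))
b = shift_descriptor(a, -3)                    # b is a "rotated" copy of a
dist, k = best_rotation_distance(a, b)
print(dist, k)                                 # distance ~0 at shift k == 3
```

    The returned shift is also the quantity on which a global rotation constraint, as mentioned in the abstract, could be enforced to discard correspondences whose estimated rotation disagrees with the others.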

    PRNU pattern alignment for images and videos based on scene content

    This paper proposes a novel approach for registering the PRNU pattern between different camera acquisition modes by relying on the imaged scene content. First, images are aligned by establishing correspondences between local descriptors; the result can then optionally be refined by maximizing the PRNU correlation. Comparative evaluations show that this approach outperforms those based on brute-force search and particle swarm optimization in terms of reliability, accuracy and speed. The proposed scene-based approach for PRNU pattern alignment is suitable for video source identification in multimedia forensics applications.
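    The pipeline can be sketched, under assumptions, with off-the-shelf tools: match local descriptors between the two acquisitions, estimate a geometric transform from the scene content, warp one PRNU estimate onto the other, and assess the alignment with a normalized correlation. The sketch below uses SIFT and a RANSAC homography from OpenCV as stand-ins for the descriptors and registration model actually used in the paper; the inputs `prnu_b` and the correlation check are illustrative.

```python
import cv2
import numpy as np

def align_by_scene_content(img_a, img_b, prnu_b):
    """Register prnu_b onto img_a's geometry using scene-based correspondences.

    img_a, img_b: grayscale frames from the two acquisition modes (uint8).
    prnu_b: PRNU noise estimate extracted in img_b's geometry (float32).
    """
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Nearest-neighbour matching with a ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(des_b, des_a, k=2)
    good = []
    for pair in knn:
        if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robust geometric model estimated from the scene content.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    # Bring the PRNU pattern of mode B into mode A's geometry.
    h, w = img_a.shape[:2]
    return cv2.warpPerspective(prnu_b, H, (w, h))

def normalized_correlation(x, y):
    """Plain normalized cross-correlation, used to assess or refine the alignment."""
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return float((x * y).mean())
```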

    A Spatio-Temporal Multi-Scale Binary Descriptor

    Binary descriptors are widely used for multi-view matching and robotic navigation. However, their matching performance decreases considerably under severe scale and viewpoint changes in non-planar scenes. To overcome this problem, we propose to encode the varying appearance of selected 3D scene points tracked by a moving camera with compact spatio-temporal descriptors. To this end, we first track interest points and capture their temporal variations at multiple scales. Then, we validate feature tracks through 3D reconstruction and compress the temporal sequence of descriptors by encoding the most frequent and stable binary values. Finally, we determine multi-scale correspondences across views with a matching strategy that handles severe scale differences. The proposed spatio-temporal multi-scale approach is generic and can be used with a variety of binary descriptors. We show the effectiveness of the joint multi-scale extraction and temporal reduction through comparisons of different temporal reduction strategies and through its application to several binary descriptors.
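    As an illustration of the temporal reduction step, the sketch below compresses a tracked point's sequence of binary descriptors into one fixed-length descriptor by keeping, per bit, the most frequent value together with a stability mask for bits that rarely flip. The exact encoding, the multi-scale extraction and the 3D validation of the paper are not reproduced; the threshold and array layout are assumptions.

```python
import numpy as np

def temporal_reduce(track_descriptors, stability_thr=0.8):
    """Reduce a (T, B) sequence of 0/1 descriptors (one row per frame) to one descriptor.

    Returns the per-bit majority value and a mask of "stable" bits, i.e. bits
    whose majority value is observed in at least `stability_thr` of the frames.
    """
    d = np.asarray(track_descriptors, dtype=np.uint8)
    freq1 = d.mean(axis=0)                     # fraction of frames where the bit is 1
    majority = (freq1 >= 0.5).astype(np.uint8)
    stability = np.maximum(freq1, 1.0 - freq1)
    return majority, stability >= stability_thr

def masked_hamming(d1, m1, d2, m2):
    """Hamming distance restricted to bits that are stable in both descriptors."""
    both = m1 & m2
    if not both.any():
        return np.inf
    return int(np.count_nonzero((d1 ^ d2) & both))

# Example: a 5-frame track of 8-bit descriptors with one noisy bit.
track = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1, 0]])
maj, mask = temporal_reduce(track)
print(maj, mask)
```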

    Is there anything new to say about SIFT matching?

    SIFT is a classical hand-crafted, histogram-based descriptor that has deeply influenced research on image matching for more than a decade. In this paper, a critical review of the aspects that affect SIFT matching performance is carried out, and novel descriptor design strategies are introduced and individually evaluated. These encompass quantization, binarization and hierarchical cascade filtering as means to reduce data storage and increase matching efficiency, with no significant loss of accuracy. An original contextual matching strategy based on a symmetrical variant of the usual nearest-neighbor ratio is also discussed, which can increase the discriminative power of any descriptor. The paper then undertakes a comprehensive experimental evaluation of state-of-the-art hand-crafted and data-driven descriptors, also including the most recent deep descriptors. Comparisons are carried out according to several performance parameters, including accuracy and space-time efficiency. Results are provided for both planar and non-planar scenes, the latter being evaluated with a new benchmark based on the concept of approximated patch overlap. Experimental evidence shows that, despite their age, SIFT and other hand-crafted descriptors, once enhanced through the proposed strategies, are ready to meet future image matching challenges. We also believe that the lessons learned from this work will inspire the design of better hand-crafted and data-driven descriptors.
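    To fix ideas, the sketch below shows the classic nearest-neighbor ratio test and one simple way to make it symmetric, by requiring the test to hold in both matching directions with mutual consistency. The paper's actual contextual formulation may differ, so treat this as an assumption-laden illustration rather than the proposed method.

```python
import numpy as np

def nn_ratio_matches(des_a, des_b, ratio=0.8):
    """Lowe-style nearest-neighbor ratio test from set A to set B.

    des_a, des_b: (Na, D) and (Nb, D) float descriptor arrays, Nb >= 2.
    Returns a dict {index_in_a: index_in_b} of matches passing the test.
    """
    d = np.linalg.norm(des_a[:, None, :] - des_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    nn, second = order[:, 0], order[:, 1]
    rows = np.arange(len(des_a))
    keep = d[rows, nn] < ratio * d[rows, second]
    return {int(i): int(nn[i]) for i in np.flatnonzero(keep)}

def symmetric_nn_ratio_matches(des_a, des_b, ratio=0.8):
    """Keep a match only if the ratio test succeeds in both directions
    and the two nearest neighbors agree (mutual consistency)."""
    ab = nn_ratio_matches(des_a, des_b, ratio)
    ba = nn_ratio_matches(des_b, des_a, ratio)
    return {i: j for i, j in ab.items() if ba.get(j) == i}
```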

    Local features for view matching across independently moving cameras.

    PhD Thesis. Moving platforms, such as wearable and robotic cameras, need to recognise the same place observed from different viewpoints in order to collaboratively reconstruct a 3D scene and to support augmented reality or autonomous navigation. However, matching views is challenging for independently moving cameras that directly interact with each other, because severe geometric and photometric differences, such as viewpoint, scale, and illumination changes, can considerably decrease the matching performance. This thesis proposes novel, compact, local features that can cope with scale and viewpoint variations. We extract and describe an image patch at different scales of an image pyramid by comparing intensity values between learnt pixel pairs (binary tests), and employ a cross-scale distance when matching these features. We capture, at multiple scales, the temporal changes of a 3D point, as observed in the image sequence of a camera, by tracking local binary descriptors. After validating the feature-point trajectories through 3D reconstruction, we reduce, for each scale, the sequence of binary features to a compact, fixed-length descriptor that identifies the most frequent and the most stable binary tests over time. We then propose XC-PR, a cross-camera place recognition approach that stores locally, for each uncalibrated camera, spatio-temporal descriptors, extracted at a single scale, in a tree that is selectively updated as the camera moves. Cameras exchange descriptors selected from previous frames that fall within an adaptive temporal window and contain the highest number of local features corresponding to the descriptors. The other camera locally searches and matches the received descriptors to identify and geometrically validate a previously seen place. Experiments on different scenarios show the improved matching accuracy of the joint multi-scale extraction and temporal reduction through comparisons with different temporal reduction strategies and with a cross-camera matching strategy based on Bag of Binary Words, as well as the applicability to several binary descriptors. We also show that XC-PR achieves similar accuracy to, but is faster on average than, a baseline consisting of an incremental list of spatio-temporal descriptors. Moreover, XC-PR achieves accuracy similar to that of a frame-based Bag of Binary Words approach adapted to our setting, while avoiding matching features that are not informative, e.g. for 3D reconstruction.
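    The multi-scale binary description and cross-scale matching outlined above can be sketched roughly as follows: a descriptor is extracted at every level of an image pyramid by comparing intensities at pixel pairs, and two features are matched with the minimum Hamming distance over all pairs of scale levels. The sampling pattern here is random rather than learnt, and the patch size and distance are illustrative assumptions, not the thesis' trained configuration.

```python
import numpy as np

def make_test_pairs(n_tests=256, patch=32, seed=0):
    """Random pixel-pair sampling pattern (stands in for a learnt pattern)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, patch, size=(n_tests, 4))    # columns: x1, y1, x2, y2

def binary_descriptor(patch_img, pairs):
    """One binary test per pair: is the intensity at p1 smaller than at p2?"""
    x1, y1, x2, y2 = pairs.T
    return (patch_img[y1, x1] < patch_img[y2, x2]).astype(np.uint8)

def multi_scale_descriptors(patches_per_scale, pairs):
    """Describe the same keypoint at every pyramid level (list of 32x32 patches)."""
    return [binary_descriptor(p, pairs) for p in patches_per_scale]

def cross_scale_distance(descs_a, descs_b):
    """Minimum Hamming distance over all pairs of pyramid levels, so that two
    views of the same point can match even under a large scale change."""
    return min(int(np.count_nonzero(da ^ db))
               for da in descs_a for db in descs_b)
```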