
    Novel Image Mosaic Algorithm for Concrete Pavement Surface Image Reconstruction

    In this paper, a novel image mosaic method for reconstructing concrete pavement surface image sequences is proposed. Harris corner points, which serve as feature points, are extracted uniformly from the overlapping areas of the pavement surface images. The commonly used circular projection method is applied for the coarse matching step, and an improved point matching method is proposed to achieve invariance to image rotation and distortion. A fading-in/fading-out fusion strategy is employed to produce a smooth, seamless mosaic. For practical pavement surface images, which exhibit rotation and distortion, the corresponding experimental results show that the proposed image matching method has higher precision and stronger robustness.
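The fading-in/fading-out fusion the abstract describes is, in essence, a linear cross-blend across the overlap region. A minimal numpy sketch, assuming grayscale strips that overlap along their shared columns (the array shapes and ramp direction are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def fade_blend(left, right, overlap):
    """Blend two grayscale strips whose last/first `overlap` columns coincide.

    Weights ramp linearly from 1 to 0 for the left image and 0 to 1 for
    the right, so the seam fades out of one image and into the other.
    """
    w = np.linspace(1.0, 0.0, overlap)               # fade-out weights
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

a = np.full((4, 6), 100.0)   # left image, constant gray 100
b = np.full((4, 6), 200.0)   # right image, constant gray 200
m = fade_blend(a, b, 3)      # mosaic width = 6 + 6 - 3 = 9
```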

    Point extraction and matching for registration of infrared astronomical images

    In this thesis, we deal with digital image sequences produced by an infrared detector array. The image sequences are characterized by large variations in noise and gray level statistics from one frame to the next. Moreover, owing to a number of defective pixels in the detector array, the frames are deliberately shifted from each other to improve the statistical properties of the signal and minimize noise. This and the inherent pointing uncertainty of the instrument result in random errors in frame positioning larger than a few pixels. This thesis discusses two point-feature based techniques to register such frames. Image registration, in general, deals with the establishment of correspondence between images of the same scene. In order to establish this correspondence, two point-feature extraction and matching algorithms have been developed and evaluated. The basic approach in both algorithms is to determine a multidimensional space of possible transformation parameters and choose a point in this space with the maximum probability of occurrence. Both algorithms make use of localized histogram equalization and a thresholding function for feature extraction. The first relies on a priori information that the image sequences are only translated from each other and not rotated, dilated or skewed, to generate a 2D space of possible shifts and a probability density function associated with this space. We then pick out the displacement with the highest probability of occurrence as our solution. The second algorithm also considers rotation by matching points in the two frames based on their relative distances from other points in their respective frames. From this initial match, we determine a 3D space of rotation and translation transformation parameters and the probability associated with each point in this space. Just as in the first algorithm, we again pick out the points that produced transformation parameters with the highest probability of occurrence.
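The first algorithm's 2D shift space can be sketched as a displacement-vote histogram: every pairing of a feature in frame 1 with a feature in frame 2 votes for one candidate shift, and the most probable shift is the histogram peak. A minimal stdlib sketch (the point sets and search radius below are made-up illustrations, not data from the thesis):

```python
from collections import Counter

def most_likely_shift(pts1, pts2, max_shift=10):
    """Vote over all candidate (dy, dx) displacements between two point
    sets and return the shift with the highest count (i.e. the highest
    probability of occurrence over the 2D shift space)."""
    votes = Counter()
    for y1, x1 in pts1:
        for y2, x2 in pts2:
            dy, dx = y2 - y1, x2 - x1
            if abs(dy) <= max_shift and abs(dx) <= max_shift:
                votes[(dy, dx)] += 1
    return max(votes, key=votes.get)

# three features shifted by (3, -2), plus one spurious detection
pts1 = [(10, 10), (20, 15), (5, 30)]
pts2 = [(13, 8), (23, 13), (8, 28), (40, 40)]
shift = most_likely_shift(pts1, pts2)
```

The spurious point contributes at most one vote to any single shift, so the true displacement still dominates the histogram.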

    Real-Time Feature Tracking Using Homography

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real-time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images. This makes it possible to track the same set of points through multiple frames.
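The homography-based outlier rejection step can be sketched in plain numpy: project each feature from image 1 through a 3x3 homography H and discard pairs whose reprojection error exceeds a pixel threshold. The H and point pairs below are synthetic, and this sketch assumes H is already known rather than estimated by RANSAC:

```python
import numpy as np

def filter_by_homography(H, pts1, pts2, tol=2.0):
    """Return indices of feature pairs consistent with homography H.

    Points are (x, y) in pixels; a pair is an inlier when the point
    predicted by H lies within `tol` pixels of its match in image 2.
    """
    ones = np.ones((len(pts1), 1))
    proj = (H @ np.hstack([pts1, ones]).T).T   # homogeneous projection
    proj = proj[:, :2] / proj[:, 2:3]          # back to Cartesian
    err = np.linalg.norm(proj - pts2, axis=1)  # reprojection error
    return [i for i in range(len(pts1)) if err[i] < tol]

# a pure-translation homography: shift every point by (5, -3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
pts1 = np.array([[0.0, 0.0], [10.0, 10.0], [20.0, 5.0]])
pts2 = np.array([[5.0, -3.0], [15.0, 7.0], [90.0, 90.0]])  # last pair is a mismatch
inliers = filter_by_homography(H, pts1, pts2)
```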

    Using Linear Features for Aerial Image Sequence Mosaiking

    With recent advances in sensor technology and digital image processing techniques, automatic image mosaicking has received increased attention in a variety of geospatial applications, ranging from panorama generation and video surveillance to image-based rendering. The geometric transformation used to link images in a mosaic is the subject of image orientation, a fundamental photogrammetric task that represents a major research area in digital image analysis. It involves the determination of the parameters that express the location and pose of a camera at the time it captured an image. In aerial applications the typical parameters comprise two translations (along the x and y coordinates) and one rotation (about the z axis). Orientation typically proceeds by extracting from an image control points, i.e. points with known coordinates. Salient points such as road intersections and building corners are commonly used to perform this task. However, such points may contain minimal information other than their radiometric uniqueness, and, more importantly, in some areas they may be impossible to obtain (e.g. in rural and arid areas). To overcome this problem we introduce an alternative approach that uses linear features such as roads and rivers for image mosaicking. Such features are identified and matched to their counterparts in overlapping imagery. Our matching approach uses critical points (e.g. breakpoints) of linear features and the information conveyed by them (e.g. local curvature values and distance metrics) to match two such features and orient the images in which they are depicted. In this manner we orient overlapping images by comparing breakpoint representations of complete or partial linear features depicted in them. By considering broader feature metrics (instead of single points) in our matching scheme we aim to eliminate the effect of erroneous point matches in image mosaicking.
Our approach does not require prior approximate parameters, which are typically an essential requirement for successful convergence of point matching schemes. Furthermore, we show that large rotation variations about the z-axis may be recovered. With the acquired orientation parameters, image sequences are mosaicked. Experiments with synthetic aerial image sequences are included in this thesis to demonstrate the performance of our approach.
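The breakpoint attributes the abstract mentions (local curvature and distance metrics) are useful precisely because they are invariant to image translation and in-plane rotation. A minimal sketch of extracting such a signature from a polyline (the L-shaped "road" below is made up, and this is only an illustration of the idea, not the thesis's matching scheme):

```python
import math

def breakpoint_signature(polyline):
    """Return (turning_angle, distance_to_next) at each interior vertex.

    Turning angles and segment lengths are unchanged by translating or
    rotating the image, so two depictions of the same road can be
    compared breakpoint-by-breakpoint without prior orientation.
    """
    sig = []
    for i in range(1, len(polyline) - 1):
        (x0, y0), (x1, y1), (x2, y2) = polyline[i - 1], polyline[i], polyline[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi)
        dist = math.hypot(x2 - x1, y2 - y1)
        sig.append((turn, dist))
    return sig

# an L-shaped "road": straight segment, then a 90-degree left turn
road = [(0, 0), (4, 0), (4, 3)]
sig = breakpoint_signature(road)
```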

    Video Registration in Egocentric Vision under Day and Night Illumination Changes

    With the spread of wearable devices and head mounted cameras, a wide range of applications requiring precise user localization is now possible. In this paper we propose to treat the problem of obtaining the user position with respect to a known environment as a video registration problem. Video registration, i.e. the task of aligning an input video sequence to a pre-built 3D model, relies on a matching process of local keypoints extracted on the query sequence to a 3D point cloud. The overall registration performance is strictly tied to the actual quality of this 2D-3D matching, and can degrade when environmental conditions change, such as under the steep lighting differences between day and night. To effectively register an egocentric video sequence under these conditions, we propose to tackle the source of the problem: the matching process. To overcome the shortcomings of standard matching techniques, we introduce a novel embedding space that allows us to obtain robust matches by jointly taking into account local descriptors, their spatial arrangement and their temporal robustness. The proposal is evaluated using unconstrained egocentric video sequences both in terms of matching quality and resulting registration performance using different 3D models of historical landmarks. The results show that the proposed method can outperform state-of-the-art registration algorithms, in particular when dealing with the challenges of night and day sequences.
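The "standard matching techniques" this work improves upon typically mean nearest-neighbour descriptor matching with a ratio test (Lowe's criterion). A sketch of that baseline, with synthetic descriptors standing in for 2D keypoint and 3D point-cloud descriptors (the paper's own embedding replaces this step):

```python
import numpy as np

def ratio_test_matches(query, model, ratio=0.8):
    """Match each query descriptor to its nearest model descriptor,
    keeping the match only when the nearest neighbour is clearly
    better than the second nearest (Lowe's ratio test)."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(model - q, axis=1)
        j, k = np.argsort(d)[:2]
        if d[j] < ratio * d[k]:
            matches.append((i, int(j)))
    return matches

model = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
query = np.array([[0.9, 0.1, 0.0],     # close to model descriptor 0
                  [0.5, 0.5, 0.0]])    # ambiguous between 0 and 1 -> rejected
matches = ratio_test_matches(query, model)
```

Ambiguity rejection is exactly what breaks down under day/night appearance changes, which motivates the richer joint embedding.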

    HPatches: A benchmark and evaluation of handcrafted and learned local descriptors

    In this paper, we propose a novel benchmark for evaluating local image descriptors. We demonstrate that the existing datasets and evaluation protocols do not specify unambiguously all aspects of evaluation, leading to ambiguities and inconsistencies in results reported in the literature. Furthermore, these datasets are nearly saturated due to the recent improvements in local descriptors obtained by learning them from large annotated datasets. Therefore, we introduce a new large dataset suitable for training and testing modern descriptors, together with strictly defined evaluation protocols in several tasks such as matching, retrieval and classification. This allows for more realistic, and thus more reliable, comparisons in different application scenarios. We evaluate the performance of several state-of-the-art descriptors and analyse their properties. We show that a simple normalisation of traditional hand-crafted descriptors can boost their performance to the level of deep learning based descriptors within a realistic benchmark evaluation.
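The "simple normalisation" reported to boost hand-crafted descriptors is in the spirit of power-law plus L2 normalisation; the RootSIFT recipe is one well-known instance (the exact scheme used in the paper may differ, so treat this as an assumed illustration):

```python
import numpy as np

def normalise_descriptors(desc, alpha=0.5, eps=1e-12):
    """L1-normalise, apply an element-wise power law, then L2-normalise.

    With alpha=0.5 on non-negative histograms this is the RootSIFT
    transform: Euclidean distance on the result corresponds to the
    Hellinger distance on the raw descriptors.
    """
    desc = desc / (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    desc = np.sign(desc) * np.abs(desc) ** alpha
    return desc / (np.linalg.norm(desc, axis=1, keepdims=True) + eps)

raw = np.array([[4.0, 0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0, 1.0]])
out = normalise_descriptors(raw)
```

The power law damps dominant bins, so a few large gradient peaks no longer swamp the distance computation.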

    CoMaL Tracking: Tracking Points at the Object Boundaries

    Traditional point tracking algorithms such as the KLT use local 2D information aggregation for feature detection and tracking, due to which their performance degrades at the object boundaries that separate multiple objects. Recently, CoMaL Features have been proposed to handle such cases. However, the original work used a simple tracking framework in which the points are re-detected in each frame and matched. This is inefficient and may also lose many points that are not re-detected in the next frame. We propose a novel tracking algorithm to accurately and efficiently track CoMaL points. For this, the level line segment associated with the CoMaL points is matched to MSER segments in the next frame using shape-based matching, and the matches are further filtered using texture-based matching. Experiments show improvements over a simple re-detect-and-match framework as well as KLT in terms of speed/accuracy on different real-world applications, especially at the object boundaries.
    Comment: 10 pages, 10 figures, to appear in 1st Joint BMTT-PETS Workshop on Tracking and Surveillance, CVPR 201

    Mapping, Localization and Path Planning for Image-based Navigation using Visual Features and Map

    Building on progress in feature representations for image retrieval, image-based localization has seen a surge of research interest. Image-based localization has the advantage of being inexpensive and efficient, often avoiding the use of 3D metric maps altogether. That said, maintaining the large number of reference images needed to support localization in a scene nonetheless calls for organizing them in a map structure of some kind. The problem of localization often arises as part of a navigation process. We are, therefore, interested in summarizing the reference images as a set of landmarks that meet the requirements for image-based navigation. A contribution of this paper is to formulate such a set of requirements for the two sub-tasks involved: map construction and self-localization. These requirements are then exploited for compact map representation and accurate self-localization, using the framework of a network flow problem. During this process, we formulate the map construction and self-localization problems as convex quadratic and second-order cone programs, respectively. We evaluate our methods on publicly available indoor and outdoor datasets, where they outperform existing methods significantly.
    Comment: CVPR 2019, for implementation see https://github.com/janinethom