
    Fast and effective method for video mosaic

    As existing methods for video mosaicking incur high computational cost, a fast and effective mosaicking algorithm based on SURF features is proposed. The algorithm extracts feature points with the SURF operator, which is robust and computationally efficient, instead of the traditional SIFT operator. For feature matching, a scheme based on hash mapping and a bidirectional nearest-neighbour distance ratio is presented, which quickly and reliably establishes correspondences between feature points. To remove mismatches caused by moving objects, RANSAC is applied to eliminate outliers and ensure the validity of the matched pairs; the global motion parameters between video frames are then estimated by least squares, and the frames are finally stitched into a panorama. Experimental results show that the algorithm is fast, effective, and robust, and of high practical value.
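
    For orientation, below is a minimal Python/OpenCV sketch of such a pipeline: SURF extraction, a mutual nearest-neighbour distance-ratio match (a standard substitute for the paper's hash-mapping scheme, which is not reproduced here), RANSAC outlier rejection, and homography-based warping. The function names, thresholds, and the use of opencv-contrib's SURF are illustrative assumptions, not the authors' implementation.

        import cv2
        import numpy as np

        def mutual_ratio_matches(des_a, des_b, ratio=0.75):
            # Lowe-style distance-ratio test applied in both directions, keeping
            # only pairs on which the forward and backward matches agree.
            matcher = cv2.BFMatcher(cv2.NORM_L2)

            def one_way(d1, d2):
                good = {}
                for pair in matcher.knnMatch(d1, d2, k=2):
                    if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                        good[pair[0].queryIdx] = pair[0]
                return good

            fwd, bwd = one_way(des_a, des_b), one_way(des_b, des_a)
            return [m for m in fwd.values()
                    if m.trainIdx in bwd and bwd[m.trainIdx].trainIdx == m.queryIdx]

        def stitch_pair(frame_a, frame_b):
            surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
            kp_a, des_a = surf.detectAndCompute(cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY), None)
            kp_b, des_b = surf.detectAndCompute(cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY), None)
            good = mutual_ratio_matches(des_a, des_b)

            src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            # RANSAC discards mismatches from moving objects; the surviving inliers
            # define the global motion between the two frames.
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            h, w = frame_b.shape[:2]
            panorama = cv2.warpPerspective(frame_a, H, (2 * w, h))
            panorama[0:h, 0:w] = frame_b
            return panorama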

    Registration for Optical Multimodal Remote Sensing Images Based on FAST Detection, Window Selection, and Histogram Specification

    In recent years, digital frame cameras have been increasingly used for remote sensing applications. However, it is always a challenge to align or register images captured with different cameras or different imaging sensor units. In this research, a novel registration method was proposed. Coarse registration was first applied to approximately align the sensed and reference images. Window selection was then used to reduce the search space, and histogram specification was applied to optimize the grayscale similarity between the images. After comparisons with other commonly used detectors, the fast corner detector, FAST (Features from Accelerated Segment Test), was selected to extract the feature points. The matching point pairs were then detected between the images, the outliers were eliminated, and geometric transformation was performed. An appropriate window size was determined by search and set to one-tenth of the image width. The images that were acquired by a two-camera system, a camera with five imaging sensors, and a camera with replaceable filters mounted on a manned aircraft, an unmanned aerial vehicle, and a ground-based platform, respectively, were used to evaluate the performance of the proposed method. The image analysis results showed that, through the appropriate window selection and histogram specification, the number of correctly matched point pairs had increased by 11.30 times, and that the correct matching rate had increased by 36%, compared with the results based on FAST alone. The root mean square error (RMSE) in the x and y directions was generally within 0.5 pixels. In comparison with the binary robust invariant scalable keypoints (BRISK), curvature scale space (CSS), Harris, speeded-up robust features (SURF), and commercial software ERDAS and ENVI, this method resulted in larger numbers of correct matching pairs and smaller, more consistent RMSE. Furthermore, it was not necessary to choose any tie control points manually before registration. The results from this study indicate that the proposed method can be effective for registering optical multimodal remote sensing images that have been captured with different imaging sensors.
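
    A rough Python sketch of these steps for two coarsely registered greyscale images is given below. Histogram specification is done with scikit-image's match_histograms, ORB descriptors computed at the FAST corners stand in for the paper's own window-based matching, and the window selection is reduced to a displacement threshold; the function and parameter choices are illustrative assumptions rather than the published method.

        import cv2
        import numpy as np
        from skimage.exposure import match_histograms

        def register(sensed, reference, window_frac=0.1):
            # Histogram specification: map the sensed image's grey levels onto the
            # reference image's distribution before detection and matching.
            sensed = match_histograms(sensed, reference).astype(np.uint8)

            fast = cv2.FastFeatureDetector_create(threshold=20)
            orb = cv2.ORB_create()  # descriptor only; FAST itself provides no descriptor
            kp_s, des_s = orb.compute(sensed, fast.detect(sensed, None))
            kp_r, des_r = orb.compute(reference, fast.detect(reference, None))

            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_s, des_r)

            # Window selection: after coarse registration the images are already
            # roughly aligned, so matches displaced by more than one-tenth of the
            # image width are rejected to shrink the search space.
            window = window_frac * sensed.shape[1]
            matches = [m for m in matches
                       if np.hypot(kp_s[m.queryIdx].pt[0] - kp_r[m.trainIdx].pt[0],
                                   kp_s[m.queryIdx].pt[1] - kp_r[m.trainIdx].pt[1]) < window]

            src = np.float32([kp_s[m.queryIdx].pt for m in matches])
            dst = np.float32([kp_r[m.trainIdx].pt for m in matches])
            # RANSAC-filtered similarity transform estimated from the matched pairs.
            M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
            return cv2.warpAffine(sensed, M, (reference.shape[1], reference.shape[0]))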

    SenseCam image localisation using hierarchical SURF trees

    The SenseCam is a wearable camera that automatically takes photos of the wearer's activities, generating thousands of images per day. Automatically organising these images for efficient search and retrieval is a challenging task, but can be simplified by providing semantic information with each photo, such as the wearer's location at capture time. We propose a method for automatically determining the wearer's location using an annotated image database, described using SURF interest point descriptors. We show that SURF outperforms SIFT in matching SenseCam images and that matching can be done efficiently using hierarchical trees of SURF descriptors. Additionally, by re-ranking the top images using bi-directional SURF matches, location matching performance is improved further.
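
    As a rough illustration of the idea (not the paper's implementation), the sketch below builds a small vocabulary-tree-style hierarchy over SURF descriptors with scikit-learn's KMeans; the branch factor, depth, and bag-of-words scoring are illustrative choices rather than the authors' parameters.

        import numpy as np
        from sklearn.cluster import KMeans

        class VocabNode:
            """One level of a hierarchical (vocabulary-tree style) descriptor tree."""

            def __init__(self, descriptors, branch=4, depth=3):
                self.children, self.kmeans = [], None
                if depth == 0 or len(descriptors) < branch:
                    return  # leaf: maximum depth reached or too few descriptors
                self.kmeans = KMeans(n_clusters=branch, n_init=3).fit(descriptors)
                for k in range(branch):
                    subset = descriptors[self.kmeans.labels_ == k]
                    self.children.append(VocabNode(subset, branch, depth - 1))

            def word(self, descriptor, path=0):
                # Descend greedily to a leaf; the heap-style leaf index is the visual word.
                if self.kmeans is None:
                    return path
                k = int(self.kmeans.predict(descriptor.reshape(1, -1))[0])
                return self.children[k].word(descriptor, path * self.kmeans.n_clusters + k + 1)

        def bow_histogram(tree, descriptors, branch=4, depth=3):
            # Normalised bag-of-words histogram; the bound covers every possible word index.
            n_words = sum(branch ** i for i in range(1, depth + 1)) + 1
            hist = np.zeros(n_words)
            for d in descriptors:
                hist[tree.word(d)] += 1
            return hist / max(hist.sum(), 1.0)

    In this sketch, database images would be indexed once via bow_histogram, a query image's histogram compared against them (e.g. by cosine similarity) to shortlist candidate locations, and the top candidates then re-ranked with full bidirectional SURF matching as the abstract describes.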

    Sparse optical flow regularisation for real-time visual tracking

    Optical flow can greatly improve the robustness of visual tracking algorithms. While dense optical flow algorithms have various applications, they cannot be used for real-time solutions without resorting to GPU calculations. Furthermore, most optical flow algorithms fail in challenging lighting environments due to the violation of the brightness constraint. We propose a simple but effective iterative regularisation scheme for real-time, sparse optical flow algorithms, which is shown to be robust to sudden illumination changes and can handle large displacements. The algorithm proves to outperform well-known techniques in real-life video sequences, while being much faster to calculate. Our solution increases the robustness of a real-time particle-filter-based tracking application, consuming only a fraction of the available CPU power. Furthermore, a new and realistic optical flow dataset with annotated ground truth is created and made freely available for research purposes.
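
    For context, here is a minimal Python/OpenCV sketch of sparse pyramidal Lucas-Kanade flow followed by a simple iterative neighbourhood-averaging regulariser. The smoothing step, its parameters, and the function name are generic illustrations of the idea, not the regularisation scheme proposed in the paper.

        import cv2
        import numpy as np

        def sparse_flow(prev_gray, next_gray, n_iter=5, radius=30.0, alpha=0.5):
            # Sparse pyramidal Lucas-Kanade flow at strong corner points.
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                          qualityLevel=0.01, minDistance=7)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                                      winSize=(21, 21), maxLevel=3)
            ok = status.ravel() == 1
            pts = pts.reshape(-1, 2)[ok]
            flow = nxt.reshape(-1, 2)[ok] - pts

            # Iteratively pull each flow vector towards the mean flow of its spatial
            # neighbours, damping outliers caused by noise or local lighting changes.
            for _ in range(n_iter):
                smoothed = flow.copy()
                for i, p in enumerate(pts):
                    near = np.linalg.norm(pts - p, axis=1) < radius
                    smoothed[i] = (1 - alpha) * flow[i] + alpha * flow[near].mean(axis=0)
                flow = smoothed
            return pts, flow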