
    Boosted Ringlet Features for Visual Object Tracking

    Accurate and efficient object tracking is an important aspect of security and surveillance applications. Many challenges exist in visual object tracking, including scale change, object distortion, lighting change, and occlusion. Combining structural target information, such as edge features, with intensity or color features allows for more robust object tracking under these conditions. To achieve this, we propose a feature extraction method that utilizes both the Frei-Chen edge detector and Gaussian ringlet feature mapping. The Frei-Chen edge detector extracts edge, line, and mean features that can be combined to create an edge detection image, which is then used to represent the structural features of the target. Gaussian ringlet feature mapping is used to obtain rotation-invariant features that are robust to target and viewpoint rotation. These aspects are combined to create an efficient and robust tracking scheme. The proposed method also includes occlusion and scale handling components. The proposed scheme is evaluated against state-of-the-art feature tracking methods using both temporal and spatial robustness metrics on the Visual Object Tracking 2014 database.
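
    As a rough illustration of the structural-feature stage, the sketch below computes a Frei-Chen edge map in Python with NumPy/SciPy. It covers only the edge-subspace projection; the Gaussian ringlet mapping, occlusion handling, and the rest of the tracking pipeline described in the abstract are not reproduced, and the function name and normalization details are assumptions rather than the authors' exact implementation.

# Minimal sketch of the Frei-Chen edge measure (assumed implementation,
# not the authors' full tracking pipeline).
import numpy as np
from scipy.ndimage import convolve

s = np.sqrt(2.0)
# The nine 3x3 Frei-Chen basis masks: 1-4 span the edge subspace,
# 5-8 the line subspace, and 9 is the average (mean) mask.
MASKS = [
    np.array([[1, s, 1], [0, 0, 0], [-1, -s, -1]]) / (2 * s),
    np.array([[1, 0, -1], [s, 0, -s], [1, 0, -1]]) / (2 * s),
    np.array([[0, -1, s], [1, 0, -1], [-s, 1, 0]]) / (2 * s),
    np.array([[s, -1, 0], [-1, 0, 1], [0, 1, -s]]) / (2 * s),
    np.array([[0, 1, 0], [-1, 0, -1], [0, 1, 0]]) / 2.0,
    np.array([[-1, 0, 1], [0, 0, 0], [1, 0, -1]]) / 2.0,
    np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]]) / 6.0,
    np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]]) / 6.0,
    np.ones((3, 3)) / 3.0,
]

def frei_chen_edge_map(gray):
    """Project each 3x3 neighbourhood onto the nine masks and return the
    edge-subspace energy ratio in [0, 1] for every pixel."""
    gray = gray.astype(np.float64)
    responses = [convolve(gray, m, mode="reflect") ** 2 for m in MASKS]
    edge_energy = sum(responses[:4])        # edge subspace (masks 1-4)
    total_energy = sum(responses) + 1e-12   # all nine masks
    return np.sqrt(edge_energy / total_energy)

# Example: edge map of a random test image.
edge_map = frei_chen_edge_map(np.random.rand(64, 64))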

    Rotation-invariant features for multi-oriented text detection in natural images.

    Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize only horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol that is better suited to benchmarking algorithms for detecting texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly better performance on texts of varying orientations in complex natural scenes.
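
    The abstract does not spell out the features themselves, so the following is only a generic sketch of how rotation invariance is commonly obtained from gradient orientations: build an orientation histogram and circularly align it to its dominant bin, so that rotating the input merely shifts the histogram. The patch size, bin count, and function name are illustrative assumptions, not the component- and chain-level features proposed in the paper.

# Generic rotation-invariant descriptor sketch (assumed, for illustration only).
import numpy as np

def rotation_invariant_orientation_hist(patch, n_bins=36):
    """Gradient-orientation histogram of a grayscale patch, circularly
    aligned to its dominant orientation so that rotating the patch leaves
    the descriptor (approximately) unchanged."""
    gy, gx = np.gradient(patch.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)  # in [-pi, pi]
    bins = ((orientation + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=magnitude.ravel(), minlength=n_bins)
    # A rotation of the patch becomes a circular shift of the histogram,
    # which aligning to the strongest bin cancels out.
    hist = np.roll(hist, -int(np.argmax(hist)))
    return hist / (hist.sum() + 1e-12)

feat = rotation_invariant_orientation_hist(np.random.rand(32, 32))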

    Registration and Fusion of Multi-Spectral Images Using a Novel Edge Descriptor

    In this paper we introduce a fully end-to-end approach for multi-spectral image registration and fusion. Our fusion method combines images from different spectral channels into a single fused image, handling low- and high-frequency signals with different approaches. A prerequisite of fusion is a stage of geometric alignment between the spectral bands, commonly referred to as registration. Unfortunately, common methods for registering images from a single spectral channel do not yield reasonable results on images from different modalities. To that end, we introduce a new algorithm for multi-spectral image registration based on a novel edge descriptor of feature points. Our method achieves an alignment accurate enough to allow the images to be fused. As our experiments show, we achieve high-quality multi-spectral image registration and fusion under many challenging scenarios.
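
    A minimal sketch of the low/high-frequency fusion idea, assuming the two bands are already registered: low frequencies are blended, while high frequencies are chosen per pixel from whichever band has the stronger detail. The Gaussian split, equal weighting, and function name are assumptions for illustration; the paper's edge-descriptor registration step is not shown.

# Frequency-split fusion of two co-registered spectral bands (assumed sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_two_bands(band_a, band_b, sigma=3.0):
    """Fuse two registered single-channel images of the same shape."""
    a = band_a.astype(np.float64)
    b = band_b.astype(np.float64)
    low_a, low_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    high_a, high_b = a - low_a, b - low_b
    fused_low = 0.5 * (low_a + low_b)           # blend smooth content
    pick_a = np.abs(high_a) >= np.abs(high_b)   # keep the stronger detail
    fused_high = np.where(pick_a, high_a, high_b)
    return fused_low + fused_high

fused = fuse_two_bands(np.random.rand(128, 128), np.random.rand(128, 128))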

    Fast Shadow Detection from a Single Image Using a Patched Convolutional Neural Network

    In recent years, various methods for detecting shadows from a single image have been proposed and used in vision systems; however, most of them are not appropriate for robotic applications due to their high computational cost. This paper introduces a fast shadow detection method based on a deep learning framework, with a time cost suitable for robotic applications. In our solution, we first obtain a shadow prior map with the help of a multi-class support vector machine using statistical features. Then, we use a semantic-aware patch-level Convolutional Neural Network that trains efficiently on shadow examples by combining the original image with the shadow prior map. Experiments on benchmark datasets demonstrate that the proposed method significantly decreases the time complexity of shadow detection, by one or two orders of magnitude compared with state-of-the-art methods, without losing accuracy.
    Comment: 6 pages, 5 figures, Submitted to IROS 201
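
    A toy PyTorch sketch of a patch-level CNN whose input stacks an RGB patch with a shadow prior map as a fourth channel and whose output is a per-patch shadow probability. The layer sizes, patch size, and class name are assumptions for illustration; the paper's actual architecture and the SVM that produces the prior map are not reproduced here.

# Patch-level shadow classifier over RGB + prior-map input (assumed sketch).
import torch
import torch.nn as nn

class PatchShadowNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # shadow / non-shadow logit
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a batch of 8 patches, each RGB + prior map (4 channels, 32x32).
model = PatchShadowNet()
logits = model(torch.randn(8, 4, 32, 32))
probs = torch.sigmoid(logits)   # per-patch shadow probability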