
    Efficiently Tracking Homogeneous Regions in Multichannel Images

    We present a method for tracking Maximally Stable Homogeneous Regions (MSHR) in images with an arbitrary number of channels. MSHR are conceptually very similar to Maximally Stable Extremal Regions (MSER) and Maximally Stable Color Regions (MSCR), but can also be applied to hyperspectral and color images while remaining extremely efficient. The presented approach makes use of the edge-based component-tree, which can be calculated in linear time. In the tracking step, the MSHR are localized by matching them to the nodes of the component-tree. We use rotationally invariant region and gray-value features that can be calculated from first- and second-order moments at low computational complexity. Furthermore, we use a weighted feature vector to improve the data association in the tracking step. The algorithm is evaluated on a collection of different tracking scenes from the literature, and we present two different applications: 2D object tracking and the 3D segmentation of organs. (To be published in the ICPRS 2017 proceedings.)
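
    The abstract does not spell out the exact feature set, but moment-based rotation invariants of the kind it mentions can be illustrated with a minimal sketch. The function name, the choice of eigenvalues of the second-moment matrix, and the gray-value statistics below are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def region_moment_features(coords, values):
    """Rotation-invariant region and gray-value features from first/second-order moments.

    coords: (N, 2) array of pixel coordinates belonging to the region.
    values: (N,) array of the corresponding gray values (one channel).
    """
    area = coords.shape[0]
    centroid = coords.mean(axis=0)             # first-order moments
    centered = coords - centroid
    cov = centered.T @ centered / area         # second-order central moments
    # Eigenvalues of the second-moment matrix do not change when the region is rotated.
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    mean_val = values.mean()                   # gray-value statistics are trivially rotation-invariant
    std_val = values.std()
    return np.array([area, eigvals[0], eigvals[1], mean_val, std_val])
```

    Feature vectors of this kind can then be compared with per-component weights during data association, in the spirit of the weighted feature vector mentioned above.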

    Efficient Scene Text Localization and Recognition with Local Character Refinement

    An unconstrained end-to-end text localization and recognition method is presented. The method detects initial text hypotheses in a single pass with an efficient region-based method and subsequently refines them using a more robust local text model, which departs from the common assumption of region-based methods that all characters are detected as connected components. Additionally, a novel feature based on character stroke area estimation is introduced. The feature is efficiently computed from a region distance map, is invariant to scaling and rotation, and allows text regions to be detected efficiently regardless of what portion of the text they capture. The method runs in real time and achieves state-of-the-art text localization and recognition results on the ICDAR 2013 Robust Reading dataset.
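
    The abstract only names the ingredients (a region distance map, a scale- and rotation-invariant stroke feature), so the sketch below is a plausible illustration rather than the paper's actual feature: it derives a stroke-width estimate from the distance transform of a binary character mask and normalizes it by region area to obtain a scale-invariant quantity.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def stroke_features(region_mask):
    """Toy stroke statistics from a binary character-region mask (illustrative only).

    region_mask: 2D boolean array, True inside the connected component.
    """
    # Distance map: for every foreground pixel, distance to the nearest background pixel.
    dist = distance_transform_edt(region_mask)
    area = region_mask.sum()
    # The distance-map values inside the stroke approximate half the local stroke width.
    stroke_width = 2.0 * dist[region_mask].mean()
    # stroke_width scales linearly and area quadratically with image scale,
    # so stroke_width**2 / area is (approximately) invariant to uniform scaling and rotation.
    return stroke_width, stroke_width ** 2 / max(area, 1)
```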

    Rapid Online Analysis of Local Feature Detectors and Their Complementarity

    A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments, and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of the individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis-testing approach and a newly acquired, larger image database, statistically significant performance differences are identified. Different detector pairs and triplets are examined quantitatively, and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications. © 2013 by the authors; licensee MDPI, Basel, Switzerland.
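
    The paper's actual metric is not reproduced in this abstract; the grid-coverage score below is only a simple stand-in illustrating the general idea of scoring how evenly detected features are spread over an image, with all names and the grid size chosen arbitrarily.

```python
import numpy as np

def coverage_fraction(keypoints, image_shape, grid=(8, 8)):
    """Fraction of grid cells containing at least one detected feature (toy spread score).

    keypoints: (N, 2) array of (x, y) positions; image_shape: (height, width).
    A higher value indicates features spread more evenly over the image.
    """
    h, w = image_shape
    rows = np.clip((keypoints[:, 1] * grid[0] / h).astype(int), 0, grid[0] - 1)
    cols = np.clip((keypoints[:, 0] * grid[1] / w).astype(int), 0, grid[1] - 1)
    occupied = np.zeros(grid, dtype=bool)
    occupied[rows, cols] = True
    return occupied.mean()
```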