
    Some variants of Hausdorff distance for word matching

    Several recently proposed modifications of the Hausdorff distance (HD) are examined with respect to word-image matching for poor-quality typewritten Bulgarian text. The main idea of these approaches is that omitting the extreme distances between the points of the compared images eliminates noise (to some extent) and makes the algorithms more robust. A few robust HD measures, namely censored HD, LTS-HD, and a new binary-image comparison method that uses a windowed Hausdorff distance, form the basis of the computer experiments carried out on 54 pages of typewritten text.
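    As background for the abstract above, here is a minimal sketch of the directed Hausdorff distance over 2-D point sets, plus a censored variant that discards a fraction of the largest point distances before taking the maximum, which is the noise-suppression idea the abstract describes. This is an illustrative reconstruction, not the authors' exact formulation; the function names and the `drop_frac` parameter are assumptions.

```python
import math

def directed_hd(A, B, drop_frac=0.0):
    """Directed Hausdorff distance from point set A to point set B.

    With drop_frac > 0, the largest point-to-set distances are discarded
    before taking the maximum -- a "censored" variant that tolerates
    outlier points caused by noise.
    """
    # For each point in A, distance to its nearest neighbour in B.
    dists = sorted(min(math.dist(a, b) for b in B) for a in A)
    # Keep only the smallest (1 - drop_frac) fraction of distances.
    keep = max(1, int(len(dists) * (1.0 - drop_frac)))
    return dists[keep - 1]  # maximum of the kept distances

def hausdorff(A, B, drop_frac=0.0):
    """Symmetric (possibly censored) Hausdorff distance."""
    return max(directed_hd(A, B, drop_frac), directed_hd(B, A, drop_frac))
```

With `drop_frac=0` this is the classical HD; raising it makes a single noisy outlier point stop dominating the measure.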

    Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics

    The purpose of this study is to provide a detailed performance comparison of feature detector/descriptor methods, particularly when their various combinations are used for image matching. The localization experiments of a mobile robot in an indoor environment are presented as a case study. In these experiments, 3090 query images and 127 dataset images were used. This study includes five methods for feature detection (features from accelerated segment test (FAST), oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB), speeded-up robust features (SURF), scale-invariant feature transform (SIFT), and binary robust invariant scalable keypoints (BRISK)) and five methods for feature description (BRIEF, BRISK, SIFT, SURF, and ORB). These methods were used in 23 different combinations, and it was possible to obtain meaningful and consistent comparison results using the performance criteria defined in this study. All of these methods were used independently and separately from each other as either feature detector or descriptor. The performance analysis shows the discriminative power of various combinations of detector and descriptor methods. The analysis is completed using five parameters: (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, the FAST-SURF combination had the lowest distance and angle-difference values and the highest number of matched keypoints. SIFT-SURF was the most accurate combination, with a 98.41% correct classification rate. The fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to match 560 images captured during motion with 127 dataset images. Comment: 11 pages, 3 figures, 1 table
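    For context, descriptor matching in experiments like these is commonly done with nearest-neighbour search plus Lowe's ratio test; the sketch below uses Hamming distance, which suits binary descriptors such as BRIEF, BRISK, and ORB. This is a generic illustration, not necessarily the exact matching scheme used in the study; the function names and the ratio threshold are assumptions.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit sequences."""
    return sum(x != y for x, y in zip(a, b))

def ratio_match(query, train, ratio=0.8):
    """Match each query descriptor to its nearest train descriptor,
    keeping only matches that pass Lowe's ratio test: the best match
    must be clearly better than the second best."""
    matches = []
    for qi, q in enumerate(query):
        scored = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        if len(scored) >= 2:
            best, second = scored[0], scored[1]
            if best[0] < ratio * second[0]:
                matches.append((qi, best[1]))
    return matches
```

The ratio test rejects ambiguous matches (two train descriptors almost equally close), which is what keeps the "number of correct matches" metric meaningful.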

    Robust Adaptive Median Binary Pattern for noisy texture classification and retrieval

    Texture is an important cue for different computer vision tasks and applications. The Local Binary Pattern (LBP) is considered one of the best yet most efficient texture descriptors. However, LBP has some notable limitations, most importantly its sensitivity to noise. In this paper, we address these issues by introducing a novel texture descriptor, the Robust Adaptive Median Binary Pattern (RAMBP). RAMBP is based on a classification process for noisy pixels, an adaptive analysis window, scale analysis, and median comparison of image regions. The proposed method handles images with highly noisy textures and increases discriminative power by capturing both microstructure and macrostructure texture information. The proposed method has been evaluated on popular texture datasets for classification and retrieval tasks and under different high-noise conditions. Without any training or prior knowledge of the noise type, RAMBP achieved the best classification results compared to state-of-the-art techniques. It scored more than 90% under 50% impulse noise densities, more than 95% on Gaussian-noised textures with standard deviation σ = 5, and more than 99% on Gaussian-blurred textures with standard deviation σ = 1.25. The proposed method also yielded competitive results as one of the best descriptors in noise-free texture classification. Furthermore, RAMBP showed high performance on the problem of noisy texture retrieval, providing high recall and precision scores for textures with high levels of noise.
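    For readers unfamiliar with the descriptor RAMBP builds on, here is a minimal sketch of the classical 8-neighbour LBP code for one pixel. This is plain LBP, not the proposed RAMBP; the function name and the neighbour ordering are illustrative assumptions.

```python
def lbp_code(img, r, c):
    """Classical 8-neighbour Local Binary Pattern code for pixel (r, c).

    Each neighbour whose intensity is >= the centre contributes one bit;
    img is a list of lists of intensities, and (r, c) must be an interior
    pixel so all eight neighbours exist.
    """
    centre = img[r][c]
    # Neighbours enumerated clockwise starting from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code
```

The noise sensitivity the abstract targets is visible here: a single corrupted neighbour flips one bit and changes the code, which is what RAMBP's median comparison over regions is designed to damp.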

    Clinical feasibility of quantitative ultrasound texture analysis: A robustness study using fetal lung ultrasound images

    OBJECTIVES: To compare the robustness of several methods based on quantitative ultrasound (US) texture analysis and to evaluate their feasibility for extracting features from US images for use as a clinical diagnostic tool. METHODS: We compared, ranked, and validated the robustness of 5 texture-based methods for extracting textural features from US images acquired under different conditions. For comparison and ranking purposes, we used 13,171 non-US images from widely known available databases (OUTEX [University of Oulu, Oulu, Finland] and PHOTEX [Texture Lab, Heriot-Watt University, Edinburgh, Scotland]), which were specifically acquired under different controlled parameters (illumination, resolution, and rotation) from 103 textures. The robustness of the methods with the best results on the non-US images was validated using 666 fetal lung US images acquired from singleton pregnancies. In this study, 2 similarity measurements (correlation and Chebyshev distances) were used to evaluate the repeatability of the features extracted from the same tissue images. RESULTS: Three of the 5 methods (gray-level co-occurrence matrix, local binary patterns, and rotation-invariant local phase quantization) showed favorably robust performance on the non-US database. In fact, these methods showed similarity values close to 0 across the acquisition variations and delineations. Results from the US database confirmed the robustness of all of the evaluated methods (gray-level co-occurrence matrix, local binary patterns, and rotation-invariant local phase quantization) when comparing the same texture obtained from different regions of the image (proximal/distal lungs and US machine brand stratification). CONCLUSIONS: Our results confirm that texture analysis can be robust (high similarity across different acquisition conditions), with the potential to be included as a clinical tool.
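    As background on one of the robust methods named above, here is a minimal sketch of a gray-level co-occurrence matrix (GLCM) for a single pixel offset, together with the Haralick contrast feature derived from it. This is an illustrative reconstruction, not the study's implementation; the offset convention and function names are assumptions.

```python
from collections import Counter

def glcm(img, dr=0, dc=1):
    """Grey-level co-occurrence counts for pixel pairs at offset (dr, dc).

    img is a list of lists of quantized grey levels; the result maps
    (level_at_pixel, level_at_offset_pixel) -> count.
    """
    rows, cols = len(img), len(img[0])
    counts = Counter()
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[(img[r][c], img[r2][c2])] += 1
    return counts

def contrast(counts):
    """Haralick contrast: (i - j)^2 weighted by normalized pair frequency."""
    total = sum(counts.values())
    return sum((i - j) ** 2 * n for (i, j), n in counts.items()) / total
```

Texture features such as contrast are then compared across acquisitions; high repeatability of these numbers is exactly the robustness the study measures.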

    Shape Similarity Measurement for Known-Object Localization: A New Normalized Assessment

    This paper presents a new, normalized measure for assessing a contour-based object pose. For binary images, the algorithm enables supervised assessment of known-object recognition and localization. A performance measure is computed to quantify the differences between a reference edge map and a candidate image. Normalization makes the result of the pose assessment easy to interpret. Furthermore, the new measure is motivated by highlighting the limitations of existing metrics under the main shape variations (translation, rotation, and scaling) and by showing that the proposed measure is more robust to them. Indeed, this measure can determine to what extent an object shape differs from a desired position. In comparison with 6 other approaches, experiments performed on real images at different sizes/scales demonstrate the suitability of the new method for object-pose or shape-matching estimation.
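    The abstract does not reproduce the paper's exact normalized measure, so as a generic illustration of comparing a reference edge map with a candidate, the sketch below scores the fraction of reference edge pixels that find a candidate edge pixel within a Chebyshev-distance tolerance. All names and the tolerance scheme are assumptions, not the authors' measure.

```python
def edge_match_score(ref, cand, tol=1):
    """Fraction of reference edge pixels that have a candidate edge pixel
    within Chebyshev distance `tol`. Returns a value in [0, 1];
    1.0 means every reference pixel is matched."""
    ref_pts, cand_pts = set(ref), set(cand)
    if not ref_pts:
        return 1.0  # empty reference trivially matched
    hit = 0
    for (r, c) in ref_pts:
        # Scan the (2*tol + 1)^2 neighbourhood around the reference pixel.
        if any((r + dr, c + dc) in cand_pts
               for dr in range(-tol, tol + 1)
               for dc in range(-tol, tol + 1)):
            hit += 1
    return hit / len(ref_pts)
```

A bounded score like this is what makes such measures directly interpretable, which is the normalization benefit the paper emphasizes.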

    Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition and Remote Sensing Scene Classification

    Designing discriminative and powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Binary Patterns encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories, and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to a standard RGB deep model of the same network architecture. Our late-fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Our final combination outperforms the state-of-the-art without employing fine-tuning or an ensemble of RGB network architectures. Comment: To appear in ISPRS Journal of Photogrammetry and Remote Sensing
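    Late fusion of two network streams typically means combining their per-class outputs rather than their features. Below is a minimal sketch under the assumption of simple probability averaging between an RGB stream and a texture-coded stream; the actual TEX-Net fusion layer may differ, and the function names are hypothetical.

```python
import math

def softmax(logits):
    """Numerically stable softmax over one score vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def late_fusion(rgb_logits, tex_logits):
    """Average the class probabilities of the RGB and texture streams,
    then return the index of the winning class."""
    fused = [(p + q) / 2
             for p, q in zip(softmax(rgb_logits), softmax(tex_logits))]
    return max(range(len(fused)), key=fused.__getitem__)
```

Averaging at the probability level lets a confident texture stream override an uncertain RGB stream, which is one intuition for why the late-fusion variant helps when the two inputs carry complementary information.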