
    On using Feature Descriptors as Visual Words for Object Detection within X-ray Baggage Security Screening

    Here we explore the use of various feature point descriptors as visual-word variants within a Bag-of-Visual-Words (BoVW) representation scheme for image-classification-based threat detection in baggage security X-ray imagery. Using a classical BoVW model with a range of feature point detectors and descriptors, supported by both Support Vector Machine (SVM) and Random Forest classification, we illustrate the current performance capability of approaches following this image classification paradigm over a large X-ray baggage imagery data set. An optimal statistical accuracy of 0.94 (true positive: 83%; false positive: 3.3%) is achieved using a FAST-SURF feature detector and descriptor combination for a firearms detection task. Our results indicate comparable levels of performance for BoVW-based approaches to this task over extensive variations in feature detector, feature descriptor, vocabulary size and final classification approach. We further demonstrate a by-product of such approaches: feature point density can serve as a simple measure of image complexity, available as an integral part of the overall classification pipeline. The performance achieved characterises the potential of BoVW-based approaches for threat object detection within the future automation of X-ray security screening, measured against other contemporary approaches in the field.
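
The BoVW encoding at the heart of this pipeline can be sketched in a few lines. The following numpy-only illustration is a minimal sketch, not the paper's exact configuration: random vectors stand in for real SURF descriptors, and the plain Lloyd k-means, vocabulary size and histogram normalisation are all assumptions for demonstration.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=10, seed=0):
    """Cluster local descriptors into k visual words with plain Lloyd k-means."""
    rng = np.random.default_rng(seed)
    words = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest visual word
        d = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each word to the mean of its assigned descriptors
        for j in range(k):
            if np.any(labels == j):
                words[j] = descriptors[labels == j].mean(axis=0)
    return words

def bovw_histogram(descriptors, words):
    """Encode one image as a normalised histogram of visual-word counts."""
    d = np.linalg.norm(descriptors[:, None, :] - words[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(words))
    return counts / counts.sum()

# toy demo: 200 random 64-D "SURF-like" descriptors, 8-word vocabulary
rng = np.random.default_rng(1)
desc = rng.normal(size=(200, 64))
vocab = build_vocabulary(desc, k=8)
h = bovw_histogram(desc, vocab)
print(h.shape, round(float(h.sum()), 6))  # (8,) 1.0
```

The resulting fixed-length histogram is what the SVM or Random Forest classifier actually sees, which is why the vocabulary size is one of the variations the paper sweeps over.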

    Comparing effectiveness of feature detectors in obstacles detection from video

    We have previously proposed an obstacle detection method that uses video taken by a vehicle-mounted monocular camera. In this method, correct obstacle detection depends on whether feature points can be accurately detected and matched. To improve the accuracy of obstacle detection, in this paper we compare four of the most commonly used feature detectors: Harris, SIFT, SURF and FAST. The experiments are performed using our obstacle detection method; the experimental results are compared and discussed, and we identify the most suitable feature point detector for our approach.
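
Of the four detectors compared, Harris is simple enough to sketch directly. The following numpy implementation of the Harris corner response is an illustrative stand-in, not the authors' code; the 3x3 box filter (a Gaussian in real use) and k=0.04 are conventional assumptions.

```python
import numpy as np

def box3(a):
    """3x3 box filter via padding and shifted sums (a Gaussian in real use)."""
    p = np.pad(a, 1, mode="edge")
    h, w = a.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    smoothed structure tensor built from the image gradients."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return (Ixx * Iyy - Ixy ** 2) - k * (Ixx + Iyy) ** 2

# toy scene: a bright square; the response peaks near its four corners
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
r, c = np.unravel_index(R.argmax(), R.shape)
corners = [(5, 5), (5, 14), (14, 5), (14, 14)]
print(min(abs(r - a) + abs(c - b) for a, b in corners) <= 3)  # True
```

Edges yield a negative response (one dominant eigenvalue of M), flat regions near zero, and only true corners a strong positive peak, which is the property that makes Harris points stable enough to match across video frames.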

    The brightness clustering transform and locally contrasting keypoints

    In recent years a new wave of feature descriptors has been presented to the computer vision community: ORB, BRISK and FREAK, amongst others. These new descriptors reduce time and memory consumption in the processing and storage stages of tasks such as image matching or visual odometry, enabling real-time applications. The problem is now the lack of fast interest point detectors with good repeatability to use with these new descriptors. We present a new blob detector which can be implemented in real time and is faster than most of the currently used feature detectors. The detection is achieved with an innovative non-deterministic low-level operator called the Brightness Clustering Transform (BCT). The BCT can be thought of as a coarse-to-fine search through scale space for the true derivative of the image; it also mimics the trans-saccadic perception of human vision. We call the new algorithm the Locally Contrasting Keypoints detector, or LOCKY. Showing good repeatability and robustness to the image transformations included in the Oxford dataset, LOCKY is amongst the fastest affine-covariant feature detectors.
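
Repeatability, the metric LOCKY is evaluated on, can be sketched as the fraction of keypoints from one image that reappear within a pixel tolerance after projecting them through the ground-truth homography into the second image. The tolerance value and the pure-translation toy example below are assumptions for illustration, not the Oxford protocol's exact parameters.

```python
import numpy as np

def repeatability(kps_a, kps_b, H, eps=2.5):
    """Fraction of keypoints from image A that, after projecting through the
    homography H into image B, land within eps pixels of some B keypoint."""
    pts = np.hstack([kps_a, np.ones((len(kps_a), 1))])  # homogeneous coords
    proj = pts @ H.T
    proj = proj[:, :2] / proj[:, 2:3]                   # back to Cartesian
    d = np.linalg.norm(proj[:, None, :] - kps_b[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= eps))

# toy check: pure translation by (3, 4); B keypoints are the shifted A keypoints
a = np.array([[10.0, 10.0], [30.0, 20.0], [50.0, 40.0]])
H = np.array([[1.0, 0.0, 3.0], [0.0, 1.0, 4.0], [0.0, 0.0, 1.0]])
b = a + np.array([3.0, 4.0])
print(repeatability(a, b, H))  # 1.0
```

A detector that re-fires on the same scene structure under viewpoint and blur changes scores close to 1, which is what makes it usable with descriptors like ORB, BRISK and FREAK downstream.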

    Recognition of food with monotonous appearance using speeded-up robust feature (SURF)

    Food has become one of the most photographed objects since the inception of smartphones and social media services. Recently, the analysis of food images using object recognition techniques has been investigated to recognise food categories, as part of a framework for estimating food nutrition and calories for health-care purposes. The initial stage of a food recognition pipeline is to extract features that capture the food's characteristics. The SURF local feature is among the most efficient image detectors and descriptors: it uses the Fast-Hessian detector to locate interest points and Haar wavelet responses for description. Despite the fast computation of SURF extraction, the detector is ineffective on food objects with a monotonous appearance, detecting only a small number of interest points because 1) the food has a texture-less surface, 2) the image has small pixel dimensions, and 3) the image has low contrast and brightness. As a result, the captured characteristics are uninformative and lead to low classification performance, a problem manifested as a low yield of interest points. In this paper, we propose a technique to detect denser interest points on monotonous food by increasing the density of blob detection in SURF's Fast-Hessian detector. We measure the effect of this technique by comparing SURF interest point detection at different blob-detection densities. SURF features are encoded using a Bag-of-Features (BoF) model, and a Support Vector Machine (SVM) with a linear kernel is adopted for classification. The findings show that the density of interest point detection has a prominent effect on detection and classification performance for the respective food categories, with 86% classification accuracy on the UEC100-Food dataset.
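
The core idea of densifying blob detection can be illustrated with a simplified determinant-of-Hessian response: lowering the detection threshold admits more responses on a faint, low-contrast blob. This numpy sketch is a stand-in for SURF's box-filtered Fast-Hessian (no scale space, no non-maximum suppression); the threshold values and the synthetic image are assumptions.

```python
import numpy as np

def doh_responses(img, thresh):
    """Count determinant-of-Hessian responses above a threshold.
    Simplified stand-in for SURF's Fast-Hessian: second derivatives come
    from nested np.gradient calls rather than box filters, and every
    above-threshold pixel counts as a response."""
    img = img.astype(float)
    Iy, Ix = np.gradient(img)
    Iyy, Iyx = np.gradient(Iy)   # d2/dy2, d2/dydx
    Ixy, Ixx = np.gradient(Ix)   # d2/dxdy, d2/dx2
    doh = Ixx * Iyy - Ixy * Iyx  # determinant of the Hessian
    return int(np.count_nonzero(doh > thresh))

# toy "monotonous" image: a faint Gaussian blob on a flat background
yy, xx = np.mgrid[0:32, 0:32]
img = 0.2 * np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 20.0)
strict, relaxed = doh_responses(img, 1e-4), doh_responses(img, 1e-6)
print(relaxed >= strict)  # True: a relaxed threshold admits more detections
```

This mirrors the paper's observation: on texture-poor, low-contrast food images the strict setting fires on very few points, so relaxing the blob-detection density recovers enough interest points for the BoF/SVM stage to work with.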