4 research outputs found

    Analysis of feature detector and descriptor combinations with a localization experiment for various performance metrics

    Full text link
    The purpose of this study is to provide a detailed performance comparison of feature detector/descriptor methods, particularly when their various combinations are used for image matching. The localization experiments of a mobile robot in an indoor environment are presented as a case study. In these experiments, 3090 query images and 127 dataset images were used. This study includes five methods for feature detectors (features from accelerated segment test (FAST), oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB), speeded-up robust features (SURF), scale invariant feature transform (SIFT), and binary robust invariant scalable keypoints (BRISK)) and five other methods for feature descriptors (BRIEF, BRISK, SIFT, SURF, and ORB). These methods were used in 23 different combinations, and meaningful, consistent comparison results were obtained using the performance criteria defined in this study. All of these methods were used independently and separately from each other as either feature detector or descriptor. The performance analysis shows the discriminative power of various combinations of detector and descriptor methods. The analysis is completed using five parameters: (i) accuracy, (ii) time, (iii) angle difference between keypoints, (iv) number of correct matches, and (v) distance between correctly matched keypoints. In a range of 60°, covering five rotational pose points for our system, the FAST-SURF combination had the lowest distance and angle difference values and the highest number of matched keypoints. SIFT-SURF was the most accurate combination, with a 98.41% correct classification rate. The fastest algorithm was ORB-BRIEF, with a total running time of 21,303.30 s to match 560 images captured during motion with 127 dataset images. Comment: 11 pages, 3 figures, 1 table
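    The binary descriptors in this comparison (BRIEF, ORB, BRISK) are conventionally matched by Hamming distance, which underlies correct-match counts like those reported above. The following NumPy-only sketch of a brute-force binary-descriptor matcher is illustrative; the function name and toy data are assumptions, not taken from the paper:

```python
import numpy as np

def hamming_match(desc_a, desc_b):
    """Brute-force match binary descriptors by Hamming distance.

    desc_a, desc_b: uint8 arrays of shape (n, 32) and (m, 32),
    e.g. 256-bit BRIEF/ORB descriptors packed into 32 bytes.
    Returns, for each row of desc_a, the index of its nearest
    row in desc_b and that distance in bits.
    """
    # XOR exposes differing bits; count them via an 8-bit lookup table.
    popcount = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(1)
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]   # (n, m, 32) byte-wise XOR
    dists = popcount[xor].sum(axis=2)               # (n, m) bit distances
    idx = dists.argmin(axis=1)
    return idx, dists[np.arange(len(desc_a)), idx]

# Toy example: the two query descriptors are copies of dataset rows 1 and 3.
rng = np.random.default_rng(0)
dataset = rng.integers(0, 256, size=(4, 32), dtype=np.uint8)
query = dataset[[1, 3]].copy()
idx, dist = hamming_match(query, dataset)
print(idx, dist)   # → [1 3] [0 0]
```

    In practice a library matcher (e.g. a brute-force matcher with a Hamming norm) would replace this loop-free toy version, but the distance criterion is the same.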

    Simulation of greyscale image colouring using blob detection

    Get PDF
    Automatic colouring of greyscale images by computer is an important field in digital image processing. It helps produce visuals that are more appealing to the human eye when dealing with medical images, night-vision cameras or scientific illustrations. However, to produce images on par with the ability of human eyes, a computerised colouring process requires considerable time and computation. In recent years, blob detection has developed into an effective way of finding features in an image. This method not only runs on low-memory devices but also gives users faster calculation. Encouraged by these two advantages, this study proposes two models for untrained colouring of greyscale images. The maximum number of blob features is examined using Centre Surround Extremas (CenSurE) and Binary Robust Independent Elementary Features (BRIEF). The results of this study show that the images coloured by these models look better as more keypoint features are matched, provided the minimum matching distance is kept as low as possible. In addition, when comparing feature descriptors using Fast Retina Keypoint (FREAK) alone and FREAK together with Speeded-Up Robust Features (SURF), the result improves as the minimum Hessian threshold in the image decreases. This experiment leads to the discovery that the selection of feature descriptors influences the result of colouring.
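    The role of a low minimum matching distance can be illustrated with a hypothetical colour-transfer step: a greyscale pixel takes the colour of its matched reference keypoint only when the descriptor distance is small enough to trust. Everything below (function name, data, threshold) is an assumed sketch, not the paper's actual model:

```python
import numpy as np

def transfer_colour(grey, matches, ref_colours, max_dist):
    """Hypothetical colouring step: seed a greyscale image with the
    colour of reference keypoints whose match distance is low.

    grey        : (H, W) float array in [0, 1]
    matches     : list of ((row, col), distance) pairs in the grey image
    ref_colours : (len(matches), 3) RGB colours from the reference image
    """
    out = np.repeat(grey[:, :, None], 3, axis=2)  # start from grey as RGB
    for ((r, c), dist), rgb in zip(matches, ref_colours):
        if dist <= max_dist:                      # low distance = trusted match
            out[r, c] = rgb                       # seed pixel with ref colour
    return out

grey = np.full((4, 4), 0.5)
matches = [((1, 1), 2), ((2, 3), 80)]             # second match is too weak
ref = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
coloured = transfer_colour(grey, matches, ref, max_dist=10)
print(coloured[1, 1], coloured[2, 3])             # → [1. 0. 0.] [0.5 0.5 0.5]
```

    Tightening `max_dist` discards weak matches, which mirrors the study's finding that colouring improves when the minimum matching distance is kept low.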

    Robust ego-localization using monocular visual odometry

    Get PDF

    Prioritizing Content of Interest in Multimedia Data Compression

    Get PDF
    Image and video compression techniques make data transmission and storage in digital multimedia systems more efficient and feasible given the system's limited storage and bandwidth. Many generic image and video compression techniques such as JPEG and H.264/AVC have been standardized and are now widely adopted. Despite their great success, we observe that these standard compression techniques are not the best solution for data compression in special types of multimedia systems such as microscopy videos and low-power wireless broadcast systems. In these application-specific systems, where the content of interest in the multimedia data is known and well-defined, we should rethink the design of the data compression pipeline. We hypothesize that by identifying and prioritizing multimedia data's content of interest, new compression methods can be invented that are far more effective than standard techniques. In this dissertation, a set of new data compression methods based on the idea of prioritizing the content of interest is proposed for three different kinds of multimedia systems. I will show that the key to designing efficient compression techniques in these three cases is to prioritize the content of interest in the data. The definition of the content of interest of multimedia data depends on the application. First, I show that for microscopy videos, the content of interest is defined as the spatial regions in the video frame whose pixels do not contain only noise. Keeping the data in those regions at high quality and discarding other information yields a novel microscopy video compression technique. Second, I show that for a Bluetooth low energy beacon based system, practical multimedia data storage and transmission is possible by prioritizing content of interest. I designed custom image compression techniques that preserve edges in a binary image, or foreground regions of a color image of indoor or outdoor objects.
Last, I present a new indoor Bluetooth low energy beacon based augmented reality system that integrates a 3D moving object compression method that prioritizes the content of interest. Doctor of Philosophy
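    For the microscopy case, the idea of keeping only regions whose pixels do not contain only noise can be sketched as a block-wise variance test. The function name, block size, and threshold below are assumptions for illustration, not the dissertation's actual pipeline:

```python
import numpy as np

def mask_content_blocks(frame, block=8, var_thresh=0.01):
    """Hypothetical content-of-interest pass for a microscopy frame:
    keep blocks whose pixel variance exceeds a noise threshold and
    zero out blocks that look like flat sensor-noise background.
    """
    h, w = frame.shape
    out = np.zeros_like(frame)
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = frame[r:r + block, c:c + block]
            if tile.var() > var_thresh:   # structured content: keep at full quality
                out[r:r + block, c:c + block] = tile
    return out

# Toy frame: flat background plus one bright structure in the top-left block.
frame = np.zeros((16, 16))
frame[2:6, 2:6] = 1.0
kept = mask_content_blocks(frame)
print(kept.sum(), (kept[8:, 8:] == 0).all())   # → 16.0 True
```

    A real encoder would quantize the low-variance blocks coarsely rather than zeroing them, but the prioritization principle is the same: spend bits where the content of interest is.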